[jira] [Resolved] (CASSANDRA-6916) Preemptive opening of compaction result

2014-04-24 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson resolved CASSANDRA-6916.


Resolution: Fixed

committed

> Preemptive opening of compaction result
> ---
>
> Key: CASSANDRA-6916
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6916
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Benedict
>Assignee: Benedict
>Priority: Minor
>  Labels: performance
> Fix For: 2.1 beta2
>
> Attachments: 6916-stock2_1.mixed.cache_tweaks.tar.gz, 
> 6916-stock2_1.mixed.logs.tar.gz, 6916.fixup.txt, 
> 6916v3-preempive-open-compact.logs.gz, 
> 6916v3-preempive-open-compact.mixed.2.logs.tar.gz, 
> 6916v3-premptive-open-compact.mixed.cache_tweaks.2.tar.gz
>
>
> Related to CASSANDRA-6812, but a little simpler: when compacting, we mess 
> quite badly with the page cache. One thing we can do to mitigate this problem 
> is to use the sstable we're writing before we've finished writing it, and to 
> drop the regions from the old sstables from the page cache as soon as the new 
> sstables have them (even if they're only written to the page cache). This 
> should minimise any page cache churn, as the old sstables must be larger than 
> the new sstable, and since both will be in memory, dropping the old sstables 
> is at least as good as dropping the new.
> The approach is quite straightforward. Every X MB written:
> # grab flushed length of index file;
> # grab second to last index summary record, after excluding those that point 
> to positions after the flushed length;
> # open index file, and check that our last record doesn't occur outside of 
> the flushed length of the data file (pretty unlikely)
> # Open the sstable with the calculated upper bound
> Some complications:
> # must keep running copy of compression metadata for reopening with
> # we need to be able to replace an sstable with itself but a different lower 
> bound
> # we need to drop the old page cache only when readers have finished
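The per-X-MB early-open step described above can be sketched roughly as follows. This is a minimal illustration only: the class, method, and field names here are invented, and the real logic lives in Cassandra's SSTableRewriter; the sketch only shows the "take the penultimate index-summary entry within the flushed length" bound calculation.

```java
// Hypothetical sketch (names invented) of the early-open upper-bound
// calculation: every X MB, take the index-summary entries whose positions
// fall within the flushed length of the index file, and use the penultimate
// one as a safe upper bound for opening the partially written sstable.
import java.util.ArrayList;
import java.util.List;

public class EarlyOpenSketch
{
    /** One index-summary entry: a key and its position in the index file. */
    static class SummaryEntry
    {
        final String key;
        final long indexPosition;

        SummaryEntry(String key, long indexPosition)
        {
            this.key = key;
            this.indexPosition = indexPosition;
        }
    }

    /**
     * Returns the key of the second-to-last summary entry that lies within
     * the flushed prefix of the index file, or null if fewer than two
     * entries qualify (in which case we cannot open early yet).
     */
    static String safeUpperBound(List<SummaryEntry> summary, long flushedIndexLength)
    {
        List<SummaryEntry> flushed = new ArrayList<>();
        for (SummaryEntry e : summary)
            if (e.indexPosition < flushedIndexLength)
                flushed.add(e);
        // take the penultimate entry: the last one may still be mid-write
        return flushed.size() >= 2 ? flushed.get(flushed.size() - 2).key : null;
    }

    public static void main(String[] args)
    {
        List<SummaryEntry> summary = new ArrayList<>();
        summary.add(new SummaryEntry("k1", 0L));
        summary.add(new SummaryEntry("k2", 100L));
        summary.add(new SummaryEntry("k3", 200L));
        summary.add(new SummaryEntry("k4", 300L));
        // entries below position 250 are flushed: k1, k2, k3 -> penultimate is k2
        System.out.println(safeUpperBound(summary, 250L)); // k2
        System.out.println(safeUpperBound(summary, 50L));  // null: too few flushed entries
    }
}
```

The remaining step from the list above (double-checking the chosen key against the flushed length of the data file) is omitted here for brevity.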



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[2/2] git commit: Merge branch 'cassandra-2.1' into trunk

2014-04-24 Thread marcuse
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3d03b9b5
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3d03b9b5
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3d03b9b5

Branch: refs/heads/trunk
Commit: 3d03b9b5ac9e345d13b19f4909e24d5e8b8e0dd0
Parents: bcb3f47 99de2ff
Author: Marcus Eriksson 
Authored: Fri Apr 25 08:22:23 2014 +0200
Committer: Marcus Eriksson 
Committed: Fri Apr 25 08:22:23 2014 +0200

--
 .../cassandra/db/compaction/CompactionTask.java  |  3 ++-
 .../cassandra/db/compaction/SSTableSplitter.java |  4 ++--
 .../apache/cassandra/io/sstable/SSTableRewriter.java | 15 +++
 3 files changed, 19 insertions(+), 3 deletions(-)
--




[1/2] git commit: CASSANDRA-6916 followup, make sure offline split works

2014-04-24 Thread marcuse
Repository: cassandra
Updated Branches:
  refs/heads/trunk bcb3f4713 -> 3d03b9b5a


CASSANDRA-6916 followup, make sure offline split works


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/99de2ff6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/99de2ff6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/99de2ff6

Branch: refs/heads/trunk
Commit: 99de2ff6f60f95addc0ba6c1313d0200ce6fd512
Parents: 159e6da
Author: belliottsmith 
Authored: Fri Apr 25 08:20:11 2014 +0200
Committer: Marcus Eriksson 
Committed: Fri Apr 25 08:21:52 2014 +0200

--
 .../cassandra/db/compaction/CompactionTask.java  |  3 ++-
 .../cassandra/db/compaction/SSTableSplitter.java |  4 ++--
 .../apache/cassandra/io/sstable/SSTableRewriter.java | 15 +++
 3 files changed, 19 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/99de2ff6/src/java/org/apache/cassandra/db/compaction/CompactionTask.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionTask.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionTask.java
index 77dc7b0..c1c5504 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionTask.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionTask.java
@@ -218,7 +218,8 @@ public class CompactionTask extends AbstractCompactionTask
 
Collection<SSTableReader> oldSStables = this.sstables;
List<SSTableReader> newSStables = writer.finished();
-cfs.getDataTracker().markCompactedSSTablesReplaced(oldSStables, newSStables, compactionType);
+if (!offline)
+    cfs.getDataTracker().markCompactedSSTablesReplaced(oldSStables, newSStables, compactionType);

// log a bunch of statistics about the result and save to system table compaction_history
long dTime = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);

http://git-wip-us.apache.org/repos/asf/cassandra/blob/99de2ff6/src/java/org/apache/cassandra/db/compaction/SSTableSplitter.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/SSTableSplitter.java 
b/src/java/org/apache/cassandra/db/compaction/SSTableSplitter.java
index 67705e0..6b9f161 100644
--- a/src/java/org/apache/cassandra/db/compaction/SSTableSplitter.java
+++ b/src/java/org/apache/cassandra/db/compaction/SSTableSplitter.java
@@ -67,7 +67,7 @@ public class SSTableSplitter {
 @Override
protected CompactionController getCompactionController(Set<SSTableReader> toCompact)
 {
-return new SplitController(cfs, toCompact);
+return new SplitController(cfs);
 }
 
 @Override
@@ -85,7 +85,7 @@ public class SSTableSplitter {
 
 public static class SplitController extends CompactionController
 {
-public SplitController(ColumnFamilyStore cfs, Collection<SSTableReader> toCompact)
+public SplitController(ColumnFamilyStore cfs)
 {
 super(cfs, CompactionManager.NO_GC);
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/99de2ff6/src/java/org/apache/cassandra/io/sstable/SSTableRewriter.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/SSTableRewriter.java 
b/src/java/org/apache/cassandra/io/sstable/SSTableRewriter.java
index 2dfefc4..553993a 100644
--- a/src/java/org/apache/cassandra/io/sstable/SSTableRewriter.java
+++ b/src/java/org/apache/cassandra/io/sstable/SSTableRewriter.java
@@ -37,6 +37,21 @@ import org.apache.cassandra.db.compaction.AbstractCompactedRow;
 import org.apache.cassandra.db.compaction.OperationType;
 import org.apache.cassandra.utils.CLibrary;
 
+/**
+ * Wraps one or more writers as output for rewriting one or more readers: every sstable_preemptive_open_interval_in_mb
+ * we look in the summary we're collecting for the latest writer for the penultimate key that we know to have been fully
+ * flushed to the index file, and then double check that the key is fully present in the flushed data file.
+ * Then we move the starts of each reader forwards to that point, replace them in the datatracker, and attach a runnable
+ * for on-close (i.e. when all references expire) that drops the page cache prior to that key position
+ *
+ * hard-links are created for each partially written sstable so that readers opened against them continue to work past
+ * the rename of the temporary file, which is deleted once all readers against the hard-link have been closed.
+ * If for any reason the writer is rolled over, we immediately rename and fully expose the completed file in the DataTracker.
+ *
+ * On abort we restore the original lower bounds to the exi

git commit: CASSANDRA-6916 followup, make sure offline split works

2014-04-24 Thread marcuse
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 159e6dabb -> 99de2ff6f


CASSANDRA-6916 followup, make sure offline split works


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/99de2ff6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/99de2ff6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/99de2ff6

Branch: refs/heads/cassandra-2.1
Commit: 99de2ff6f60f95addc0ba6c1313d0200ce6fd512
Parents: 159e6da
Author: belliottsmith 
Authored: Fri Apr 25 08:20:11 2014 +0200
Committer: Marcus Eriksson 
Committed: Fri Apr 25 08:21:52 2014 +0200

--
 .../cassandra/db/compaction/CompactionTask.java  |  3 ++-
 .../cassandra/db/compaction/SSTableSplitter.java |  4 ++--
 .../apache/cassandra/io/sstable/SSTableRewriter.java | 15 +++
 3 files changed, 19 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/99de2ff6/src/java/org/apache/cassandra/db/compaction/CompactionTask.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionTask.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionTask.java
index 77dc7b0..c1c5504 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionTask.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionTask.java
@@ -218,7 +218,8 @@ public class CompactionTask extends AbstractCompactionTask
 
Collection<SSTableReader> oldSStables = this.sstables;
List<SSTableReader> newSStables = writer.finished();
-cfs.getDataTracker().markCompactedSSTablesReplaced(oldSStables, newSStables, compactionType);
+if (!offline)
+    cfs.getDataTracker().markCompactedSSTablesReplaced(oldSStables, newSStables, compactionType);

// log a bunch of statistics about the result and save to system table compaction_history
long dTime = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);

http://git-wip-us.apache.org/repos/asf/cassandra/blob/99de2ff6/src/java/org/apache/cassandra/db/compaction/SSTableSplitter.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/SSTableSplitter.java 
b/src/java/org/apache/cassandra/db/compaction/SSTableSplitter.java
index 67705e0..6b9f161 100644
--- a/src/java/org/apache/cassandra/db/compaction/SSTableSplitter.java
+++ b/src/java/org/apache/cassandra/db/compaction/SSTableSplitter.java
@@ -67,7 +67,7 @@ public class SSTableSplitter {
 @Override
protected CompactionController getCompactionController(Set<SSTableReader> toCompact)
 {
-return new SplitController(cfs, toCompact);
+return new SplitController(cfs);
 }
 
 @Override
@@ -85,7 +85,7 @@ public class SSTableSplitter {
 
 public static class SplitController extends CompactionController
 {
-public SplitController(ColumnFamilyStore cfs, Collection<SSTableReader> toCompact)
+public SplitController(ColumnFamilyStore cfs)
 {
 super(cfs, CompactionManager.NO_GC);
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/99de2ff6/src/java/org/apache/cassandra/io/sstable/SSTableRewriter.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/SSTableRewriter.java 
b/src/java/org/apache/cassandra/io/sstable/SSTableRewriter.java
index 2dfefc4..553993a 100644
--- a/src/java/org/apache/cassandra/io/sstable/SSTableRewriter.java
+++ b/src/java/org/apache/cassandra/io/sstable/SSTableRewriter.java
@@ -37,6 +37,21 @@ import org.apache.cassandra.db.compaction.AbstractCompactedRow;
 import org.apache.cassandra.db.compaction.OperationType;
 import org.apache.cassandra.utils.CLibrary;
 
+/**
+ * Wraps one or more writers as output for rewriting one or more readers: every sstable_preemptive_open_interval_in_mb
+ * we look in the summary we're collecting for the latest writer for the penultimate key that we know to have been fully
+ * flushed to the index file, and then double check that the key is fully present in the flushed data file.
+ * Then we move the starts of each reader forwards to that point, replace them in the datatracker, and attach a runnable
+ * for on-close (i.e. when all references expire) that drops the page cache prior to that key position
+ *
+ * hard-links are created for each partially written sstable so that readers opened against them continue to work past
+ * the rename of the temporary file, which is deleted once all readers against the hard-link have been closed.
+ * If for any reason the writer is rolled over, we immediately rename and fully expose the completed file in the DataTracker.
+ *
+ * On abort we restore the original lower b

[2/3] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-04-24 Thread dbrosius
Merge branch 'cassandra-2.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/159e6dab
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/159e6dab
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/159e6dab

Branch: refs/heads/trunk
Commit: 159e6dabbbec4850fe23d54923b3ffc12d75ef58
Parents: f5fd02f 86382f6
Author: Dave Brosius 
Authored: Thu Apr 24 23:26:30 2014 -0400
Committer: Dave Brosius 
Committed: Thu Apr 24 23:26:30 2014 -0400

--
 src/java/org/apache/cassandra/cql3/statements/SelectStatement.java | 2 --
 1 file changed, 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/159e6dab/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
--



[1/3] git commit: remove dead local vars

2014-04-24 Thread dbrosius
Repository: cassandra
Updated Branches:
  refs/heads/trunk bd6431323 -> bcb3f4713


remove dead local vars


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/86382f64
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/86382f64
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/86382f64

Branch: refs/heads/trunk
Commit: 86382f6427803854e24e7ae198f2292e1b9edf09
Parents: 871a603
Author: Dave Brosius 
Authored: Thu Apr 24 23:25:29 2014 -0400
Committer: Dave Brosius 
Committed: Thu Apr 24 23:25:29 2014 -0400

--
 src/java/org/apache/cassandra/cql3/statements/SelectStatement.java | 2 --
 1 file changed, 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/86382f64/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
index 2652b29..60ed763 100644
--- a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
@@ -1451,8 +1451,6 @@ public class SelectStatement implements CQLStatement, 
MeasurableForPreparedCache
 //   1) we're in the special case of the 'tuple' notation 
from #4851 which we expand as multiple
 //  consecutive slices: in which case we're good with 
this restriction and we continue
 //   2) we have a 2ndary index, in which case we have to 
use it but can skip more validation
-boolean hasTuple = false;
-boolean hasRestrictedNotTuple = false;
 if (!(previousIsSlice && restriction.isSlice() && 
((Restriction.Slice)restriction).isPartOfTuple()))
 {
 if (hasQueriableIndex)



[3/3] git commit: Merge branch 'cassandra-2.1' into trunk

2014-04-24 Thread dbrosius
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bcb3f471
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bcb3f471
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bcb3f471

Branch: refs/heads/trunk
Commit: bcb3f4713bbae10906a4ffead0b8e80c181a9af6
Parents: bd64313 159e6da
Author: Dave Brosius 
Authored: Thu Apr 24 23:27:05 2014 -0400
Committer: Dave Brosius 
Committed: Thu Apr 24 23:27:05 2014 -0400

--
 src/java/org/apache/cassandra/cql3/statements/SelectStatement.java | 2 --
 1 file changed, 2 deletions(-)
--




[2/2] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-04-24 Thread dbrosius
Merge branch 'cassandra-2.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/159e6dab
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/159e6dab
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/159e6dab

Branch: refs/heads/cassandra-2.1
Commit: 159e6dabbbec4850fe23d54923b3ffc12d75ef58
Parents: f5fd02f 86382f6
Author: Dave Brosius 
Authored: Thu Apr 24 23:26:30 2014 -0400
Committer: Dave Brosius 
Committed: Thu Apr 24 23:26:30 2014 -0400

--
 src/java/org/apache/cassandra/cql3/statements/SelectStatement.java | 2 --
 1 file changed, 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/159e6dab/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
--



[1/2] git commit: remove dead local vars

2014-04-24 Thread dbrosius
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 f5fd02f45 -> 159e6dabb


remove dead local vars


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/86382f64
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/86382f64
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/86382f64

Branch: refs/heads/cassandra-2.1
Commit: 86382f6427803854e24e7ae198f2292e1b9edf09
Parents: 871a603
Author: Dave Brosius 
Authored: Thu Apr 24 23:25:29 2014 -0400
Committer: Dave Brosius 
Committed: Thu Apr 24 23:25:29 2014 -0400

--
 src/java/org/apache/cassandra/cql3/statements/SelectStatement.java | 2 --
 1 file changed, 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/86382f64/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
index 2652b29..60ed763 100644
--- a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
@@ -1451,8 +1451,6 @@ public class SelectStatement implements CQLStatement, 
MeasurableForPreparedCache
 //   1) we're in the special case of the 'tuple' notation 
from #4851 which we expand as multiple
 //  consecutive slices: in which case we're good with 
this restriction and we continue
 //   2) we have a 2ndary index, in which case we have to 
use it but can skip more validation
-boolean hasTuple = false;
-boolean hasRestrictedNotTuple = false;
 if (!(previousIsSlice && restriction.isSlice() && 
((Restriction.Slice)restriction).isPartOfTuple()))
 {
 if (hasQueriableIndex)



git commit: remove dead local vars

2014-04-24 Thread dbrosius
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 871a6030b -> 86382f642


remove dead local vars


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/86382f64
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/86382f64
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/86382f64

Branch: refs/heads/cassandra-2.0
Commit: 86382f6427803854e24e7ae198f2292e1b9edf09
Parents: 871a603
Author: Dave Brosius 
Authored: Thu Apr 24 23:25:29 2014 -0400
Committer: Dave Brosius 
Committed: Thu Apr 24 23:25:29 2014 -0400

--
 src/java/org/apache/cassandra/cql3/statements/SelectStatement.java | 2 --
 1 file changed, 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/86382f64/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
index 2652b29..60ed763 100644
--- a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
@@ -1451,8 +1451,6 @@ public class SelectStatement implements CQLStatement, 
MeasurableForPreparedCache
 //   1) we're in the special case of the 'tuple' notation 
from #4851 which we expand as multiple
 //  consecutive slices: in which case we're good with 
this restriction and we continue
 //   2) we have a 2ndary index, in which case we have to 
use it but can skip more validation
-boolean hasTuple = false;
-boolean hasRestrictedNotTuple = false;
 if (!(previousIsSlice && restriction.isSlice() && 
((Restriction.Slice)restriction).isPartOfTuple()))
 {
 if (hasQueriableIndex)



[jira] [Created] (CASSANDRA-7090) Add ability to set/get logging levels to nodetool

2014-04-24 Thread Jackson Chung (JIRA)
Jackson Chung created CASSANDRA-7090:


 Summary: Add ability to set/get logging levels to nodetool 
 Key: CASSANDRA-7090
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7090
 Project: Cassandra
  Issue Type: Improvement
Reporter: Jackson Chung
Priority: Minor
 Attachments: logging.diff

While it is nice to use logback (per #CASSANDRA-5883) and with the autoreload 
feature, in some cases ops/admin may not have the permission or ability to 
modify the configuration file(s). Or the files are controlled by puppet/chef so 
it is not desirable to modify them manually.

There is already an existing setLoggingLevel operation in StorageServiceMBean, so it is easy to expose that through nodetool.

What was lacking was the ability to see the current log level settings for the various loggers.

The attached diff aims to do 3 things:
# add JMX getLoggingLevels --> return a map of current loggers and the 
corresponding levels
# expose both getLoggingLevels and setLoggingLevel to nodetool. In particular, setLoggingLevel behaves as follows:
#* If both classQualifier and level are empty/null, it will reload the configuration to reset.
#* If classQualifier is not empty but level is empty/null, it will set the level to null for the given classQualifier.
#* The logback configuration must have <jmxConfigurator /> set.

The diff is based on the master branch, which uses logback, so it is not applicable to 2.0 or 1.2 (2.1 is fine). Though it would be nice to have the same ability for 2.0.
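The set/reset semantics described above can be sketched as a small decision function. This is purely illustrative: the Action enum and the helper below are hypothetical, not part of the attached diff; only the three behaviours they encode come from the description.

```java
// Hypothetical sketch of the setLoggingLevel(classQualifier, level) dispatch
// rules described above; the Action enum and this class are invented for
// illustration and are not the actual patch.
public class LoggingLevelDispatch
{
    enum Action { RELOAD_CONFIGURATION, UNSET_LOGGER_LEVEL, SET_LOGGER_LEVEL }

    static boolean isEmpty(String s)
    {
        return s == null || s.isEmpty();
    }

    /** Mirrors the behaviour described for setLoggingLevel(classQualifier, level). */
    static Action dispatch(String classQualifier, String level)
    {
        if (isEmpty(classQualifier) && isEmpty(level))
            return Action.RELOAD_CONFIGURATION;  // both empty: reload config to reset
        if (isEmpty(level))
            return Action.UNSET_LOGGER_LEVEL;    // qualifier only: null out that logger's level
        return Action.SET_LOGGER_LEVEL;          // normal case: set the given level
    }

    public static void main(String[] args)
    {
        System.out.println(dispatch(null, null));                      // RELOAD_CONFIGURATION
        System.out.println(dispatch("org.apache.cassandra", null));    // UNSET_LOGGER_LEVEL
        System.out.println(dispatch("org.apache.cassandra", "DEBUG")); // SET_LOGGER_LEVEL
    }
}
```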





[jira] [Commented] (CASSANDRA-6694) Slightly More Off-Heap Memtables

2014-04-24 Thread Pavel Yaskevich (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13980585#comment-13980585
 ] 

Pavel Yaskevich commented on CASSANDRA-6694:


I have pushed allocation pools and minor refactoring to [my branch|https://github.com/xedin/cassandra/compare/CASSANDRA-6694], and also addressed some of the problems from [~iamaleksey]'s comment, except the concerns about the CellName implementation for AbstractNativeCell, which we both share.

> Slightly More Off-Heap Memtables
> 
>
> Key: CASSANDRA-6694
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6694
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Benedict
>Assignee: Benedict
>  Labels: performance
> Fix For: 2.1 beta2
>
>
> The Off Heap memtables introduced in CASSANDRA-6689 don't go far enough, as 
> the on-heap overhead is still very large. It should not be tremendously 
> difficult to extend these changes so that we allocate entire Cells off-heap, 
> instead of multiple BBs per Cell (with all their associated overhead).
> The goal (if possible) is to reach an overhead of 16-bytes per Cell (plus 4-6 
> bytes per cell on average for the btree overhead, for a total overhead of 
> around 20-22 bytes). This translates to 8-byte object overhead, 4-byte 
> address (we will do alignment tricks like the VM to allow us to address a 
> reasonably large memory space, although this trick is unlikely to last us 
> forever, at which point we will have to bite the bullet and accept a 24-byte 
> per cell overhead), and 4-byte object reference for maintaining our internal 
> list of allocations, which is unfortunately necessary since we cannot safely 
> (and cheaply) walk the object graph we allocate otherwise, which is necessary 
> for (allocation-) compaction and pointer rewriting.
> The ugliest thing here is going to be implementing the various CellName 
> instances so that they may be backed by native memory OR heap memory.
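The per-cell overhead arithmetic quoted above can be sanity-checked with a small calculation. The constants are taken directly from the description (8-byte object header, 4-byte aligned address, 4-byte allocation-list reference, plus 4-6 bytes of btree overhead); the class itself is invented for illustration.

```java
// Sanity check of the per-cell overhead arithmetic from the issue
// description: 8 + 4 + 4 = 16 bytes fixed, plus 4-6 bytes of btree
// overhead per cell on average, for roughly 20-22 bytes total.
public class CellOverheadMath
{
    static final int OBJECT_HEADER_BYTES = 8; // JVM object header
    static final int ADDRESS_BYTES = 4;       // aligned off-heap address
    static final int REFERENCE_BYTES = 4;     // entry in the internal allocation list

    static int fixedOverhead()
    {
        return OBJECT_HEADER_BYTES + ADDRESS_BYTES + REFERENCE_BYTES;
    }

    static int totalOverhead(int btreeOverheadBytes)
    {
        return fixedOverhead() + btreeOverheadBytes;
    }

    public static void main(String[] args)
    {
        System.out.println(fixedOverhead());  // 16
        System.out.println(totalOverhead(4)); // 20 (low end of btree overhead)
        System.out.println(totalOverhead(6)); // 22 (high end of btree overhead)
    }
}
```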





[4/4] git commit: Merge branch 'cassandra-2.1' into trunk

2014-04-24 Thread aleksey
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bd643132
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bd643132
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bd643132

Branch: refs/heads/trunk
Commit: bd64313231a91f879697450998ca5cffa49bcc2f
Parents: 16bb16e f5fd02f
Author: Aleksey Yeschenko 
Authored: Fri Apr 25 03:43:29 2014 +0300
Committer: Aleksey Yeschenko 
Committed: Fri Apr 25 03:43:29 2014 +0300

--
 CHANGES.txt |  3 +-
 .../cassandra/cache/RefCountedMemory.java   |  7 ++-
 .../cassandra/cache/SerializingCache.java   | 53 +++-
 3 files changed, 48 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/bd643132/CHANGES.txt
--



[3/4] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-04-24 Thread aleksey
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
CHANGES.txt
src/java/org/apache/cassandra/cache/RefCountedMemory.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f5fd02f4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f5fd02f4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f5fd02f4

Branch: refs/heads/trunk
Commit: f5fd02f453cd06573b7a4a70c03f6435727705b8
Parents: ab87f83 871a603
Author: Aleksey Yeschenko 
Authored: Fri Apr 25 03:42:53 2014 +0300
Committer: Aleksey Yeschenko 
Committed: Fri Apr 25 03:42:53 2014 +0300

--
 CHANGES.txt |  3 +-
 .../cassandra/cache/RefCountedMemory.java   |  7 ++-
 .../cassandra/cache/SerializingCache.java   | 53 +++-
 3 files changed, 48 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f5fd02f4/CHANGES.txt
--
diff --cc CHANGES.txt
index 11630e7,73c5034..9c78e33
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,59 -1,4 +1,58 @@@
 -2.0.8
 +2.1.0-beta2
 + * Increase default CL space to 8GB (CASSANDRA-7031)
 + * Add range tombstones to read repair digests (CASSANDRA-6863)
 + * Fix BTree.clear for large updates (CASSANDRA-6943)
 + * Fail write instead of logging a warning when unable to append to CL
 +   (CASSANDRA-6764)
 + * Eliminate possibility of CL segment appearing twice in active list 
 +   (CASSANDRA-6557)
 + * Apply DONTNEED fadvise to commitlog segments (CASSANDRA-6759)
 + * Switch CRC component to Adler and include it for compressed sstables 
 +   (CASSANDRA-4165)
 + * Allow cassandra-stress to set compaction strategy options (CASSANDRA-6451)
 + * Add broadcast_rpc_address option to cassandra.yaml (CASSANDRA-5899)
 + * Auto reload GossipingPropertyFileSnitch config (CASSANDRA-5897)
 + * Fix overflow of memtable_total_space_in_mb (CASSANDRA-6573)
 + * Fix ABTC NPE and apply update function correctly (CASSANDRA-6692)
 + * Allow nodetool to use a file or prompt for password (CASSANDRA-6660)
 + * Fix AIOOBE when concurrently accessing ABSC (CASSANDRA-6742)
 + * Fix assertion error in ALTER TYPE RENAME (CASSANDRA-6705)
 + * Scrub should not always clear out repaired status (CASSANDRA-5351)
 + * Improve handling of range tombstone for wide partitions (CASSANDRA-6446)
 + * Fix ClassCastException for compact table with composites (CASSANDRA-6738)
 + * Fix potentially repairing with wrong nodes (CASSANDRA-6808)
 + * Change caching option syntax (CASSANDRA-6745)
 + * Fix stress to do proper counter reads (CASSANDRA-6835)
 + * Fix help message for stress counter_write (CASSANDRA-6824)
 + * Fix stress smart Thrift client to pick servers correctly (CASSANDRA-6848)
 + * Add logging levels (minimal, normal or verbose) to stress tool 
(CASSANDRA-6849)
 + * Fix race condition in Batch CLE (CASSANDRA-6860)
 + * Improve cleanup/scrub/upgradesstables failure handling (CASSANDRA-6774)
 + * ByteBuffer write() methods for serializing sstables (CASSANDRA-6781)
 + * Proper compare function for CollectionType (CASSANDRA-6783)
 + * Update native server to Netty 4 (CASSANDRA-6236)
 + * Fix off-by-one error in stress (CASSANDRA-6883)
 + * Make OpOrder AutoCloseable (CASSANDRA-6901)
 + * Remove sync repair JMX interface (CASSANDRA-6900)
 + * Add multiple memory allocation options for memtables (CASSANDRA-6689)
 + * Remove adjusted op rate from stress output (CASSANDRA-6921)
 + * Add optimized CF.hasColumns() implementations (CASSANDRA-6941)
 + * Serialize batchlog mutations with the version of the target node
 +   (CASSANDRA-6931)
 + * Optimize CounterColumn#reconcile() (CASSANDRA-6953)
 + * Properly remove 1.2 sstable support in 2.1 (CASSANDRA-6869)
 + * Lock counter cells, not partitions (CASSANDRA-6880)
 + * Track presence of legacy counter shards in sstables (CASSANDRA-6888)
 + * Ensure safe resource cleanup when replacing sstables (CASSANDRA-6912)
 + * Add failure handler to async callback (CASSANDRA-6747)
 + * Fix AE when closing SSTable without releasing reference (CASSANDRA-7000)
 + * Clean up IndexInfo on keyspace/table drops (CASSANDRA-6924)
 + * Only snapshot relative SSTables when sequential repair (CASSANDRA-7024)
 + * Require nodetool rebuild_index to specify index names (CASSANDRA-7038)
 + * fix cassandra stress errors on reads with native protocol (CASSANDRA-7033)
 + * Use OpOrder to guard sstable references for reads (CASSANDRA-6919)
 + * Preemptive opening of compaction result (CASSANDRA-6916)
 +Merged from 2.0:
- 2.0.8
   * Set JMX RMI port to 7199 (CASSANDRA-7087)
   * Use LOCAL_QUORUM for data reads at LOCAL_SERIAL (CASSANDRA-6939)
   * Log a warning for large batches (CASSANDRA-6487)
@@@ -124,9 -78,7 +123,11 @@@ Merged from 1.2
   * Schedule schema

[2/4] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2014-04-24 Thread aleksey
Merge branch 'cassandra-1.2' into cassandra-2.0

Conflicts:
CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/871a6030
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/871a6030
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/871a6030

Branch: refs/heads/trunk
Commit: 871a6030bc6ed475e652e67a8631338010c607dc
Parents: b9bb2c8 72203c5
Author: Aleksey Yeschenko 
Authored: Fri Apr 25 03:19:17 2014 +0300
Committer: Aleksey Yeschenko 
Committed: Fri Apr 25 03:19:17 2014 +0300

--
 CHANGES.txt |  2 +
 .../cassandra/cache/RefCountedMemory.java   |  7 ++-
 .../cassandra/cache/SerializingCache.java   | 53 +++-
 3 files changed, 49 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/871a6030/CHANGES.txt
--
diff --cc CHANGES.txt
index 0b6aeaa,b3470bf..73c5034
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -12,61 -10,11 +12,63 @@@ Merged from 1.2
   * Fix CQLSH parsing of functions and BLOB literals (CASSANDRA-7018)
   * Require nodetool rebuild_index to specify index names (CASSANDRA-7038)
   * Ensure that batchlog and hint timeouts do not produce hints 
(CASSANDRA-7058)
 - * Don't shut MessagingService down when replacing a node (CASSANDRA-6476)
+  * Always clean up references in SerializingCache (CASSANDRA-6994)
++ * Don't shut MessagingService down when replacing a node (CASSANDRA-6476)
  
  
 -1.2.16
 +2.0.7
 + * Put nodes in hibernate when join_ring is false (CASSANDRA-6961)
 + * Avoid early loading of non-system keyspaces before compaction-leftovers 
 +   cleanup at startup (CASSANDRA-6913)
 + * Restrict Windows to parallel repairs (CASSANDRA-6907)
 + * (Hadoop) Allow manually specifying start/end tokens in CFIF 
(CASSANDRA-6436)
 + * Fix NPE in MeteredFlusher (CASSANDRA-6820)
 + * Fix race processing range scan responses (CASSANDRA-6820)
 + * Allow deleting snapshots from dropped keyspaces (CASSANDRA-6821)
 + * Add uuid() function (CASSANDRA-6473)
 + * Omit tombstones from schema digests (CASSANDRA-6862)
 + * Include correct consistencyLevel in LWT timeout (CASSANDRA-6884)
 + * Lower chances for losing new SSTables during nodetool refresh and
 +   ColumnFamilyStore.loadNewSSTables (CASSANDRA-6514)
 + * Add support for DELETE ... IF EXISTS to CQL3 (CASSANDRA-5708)
 + * Update hadoop_cql3_word_count example (CASSANDRA-6793)
 + * Fix handling of RejectedExecution in sync Thrift server (CASSANDRA-6788)
 + * Log more information when exceeding tombstone_warn_threshold (CASSANDRA-6865)
 + * Fix truncate to not abort due to unreachable fat clients (CASSANDRA-6864)
 + * Fix schema concurrency exceptions (CASSANDRA-6841)
 + * Fix leaking validator FH in StreamWriter (CASSANDRA-6832)
 + * Fix saving triggers to schema (CASSANDRA-6789)
 + * Fix trigger mutations when base mutation list is immutable (CASSANDRA-6790)
 + * Fix accounting in FileCacheService to allow re-using RAR (CASSANDRA-6838)
 + * Fix static counter columns (CASSANDRA-6827)
 + * Restore expiring->deleted (cell) compaction optimization (CASSANDRA-6844)
 + * Fix CompactionManager.needsCleanup (CASSANDRA-6845)
 + * Correctly compare BooleanType values other than 0 and 1 (CASSANDRA-6779)
 + * Read message id as string from earlier versions (CASSANDRA-6840)
 + * Properly use the Paxos consistency for (non-protocol) batch (CASSANDRA-6837)
 + * Add paranoid disk failure option (CASSANDRA-6646)
 + * Improve PerRowSecondaryIndex performance (CASSANDRA-6876)
 + * Extend triggers to support CAS updates (CASSANDRA-6882)
 + * Static columns with IF NOT EXISTS don't always work as expected (CASSANDRA-6873)
 + * Fix paging with SELECT DISTINCT (CASSANDRA-6857)
 + * Fix UnsupportedOperationException on CAS timeout (CASSANDRA-6923)
 + * Improve MeteredFlusher handling of MF-unaffected column families
 +   (CASSANDRA-6867)
 + * Add CqlRecordReader using native pagination (CASSANDRA-6311)
 + * Add QueryHandler interface (CASSANDRA-6659)
 + * Track liveRatio per-memtable, not per-CF (CASSANDRA-6945)
 + * Make sure upgradesstables keeps sstable level (CASSANDRA-6958)
 + * Fix LIMIT with static columns (CASSANDRA-6956)
 + * Fix clash with CQL column name in thrift validation (CASSANDRA-6892)
 + * Fix error with super columns in mixed 1.2-2.0 clusters (CASSANDRA-6966)
 + * Fix bad skip of sstables on slice query with composite start/finish (CASSANDRA-6825)
 + * Fix unintended update with conditional statement (CASSANDRA-6893)
 + * Fix map element access in IF (CASSANDRA-6914)
 + * Avoid costly range calculations for range queries on system keyspaces
 +   (CASSANDRA-6906)
 + * Fix SSTable not released if stream session fails (CASSANDRA-6818)
 + * Avoid build failure 

[1/4] git commit: Fix CFMetaData#getColumnDefinitionFromColumnName()

2014-04-24 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/trunk 16bb16ed2 -> bd6431323


Fix CFMetaData#getColumnDefinitionFromColumnName()

patch by Benedict Elliott Smith; reviewed by Aleksey Yeschenko for
CASSANDRA-7074


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/72203c50
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/72203c50
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/72203c50

Branch: refs/heads/trunk
Commit: 72203c503767618ba89c6ed03c0ed091dc6e701b
Parents: 9359b7a
Author: belliottsmith 
Authored: Fri Apr 25 03:01:41 2014 +0300
Committer: Aleksey Yeschenko 
Committed: Fri Apr 25 03:14:56 2014 +0300

--
 CHANGES.txt |  1 +
 .../cassandra/cache/SerializingCache.java   | 52 +++-
 2 files changed, 42 insertions(+), 11 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/72203c50/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 69e9d37..b3470bf 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -11,6 +11,7 @@
  * Require nodetool rebuild_index to specify index names (CASSANDRA-7038)
  * Ensure that batchlog and hint timeouts do not produce hints (CASSANDRA-7058)
  * Don't shut MessagingService down when replacing a node (CASSANDRA-6476)
+ * Always clean up references in SerializingCache (CASSANDRA-6994)
 
 
 1.2.16

http://git-wip-us.apache.org/repos/asf/cassandra/blob/72203c50/src/java/org/apache/cassandra/cache/SerializingCache.java
--
diff --git a/src/java/org/apache/cassandra/cache/SerializingCache.java b/src/java/org/apache/cassandra/cache/SerializingCache.java
index c7430d2..58da56b 100644
--- a/src/java/org/apache/cassandra/cache/SerializingCache.java
+++ b/src/java/org/apache/cassandra/cache/SerializingCache.java
@@ -20,6 +20,7 @@ package org.apache.cassandra.cache;
 import java.io.IOException;
 import java.util.Set;
 
+import com.google.common.base.Throwables;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -92,7 +93,7 @@ public class SerializingCache implements ICache
 }
 catch (IOException e)
 {
-logger.debug("Cannot fetch in memory data, we will failback to read from disk ", e);
+logger.debug("Cannot fetch in memory data, we will fallback to read from disk ", e);
 return null;
 }
 }
@@ -119,6 +120,7 @@ public class SerializingCache implements ICache
 }
 catch (IOException e)
 {
+freeableMemory.unreference();
 throw new RuntimeException(e);
 }
 return freeableMemory;
@@ -177,7 +179,17 @@ public class SerializingCache implements ICache
 if (mem == null)
 return; // out of memory.  never mind.
 
-RefCountedMemory old = map.put(key, mem);
+RefCountedMemory old;
+try
+{
+old = map.put(key, mem);
+}
+catch (Throwable t)
+{
+mem.unreference();
+throw Throwables.propagate(t);
+}
+
 if (old != null)
 old.unreference();
 }
@@ -188,7 +200,17 @@ public class SerializingCache implements ICache
 if (mem == null)
 return false; // out of memory.  never mind.
 
-RefCountedMemory old = map.putIfAbsent(key, mem);
+RefCountedMemory old;
+try
+{
+old = map.putIfAbsent(key, mem);
+}
+catch (Throwable t)
+{
+mem.unreference();
+throw Throwables.propagate(t);
+}
+
 if (old != null)
// the new value was not put, we've uselessly allocated some memory, free it
 mem.unreference();
@@ -202,24 +224,32 @@ public class SerializingCache implements ICache
 if (old == null)
 return false;
 
+V oldValue;
+// reference old guy before de-serializing
+if (!old.reference())
+return false; // we have already freed hence noop.
+
+oldValue = deserialize(old);
+old.unreference();
+
+if (!oldValue.equals(oldToReplace))
+return false;
+
 // see if the old value matches the one we want to replace
 RefCountedMemory mem = serialize(value);
 if (mem == null)
 return false; // out of memory.  never mind.
 
-V oldValue;
-// reference old guy before de-serializing
-if (!old.reference())
-return false; // we have already freed hence noop.
+boolean success;
 try
 {
- oldValue = deserialize(old);
+success = map.replace(key, old, mem);
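The recurring pattern in the patch above — any allocation that fails to make it into the map must be unreferenced before the exception propagates, or the memory leaks — can be sketched in isolation. This is a minimal illustration only; `Counted`, `put`, and the failing map are hypothetical stand-ins, not Cassandra's actual `RefCountedMemory`/`SerializingCache` API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;

public class RefCleanupSketch
{
    // Stand-in for RefCountedMemory: starts with one reference held by the caller.
    static class Counted
    {
        final AtomicInteger refs = new AtomicInteger(1);
        void unreference() { refs.decrementAndGet(); }
        int count() { return refs.get(); }
    }

    // A map whose put() always fails, to simulate the failure path the patch guards.
    static final Map<String, Counted> map = new HashMap<String, Counted>()
    {
        @Override
        public Counted put(String k, Counted v)
        {
            throw new IllegalStateException("simulated map failure");
        }
    };

    static void put(String key, Counted mem)
    {
        Counted old;
        try
        {
            old = map.put(key, mem);
        }
        catch (Throwable t)
        {
            // the new value never made it into the map: release it before rethrowing
            mem.unreference();
            throw new RuntimeException(t);
        }
        // the displaced value, if any, is no longer reachable from the map
        if (old != null)
            old.unreference();
    }

    public static void main(String[] args)
    {
        Counted mem = new Counted();
        try { put("k", mem); } catch (RuntimeException expected) { }
        System.out.println(mem.count()); // prints 0: the failed put did not leak
    }
}
```

The same shape guards `putIfAbsent` in the patch; without the catch block, a `Throwable` from the map would leave the freshly serialized memory referenced forever.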
  

[3/3] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-04-24 Thread aleksey
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
CHANGES.txt
src/java/org/apache/cassandra/cache/RefCountedMemory.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f5fd02f4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f5fd02f4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f5fd02f4

Branch: refs/heads/cassandra-2.1
Commit: f5fd02f453cd06573b7a4a70c03f6435727705b8
Parents: ab87f83 871a603
Author: Aleksey Yeschenko 
Authored: Fri Apr 25 03:42:53 2014 +0300
Committer: Aleksey Yeschenko 
Committed: Fri Apr 25 03:42:53 2014 +0300

--
 CHANGES.txt |  3 +-
 .../cassandra/cache/RefCountedMemory.java   |  7 ++-
 .../cassandra/cache/SerializingCache.java   | 53 +++-
 3 files changed, 48 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f5fd02f4/CHANGES.txt
--
diff --cc CHANGES.txt
index 11630e7,73c5034..9c78e33
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,59 -1,4 +1,58 @@@
 -2.0.8
 +2.1.0-beta2
 + * Increase default CL space to 8GB (CASSANDRA-7031)
 + * Add range tombstones to read repair digests (CASSANDRA-6863)
 + * Fix BTree.clear for large updates (CASSANDRA-6943)
 + * Fail write instead of logging a warning when unable to append to CL
 +   (CASSANDRA-6764)
 + * Eliminate possibility of CL segment appearing twice in active list 
 +   (CASSANDRA-6557)
 + * Apply DONTNEED fadvise to commitlog segments (CASSANDRA-6759)
 + * Switch CRC component to Adler and include it for compressed sstables 
 +   (CASSANDRA-4165)
 + * Allow cassandra-stress to set compaction strategy options (CASSANDRA-6451)
 + * Add broadcast_rpc_address option to cassandra.yaml (CASSANDRA-5899)
 + * Auto reload GossipingPropertyFileSnitch config (CASSANDRA-5897)
 + * Fix overflow of memtable_total_space_in_mb (CASSANDRA-6573)
 + * Fix ABTC NPE and apply update function correctly (CASSANDRA-6692)
 + * Allow nodetool to use a file or prompt for password (CASSANDRA-6660)
 + * Fix AIOOBE when concurrently accessing ABSC (CASSANDRA-6742)
 + * Fix assertion error in ALTER TYPE RENAME (CASSANDRA-6705)
 + * Scrub should not always clear out repaired status (CASSANDRA-5351)
 + * Improve handling of range tombstone for wide partitions (CASSANDRA-6446)
 + * Fix ClassCastException for compact table with composites (CASSANDRA-6738)
 + * Fix potentially repairing with wrong nodes (CASSANDRA-6808)
 + * Change caching option syntax (CASSANDRA-6745)
 + * Fix stress to do proper counter reads (CASSANDRA-6835)
 + * Fix help message for stress counter_write (CASSANDRA-6824)
 + * Fix stress smart Thrift client to pick servers correctly (CASSANDRA-6848)
 + * Add logging levels (minimal, normal or verbose) to stress tool (CASSANDRA-6849)
 + * Fix race condition in Batch CLE (CASSANDRA-6860)
 + * Improve cleanup/scrub/upgradesstables failure handling (CASSANDRA-6774)
 + * ByteBuffer write() methods for serializing sstables (CASSANDRA-6781)
 + * Proper compare function for CollectionType (CASSANDRA-6783)
 + * Update native server to Netty 4 (CASSANDRA-6236)
 + * Fix off-by-one error in stress (CASSANDRA-6883)
 + * Make OpOrder AutoCloseable (CASSANDRA-6901)
 + * Remove sync repair JMX interface (CASSANDRA-6900)
 + * Add multiple memory allocation options for memtables (CASSANDRA-6689)
 + * Remove adjusted op rate from stress output (CASSANDRA-6921)
 + * Add optimized CF.hasColumns() implementations (CASSANDRA-6941)
 + * Serialize batchlog mutations with the version of the target node
 +   (CASSANDRA-6931)
 + * Optimize CounterColumn#reconcile() (CASSANDRA-6953)
 + * Properly remove 1.2 sstable support in 2.1 (CASSANDRA-6869)
 + * Lock counter cells, not partitions (CASSANDRA-6880)
 + * Track presence of legacy counter shards in sstables (CASSANDRA-6888)
 + * Ensure safe resource cleanup when replacing sstables (CASSANDRA-6912)
 + * Add failure handler to async callback (CASSANDRA-6747)
 + * Fix AE when closing SSTable without releasing reference (CASSANDRA-7000)
 + * Clean up IndexInfo on keyspace/table drops (CASSANDRA-6924)
 + * Only snapshot relative SSTables when sequential repair (CASSANDRA-7024)
 + * Require nodetool rebuild_index to specify index names (CASSANDRA-7038)
 + * fix cassandra stress errors on reads with native protocol (CASSANDRA-7033)
 + * Use OpOrder to guard sstable references for reads (CASSANDRA-6919)
 + * Preemptive opening of compaction result (CASSANDRA-6916)
 +Merged from 2.0:
- 2.0.8
   * Set JMX RMI port to 7199 (CASSANDRA-7087)
   * Use LOCAL_QUORUM for data reads at LOCAL_SERIAL (CASSANDRA-6939)
   * Log a warning for large batches (CASSANDRA-6487)
@@@ -124,9 -78,7 +123,11 @@@ Merged from 1.2
   * Schedul

[2/3] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2014-04-24 Thread aleksey
Merge branch 'cassandra-1.2' into cassandra-2.0

Conflicts:
CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/871a6030
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/871a6030
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/871a6030

Branch: refs/heads/cassandra-2.1
Commit: 871a6030bc6ed475e652e67a8631338010c607dc
Parents: b9bb2c8 72203c5
Author: Aleksey Yeschenko 
Authored: Fri Apr 25 03:19:17 2014 +0300
Committer: Aleksey Yeschenko 
Committed: Fri Apr 25 03:19:17 2014 +0300

--
 CHANGES.txt |  2 +
 .../cassandra/cache/RefCountedMemory.java   |  7 ++-
 .../cassandra/cache/SerializingCache.java   | 53 +++-
 3 files changed, 49 insertions(+), 13 deletions(-)
--



[1/3] git commit: Fix CFMetaData#getColumnDefinitionFromColumnName()

2014-04-24 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 ab87f8334 -> f5fd02f45


Fix CFMetaData#getColumnDefinitionFromColumnName()

patch by Benedict Elliott Smith; reviewed by Aleksey Yeschenko for
CASSANDRA-7074


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/72203c50
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/72203c50
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/72203c50

Branch: refs/heads/cassandra-2.1
Commit: 72203c503767618ba89c6ed03c0ed091dc6e701b
Parents: 9359b7a
Author: belliottsmith 
Authored: Fri Apr 25 03:01:41 2014 +0300
Committer: Aleksey Yeschenko 
Committed: Fri Apr 25 03:14:56 2014 +0300

--
 CHANGES.txt |  1 +
 .../cassandra/cache/SerializingCache.java   | 52 +++-
 2 files changed, 42 insertions(+), 11 deletions(-)
--



[1/2] git commit: Fix CFMetaData#getColumnDefinitionFromColumnName()

2014-04-24 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 b9bb2c886 -> 871a6030b


Fix CFMetaData#getColumnDefinitionFromColumnName()

patch by Benedict Elliott Smith; reviewed by Aleksey Yeschenko for
CASSANDRA-7074


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/72203c50
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/72203c50
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/72203c50

Branch: refs/heads/cassandra-2.0
Commit: 72203c503767618ba89c6ed03c0ed091dc6e701b
Parents: 9359b7a
Author: belliottsmith 
Authored: Fri Apr 25 03:01:41 2014 +0300
Committer: Aleksey Yeschenko 
Committed: Fri Apr 25 03:14:56 2014 +0300

--
 CHANGES.txt |  1 +
 .../cassandra/cache/SerializingCache.java   | 52 +++-
 2 files changed, 42 insertions(+), 11 deletions(-)
--



[2/2] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2014-04-24 Thread aleksey
Merge branch 'cassandra-1.2' into cassandra-2.0

Conflicts:
CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/871a6030
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/871a6030
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/871a6030

Branch: refs/heads/cassandra-2.0
Commit: 871a6030bc6ed475e652e67a8631338010c607dc
Parents: b9bb2c8 72203c5
Author: Aleksey Yeschenko 
Authored: Fri Apr 25 03:19:17 2014 +0300
Committer: Aleksey Yeschenko 
Committed: Fri Apr 25 03:19:17 2014 +0300

--
 CHANGES.txt |  2 +
 .../cassandra/cache/RefCountedMemory.java   |  7 ++-
 .../cassandra/cache/SerializingCache.java   | 53 +++-
 3 files changed, 49 insertions(+), 13 deletions(-)
--



git commit: Fix CFMetaData#getColumnDefinitionFromColumnName()

2014-04-24 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-1.2 9359b7a31 -> 72203c503


Fix CFMetaData#getColumnDefinitionFromColumnName()

patch by Benedict Elliott Smith; reviewed by Aleksey Yeschenko for
CASSANDRA-7074


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/72203c50
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/72203c50
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/72203c50

Branch: refs/heads/cassandra-1.2
Commit: 72203c503767618ba89c6ed03c0ed091dc6e701b
Parents: 9359b7a
Author: belliottsmith 
Authored: Fri Apr 25 03:01:41 2014 +0300
Committer: Aleksey Yeschenko 
Committed: Fri Apr 25 03:14:56 2014 +0300

--
 CHANGES.txt |  1 +
 .../cassandra/cache/SerializingCache.java   | 52 +++-
 2 files changed, 42 insertions(+), 11 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/72203c50/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 69e9d37..b3470bf 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -11,6 +11,7 @@
  * Require nodetool rebuild_index to specify index names (CASSANDRA-7038)
  * Ensure that batchlog and hint timeouts do not produce hints (CASSANDRA-7058)
  * Don't shut MessagingService down when replacing a node (CASSANDRA-6476)
+ * Always clean up references in SerializingCache (CASSANDRA-6994)
 
 
 1.2.16

http://git-wip-us.apache.org/repos/asf/cassandra/blob/72203c50/src/java/org/apache/cassandra/cache/SerializingCache.java
--
diff --git a/src/java/org/apache/cassandra/cache/SerializingCache.java 
b/src/java/org/apache/cassandra/cache/SerializingCache.java
index c7430d2..58da56b 100644
--- a/src/java/org/apache/cassandra/cache/SerializingCache.java
+++ b/src/java/org/apache/cassandra/cache/SerializingCache.java
@@ -20,6 +20,7 @@ package org.apache.cassandra.cache;
 import java.io.IOException;
 import java.util.Set;
 
+import com.google.common.base.Throwables;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -92,7 +93,7 @@ public class SerializingCache implements ICache
 }
 catch (IOException e)
 {
-logger.debug("Cannot fetch in memory data, we will failback to 
read from disk ", e);
+logger.debug("Cannot fetch in memory data, we will fallback to 
read from disk ", e);
 return null;
 }
 }
@@ -119,6 +120,7 @@ public class SerializingCache implements ICache
 }
 catch (IOException e)
 {
+freeableMemory.unreference();
 throw new RuntimeException(e);
 }
 return freeableMemory;
@@ -177,7 +179,17 @@ public class SerializingCache implements ICache
 if (mem == null)
 return; // out of memory.  never mind.
 
-RefCountedMemory old = map.put(key, mem);
+RefCountedMemory old;
+try
+{
+old = map.put(key, mem);
+}
+catch (Throwable t)
+{
+mem.unreference();
+throw Throwables.propagate(t);
+}
+
 if (old != null)
 old.unreference();
 }
@@ -188,7 +200,17 @@ public class SerializingCache implements ICache
 if (mem == null)
 return false; // out of memory.  never mind.
 
-RefCountedMemory old = map.putIfAbsent(key, mem);
+RefCountedMemory old;
+try
+{
+old = map.putIfAbsent(key, mem);
+}
+catch (Throwable t)
+{
+mem.unreference();
+throw Throwables.propagate(t);
+}
+
 if (old != null)
 // the new value was not put, we've uselessly allocated some 
memory, free it
 mem.unreference();
@@ -202,24 +224,32 @@ public class SerializingCache implements ICache
 if (old == null)
 return false;
 
+V oldValue;
+// reference old guy before de-serializing
+if (!old.reference())
+return false; // we have already freed hence noop.
+
+oldValue = deserialize(old);
+old.unreference();
+
+if (!oldValue.equals(oldToReplace))
+return false;
+
 // see if the old value matches the one we want to replace
 RefCountedMemory mem = serialize(value);
 if (mem == null)
 return false; // out of memory.  never mind.
 
-V oldValue;
-// reference old guy before de-serializing
-if (!old.reference())
-return false; // we have already freed hence noop.
+boolean success;
 try
 {
- oldValue = deserialize(old);
+success = map.replace(key, o
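The hunks above all apply one pattern: a freshly serialized RefCountedMemory must be unreferenced on every failure path, or its off-heap allocation leaks. A minimal, self-contained sketch of that pattern, using hypothetical simplified types (not Cassandra's actual RefCountedMemory or map classes, and a plain rethrow in place of Guava's Throwables.propagate):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical simplified stand-in for Cassandra's RefCountedMemory.
class RefCountedMemory {
    private final AtomicInteger refs = new AtomicInteger(1); // creator holds one reference
    volatile boolean freed;

    // Take another reference; returns false if the memory was already freed.
    boolean reference() {
        int n;
        do {
            n = refs.get();
            if (n <= 0)
                return false;
        } while (!refs.compareAndSet(n, n + 1));
        return true;
    }

    void unreference() {
        if (refs.decrementAndGet() == 0)
            freed = true; // real code would release the off-heap allocation here
    }
}

class CacheSketch {
    final ConcurrentHashMap<String, RefCountedMemory> map = new ConcurrentHashMap<>();

    void put(String key, RefCountedMemory mem) {
        RefCountedMemory old;
        try {
            old = map.put(key, mem);
        } catch (Throwable t) {
            mem.unreference(); // the fix: don't leak mem if the map operation throws
            throw t;
        }
        if (old != null)
            old.unreference(); // the displaced value gives up the map's reference
    }
}
```

Displacing an entry drops the map's reference to the old value, so once no reader still holds it, its memory is released.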

[jira] [Commented] (CASSANDRA-6525) Cannot select data which using "WHERE"

2014-04-24 Thread Shyam K Gopal (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13980524#comment-13980524
 ] 

Shyam K Gopal commented on CASSANDRA-6525:
--

FYI: the same issue also exists in version 2.0.7. 

> Cannot select data which using "WHERE"
> --
>
> Key: CASSANDRA-6525
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6525
> Project: Cassandra
>  Issue Type: Bug
> Environment: Linux RHEL5
> RAM: 1GB
> Cassandra 2.0.3
> CQL spec 3.1.1
> Thrift protocol 19.38.0
>Reporter: Silence Chow
> Attachments: 6981_test.py
>
>
> I am developing a system on my single machine using VMware Player with 1GB 
> RAM and a 1GB HDD. When I select all data, I don't have any problems. But when 
> I use "WHERE", even though there are just below 10 records, I get this error 
> in the system log:
> ERROR [ReadStage:41] 2013-12-25 18:52:11,913 CassandraDaemon.java (line 187) 
> Exception in thread Thread[ReadStage:41,5,main]
> java.io.IOError: java.io.EOFException
> at org.apache.cassandra.db.Column$1.computeNext(Column.java:79)
> at org.apache.cassandra.db.Column$1.computeNext(Column.java:64)
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
> at 
> org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:88)
> at 
> org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:37)
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
> at 
> org.apache.cassandra.db.columniterator.SSTableSliceIterator.hasNext(SSTableSliceIterator.java:82)
> at 
> org.apache.cassandra.db.filter.QueryFilter$2.getNext(QueryFilter.java:157)
> at 
> org.apache.cassandra.db.filter.QueryFilter$2.hasNext(QueryFilter.java:140)
> at 
> org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:144)
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.<init>(MergeIterator.java:87)
> at org.apache.cassandra.utils.MergeIterator.get(MergeIterator.java:46)
> at 
> org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:120)
> at 
> org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
> at 
> org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:72)
> at 
> org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:297)
> at 
> org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:53)
> at 
> org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1487)
> at 
> org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1306)
> at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:332)
> at 
> org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:65)
> at 
> org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1401)
> at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1936)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
> at java.lang.Thread.run(Unknown Source)
> Caused by: java.io.EOFException
> at java.io.RandomAccessFile.readFully(Unknown Source)
> at java.io.RandomAccessFile.readFully(Unknown Source)
> at 
> org.apache.cassandra.io.util.RandomAccessReader.readBytes(RandomAccessReader.java:348)
> at 
> org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:392)
> at 
> org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(ByteBufferUtil.java:371)
> at 
> org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:74)
> at org.apache.cassandra.db.Column$1.computeNext(Column.java:75)
> ... 27 more
> E.g.
> SELECT * FROM table;
> It's fine.
> SELECT * FROM table WHERE field = 'N';
> field is the partition key.
> It says "Request did not complete within rpc_timeout." in cqlsh.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-6875) CQL3: select multiple CQL rows in a single partition using IN

2014-04-24 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-6875:
---

Reviewer: Sylvain Lebresne

> CQL3: select multiple CQL rows in a single partition using IN
> -
>
> Key: CASSANDRA-6875
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6875
> Project: Cassandra
>  Issue Type: Bug
>  Components: API
>Reporter: Nicolas Favre-Felix
>Assignee: Tyler Hobbs
>Priority: Minor
> Fix For: 2.0.8
>
>
> In the spirit of CASSANDRA-4851 and to bring CQL to parity with Thrift, it is 
> important to support reading several distinct CQL rows from a given partition 
> using a distinct set of "coordinates" for these rows within the partition.
> CASSANDRA-4851 introduced a range scan over the multi-dimensional space of 
> clustering keys. We also need to support a "multi-get" of CQL rows, 
> potentially using the "IN" keyword to define a set of clustering keys to 
> fetch at once.
> (reusing the same example\:)
> Consider the following table:
> {code}
> CREATE TABLE test (
>   k int,
>   c1 int,
>   c2 int,
>   PRIMARY KEY (k, c1, c2)
> );
> {code}
> with the following data:
> {code}
>  k | c1 | c2
> ---++
>  0 |  0 |  0
>  0 |  0 |  1
>  0 |  1 |  0
>  0 |  1 |  1
> {code}
> We can fetch a single row or a range of rows, but not a set of them:
> {code}
> > SELECT * FROM test WHERE k = 0 AND (c1, c2) IN ((0, 0), (1,1)) ;
> Bad Request: line 1:54 missing EOF at ','
> {code}
> Supporting this syntax would return:
> {code}
>  k | c1 | c2
> ---++
>  0 |  0 |  0
>  0 |  1 |  1
> {code}
> Being able to fetch these two CQL rows in a single read is important to 
> maintain partition-level isolation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (CASSANDRA-7089) Some problems with typed ByteBuffer comparisons?

2014-04-24 Thread Benedict (JIRA)
Benedict created CASSANDRA-7089:
---

 Summary: Some problems with typed ByteBuffer comparisons?
 Key: CASSANDRA-7089
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7089
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Benedict
Assignee: Sylvain Lebresne
Priority: Minor
 Fix For: 2.1 beta2


ColumnIdentifier.compareTo() appears subtly broken: it looks to me like we 
should be using ByteBufferUtil.compareUnsigned() instead of bytes.compareTo(), 
since they are meant to be UTF8Type. I think it would be nice to drop this 
compareTo method entirely, as it's only used by 
CFMetaData.regularColumnComparator, and it seems possible to misuse it 
accidentally at a later date, since it only works for CQL columns, but a 
ColumnIdentifier is used for Thrift columns as well.

There's a related problem with CellName.isPrefixOf, where we are using equals() 
instead of type.compareTo() == 0, which could break anyone misusing our old 
friend Boolean as a clustering column.
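For context on the first point: ByteBuffer.compareTo compares content bytes as signed Java bytes, while UTF-8 encoded strings only sort in code-point order when bytes are treated as unsigned values 0..255. A self-contained illustration of the difference, using simplified byte[] helpers rather than Cassandra's actual ByteBufferUtil code:

```java
import java.nio.charset.StandardCharsets;

public class CompareSketch {
    // Lexicographic comparison treating bytes as signed (-128..127),
    // which is effectively what ByteBuffer.compareTo does per element.
    static int signedCompare(byte[] a, byte[] b) {
        int len = Math.min(a.length, b.length);
        for (int i = 0; i < len; i++)
            if (a[i] != b[i])
                return Byte.compare(a[i], b[i]);
        return Integer.compare(a.length, b.length);
    }

    // Lexicographic comparison treating bytes as unsigned (0..255),
    // analogous in spirit to ByteBufferUtil.compareUnsigned.
    static int unsignedCompare(byte[] a, byte[] b) {
        int len = Math.min(a.length, b.length);
        for (int i = 0; i < len; i++) {
            int d = (a[i] & 0xFF) - (b[i] & 0xFF);
            if (d != 0)
                return d;
        }
        return Integer.compare(a.length, b.length);
    }

    public static void main(String[] args) {
        byte[] z = "z".getBytes(StandardCharsets.UTF_8); // 0x7A
        byte[] e = "é".getBytes(StandardCharsets.UTF_8); // 0xC3 0xA9
        // Signed: 0x7A (122) > 0xC3 (-61), so "z" sorts after "é" -- wrong for UTF-8.
        // Unsigned: 122 < 195, so "z" sorts before "é" -- matches code-point order.
        System.out.println(signedCompare(z, e) > 0);   // true
        System.out.println(unsignedCompare(z, e) < 0); // true
    }
}
```

Since ColumnIdentifier values are meant to be UTF8Type, only the unsigned comparison matches the type's declared ordering.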



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7088) Massive performance degradation for TRUNCATE when migrating to 2.0

2014-04-24 Thread Jacek Furmankiewicz (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13980398#comment-13980398
 ] 

Jacek Furmankiewicz commented on CASSANDRA-7088:


I can validate that: watching our test suite run and seeing where it freezes 
all of a sudden, that exception appears nearly every time if I am doing a 
tail -f on /var/log/cassandra/system.log.

So they definitely seem correlated.

> Massive performance degradation for TRUNCATE when migrating to 2.0
> --
>
> Key: CASSANDRA-7088
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7088
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Linux Mint 16
>Reporter: Jacek Furmankiewicz
> Attachments: cassandra.yaml
>
>
> We attempted to migrate our developers to Cassandra 2.0.7 from 1.2.
> Everything worked perfectly, but we have experienced a massive drop in 
> developer velocity.
> We run integration tests with Cucumber BDD, and 1000 BDDs went from 7 minutes 
> (Cassandra 1.2) to 15 minutes (2.0.7).
> This is when we run Cassandra off the ramdisk (/dev/shm) to make it run faster 
> on dev boxes.
> When we tried pointing to actual drives, the difference was dramatic: the 
> entire suite took over 70 minutes (!) vs 15 in Cassandra 1.2.
> After investigation, we found that most of the time is spent in the 
> truncation logic between every scenario, where we truncate all the column 
> families and start with a clean DB for the next test case.
> This used to be super fast in 1.2 but is now very slow in 2.0.
> It may not seem important, but upgrading to 2.0 has basically halved developer 
> velocity, just by more than doubling the time it takes to run our BDD suite.
> We truncate the CFs using the Ruby driver:
>   $cassandra.column_families.each do |column_family|
> name = column_family[0].to_s
> $cassandra.truncate! name
>   end
> I am attaching our cassandra.yaml. Please note we already switched off 
> auto_compaction before truncate, just as we did in 1.2 for dev boxes; it made 
> no difference.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6998) HintedHandoff - expired hints may block future hints deliveries

2014-04-24 Thread Muhammad Adel (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13980393#comment-13980393
 ] 

Muhammad Adel commented on CASSANDRA-6998:
--

Sorry, I changed the status to TESTING by mistake. Can one of the moderators set 
the status back to Patch Available? There is no undo functionality.

> HintedHandoff - expired hints may block future hints deliveries
> ---
>
> Key: CASSANDRA-6998
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6998
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: - cluster of two DCs: DC1, DC2
> - keyspace using NetworkTopologyStrategy (replication factors for both DCs)
> - heavy load (write:read, 100:1) with LOCAL_QUORUM using Java driver setup 
> with DC awareness, writing to DC1
>Reporter: Scooletz
>  Labels: HintedHandoff, TTL
> Fix For: 2.0.3
>
> Attachments: 6998
>
>
> For test purposes, DC2 was shut down for 1 day. The _hints_ table was filled 
> with millions of rows. Now, when _HintedHandOffManager_ tries to 
> _doDeliverHintsToEndpoint_, it queries the store with 
> QueryFilter.getSliceFilter, which counts deleted (TTLed) cells and throws 
> org.apache.cassandra.db.filter.TombstoneOverwhelmingException. 
> Throwing this exception stops the manager from running compaction, as it is 
> run only after a successful handoff. This leaves HH practically disabled until 
> the administrator runs truncateAllHints. 
> Wouldn't it be nicer to run compaction on 
> org.apache.cassandra.db.filter.TombstoneOverwhelmingException? That would 
> remove TTLed hints, leaving the whole HH mechanism in a healthy state.
> The stacktrace is:
> {quote}
> org.apache.cassandra.db.filter.TombstoneOverwhelmingException
>   at 
> org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:201)
>   at 
> org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:122)
>   at 
> org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
>   at 
> org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:72)
>   at 
> org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:297)
>   at 
> org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:53)
>   at 
> org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1487)
>   at 
> org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1306)
>   at 
> org.apache.cassandra.db.HintedHandOffManager.doDeliverHintsToEndpoint(HintedHandOffManager.java:351)
>   at 
> org.apache.cassandra.db.HintedHandOffManager.deliverHintsToEndpoint(HintedHandOffManager.java:309)
>   at 
> org.apache.cassandra.db.HintedHandOffManager.access$300(HintedHandOffManager.java:92)
>   at 
> org.apache.cassandra.db.HintedHandOffManager$4.run(HintedHandOffManager.java:530)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>   at java.lang.Thread.run(Thread.java:722)
> {quote}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7088) Massive performance degradation for TRUNCATE when migrating to 2.0

2014-04-24 Thread Jacek Furmankiewicz (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13980384#comment-13980384
 ] 

Jacek Furmankiewicz commented on CASSANDRA-7088:


We also see a lot of exceptions like this at a time when our test suite seems 
to freeze waiting for Cassandra:

k 16ms for 3394 cells
ERROR [ReadStage:82] 2014-04-24 17:09:31,843 CassandraDaemon.java (line 198) 
Exception in thread Thread[ReadStage:82,5,main]
java.lang.RuntimeException: java.lang.NumberFormatException: Zero length 
BigInteger
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1920)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.lang.NumberFormatException: Zero length BigInteger
at java.math.BigInteger.<init>(BigInteger.java:190)
at 
org.apache.cassandra.serializers.IntegerSerializer.deserialize(IntegerSerializer.java:32)
at 
org.apache.cassandra.serializers.IntegerSerializer.deserialize(IntegerSerializer.java:26)
at 
org.apache.cassandra.db.marshal.AbstractType.getString(AbstractType.java:156)
at 
org.apache.cassandra.db.marshal.AbstractCompositeType.getString(AbstractCompositeType.java:242)
at 
org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:219)
at 
org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:122)
at 
org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
at 
org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:72)
at 
org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:297)
at 
org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:53)
at 
org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1540)
at 
org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1369)
at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:327)
at 
org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:65)
at 
org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1352)
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1916)
... 3 more


> Massive performance degradation for TRUNCATE when migrating to 2.0
> --
>
> Key: CASSANDRA-7088
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7088
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Linux Mint 16
>Reporter: Jacek Furmankiewicz
> Attachments: cassandra.yaml
>
>



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7088) Massive performance degradation for TRUNCATE when migrating to 2.0

2014-04-24 Thread Jacek Furmankiewicz (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13980347#comment-13980347
 ] 

Jacek Furmankiewicz commented on CASSANDRA-7088:


Sorry, I meant

auto_snapshot: false

not auto_compaction

> Massive performance degradation for TRUNCATE when migrating to 2.0
> --
>
> Key: CASSANDRA-7088
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7088
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Linux Mint 16
>Reporter: Jacek Furmankiewicz
> Attachments: cassandra.yaml
>
>



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (CASSANDRA-7088) Massive performance degradation for TRUNCATE when migrating to 2.0

2014-04-24 Thread Jacek Furmankiewicz (JIRA)
Jacek Furmankiewicz created CASSANDRA-7088:
--

 Summary: Massive performance degradation for TRUNCATE when 
migrating to 2.0
 Key: CASSANDRA-7088
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7088
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Linux Mint 16
Reporter: Jacek Furmankiewicz
 Attachments: cassandra.yaml

We attempted to migrate our developers to Cassandra 2.0.7 from 1.2.

Everything worked perfectly, but we have experienced a massive drop in 
developer velocity.

We run integration tests with Cucumber BDD, and 1000 BDDs went from 7 minutes 
(Cassandra 1.2) to 15 minutes (2.0.7).
This is when we run Cassandra off the ramdisk (/dev/shm) to make it run faster 
on dev boxes.

When we tried pointing to actual drives, the difference was dramatic: the entire 
suite took over 70 minutes (!) vs 15 in Cassandra 1.2.

After investigation, we found that most of the time is spent in the truncation 
logic between every scenario, where we truncate all the column families and 
start with a clean DB for the next test case.

This used to be super fast in 1.2 but is now very slow in 2.0.

It may not seem important, but upgrading to 2.0 has basically halved developer 
velocity, just by more than doubling the time it takes to run our BDD suite.

We truncate the CFs using the Ruby driver:

  $cassandra.column_families.each do |column_family|
name = column_family[0].to_s
$cassandra.truncate! name
  end

I am attaching our cassandra.yaml. Please note we already switched off 
auto_compaction before truncate, just as we did in 1.2 for dev boxes; it made no 
difference.





--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-7062) Extension of static columns for compound cluster keys

2014-04-24 Thread Constance Eustace (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Constance Eustace updated CASSANDRA-7062:
-

Description: 
CASSANDRA-6561 implemented static columns for a given partition key.

What this is proposing for a compound cluster key is a static column that is 
static at intermediate parts of a compound cluster key. This example shows a 
table modelling a moderately complex EAV pattern:

{code}
CREATE TABLE t (
   entityID text,
   propertyName text,
   valueIndex text,
   entityName text static (entityID),
   propertyType text static (entityID, propertyName),
   propertyRelations List static (entityID, propertyName),
   data text,
   PRIMARY KEY (entityID, (propertyName,valueIndex))
)
{code}
So this example has the following static columns:
- the entityName column behaves exactly as CASSANDRA-6561 details, so all 
cluster rows have the same value
- the propertyType and propertyRelations columns are static with respect to the 
remaining parts of the cluster key (that is, across all valueIndex values for a 
given propertyName), so an update to those values for an entityID and a 
propertyName will be shared/constant by all the value rows...

Is this a relatively simple extension of the same mechanism in -6561, or is 
this a "whoa, you have no idea what you are proposing"?

Sample data:

Mary and Jane aren't married...
{code}
INSERT INTO t (entityID, entityName, propertyName, propertyType, valueIndex, 
data) VALUES ('0001','MARY MATALIN','married','SingleValue','0','false');
INSERT INTO t (entityID, entityName, propertyName, propertyType, valueIndex, 
data) VALUES ('0002','JANE JOHNSON','married','SingleValue','0','false');
INSERT INTO t (entityID, entityName, propertyName, propertyType, valueIndex) 
VALUES ('0001','MARY MATALIN','kids','NOVALUE','');
INSERT INTO t (entityID, entityName, propertyName, propertyType, valueIndex) 
VALUES ('0002','JANE JOHNSON','kids','NOVALUE','');
{code}
{code}
SELECT * FROM t:

0001 MARY MATALIN  married   SingleValue   0   false
0001 MARY MATALIN  kids NOVALUE  null
0002 JANE JOHNSON  married   SingleValue   0   false
0002 JANE JOHNSON  kids NOVALUE  null
{code}
Then Mary and Jane get married (so the entityName column that is static on the 
partition key is updated just like in CASSANDRA-6561):
{code}
INSERT INTO t (entityID, entityName, propertyName, propertyType, valueIndex, 
data) VALUES ('0001','MARY SMITH','married','SingleValue','0','TRUE');
INSERT INTO t (entityID, entityName, propertyName, propertyType, valueIndex, 
data) VALUES ('0002','JANE JONES','married','SingleValue','0','TRUE');
{code}
{code}
SELECT * FROM t:

0001 MARY SMITH  married   SingleValue   0   TRUE
0001 MARY SMITH  kids NOVALUE  null
0002 JANE JONES   married   SingleValue   0   TRUE
0002 JANE JONES   kids NOVALUE  null
{code}
Then Mary and Jane have a kid, so we add another value to the kids attribute:
{code}
INSERT INTO t (entityID, propertyName, propertyType, valueIndex,data) VALUES 
('0001','kids','SingleValue','0','JIM-BOB');
INSERT INTO t (entityID, propertyName, propertyType, valueIndex,data) VALUES 
('0002','kids','SingleValue','0','JENNY');
{code}
{code}
SELECT * FROM t:

0001 MARY SMITH  married   SingleValue   0   TRUE
0001 MARY SMITH  kids SingleValuenull
0001 MARY SMITH  kids SingleValue   0   JIM-BOB
0002 JANE JONES   married   SingleValue   0   TRUE
0002 JANE JONES   kids SingleValuenull
0002 JANE JONES   kids SingleValue   0   JENNY
{code}
Then Mary has ANOTHER kid, which demonstrates the partially static column 
relative to the cluster key, as ALL value rows for the property 'kids' get 
updated to the new value:
{code}
INSERT INTO t (entityID, propertyName, propertyType, valueIndex,data) VALUES 
('0001','kids','MultiValue','1','HARRY');
{code}
{code}
SELECT * FROM t:

0001 MARY SMITH  married   SingleValue  0   TRUE
0001 MARY SMITH  kids  MultiValue  null
0001 MARY SMITH  kids  MultiValue 0   JIM-BOB
0001 MARY SMITH  kids  MultiValue 1   HARRY
0002 JANE JONES  married   SingleValue   0   TRUE
0002 JANE JONES  kids  SingleValuenull
0002 JANE JONES  kids  SingleValue   0   JENNY
{code}

... ok, hopefully that example isn't TOO complicated. Yes, there's a stupid 
hack bug in there with the null/empty row for the kids attribute, but please 
bear with me on that. 

Generally speaking, this will aid in flattening / denormalization of relational 
constructs into cassandra-friendly schemas. In the above example we are 
flattening a relational schema of three tables: entity, property, and value 
tables into a single sparse flattened denormalized compound table.


  was:
CASSANDRA-6561 implemented static columns for a given partition key.

What this is proposing for a compound cluster key is a static column that is 
stat

[jira] [Updated] (CASSANDRA-7062) Extension of static columns for compound cluster keys

2014-04-24 Thread Constance Eustace (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Constance Eustace updated CASSANDRA-7062:
-

Description: 
CASSANDRA-6561 implemented static columns for a given partition key.

What this is proposing for a compound cluster key is a static column that is 
static at intermediate parts of a compound cluster key. This example shows a 
table modelling a moderately complex EAV pattern  :

{code}
CREATE TABLE t (
   entityID text,
   propertyName text,
   valueIndex text,
   entityName text static (entityID),
   propertyType text static (entityID, propertyName),
   propertyRelations List static (entityID, propertyName),
   data text,
   PRIMARY KEY (entityID, (propertyName,valueIndex))
)
{code}
So in this example has the following static columns:
- the entityName column behaves exactly as CASSANDRA-6561 details, so all 
cluster rows have the same value
- the propertyType and propertyRelations columns are static with respect to the 
remaining parts of the cluster key (that is, across all valueIndex values for a 
given propertyName), so an update to those values for an entityID and a 
propertyName will be shared/constant by all the value rows...

Is this a relatively simple extension of the same mechanism in -6561, or is 
this a "whoa, you have no idea what you are proposing"?

Sample data:

Mary and Jane aren't married...
{code}
INSERT INTO t (entityID, entityName, propertyName, propertyType, valueIndex, 
data) VALUES ('0001','MARY MATALIN','married','SingleValue','0','false');
INSERT INTO t (entityID, entityName, propertyName, propertyType, valueIndex, 
data) VALUES ('0002','JANE JOHNSON','married','SingleValue','0','false');
INSERT INTO t (entityID, entityName, propertyName, propertyType, valueIndex) 
VALUES ('0001','MARY MATALIN','kids','NOVALUE','');
INSERT INTO t (entityID, entityName, propertyName, propertyType, valueIndex) 
VALUES ('0002','JANE JOHNSON','kids','NOVALUE','');
{code}
{code}
SELECT * FROM t:

0001 MARY MATALIN  married   SingleValue   0   false
0001 MARY MATALIN  kids NOVALUE  null
0002 JANE JOHNSON  married   SingleValue   0   false
0002 JANE JOHNSON  kids NOVALUE  null
{code}
Then mary and jane get married (so the entityName column that is static on the 
partition key is updated just like CASSANDRA-6561 )
{code}
INSERT INTO t (entityID, entityName, propertyName, propertyType, valueIndex, 
data) VALUES ('0001','MARY SMITH','married','SingleValue','0','TRUE');
INSERT INTO t (entityID, entityName, propertyName, propertyType, valueIndex, 
data) VALUES ('0002','JANE JONES','married','SingleValue','0','TRUE');
{code}
{code}
SELECT * FROM t:

0001 MARY SMITH  married   SingleValue   0   TRUE
0001 MARY SMITH  kids NOVALUE  null
0002 JANE JONES   married   SingleValue   0   TRUE
0002 JANE JONES   kids NOVALUE  null
{code}
Then mary and jane have a kid, so we add another value to the kids attribute:
{code}
INSERT INTO t (entityID, propertyName, propertyType, valueIndex,data) VALUES 
('0001','kids','SingleValue','0','JIM-BOB');
INSERT INTO t (entityID, propertyName, propertyType, valueIndex,data) VALUES 
('0002','kids','SingleValue','0','JENNY');
{code}
{code}
SELECT * FROM t:

0001 MARY SMITH  married   SingleValue   0   TRUE
0001 MARY SMITH  kids SingleValuenull
0001 MARY SMITH  kids SingleValue   0   JIM-BOB
0002 JANE JONES   married   SingleValue   0   TRUE
0002 JANE JONES   kids SingleValuenull
0002 JANE JONES   kids SingleValue   0   JENNY
{code}
Then Mary has ANOTHER kid, which demonstrates the partially static column 
relative to the cluster key, as ALL value rows for the property 'kids' get 
updated to the new value:
{code}
INSERT INTO t (entityID, propertyName, propertyType, valueIndex,data) VALUES 
('0001','kids','MultiValue','1','HARRY');
{code}
{code}
SELECT * FROM t:

0001 MARY SMITH  married   SingleValue  0   TRUE
0001 MARY SMITH  kids MultiValue  null
0001 MARY SMITH  kids MultiValue 0   JIM-BOB
0001 MARY SMITH  kids MultiValue 1   HARRY
0002 JANE JONES  married   SingleValue   0   TRUE
0002 JANE JONES  kids SingleValue  null
0002 JANE JONES  kids SingleValue   0   JENNY
{code}

... ok, hopefully that example isn't TOO complicated. Yes, there's a stupid 
hack bug in there with the null/empty row for the kids attribute, but please 
bear with me on that.

Generally speaking, this will aid in flattening / denormalizing relational 
constructs into Cassandra-friendly schemas. In the above example we are 
flattening a relational schema of three tables (entity, property, and value) 
into a single sparse, denormalized compound table.
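For reference, here is a sketch of a table definition consistent with the example 
above. The thread never shows the actual CREATE TABLE, so the column types, the 
STATIC modifier on entityName, and the clustering order are assumptions inferred 
from the INSERT statements:
{code}
CREATE TABLE t (
    entityID     text,
    entityName   text STATIC,   -- static per partition (CASSANDRA-6561)
    propertyName text,
    propertyType text,          -- proposed: "partially static" per (entityID, propertyName)
    valueIndex   text,
    data         text,
    PRIMARY KEY (entityID, propertyName, valueIndex)
);
{code}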


  was:
CASSANDRA-6561 implemented static columns for a given partition key.

What this is proposing for a compound cluster key is a static column

[jira] [Updated] (CASSANDRA-6357) Flush memtables to separate directory

2014-04-24 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6357:
--

Fix Version/s: 2.1 beta2

> Flush memtables to separate directory
> -
>
> Key: CASSANDRA-6357
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6357
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Core
>Reporter: Patrick McFadin
>Assignee: Jonathan Ellis
>Priority: Minor
>  Labels: performance
> Fix For: 2.1 beta1, 2.1 beta2
>
> Attachments: 6357-revert-v2.txt, 6357-revert.txt, 6357-v2.txt, 
> 6357.txt, c6357-2.1-stress-write-adj-ops-sec.png, 
> c6357-2.1-stress-write-latency-99th.png, 
> c6357-2.1-stress-write-latency-median.png, 
> c6357-stress-write-latency-99th-1.png
>
>
> Flush writers are a critical element for keeping a node healthy. When several 
> compactions run on systems with low-performing data directories, IO is at a 
> premium. Once the disk subsystem is saturated, write IO is blocked, which will 
> cause flush writer threads to back up. Since memtables are large blocks of 
> memory in the JVM, too much blocking can cause excessive GC over time, 
> degrading performance and in the worst case causing an OOM.
> Since compaction runs on the data directories, my proposal is to create 
> a separate directory for flushing memtables. Potentially we can use the same 
> methodology of keeping the commit log separate and minimize disk contention 
> with the critical function of the flush writer. 
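If implemented, the configuration could mirror the existing directory options in 
cassandra.yaml. Note that {{flush_directory}} is a hypothetical option name used 
for illustration only; just the first two settings shown exist today:
{code}
data_file_directories:
    - /var/lib/cassandra/data
commitlog_directory: /var/lib/cassandra/commitlog
# hypothetical: a dedicated directory for flushing memtables
flush_directory: /var/lib/cassandra/flush
{code}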



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-7087) Use JMX_PORT for the RMI port to simplify nodetool connectivity

2014-04-24 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-7087:


 Reviewer: Brandon Williams
Fix Version/s: 2.1 beta2
   2.0.8
 Assignee: Chris Lohfink

Nice!  Committed this to 2.0 and above so we don't have to fool with JVM 
version detection (even though we already have the code to do that).

> Use JMX_PORT for the RMI port to simplify nodetool connectivity
> ---
>
> Key: CASSANDRA-7087
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7087
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Config
>Reporter: Chris Lohfink
>Assignee: Chris Lohfink
>Priority: Minor
>  Labels: security
> Fix For: 2.0.8, 2.1 beta2
>
> Attachments: patch.txt
>
>
> Mentioned in the user list by Steven Robenalt: there is a config option in 
> Java 7 that allows configuring the port used for the follow-up RMI connection in 
> JMX.  It simplifies things a lot to have both connections use 7199, since the 
> port can be reused for both.
> bq. There's a little-known change in the way JMX uses ports that was added in 
> JDK7u4 which simplifies the use of JMX in a firewalled environment. The 
> standard RMI registry port for JMX is controlled by the 
> com.sun.management.jmxremote.port property. The change in Java 7 was to 
> introduce the related com.sun.management.jmxremote.rmi.port property. Setting 
> this second property means that JMX will use that second port, rather than a 
> randomly assigned port, for making the actual connection. This solution works 
> well in the AWS VPC environment that I'm running in, and I've heard of others 
> using it successfully as well.
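Concretely, the fix pins both ports by setting the two JVM system properties to 
the same value in conf/cassandra-env.sh (this is a fragment of the env-script 
change committed for this ticket):
{code}
JMX_PORT="7199"
JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.port=$JMX_PORT"
JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.rmi.port=$JMX_PORT"
{code}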





[jira] [Resolved] (CASSANDRA-7087) Use JMX_PORT for the RMI port to simplify nodetool connectivity

2014-04-24 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams resolved CASSANDRA-7087.
-

Resolution: Fixed

> Use JMX_PORT for the RMI port to simplify nodetool connectivity
> ---
>
> Key: CASSANDRA-7087
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7087
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Config
>Reporter: Chris Lohfink
>Assignee: Chris Lohfink
>Priority: Minor
>  Labels: security
> Fix For: 2.0.8, 2.1 beta2
>
> Attachments: patch.txt
>
>
> Mentioned in the user list by Steven Robenalt: there is a config option in 
> Java 7 that allows configuring the port used for the follow-up RMI connection in 
> JMX.  It simplifies things a lot to have both connections use 7199, since the 
> port can be reused for both.
> bq. There's a little-known change in the way JMX uses ports that was added in 
> JDK7u4 which simplifies the use of JMX in a firewalled environment. The 
> standard RMI registry port for JMX is controlled by the 
> com.sun.management.jmxremote.port property. The change in Java 7 was to 
> introduce the related com.sun.management.jmxremote.rmi.port property. Setting 
> this second property means that JMX will use that second port, rather than a 
> randomly assigned port, for making the actual connection. This solution works 
> well in the AWS VPC environment that I'm running in, and I've heard of others 
> using it successfully as well.





[5/6] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-04-24 Thread brandonwilliams
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ab87f833
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ab87f833
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ab87f833

Branch: refs/heads/cassandra-2.1
Commit: ab87f8334ac638051dad69ee0b1565e3deafcee4
Parents: 11827f0 b9bb2c8
Author: Brandon Williams 
Authored: Thu Apr 24 14:48:54 2014 -0500
Committer: Brandon Williams 
Committed: Thu Apr 24 14:48:54 2014 -0500

--
 CHANGES.txt   | 2 ++
 conf/cassandra-env.sh | 1 +
 2 files changed, 3 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ab87f833/CHANGES.txt
--
diff --cc CHANGES.txt
index 295eb78,0b6aeaa..11630e7
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,60 -1,20 +1,62 @@@
 +2.1.0-beta2
 + * Increase default CL space to 8GB (CASSANDRA-7031)
 + * Add range tombstones to read repair digests (CASSANDRA-6863)
 + * Fix BTree.clear for large updates (CASSANDRA-6943)
 + * Fail write instead of logging a warning when unable to append to CL
 +   (CASSANDRA-6764)
 + * Eliminate possibility of CL segment appearing twice in active list 
 +   (CASSANDRA-6557)
 + * Apply DONTNEED fadvise to commitlog segments (CASSANDRA-6759)
 + * Switch CRC component to Adler and include it for compressed sstables 
 +   (CASSANDRA-4165)
 + * Allow cassandra-stress to set compaction strategy options (CASSANDRA-6451)
 + * Add broadcast_rpc_address option to cassandra.yaml (CASSANDRA-5899)
 + * Auto reload GossipingPropertyFileSnitch config (CASSANDRA-5897)
 + * Fix overflow of memtable_total_space_in_mb (CASSANDRA-6573)
 + * Fix ABTC NPE and apply update function correctly (CASSANDRA-6692)
 + * Allow nodetool to use a file or prompt for password (CASSANDRA-6660)
 + * Fix AIOOBE when concurrently accessing ABSC (CASSANDRA-6742)
 + * Fix assertion error in ALTER TYPE RENAME (CASSANDRA-6705)
 + * Scrub should not always clear out repaired status (CASSANDRA-5351)
 + * Improve handling of range tombstone for wide partitions (CASSANDRA-6446)
 + * Fix ClassCastException for compact table with composites (CASSANDRA-6738)
 + * Fix potentially repairing with wrong nodes (CASSANDRA-6808)
 + * Change caching option syntax (CASSANDRA-6745)
 + * Fix stress to do proper counter reads (CASSANDRA-6835)
 + * Fix help message for stress counter_write (CASSANDRA-6824)
 + * Fix stress smart Thrift client to pick servers correctly (CASSANDRA-6848)
 + * Add logging levels (minimal, normal or verbose) to stress tool 
(CASSANDRA-6849)
 + * Fix race condition in Batch CLE (CASSANDRA-6860)
 + * Improve cleanup/scrub/upgradesstables failure handling (CASSANDRA-6774)
 + * ByteBuffer write() methods for serializing sstables (CASSANDRA-6781)
 + * Proper compare function for CollectionType (CASSANDRA-6783)
 + * Update native server to Netty 4 (CASSANDRA-6236)
 + * Fix off-by-one error in stress (CASSANDRA-6883)
 + * Make OpOrder AutoCloseable (CASSANDRA-6901)
 + * Remove sync repair JMX interface (CASSANDRA-6900)
 + * Add multiple memory allocation options for memtables (CASSANDRA-6689)
 + * Remove adjusted op rate from stress output (CASSANDRA-6921)
 + * Add optimized CF.hasColumns() implementations (CASSANDRA-6941)
 + * Serialize batchlog mutations with the version of the target node
 +   (CASSANDRA-6931)
 + * Optimize CounterColumn#reconcile() (CASSANDRA-6953)
 + * Properly remove 1.2 sstable support in 2.1 (CASSANDRA-6869)
 + * Lock counter cells, not partitions (CASSANDRA-6880)
 + * Track presence of legacy counter shards in sstables (CASSANDRA-6888)
 + * Ensure safe resource cleanup when replacing sstables (CASSANDRA-6912)
 + * Add failure handler to async callback (CASSANDRA-6747)
 + * Fix AE when closing SSTable without releasing reference (CASSANDRA-7000)
 + * Clean up IndexInfo on keyspace/table drops (CASSANDRA-6924)
 + * Only snapshot relative SSTables when sequential repair (CASSANDRA-7024)
 + * Require nodetool rebuild_index to specify index names (CASSANDRA-7038)
 + * fix cassandra stress errors on reads with native protocol (CASSANDRA-7033)
 + * Use OpOrder to guard sstable references for reads (CASSANDRA-6919)
 + * Preemptive opening of compaction result (CASSANDRA-6916)
 +Merged from 2.0:
+ 2.0.8
+  * Set JMX RMI port to 7199 (CASSANDRA-7087)
   * Use LOCAL_QUORUM for data reads at LOCAL_SERIAL (CASSANDRA-6939)
   * Log a warning for large batches (CASSANDRA-6487)
 - * Queries on compact tables can return more rows that requested 
(CASSANDRA-7052)
 - * USING TIMESTAMP for batches does not work (CASSANDRA-7053)
 - * Fix performance regression from CASSANDRA-5614 (CASSANDRA-6949)
 - * Merge groupable mutations in TriggerExecutor#execute() (CA

[4/6] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-04-24 Thread brandonwilliams
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ab87f833
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ab87f833
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ab87f833

Branch: refs/heads/trunk
Commit: ab87f8334ac638051dad69ee0b1565e3deafcee4
Parents: 11827f0 b9bb2c8
Author: Brandon Williams 
Authored: Thu Apr 24 14:48:54 2014 -0500
Committer: Brandon Williams 
Committed: Thu Apr 24 14:48:54 2014 -0500

--
 CHANGES.txt   | 2 ++
 conf/cassandra-env.sh | 1 +
 2 files changed, 3 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ab87f833/CHANGES.txt
--
diff --cc CHANGES.txt
index 295eb78,0b6aeaa..11630e7
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,60 -1,20 +1,62 @@@
 +2.1.0-beta2
 + * Increase default CL space to 8GB (CASSANDRA-7031)
 + * Add range tombstones to read repair digests (CASSANDRA-6863)
 + * Fix BTree.clear for large updates (CASSANDRA-6943)
 + * Fail write instead of logging a warning when unable to append to CL
 +   (CASSANDRA-6764)
 + * Eliminate possibility of CL segment appearing twice in active list 
 +   (CASSANDRA-6557)
 + * Apply DONTNEED fadvise to commitlog segments (CASSANDRA-6759)
 + * Switch CRC component to Adler and include it for compressed sstables 
 +   (CASSANDRA-4165)
 + * Allow cassandra-stress to set compaction strategy options (CASSANDRA-6451)
 + * Add broadcast_rpc_address option to cassandra.yaml (CASSANDRA-5899)
 + * Auto reload GossipingPropertyFileSnitch config (CASSANDRA-5897)
 + * Fix overflow of memtable_total_space_in_mb (CASSANDRA-6573)
 + * Fix ABTC NPE and apply update function correctly (CASSANDRA-6692)
 + * Allow nodetool to use a file or prompt for password (CASSANDRA-6660)
 + * Fix AIOOBE when concurrently accessing ABSC (CASSANDRA-6742)
 + * Fix assertion error in ALTER TYPE RENAME (CASSANDRA-6705)
 + * Scrub should not always clear out repaired status (CASSANDRA-5351)
 + * Improve handling of range tombstone for wide partitions (CASSANDRA-6446)
 + * Fix ClassCastException for compact table with composites (CASSANDRA-6738)
 + * Fix potentially repairing with wrong nodes (CASSANDRA-6808)
 + * Change caching option syntax (CASSANDRA-6745)
 + * Fix stress to do proper counter reads (CASSANDRA-6835)
 + * Fix help message for stress counter_write (CASSANDRA-6824)
 + * Fix stress smart Thrift client to pick servers correctly (CASSANDRA-6848)
 + * Add logging levels (minimal, normal or verbose) to stress tool 
(CASSANDRA-6849)
 + * Fix race condition in Batch CLE (CASSANDRA-6860)
 + * Improve cleanup/scrub/upgradesstables failure handling (CASSANDRA-6774)
 + * ByteBuffer write() methods for serializing sstables (CASSANDRA-6781)
 + * Proper compare function for CollectionType (CASSANDRA-6783)
 + * Update native server to Netty 4 (CASSANDRA-6236)
 + * Fix off-by-one error in stress (CASSANDRA-6883)
 + * Make OpOrder AutoCloseable (CASSANDRA-6901)
 + * Remove sync repair JMX interface (CASSANDRA-6900)
 + * Add multiple memory allocation options for memtables (CASSANDRA-6689)
 + * Remove adjusted op rate from stress output (CASSANDRA-6921)
 + * Add optimized CF.hasColumns() implementations (CASSANDRA-6941)
 + * Serialize batchlog mutations with the version of the target node
 +   (CASSANDRA-6931)
 + * Optimize CounterColumn#reconcile() (CASSANDRA-6953)
 + * Properly remove 1.2 sstable support in 2.1 (CASSANDRA-6869)
 + * Lock counter cells, not partitions (CASSANDRA-6880)
 + * Track presence of legacy counter shards in sstables (CASSANDRA-6888)
 + * Ensure safe resource cleanup when replacing sstables (CASSANDRA-6912)
 + * Add failure handler to async callback (CASSANDRA-6747)
 + * Fix AE when closing SSTable without releasing reference (CASSANDRA-7000)
 + * Clean up IndexInfo on keyspace/table drops (CASSANDRA-6924)
 + * Only snapshot relative SSTables when sequential repair (CASSANDRA-7024)
 + * Require nodetool rebuild_index to specify index names (CASSANDRA-7038)
 + * fix cassandra stress errors on reads with native protocol (CASSANDRA-7033)
 + * Use OpOrder to guard sstable references for reads (CASSANDRA-6919)
 + * Preemptive opening of compaction result (CASSANDRA-6916)
 +Merged from 2.0:
+ 2.0.8
+  * Set JMX RMI port to 7199 (CASSANDRA-7087)
   * Use LOCAL_QUORUM for data reads at LOCAL_SERIAL (CASSANDRA-6939)
   * Log a warning for large batches (CASSANDRA-6487)
 - * Queries on compact tables can return more rows that requested 
(CASSANDRA-7052)
 - * USING TIMESTAMP for batches does not work (CASSANDRA-7053)
 - * Fix performance regression from CASSANDRA-5614 (CASSANDRA-6949)
 - * Merge groupable mutations in TriggerExecutor#execute() (CASSANDRA-

[3/6] git commit: Set JMX RMI port to 7199

2014-04-24 Thread brandonwilliams
Set JMX RMI port to 7199

Patch by Chris Lohfink, reviewed by brandonwilliams for CASSANDRA-7087


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b9bb2c88
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b9bb2c88
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b9bb2c88

Branch: refs/heads/trunk
Commit: b9bb2c88689f602b43cd3516f6526875b09715b1
Parents: 205b661
Author: Brandon Williams 
Authored: Thu Apr 24 14:47:06 2014 -0500
Committer: Brandon Williams 
Committed: Thu Apr 24 14:48:25 2014 -0500

--
 CHANGES.txt   | 1 +
 conf/cassandra-env.sh | 1 +
 2 files changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b9bb2c88/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 8b1ed54..0b6aeaa 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.0.8
+ * Set JMX RMI port to 7199 (CASSANDRA-7087)
  * Use LOCAL_QUORUM for data reads at LOCAL_SERIAL (CASSANDRA-6939)
  * Log a warning for large batches (CASSANDRA-6487)
  * Queries on compact tables can return more rows that requested 
(CASSANDRA-7052)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b9bb2c88/conf/cassandra-env.sh
--
diff --git a/conf/cassandra-env.sh b/conf/cassandra-env.sh
index 934e463..3b15517 100644
--- a/conf/cassandra-env.sh
+++ b/conf/cassandra-env.sh
@@ -254,6 +254,7 @@ JVM_OPTS="$JVM_OPTS -Djava.net.preferIPv4Stack=true"
 # for more on configuring JMX through firewalls, etc. (Short version:
 # get it working with no firewall first.)
 JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.port=$JMX_PORT"
+JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.rmi.port=$JMX_PORT"
 JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.ssl=false"
 JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.authenticate=false"
 #JVM_OPTS="$JVM_OPTS 
-Dcom.sun.management.jmxremote.password.file=/etc/cassandra/jmxremote.password"



[1/6] git commit: Set JMX RMI port to 7199

2014-04-24 Thread brandonwilliams
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 205b6616e -> b9bb2c886
  refs/heads/cassandra-2.1 11827f0d7 -> ab87f8334
  refs/heads/trunk 7fe5503f2 -> 16bb16ed2


Set JMX RMI port to 7199

Patch by Chris Lohfink, reviewed by brandonwilliams for CASSANDRA-7087


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b9bb2c88
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b9bb2c88
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b9bb2c88

Branch: refs/heads/cassandra-2.0
Commit: b9bb2c88689f602b43cd3516f6526875b09715b1
Parents: 205b661
Author: Brandon Williams 
Authored: Thu Apr 24 14:47:06 2014 -0500
Committer: Brandon Williams 
Committed: Thu Apr 24 14:48:25 2014 -0500

--
 CHANGES.txt   | 1 +
 conf/cassandra-env.sh | 1 +
 2 files changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b9bb2c88/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 8b1ed54..0b6aeaa 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.0.8
+ * Set JMX RMI port to 7199 (CASSANDRA-7087)
  * Use LOCAL_QUORUM for data reads at LOCAL_SERIAL (CASSANDRA-6939)
  * Log a warning for large batches (CASSANDRA-6487)
  * Queries on compact tables can return more rows that requested 
(CASSANDRA-7052)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b9bb2c88/conf/cassandra-env.sh
--
diff --git a/conf/cassandra-env.sh b/conf/cassandra-env.sh
index 934e463..3b15517 100644
--- a/conf/cassandra-env.sh
+++ b/conf/cassandra-env.sh
@@ -254,6 +254,7 @@ JVM_OPTS="$JVM_OPTS -Djava.net.preferIPv4Stack=true"
 # for more on configuring JMX through firewalls, etc. (Short version:
 # get it working with no firewall first.)
 JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.port=$JMX_PORT"
+JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.rmi.port=$JMX_PORT"
 JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.ssl=false"
 JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.authenticate=false"
 #JVM_OPTS="$JVM_OPTS 
-Dcom.sun.management.jmxremote.password.file=/etc/cassandra/jmxremote.password"



[2/6] git commit: Set JMX RMI port to 7199

2014-04-24 Thread brandonwilliams
Set JMX RMI port to 7199

Patch by Chris Lohfink, reviewed by brandonwilliams for CASSANDRA-7087


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b9bb2c88
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b9bb2c88
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b9bb2c88

Branch: refs/heads/cassandra-2.1
Commit: b9bb2c88689f602b43cd3516f6526875b09715b1
Parents: 205b661
Author: Brandon Williams 
Authored: Thu Apr 24 14:47:06 2014 -0500
Committer: Brandon Williams 
Committed: Thu Apr 24 14:48:25 2014 -0500

--
 CHANGES.txt   | 1 +
 conf/cassandra-env.sh | 1 +
 2 files changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b9bb2c88/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 8b1ed54..0b6aeaa 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.0.8
+ * Set JMX RMI port to 7199 (CASSANDRA-7087)
  * Use LOCAL_QUORUM for data reads at LOCAL_SERIAL (CASSANDRA-6939)
  * Log a warning for large batches (CASSANDRA-6487)
  * Queries on compact tables can return more rows that requested 
(CASSANDRA-7052)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b9bb2c88/conf/cassandra-env.sh
--
diff --git a/conf/cassandra-env.sh b/conf/cassandra-env.sh
index 934e463..3b15517 100644
--- a/conf/cassandra-env.sh
+++ b/conf/cassandra-env.sh
@@ -254,6 +254,7 @@ JVM_OPTS="$JVM_OPTS -Djava.net.preferIPv4Stack=true"
 # for more on configuring JMX through firewalls, etc. (Short version:
 # get it working with no firewall first.)
 JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.port=$JMX_PORT"
+JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.rmi.port=$JMX_PORT"
 JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.ssl=false"
 JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.authenticate=false"
 #JVM_OPTS="$JVM_OPTS 
-Dcom.sun.management.jmxremote.password.file=/etc/cassandra/jmxremote.password"



[6/6] git commit: Merge branch 'cassandra-2.1' into trunk

2014-04-24 Thread brandonwilliams
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/16bb16ed
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/16bb16ed
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/16bb16ed

Branch: refs/heads/trunk
Commit: 16bb16ed2ff112fb006c911fd15ae919f1c92574
Parents: 7fe5503 ab87f83
Author: Brandon Williams 
Authored: Thu Apr 24 14:49:01 2014 -0500
Committer: Brandon Williams 
Committed: Thu Apr 24 14:49:01 2014 -0500

--
 CHANGES.txt   | 2 ++
 conf/cassandra-env.sh | 1 +
 2 files changed, 3 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/16bb16ed/CHANGES.txt
--



[jira] [Updated] (CASSANDRA-7087) Use JMX_PORT for the RMI port to simplify nodetool connectivity

2014-04-24 Thread Chris Lohfink (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Lohfink updated CASSANDRA-7087:
-

Description: 
Mentioned in the user list by Steven Robenalt: there is a config option in Java 7 
that allows configuring the port used for the follow-up RMI connection in JMX.  It 
simplifies things a lot to have both connections use 7199, since the port can be 
reused for both.

bq. There's a little-known change in the way JMX uses ports that was added in 
JDK7u4 which simplifies the use of JMX in a firewalled environment. The 
standard RMI registry port for JMX is controlled by the 
com.sun.management.jmxremote.port property. The change in Java 7 was to 
introduce the related com.sun.management.jmxremote.rmi.port property. Setting 
this second property means that JMX will use that second port, rather than a 
randomly assigned port, for making the actual connection. This solution works 
well in the AWS VPC environment that I'm running in, and I've heard of others 
using it successfully as well.


  was:Mentioned in the user list by Steven Robenalt there is a config option in 
Java7 to allow configuring the port used for the followup rmi connection in 
JMX.  It simplifies things a lot to have both connections use 7199 since it 
could be reused for both.


> Use JMX_PORT for the RMI port to simplify nodetool connectivity
> ---
>
> Key: CASSANDRA-7087
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7087
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Config
>Reporter: Chris Lohfink
>Priority: Minor
>  Labels: security
> Attachments: patch.txt
>
>
> Mentioned in the user list by Steven Robenalt: there is a config option in 
> Java 7 that allows configuring the port used for the follow-up RMI connection in 
> JMX.  It simplifies things a lot to have both connections use 7199, since the 
> port can be reused for both.
> bq. There's a little-known change in the way JMX uses ports that was added in 
> JDK7u4 which simplifies the use of JMX in a firewalled environment. The 
> standard RMI registry port for JMX is controlled by the 
> com.sun.management.jmxremote.port property. The change in Java 7 was to 
> introduce the related com.sun.management.jmxremote.rmi.port property. Setting 
> this second property means that JMX will use that second port, rather than a 
> randomly assigned port, for making the actual connection. This solution works 
> well in the AWS VPC environment that I'm running in, and I've heard of others 
> using it successfully as well.





[jira] [Created] (CASSANDRA-7087) Use JMX_PORT for the RMI port to simplify nodetool connectivity

2014-04-24 Thread Chris Lohfink (JIRA)
Chris Lohfink created CASSANDRA-7087:


 Summary: Use JMX_PORT for the RMI port to simplify nodetool 
connectivity
 Key: CASSANDRA-7087
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7087
 Project: Cassandra
  Issue Type: Improvement
  Components: Config
Reporter: Chris Lohfink
Priority: Minor
 Attachments: patch.txt

Mentioned in the user list by Steven Robenalt: there is a config option in Java 7 
that allows configuring the port used for the follow-up RMI connection in JMX.  It 
simplifies things a lot to have both connections use 7199, since the port can be 
reused for both.





[jira] [Commented] (CASSANDRA-7086) Error CorruptSSTableException when deleting

2014-04-24 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13980114#comment-13980114
 ] 

Brandon Williams commented on CASSANDRA-7086:
-

Without a reproducible case there's not much we can do.  You should probably 
run scrub and then repair to fix the corruption.
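The suggested recovery, expressed as nodetool commands (the keyspace and table 
names are placeholders):
{code}
nodetool scrub <keyspace> <table>    # rewrites sstables, skipping unreadable rows
nodetool repair <keyspace> <table>   # streams back any data dropped by scrub
{code}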

> Error CorruptSSTableException when deleting
> ---
>
> Key: CASSANDRA-7086
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7086
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Edmund Lo
>
> I ran into a table corruption issue when attempting to delete items from a 
> table. Not really sure what caused the corruption in the first place.
> ERROR [ReadStage:63] 2014-04-24 14:46:15,462 CassandraDaemon.java (line 196) 
> Exception in thread Thread[ReadStage:63,5,main]
> java.lang.RuntimeException: 
> org.apache.cassandra.io.sstable.CorruptSSTableException: 
> java.io.EOFException: EOF after 258 bytes out of 25966
> at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1900)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:744)
> Caused by: org.apache.cassandra.io.sstable.CorruptSSTableException: 
> java.io.EOFException: EOF after 258 bytes out of 25966
> at 
> org.apache.cassandra.db.columniterator.SimpleSliceReader.<init>(SimpleSliceReader.java:82)
> at 
> org.apache.cassandra.db.columniterator.SSTableSliceIterator.createReader(SSTableSliceIterator.java:65)
> at 
> org.apache.cassandra.db.columniterator.SSTableSliceIterator.<init>(SSTableSliceIterator.java:42)
> at 
> org.apache.cassandra.db.filter.SliceQueryFilter.getSSTableColumnIterator(SliceQueryFilter.java:167)
> at 
> org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:62)
> at 
> org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:250)
> at 
> org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:53)
> at 
> org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1551)
> at 
> org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1380)
> at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:327)
> at 
> org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:65)
> at 
> org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1341)
> at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1896)
> ... 3 more





[jira] [Comment Edited] (CASSANDRA-6831) Updates to COMPACT STORAGE tables via cli drop CQL information

2014-04-24 Thread Mikhail Stepura (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13980098#comment-13980098
 ] 

Mikhail Stepura edited comment on CASSANDRA-6831 at 4/24/14 7:04 PM:
-

So, here is how I see what's happening on 1.2:

* We create CFMetaData from Thrift (i.e. no aliases)
* We create a mutation from that Thrift-based metadata ( 
{{newState.toSchemaNoColumns(rm, modificationTimestamp);}} ). This mutation 
will delete all column/value aliases from {{schema_columnfamilies}}
* We apply the mutation ({{DefsTable.mergeSchema}}). *Damage done*. 
* We reload the in-memory representation. Now {{apply(CFMetaData cfm)}} merges 
both definitions, correctly keeping the column/value aliases, but they are now 
in-memory only (in {{Schema.instance}}); there are no aliases in 
{{schema_columnfamilies}}, and this information will be lost on the next restart.

And the patch populates the column/value aliases for that Thrift-based 
metadata, hence the mutation doesn't erase them.


was (Author: mishail):
So, here is how I see what's happening:

* We create CFMetaData from Thrift (i.e. no aliases)
* We create a mutation from that Thrift-based metadata ( 
{{newState.toSchemaNoColumns(rm, modificationTimestamp);}} ). This mutation 
will delete all column/value aliases from {{schema_columnfamilies}}
* We apply the mutation ({{DefsTable.mergeSchema}}). *Damage done*. 
* We reload in-memory presentation. Now the {{apply(CFMetaData cfm)}} merges 
the both definitions, correctly keeping column/value aliases, but they are now 
in-memory only (in {{Schema.instance}}), there are no aliases in the 
{{schema_columnfamilies}} and this information will be lost on a next restart

> Updates to COMPACT STORAGE tables via cli drop CQL information
> --
>
> Key: CASSANDRA-6831
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6831
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Russell Bradberry
>Assignee: Mikhail Stepura
>Priority: Minor
> Fix For: 1.2.17, 2.0.8, 2.1 beta2
>
> Attachments: cassandra-1.2-6831.patch, cassandra-2.0-6831.patch
>
>
> If a COMPACT STORAGE table is altered using the CLI, all information about the 
> column names reverts to the initial "key, column1, column2" naming.  
> Additionally, the changes in the column names will not take effect until the 
> Cassandra service is restarted.  This means that clients using CQL will 
> continue to work properly until the service is restarted, at which time they 
> will start getting errors about non-existent columns in the table.
> When attempting to rename the columns back using ALTER TABLE an error stating 
> the column already exists will be raised.  The only way to get it back is to 
> ALTER TABLE and change the comment or something, which will bring back all 
> the original column names.
> This seems to be related to CASSANDRA-6676 and CASSANDRA-6370
> In cqlsh
> {code}
> Connected to cluster1 at 127.0.0.3:9160.
> [cqlsh 3.1.8 | Cassandra 1.2.15-SNAPSHOT | CQL spec 3.0.0 | Thrift protocol 
> 19.36.2]
> Use HELP for help.
> cqlsh> CREATE KEYSPACE test WITH REPLICATION = { 'class' : 'SimpleStrategy', 
> 'replication_factor' : 3 };
> cqlsh> USE test;
> cqlsh:test> CREATE TABLE foo (bar text, baz text, qux text, PRIMARY KEY(bar, 
> baz) ) WITH COMPACT STORAGE;
> cqlsh:test> describe table foo;
> CREATE TABLE foo (
>   bar text,
>   baz text,
>   qux text,
>   PRIMARY KEY (bar, baz)
> ) WITH COMPACT STORAGE AND
>   bloom_filter_fp_chance=0.01 AND
>   caching='KEYS_ONLY' AND
>   comment='' AND
>   dclocal_read_repair_chance=0.00 AND
>   gc_grace_seconds=864000 AND
>   read_repair_chance=0.10 AND
>   replicate_on_write='true' AND
>   populate_io_cache_on_flush='false' AND
>   compaction={'class': 'SizeTieredCompactionStrategy'} AND
>   compression={'sstable_compression': 'SnappyCompressor'};
> {code}
> Now in cli:
> {code}
>   Connected to: "cluster1" on 127.0.0.3/9160
> Welcome to Cassandra CLI version 1.2.15-SNAPSHOT
> Type 'help;' or '?' for help.
> Type 'quit;' or 'exit;' to quit.
> [default@unknown] use test;
> Authenticated to keyspace: test
> [default@test] UPDATE COLUMN FAMILY foo WITH comment='hey this is a comment';
> 3bf5fa49-5d03-34f0-b46c-6745f7740925
> {code}
> Now back in cqlsh:
> {code}
> cqlsh:test> describe table foo;
> CREATE TABLE foo (
>   bar text,
>   column1 text,
>   value text,
>   PRIMARY KEY (bar, column1)
> ) WITH COMPACT STORAGE AND
>   bloom_filter_fp_chance=0.01 AND
>   caching='KEYS_ONLY' AND
>   comment='hey this is a comment' AND
>   dclocal_read_repair_chance=0.00 AND
>   gc_grace_seconds=864000 AND
>   read_repair_chance=0.10 AND
>   replicate_on_write='true' AND
>   populate_

[jira] [Created] (CASSANDRA-7086) Error CorruptSSTableException when deleting

2014-04-24 Thread Edmund Lo (JIRA)
Edmund Lo created CASSANDRA-7086:


 Summary: Error CorruptSSTableException when deleting
 Key: CASSANDRA-7086
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7086
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Edmund Lo


I ran into a table corruption issue when attempting to delete items from a 
table. I'm not really sure what caused the corruption in the first place.

ERROR [ReadStage:63] 2014-04-24 14:46:15,462 CassandraDaemon.java (line 196) 
Exception in thread Thread[ReadStage:63,5,main]
java.lang.RuntimeException: 
org.apache.cassandra.io.sstable.CorruptSSTableException: java.io.EOFException: 
EOF after 258 bytes out of 25966
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1900)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: org.apache.cassandra.io.sstable.CorruptSSTableException: 
java.io.EOFException: EOF after 258 bytes out of 25966
at 
org.apache.cassandra.db.columniterator.SimpleSliceReader.<init>(SimpleSliceReader.java:82)
at 
org.apache.cassandra.db.columniterator.SSTableSliceIterator.createReader(SSTableSliceIterator.java:65)
at 
org.apache.cassandra.db.columniterator.SSTableSliceIterator.<init>(SSTableSliceIterator.java:42)
at 
org.apache.cassandra.db.filter.SliceQueryFilter.getSSTableColumnIterator(SliceQueryFilter.java:167)
at 
org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:62)
at 
org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:250)
at 
org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:53)
at 
org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1551)
at 
org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1380)
at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:327)
at 
org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:65)
at 
org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1341)
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1896)
... 3 more



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6831) Updates to COMPACT STORAGE tables via cli drop CQL information

2014-04-24 Thread Mikhail Stepura (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13980098#comment-13980098
 ] 

Mikhail Stepura commented on CASSANDRA-6831:


So, here is how I see what's happening:

* We create CFMetaData from Thrift (i.e. no aliases)
* We create a mutation from that Thrift-based metadata ( 
{{newState.toSchemaNoColumns(rm, modificationTimestamp);}} ). This mutation 
will delete all column/value aliases from {{schema_columnfamilies}}
* We apply the mutation ({{DefsTable.mergeSchema}}). *Damage done*. 
* We reload the in-memory representation. Now {{apply(CFMetaData cfm)}} merges 
both definitions, correctly keeping the column/value aliases, but they are now 
in-memory only (in {{Schema.instance}}); there are no aliases in 
{{schema_columnfamilies}}, and this information will be lost on the next restart.
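
The divergence described above can be modelled with a toy sketch: plain Java 
maps stand in for the on-disk {{schema_columnfamilies}} row and the mutation, 
with a null value meaning "delete this cell". All names and structure here are 
illustrative only, not Cassandra's actual API.

```java
import java.util.HashMap;
import java.util.Map;

public class AliasLossDemo {
    // Apply a mutation where a null value acts as a deletion (tombstone),
    // as the Thrift-built mutation effectively does for the alias columns.
    static void applyMutation(Map<String, String> onDisk, Map<String, String> mutation) {
        for (Map.Entry<String, String> e : mutation.entrySet()) {
            if (e.getValue() == null)
                onDisk.remove(e.getKey());   // cell deleted on disk
            else
                onDisk.put(e.getKey(), e.getValue());
        }
    }

    public static void main(String[] args) {
        // Simplified on-disk schema row for the CQL table
        Map<String, String> onDisk = new HashMap<>();
        onDisk.put("comment", "");
        onDisk.put("key_aliases", "[bar]");
        onDisk.put("column_aliases", "[baz]");
        onDisk.put("value_alias", "qux");

        // Thrift-based metadata knows nothing about the aliases, so the
        // mutation it produces carries deletions for them
        Map<String, String> mutation = new HashMap<>();
        mutation.put("comment", "hey this is a comment");
        mutation.put("key_aliases", null);
        mutation.put("column_aliases", null);
        mutation.put("value_alias", null);

        applyMutation(onDisk, mutation);  // damage done on disk

        // The in-memory merge keeps the aliases, but a restart rebuilds the
        // schema from disk, where they are now gone
        System.out.println("aliases on disk after mutation: " + onDisk.containsKey("key_aliases"));
    }
}
```

The in-memory copy still has the aliases after the merge, which is why the 
breakage only becomes visible after a restart rebuilds the schema from disk.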

> Updates to COMPACT STORAGE tables via cli drop CQL information
> --
>
> Key: CASSANDRA-6831
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6831
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Russell Bradberry
>Assignee: Mikhail Stepura
>Priority: Minor
> Fix For: 1.2.17, 2.0.8, 2.1 beta2
>
> Attachments: cassandra-1.2-6831.patch, cassandra-2.0-6831.patch
>
>
> If a COMPACT STORAGE table is altered using the CLI, all information about the 
> column names reverts to the initial "key, column1, column2" namings.  
> Additionally, the column name changes will not take effect until the 
> Cassandra service is restarted.  This means that clients using CQL will 
> continue to work properly until the service is restarted, at which time they 
> will start getting errors about non-existent columns in the table.
> When attempting to rename the columns back using ALTER TABLE, an error stating 
> the column already exists will be raised.  The only way to get them back is to 
> ALTER TABLE and change the comment or something, which will bring back all 
> the original column names.
> This seems to be related to CASSANDRA-6676 and CASSANDRA-6370
> In cqlsh
> {code}
> Connected to cluster1 at 127.0.0.3:9160.
> [cqlsh 3.1.8 | Cassandra 1.2.15-SNAPSHOT | CQL spec 3.0.0 | Thrift protocol 
> 19.36.2]
> Use HELP for help.
> cqlsh> CREATE KEYSPACE test WITH REPLICATION = { 'class' : 'SimpleStrategy', 
> 'replication_factor' : 3 };
> cqlsh> USE test;
> cqlsh:test> CREATE TABLE foo (bar text, baz text, qux text, PRIMARY KEY(bar, 
> baz) ) WITH COMPACT STORAGE;
> cqlsh:test> describe table foo;
> CREATE TABLE foo (
>   bar text,
>   baz text,
>   qux text,
>   PRIMARY KEY (bar, baz)
> ) WITH COMPACT STORAGE AND
>   bloom_filter_fp_chance=0.01 AND
>   caching='KEYS_ONLY' AND
>   comment='' AND
>   dclocal_read_repair_chance=0.00 AND
>   gc_grace_seconds=864000 AND
>   read_repair_chance=0.10 AND
>   replicate_on_write='true' AND
>   populate_io_cache_on_flush='false' AND
>   compaction={'class': 'SizeTieredCompactionStrategy'} AND
>   compression={'sstable_compression': 'SnappyCompressor'};
> {code}
> Now in cli:
> {code}
>   Connected to: "cluster1" on 127.0.0.3/9160
> Welcome to Cassandra CLI version 1.2.15-SNAPSHOT
> Type 'help;' or '?' for help.
> Type 'quit;' or 'exit;' to quit.
> [default@unknown] use test;
> Authenticated to keyspace: test
> [default@test] UPDATE COLUMN FAMILY foo WITH comment='hey this is a comment';
> 3bf5fa49-5d03-34f0-b46c-6745f7740925
> {code}
> Now back in cqlsh:
> {code}
> cqlsh:test> describe table foo;
> CREATE TABLE foo (
>   bar text,
>   column1 text,
>   value text,
>   PRIMARY KEY (bar, column1)
> ) WITH COMPACT STORAGE AND
>   bloom_filter_fp_chance=0.01 AND
>   caching='KEYS_ONLY' AND
>   comment='hey this is a comment' AND
>   dclocal_read_repair_chance=0.00 AND
>   gc_grace_seconds=864000 AND
>   read_repair_chance=0.10 AND
>   replicate_on_write='true' AND
>   populate_io_cache_on_flush='false' AND
>   compaction={'class': 'SizeTieredCompactionStrategy'} AND
>   compression={'sstable_compression': 'SnappyCompressor'};
> cqlsh:test> ALTER TABLE foo WITH comment='this is a new comment';
> cqlsh:test> describe table foo;
> CREATE TABLE foo (
>   bar text,
>   baz text,
>   qux text,
>   PRIMARY KEY (bar, baz)
> ) WITH COMPACT STORAGE AND
>   bloom_filter_fp_chance=0.01 AND
>   caching='KEYS_ONLY' AND
>   comment='this is a new comment' AND
>   dclocal_read_repair_chance=0.00 AND
>   gc_grace_seconds=864000 AND
>   read_repair_chance=0.10 AND
>   replicate_on_write='true' AND
>   populate_io_cache_on_flush='false' AND
>   compaction={'class': 'SizeTieredCompactionStrategy'} AND
>   compression={'sstable_compression': 'SnappyCompressor'};
> {code}





[jira] [Commented] (CASSANDRA-6106) Provide timestamp with true microsecond resolution

2014-04-24 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13980097#comment-13980097
 ] 

Benedict commented on CASSANDRA-6106:
-

Just to add to the analysis: since it's quite computationally tractable, I have 
uploaded a patch to the branch which brute-force checks every possible 
computation to ensure the result is always monotonically increasing and within 
the bounds of what is expected. I have run this to completion, and indeed all of 
my statements above check out empirically.
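
The patch itself is attached to the ticket; purely as an illustration, here is a 
common shape for a monotonic microsecond-timestamp generator together with a 
tiny-scale version of the monotonicity check. This is a hedged sketch, not the 
attached patch: `MonotonicMicros` and `nextTimestamp` are illustrative names.

```java
import java.util.concurrent.atomic.AtomicLong;

public class MonotonicMicros {
    private static final AtomicLong lastMicros = new AtomicLong();

    // Return a microsecond-scale timestamp that never repeats or goes
    // backwards, even though the underlying clock only ticks once per
    // millisecond: if the clock hasn't advanced, bump the last value by 1.
    public static long nextTimestamp() {
        while (true) {
            long micros = System.currentTimeMillis() * 1000;
            long last = lastMicros.get();
            long next = Math.max(micros, last + 1);
            if (lastMicros.compareAndSet(last, next))
                return next;
        }
    }

    public static void main(String[] args) {
        // Exhaustive-style check at a tiny scale: every generated timestamp
        // must be strictly greater than its predecessor
        long prev = nextTimestamp();
        for (int i = 0; i < 1_000_000; i++) {
            long t = nextTimestamp();
            if (t <= prev)
                throw new AssertionError("timestamp went backwards: " + t + " <= " + prev);
            prev = t;
        }
        System.out.println("monotonic over 1,000,000 calls");
    }
}
```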

> Provide timestamp with true microsecond resolution
> --
>
> Key: CASSANDRA-6106
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6106
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
> Environment: DSE Cassandra 3.1, but also HEAD
>Reporter: Christopher Smith
>Assignee: Benedict
>Priority: Minor
>  Labels: timestamps
> Fix For: 2.1 beta2
>
> Attachments: microtimstamp.patch, microtimstamp_random.patch, 
> microtimstamp_random_rev2.patch
>
>
> I noticed this blog post: http://aphyr.com/posts/294-call-me-maybe-cassandra 
> mentioned issues with millisecond rounding in timestamps and was able to 
> reproduce the issue. If I specify a timestamp in a mutating query, I get 
> microsecond precision, but if I don't, I get timestamps rounded to the 
> nearest millisecond, at least for my first query on a given connection, which 
> substantially increases the possibilities of collision.
> I believe I found the offending code, though I am by no means sure this is 
> comprehensive. I think we probably need a fairly comprehensive replacement of 
> all uses of System.currentTimeMillis() with System.nanoTime().





[jira] [Commented] (CASSANDRA-7042) Disk space growth until restart

2014-04-24 Thread Zach Aller (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13980094#comment-13980094
 ] 

Zach Aller commented on CASSANDRA-7042:
---

Here is the schema in question:

CREATE TABLE grid (
  data_id text,
  cylinder text,
  value blob,
  PRIMARY KEY (data_id, cylinder)
) WITH COMPACT STORAGE AND
  bloom_filter_fp_chance=0.10 AND
  caching='ALL' AND
  comment='' AND
  dclocal_read_repair_chance=0.00 AND
  gc_grace_seconds=0 AND
  index_interval=128 AND
  read_repair_chance=0.10 AND
  populate_io_cache_on_flush='false' AND
  default_time_to_live=0 AND
  speculative_retry='99.0PERCENTILE' AND
  memtable_flush_period_in_ms=0 AND
  compaction={'sstable_size_in_mb': '160', 'class': 
'LeveledCompactionStrategy'} AND
  compression={};

> Disk space growth until restart
> ---
>
> Key: CASSANDRA-7042
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7042
> Project: Cassandra
>  Issue Type: Bug
> Environment: Ubuntu 12.04
> Sun Java 7
> Cassandra 2.0.6
>Reporter: Zach Aller
> Attachments: Screen Shot 2014-04-17 at 11.07.24 AM.png, Screen Shot 
> 2014-04-18 at 11.47.30 AM.png, Screen Shot 2014-04-22 at 1.40.41 PM.png, 
> after.log, before.log, tabledump_after_restart.txt, 
> tabledump_before_restart.txt
>
>
> Cassandra constantly eats disk space; we're not sure what's causing it, and 
> the only thing that seems to fix it is a restart of Cassandra. This happens 
> about every 3-5 hrs: we grow from about 350GB to 650GB with no end in sight. 
> Once we restart Cassandra it usually all clears itself up and disks return to 
> normal for a while, then something triggers it and it starts climbing again. 
> Sometimes when we restart, compactions pending skyrocket, and if we restart a 
> second time the compactions pending drop back off to a normal level. One other 
> thing to note is that the space is not freed until Cassandra starts back up, 
> not when it is shut down.
> I will get a clean log of before and after restarting next time it happens 
> and post it.
> Here is a common ERROR in our logs that might be related
> ERROR [CompactionExecutor:46] 2014-04-15 09:12:51,040 CassandraDaemon.java 
> (line 196) Exception in thread Thread[CompactionExecutor:46,1,main]
> java.lang.RuntimeException: java.io.FileNotFoundException: 
> /local-project/cassandra_data/data/wxgrid/grid/wxgrid-grid-jb-468677-Data.db 
> (No such file or directory)
> at 
> org.apache.cassandra.io.util.ThrottledReader.open(ThrottledReader.java:53)
> at 
> org.apache.cassandra.io.sstable.SSTableReader.openDataReader(SSTableReader.java:1355)
> at 
> org.apache.cassandra.io.sstable.SSTableScanner.<init>(SSTableScanner.java:67)
> at 
> org.apache.cassandra.io.sstable.SSTableReader.getScanner(SSTableReader.java:1161)
> at 
> org.apache.cassandra.io.sstable.SSTableReader.getScanner(SSTableReader.java:1173)
> at 
> org.apache.cassandra.db.compaction.LeveledCompactionStrategy.getScanners(LeveledCompactionStrategy.java:194)
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionStrategy.getScanners(AbstractCompactionStrategy.java:258)
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:126)
> at 
> org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60)
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
> at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:197)
> at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
> at java.util.concurrent.FutureTask.run(Unknown Source)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
> at java.lang.Thread.run(Unknown Source)
> Caused by: java.io.FileNotFoundException: 
> /local-project/cassandra_data/data/wxgrid/grid/wxgrid-grid-jb-468677-Data.db 
> (No such file or directory)
> at java.io.RandomAccessFile.open(Native Method)
> at java.io.RandomAccessFile.<init>(Unknown Source)
> at 
> org.apache.cassandra.io.util.RandomAccessReader.<init>(RandomAccessReader.java:58)
> at 
> org.apache.cassandra.io.util.ThrottledReader.<init>(ThrottledReader.java:35)
> at 
> org.apache.cassandra.io.util.ThrottledReader.open(ThrottledReader.java:49)
> ... 17 more





[jira] [Updated] (CASSANDRA-7042) Disk space growth until restart

2014-04-24 Thread Zach Aller (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zach Aller updated CASSANDRA-7042:
--

Attachment: tabledump_before_restart.txt
tabledump_after_restart.txt

This is a dump of the directory for the cf that has the issue; as you can see, 
the file count increases. It seems like files are not being deleted when they 
should be. The before dump was taken while Cassandra was in a stopped state; the 
after dump was taken once Cassandra had been started up again.






[jira] [Updated] (CASSANDRA-7082) Nodetool status always displays the first token instead of the number of vnodes

2014-04-24 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-7082:


Summary: Nodetool status always displays the first token instead of the 
number of vnodes  (was: Nodetool status always display only the first token)

> Nodetool status always displays the first token instead of the number of 
> vnodes
> ---
>
> Key: CASSANDRA-7082
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7082
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Jivko Donev
>Assignee: Brandon Williams
>Priority: Minor
>  Labels: nodetool
> Fix For: 1.2.17, 2.0.8
>
> Attachments: 7082.txt
>
>
> The nodetool status command always displays the first token for a node even 
> when using vnodes. The defect is only reproduced on version 2.0.7. 
> With the same configuration 2.0.7 displays:
> Datacenter: DC1
> ===
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  Address   Load   Owns (effective)  Host ID
>TokenRack
> UN  127.0.0.1  156.34 KB  100.0%
> d6629553-d6e9-434d-bf01-54c257b20ea9  -9134643033027010921
>  Rack1
> But 2.0.6 displays:
> Datacenter: DC1
> ===
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  Address   Load   Tokens  Owns   Host ID
> UN  127.0.0.1  210.32 KB  256 100.0%  08208ec9-8976-4ad0-b6bb-ee5dcf0109e
> The problem seems to be the vnode check in NodeCmd.java.
> In the print() method there is a check:
> // More tokens then nodes (aka vnodes)?
> if (tokensToEndpoints.values().size() < 
> tokensToEndpoints.keySet().size())
> isTokenPerNode = false;
> while in 2.0.6 the same code was:
> // More tokens then nodes (aka vnodes)?
> if (new HashSet(tokensToEndpoints.values()).size() < 
> tokensToEndpoints.keySet().size())
> isTokenPerNode = false;
> In 2.0.7 this check is never true, as the values collection is always equal 
> in size to the key set.
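
The difference between the two checks can be reproduced in isolation with a 
small sketch: a toy token map with 256 vnode tokens owned by a single endpoint, 
and the two boolean expressions mirroring the 2.0.7 and 2.0.6 code quoted above.

```java
import java.util.HashSet;
import java.util.SortedMap;
import java.util.TreeMap;

public class VnodeCheckDemo {
    public static void main(String[] args) {
        // One node owning 256 vnode tokens, as in the report above
        SortedMap<Long, String> tokensToEndpoints = new TreeMap<>();
        for (long token = 0; token < 256; token++)
            tokensToEndpoints.put(token, "127.0.0.1");

        // 2.0.7 check: values() is a per-key view, so its size always equals
        // the key set's size and the branch can never be taken
        boolean brokenCheck = tokensToEndpoints.values().size()
                            < tokensToEndpoints.keySet().size();

        // 2.0.6 check: deduplicating the endpoints through a HashSet reveals
        // vnodes (256 tokens map to 1 distinct endpoint)
        boolean fixedCheck = new HashSet<>(tokensToEndpoints.values()).size()
                           < tokensToEndpoints.keySet().size();

        System.out.println("2.0.7 check detects vnodes: " + brokenCheck); // false
        System.out.println("2.0.6 check detects vnodes: " + fixedCheck);  // true
    }
}
```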





[jira] [Updated] (CASSANDRA-7082) Nodetool status always display only the first token

2014-04-24 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-7082:


Since Version: 1.2.16  (was: 2.0.7)
Fix Version/s: 1.2.17

Also affects 1.2, since this is a regression from CASSANDRA-6811






[jira] [Updated] (CASSANDRA-6811) nodetool no longer shows node joining

2014-04-24 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-6811:


Fix Version/s: 2.0.7

> nodetool no longer shows node joining
> -
>
> Key: CASSANDRA-6811
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6811
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Brandon Williams
>Assignee: Vijay
>Priority: Minor
> Fix For: 1.2.16, 2.0.7
>
> Attachments: 0001-CASSANDRA-6811-v2.patch, ringfix.txt
>
>
> When we added effective ownership output to nodetool ring/status, we 
> accidentally began excluding joining nodes because we iterate the ownership 
> maps instead of the endpoint-to-token map when printing the output, and 
> the joining nodes don't have any ownership.  The simplest thing to do is 
> probably iterate the token map instead, and not output any ownership info for 
> joining nodes.





[jira] [Updated] (CASSANDRA-7082) Nodetool status always display only the first token

2014-04-24 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-7082:


Attachment: 7082.txt

Patch to compare the number of host stats against tokens.

> Nodetool status always display only the first token
> ---
>
> Key: CASSANDRA-7082
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7082
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Jivko Donev
>Assignee: Vijay
>Priority: Minor
>  Labels: nodetool
> Fix For: 2.0.8
>
> Attachments: 7082.txt
>





[jira] [Updated] (CASSANDRA-6916) Preemptive opening of compaction result

2014-04-24 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-6916:


Attachment: 6916.fixup.txt

> Preemptive opening of compaction result
> ---
>
> Key: CASSANDRA-6916
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6916
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Benedict
>Assignee: Benedict
>Priority: Minor
>  Labels: performance
> Fix For: 2.1 beta2
>
> Attachments: 6916-stock2_1.mixed.cache_tweaks.tar.gz, 
> 6916-stock2_1.mixed.logs.tar.gz, 6916.fixup.txt, 
> 6916v3-preempive-open-compact.logs.gz, 
> 6916v3-preempive-open-compact.mixed.2.logs.tar.gz, 
> 6916v3-premptive-open-compact.mixed.cache_tweaks.2.tar.gz
>
>
> Related to CASSANDRA-6812, but a little simpler: when compacting, we mess 
> quite badly with the page cache. One thing we can do to mitigate this problem 
> is to use the sstable we're writing before we've finished writing it, and to 
> drop the regions from the old sstables from the page cache as soon as the new 
> sstables have them (even if they're only written to the page cache). This 
> should minimise any page cache churn, as the old sstables must be larger than 
> the new sstable, and since both will be in memory, dropping the old sstables 
> is at least as good as dropping the new.
> The approach is quite straight-forward. Every X MB written:
> # grab flushed length of index file;
> # grab second to last index summary record, after excluding those that point 
> to positions after the flushed length;
> # open index file, and check that our last record doesn't occur outside of 
> the flushed length of the data file (pretty unlikely)
> # Open the sstable with the calculated upper bound
> Some complications:
> # must keep running copy of compression metadata for reopening with
> # we need to be able to replace an sstable with itself but a different lower 
> bound
> # we need to drop the old page cache only when readers have finished
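
The boundary-selection steps above can be sketched as follows. This is a toy 
model only: `openBoundary`, `summaryPositions`, and `flushedLength` are 
illustrative names, not Cassandra's actual API, and the real code works with 
index summary records rather than raw positions.

```java
import java.util.ArrayList;
import java.util.List;

public class EarlyOpenBoundary {
    // Given index-summary entry positions (ascending offsets into the file
    // being written) and the flushed length of that file, pick the
    // second-to-last entry that still falls within the flushed region.
    static long openBoundary(long[] summaryPositions, long flushedLength) {
        List<Long> safe = new ArrayList<>();
        for (long pos : summaryPositions)
            if (pos <= flushedLength)
                safe.add(pos);
        if (safe.size() < 2)
            return -1; // not enough flushed data to open the sstable early yet
        // second-to-last entry leaves a safety margin below the flush frontier
        return safe.get(safe.size() - 2);
    }

    public static void main(String[] args) {
        long[] positions = {0, 1000, 2000, 3000, 4000};
        System.out.println(openBoundary(positions, 3500)); // 2000
        System.out.println(openBoundary(positions, 500));  // -1
    }
}
```

Every X MB written, the boundary is recomputed and the partially-written sstable 
is reopened with the new upper bound, which is where the "replace an sstable 
with itself" complication comes from.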





[jira] [Commented] (CASSANDRA-6987) sstablesplit fails in 2.1

2014-04-24 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13980002#comment-13980002
 ] 

Benedict commented on CASSANDRA-6987:
-

A final tweak to CASSANDRA-6916 before commit accidentally broke this; a fix is 
attached to that ticket.

> sstablesplit fails in 2.1
> -
>
> Key: CASSANDRA-6987
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6987
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Debian Testing/Jessie
> Oracle JDK 1.7.0_51
> c*-2.1 branch, commit 5ebadc11e36749e6479f9aba19406db3aacdaf41
>Reporter: Michael Shuler
>Assignee: Benedict
> Fix For: 2.1 beta2
>
> Attachments: 6987.txt
>
>
> sstablesplit dtest began failing in 2.1 at 
> http://cassci.datastax.com/job/cassandra-2.1_novnode_dtest/95/ triggered by 
> http://cassci.datastax.com/job/cassandra-2.1/186/
> repro:
> {noformat}
> (cassandra-2.1)mshuler@hana:~/git/cassandra$ ./bin/cassandra >/dev/null 
> (cassandra-2.1)mshuler@hana:~/git/cassandra$ ./tools/bin/cassandra-stress 
> write n=100
> Created keyspaces. Sleeping 1s for propagation.
> Warming up WRITE with 5 iterations...
> Connected to cluster: Test Cluster
> Datatacenter: datacenter1; Host: localhost/127.0.0.1; Rack: rack1
> Sleeping 2s...
> Running WRITE with 50 threads  for 100 iterations
> ops   ,op/s,   key/s,mean, med, .95, .99,.999,
>  max,   time,   stderr
> 26836 ,   26830,   26830, 2.0, 1.1, 4.0,20.8,   131.4,   
> 207.4,1.0,  0.0
> 64002 ,   36236,   36236, 1.4, 0.8, 4.2,13.8,41.3,   
> 234.8,2.0,  0.0
> 105604,   38188,   38188, 1.3, 0.8, 3.2,10.6,78.4,
> 93.7,3.1,  0.10546
> 156179,   36750,   36750, 1.4, 0.9, 2.9, 8.8,   117.0,   
> 139.8,4.5,  0.08482
> 202092,   40487,   40487, 1.2, 0.9, 2.9, 7.3,45.6,   
> 122.5,5.6,  0.07231
> 246947,   40583,   40583, 1.2, 0.8, 3.0, 7.6,98.2,   
> 152.1,6.7,  0.07056
> 290186,   39867,   39867, 1.3, 0.8, 2.6, 8.9,   113.3,   
> 126.4,7.8,  0.06391
> 331609,   40155,   40155, 1.2, 0.8, 3.1, 8.7,99.1,   
> 124.9,8.8,  0.05731
> 371813,   38742,   38742, 1.3, 0.8, 3.1, 9.2,   117.2,   
> 123.9,9.9,  0.05153
> 416853,   40024,   40024, 1.2, 0.8, 3.2, 8.1,70.4,   
> 119.8,   11.0,  0.04634
> 458389,   39045,   39045, 1.3, 0.8, 3.2, 9.1,   106.4,   
> 135.9,   12.1,  0.04236
> 511323,   36513,   36513, 1.4, 0.8, 3.3, 9.2,   120.2,   
> 161.0,   13.5,  0.03883
> 549872,   34296,   34296, 1.5, 0.9, 3.4,11.5,   106.7,   
> 132.7,   14.6,  0.03678
> 589405,   34535,   34535, 1.4, 0.9, 2.9,10.6,   106.2,   
> 147.9,   15.8,  0.03607
> 633225,   39472,   39472, 1.3, 0.8, 3.0, 7.6,   106.3,   
> 125.1,   16.9,  0.03374
> 672751,   38251,   38251, 1.3, 0.8, 3.0, 8.0,94.7,   
> 157.5,   17.9,  0.03193
> 714762,   38047,   38047, 1.3, 0.8, 3.0, 9.3,   102.6,   
> 167.8,   19.0,  0.03001
> 756629,   38080,   38080, 1.3, 0.8, 3.2, 8.8,   101.7,   
> 117.4,   20.1,  0.02847
> 802981,   38955,   38955, 1.3, 0.8, 3.0, 9.1,   105.2,   
> 164.6,   21.3,  0.02708
> 847262,   38817,   38817, 1.3, 0.7, 3.2, 9.8,   112.1,   
> 137.4,   22.5,  0.02581
> 887639,   38403,   38403, 1.3, 0.8, 2.9,10.0,99.1,   
> 147.8,   23.5,  0.02470
> 929362,   35056,   35056, 1.4, 0.8, 3.3,11.5,   111.8,   
> 149.3,   24.7,  0.02360
> 980996,   38247,   38247, 1.3, 0.8, 3.5, 8.3,78.8,   
> 129.0,   26.1,  0.02338
> 100   ,   39379,   39379, 1.2, 0.9, 3.1, 9.0,29.4,
> 83.8,   26.5,  0.02238
> Results:
> real op rate  : 37673
> adjusted op rate stderr   : 0
> key rate  : 37673
> latency mean  : 1.3
> latency median: 0.8
> latency 95th percentile   : 3.2
> latency 99th percentile   : 10.4
> latency 99.9th percentile : 92.1
> latency max   : 234.8
> Total operation time  : 00:00:26
> END
> (cassandra-2.1)mshuler@hana:~/git/cassandra$ ./bin/nodetool compact Keyspace1
> (cassandra-2.1)mshuler@hana:~/git/cassandra$ ./bin/sstablesplit 
> /var/lib/cassandra/data/Keyspace1/Standard1-*/Keyspace1-Standard1-ka-2-Data.db
> Exception in thread "main" java.lang.AssertionError
> at 
> org.apache.cassandra.db.Keyspace.openWithoutSSTables(Keyspace.java:104)
> at 
> org.apache.cassandra.tools.StandaloneSplitter.main(StandaloneSplitter.j

[jira] [Reopened] (CASSANDRA-6916) Preemptive opening of compaction result

2014-04-24 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict reopened CASSANDRA-6916:
-


Missed an if (offline) check in CompactionTask needed to ensure offline split 
works. Not totally sure why it didn't break before this patch, but either way 
it was an oversight. Attaching a fix that also adds class-level javadoc to 
SSTableRewriter.

> Preemptive opening of compaction result
> ---
>
> Key: CASSANDRA-6916
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6916
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Benedict
>Assignee: Benedict
>Priority: Minor
>  Labels: performance
> Fix For: 2.1 beta2
>
> Attachments: 6916-stock2_1.mixed.cache_tweaks.tar.gz, 
> 6916-stock2_1.mixed.logs.tar.gz, 6916v3-preempive-open-compact.logs.gz, 
> 6916v3-preempive-open-compact.mixed.2.logs.tar.gz, 
> 6916v3-premptive-open-compact.mixed.cache_tweaks.2.tar.gz
>
>
> Related to CASSANDRA-6812, but a little simpler: when compacting, we mess 
> quite badly with the page cache. One thing we can do to mitigate this problem 
> is to use the sstable we're writing before we've finished writing it, and to 
> drop the regions from the old sstables from the page cache as soon as the new 
> sstables have them (even if they're only written to the page cache). This 
> should minimise any page cache churn, as the old sstables must be larger than 
> the new sstable, and since both will be in memory, dropping the old sstables 
> is at least as good as dropping the new.
> The approach is quite straight-forward. Every X MB written:
> # grab flushed length of index file;
> # grab second to last index summary record, after excluding those that point 
> to positions after the flushed length;
> # open index file, and check that our last record doesn't occur outside of 
> the flushed length of the data file (pretty unlikely)
> # Open the sstable with the calculated upper bound
> Some complications:
> # must keep running copy of compression metadata for reopening with
> # we need to be able to replace an sstable with itself but a different lower 
> bound
> # we need to drop the old page cache only when readers have finished
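The four numbered steps above (grab the flushed length, take the second-to-last index summary entry before it, sanity-check it, and open with that bound) can be sketched roughly as follows. This is an illustrative simplification, not the actual SSTableWriter early-open code; the class and method names are assumptions:

```java
import java.util.ArrayList;
import java.util.List;

public class EarlyOpenBound {
    // Given index-summary entries (positions into the data file) and the
    // flushed length of the data file, pick a safe upper bound for opening
    // the partially written sstable. Hypothetical stand-in for the real logic.
    public static long safeOpenBound(long[] summaryPositions, long flushedLength)
    {
        // step 2: exclude entries pointing past the flushed length
        List<Long> valid = new ArrayList<>();
        for (long pos : summaryPositions)
            if (pos <= flushedLength)
                valid.add(pos);
        if (valid.size() < 2)
            return -1; // not enough flushed data to open early yet
        // take the second-to-last valid entry: the last one may describe a
        // region that is still being written (step 3's check in the real code)
        return valid.get(valid.size() - 2);
    }

    public static void main(String[] args)
    {
        long[] positions = { 0, 4096, 8192, 12288, 16384 };
        // entries at 16384 are past the flushed length and get excluded
        System.out.println(safeOpenBound(positions, 13000)); // -> 8192
    }
}
```

The sstable would then be opened (step 4) with the returned position as its upper bound, and the process repeats every X MB written.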



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Reopened] (CASSANDRA-6987) sstablesplit fails in 2.1

2014-04-24 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler reopened CASSANDRA-6987:
---

Tester: Michael Shuler

Going through my repro steps to verify, nodetool hangs with an exception and has 
to be interrupted. I do see the split tables 3,4,5 in the datadir.

{noformat}
(cassandra-2.1)mshuler@hana:~/git/cassandra$ sstablesplit 
/var/lib/cassandra/data/Keyspace1/Standard1-*/Keyspace1-Standard1-ka-2-Data.db
Pre-split sstables snapshotted into snapshot pre-split-1398358335853
Exception in thread "main" java.lang.AssertionError: Incoherent new size -1 
replacing 
[SSTableReader(path='/var/lib/cassandra/data/Keyspace1/Standard1-93bf9810cbd011e3b2f375998baadb41/Keyspace1-Standard1-ka-2-Data.db')]
 by [] in View(pending_count=0, sstables=[], 
compacting=[SSTableReader(path='/var/lib/cassandra/data/Keyspace1/Standard1-93bf9810cbd011e3b2f375998baadb41/Keyspace1-Standard1-ka-2-Data.db')])
at 
org.apache.cassandra.db.DataTracker$View.newSSTables(DataTracker.java:678)
at 
org.apache.cassandra.db.DataTracker$View.replace(DataTracker.java:650)
at org.apache.cassandra.db.DataTracker.replace(DataTracker.java:369)
at 
org.apache.cassandra.db.DataTracker.markCompactedSSTablesReplaced(DataTracker.java:253)
at 
org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:221)
at 
org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at 
org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:74)
at 
org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:64)
at 
org.apache.cassandra.db.compaction.SSTableSplitter.split(SSTableSplitter.java:38)
at 
org.apache.cassandra.tools.StandaloneSplitter.main(StandaloneSplitter.java:142)
^C
{noformat}

> sstablesplit fails in 2.1
> -
>
> Key: CASSANDRA-6987
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6987
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Debian Testing/Jessie
> Oracle JDK 1.7.0_51
> c*-2.1 branch, commit 5ebadc11e36749e6479f9aba19406db3aacdaf41
>Reporter: Michael Shuler
>Assignee: Benedict
> Fix For: 2.1 beta2
>
> Attachments: 6987.txt
>
>
> sstablesplit dtest began failing in 2.1 at 
> http://cassci.datastax.com/job/cassandra-2.1_novnode_dtest/95/ triggered by 
> http://cassci.datastax.com/job/cassandra-2.1/186/
> repro:
> {noformat}
> (cassandra-2.1)mshuler@hana:~/git/cassandra$ ./bin/cassandra >/dev/null 
> (cassandra-2.1)mshuler@hana:~/git/cassandra$ ./tools/bin/cassandra-stress 
> write n=100
> Created keyspaces. Sleeping 1s for propagation.
> Warming up WRITE with 5 iterations...
> Connected to cluster: Test Cluster
> Datatacenter: datacenter1; Host: localhost/127.0.0.1; Rack: rack1
> Sleeping 2s...
> Running WRITE with 50 threads  for 100 iterations
> ops   ,op/s,   key/s,mean, med, .95, .99,.999,
>  max,   time,   stderr
> 26836 ,   26830,   26830, 2.0, 1.1, 4.0,20.8,   131.4,   
> 207.4,1.0,  0.0
> 64002 ,   36236,   36236, 1.4, 0.8, 4.2,13.8,41.3,   
> 234.8,2.0,  0.0
> 105604,   38188,   38188, 1.3, 0.8, 3.2,10.6,78.4,
> 93.7,3.1,  0.10546
> 156179,   36750,   36750, 1.4, 0.9, 2.9, 8.8,   117.0,   
> 139.8,4.5,  0.08482
> 202092,   40487,   40487, 1.2, 0.9, 2.9, 7.3,45.6,   
> 122.5,5.6,  0.07231
> 246947,   40583,   40583, 1.2, 0.8, 3.0, 7.6,98.2,   
> 152.1,6.7,  0.07056
> 290186,   39867,   39867, 1.3, 0.8, 2.6, 8.9,   113.3,   
> 126.4,7.8,  0.06391
> 331609,   40155,   40155, 1.2, 0.8, 3.1, 8.7,99.1,   
> 124.9,8.8,  0.05731
> 371813,   38742,   38742, 1.3, 0.8, 3.1, 9.2,   117.2,   
> 123.9,9.9,  0.05153
> 416853,   40024,   40024, 1.2, 0.8, 3.2, 8.1,70.4,   
> 119.8,   11.0,  0.04634
> 458389,   39045,   39045, 1.3, 0.8, 3.2, 9.1,   106.4,   
> 135.9,   12.1,  0.04236
> 511323,   36513,   36513, 1.4, 0.8, 3.3, 9.2,   120.2,   
> 161.0,   13.5,  0.03883
> 549872,   34296,   34296, 1.5, 0.9, 3.4,11.5,   106.7,   
> 132.7,   14.6,  0.03678
> 589405,   34535,   34535, 1.4, 0.9, 2.9,10.6,   106.2,   
> 147.9,   15.8,  0.03607
> 633225,   39472,   39472, 1.3, 0.8, 3.0, 7.6,   106.3,   
> 125.1,   16.9,  0.03374
> 672751,   38251,   38251, 1.3, 0.8, 3.0, 8.0

[jira] [Comment Edited] (CASSANDRA-6987) sstablesplit fails in 2.1

2014-04-24 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13979967#comment-13979967
 ] 

Michael Shuler edited comment on CASSANDRA-6987 at 4/24/14 5:11 PM:


Going through my repro steps to verify, sstablesplit hangs with an exception and 
has to be interrupted. I do see the split tables 3,4,5 in the datadir.

{noformat}
(cassandra-2.1)mshuler@hana:~/git/cassandra$ sstablesplit 
/var/lib/cassandra/data/Keyspace1/Standard1-*/Keyspace1-Standard1-ka-2-Data.db
Pre-split sstables snapshotted into snapshot pre-split-1398358335853
Exception in thread "main" java.lang.AssertionError: Incoherent new size -1 
replacing 
[SSTableReader(path='/var/lib/cassandra/data/Keyspace1/Standard1-93bf9810cbd011e3b2f375998baadb41/Keyspace1-Standard1-ka-2-Data.db')]
 by [] in View(pending_count=0, sstables=[], 
compacting=[SSTableReader(path='/var/lib/cassandra/data/Keyspace1/Standard1-93bf9810cbd011e3b2f375998baadb41/Keyspace1-Standard1-ka-2-Data.db')])
at 
org.apache.cassandra.db.DataTracker$View.newSSTables(DataTracker.java:678)
at 
org.apache.cassandra.db.DataTracker$View.replace(DataTracker.java:650)
at org.apache.cassandra.db.DataTracker.replace(DataTracker.java:369)
at 
org.apache.cassandra.db.DataTracker.markCompactedSSTablesReplaced(DataTracker.java:253)
at 
org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:221)
at 
org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at 
org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:74)
at 
org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:64)
at 
org.apache.cassandra.db.compaction.SSTableSplitter.split(SSTableSplitter.java:38)
at 
org.apache.cassandra.tools.StandaloneSplitter.main(StandaloneSplitter.java:142)
^C
{noformat}


was (Author: mshuler):
Going through my repo steps to verify, nodetool hangs with an exception and has 
to be interrupted. I do see the split tables 3,4,5 in the datadir.

{noformat}
(cassandra-2.1)mshuler@hana:~/git/cassandra$ sstablesplit 
/var/lib/cassandra/data/Keyspace1/Standard1-*/Keyspace1-Standard1-ka-2-Data.db
Pre-split sstables snapshotted into snapshot pre-split-1398358335853
Exception in thread "main" java.lang.AssertionError: Incoherent new size -1 
replacing 
[SSTableReader(path='/var/lib/cassandra/data/Keyspace1/Standard1-93bf9810cbd011e3b2f375998baadb41/Keyspace1-Standard1-ka-2-Data.db')]
 by [] in View(pending_count=0, sstables=[], 
compacting=[SSTableReader(path='/var/lib/cassandra/data/Keyspace1/Standard1-93bf9810cbd011e3b2f375998baadb41/Keyspace1-Standard1-ka-2-Data.db')])
at 
org.apache.cassandra.db.DataTracker$View.newSSTables(DataTracker.java:678)
at 
org.apache.cassandra.db.DataTracker$View.replace(DataTracker.java:650)
at org.apache.cassandra.db.DataTracker.replace(DataTracker.java:369)
at 
org.apache.cassandra.db.DataTracker.markCompactedSSTablesReplaced(DataTracker.java:253)
at 
org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:221)
at 
org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at 
org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:74)
at 
org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:64)
at 
org.apache.cassandra.db.compaction.SSTableSplitter.split(SSTableSplitter.java:38)
at 
org.apache.cassandra.tools.StandaloneSplitter.main(StandaloneSplitter.java:142)
^C
{noformat}

> sstablesplit fails in 2.1
> -
>
> Key: CASSANDRA-6987
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6987
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Debian Testing/Jessie
> Oracle JDK 1.7.0_51
> c*-2.1 branch, commit 5ebadc11e36749e6479f9aba19406db3aacdaf41
>Reporter: Michael Shuler
>Assignee: Benedict
> Fix For: 2.1 beta2
>
> Attachments: 6987.txt
>
>
> sstablesplit dtest began failing in 2.1 at 
> http://cassci.datastax.com/job/cassandra-2.1_novnode_dtest/95/ triggered by 
> http://cassci.datastax.com/job/cassandra-2.1/186/
> repro:
> {noformat}
> (cassandra-2.1)mshuler@hana:~/git/cassandra$ ./bin/cassandra >/dev/null 
> (cassandra-2.1)mshuler@hana:~/git/cassandra$ ./tools/bin/cassandra-stress 
> write n=100
> Created keyspaces. Sleeping 1s for propagation.
> Warming up WRITE with 5 iterations...
> Connected to cluster

[jira] [Commented] (CASSANDRA-5547) Multi-threaded scrub

2014-04-24 Thread Russell Alexander Spitzer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13979954#comment-13979954
 ] 

Russell Alexander Spitzer commented on CASSANDRA-5547:
--

Looks good to me +1, I hope this will end up being useful for folks. 

> Multi-threaded scrub
> 
>
> Key: CASSANDRA-5547
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5547
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Benjamin Coverston
>Assignee: Russell Alexander Spitzer
>  Labels: lhf
> Fix For: 2.0.8
>
> Attachments: 0001-5547.patch, cassandra-2.0-5547.txt
>
>
> Scrub (especially offline) could benefit from being multi-threaded, 
> especially in the case where the SSTables are compressed.
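The change being reviewed here is essentially fanning out per-sstable scrub work to a thread pool. A minimal sketch of that shape, assuming one independent task per sstable (the stand-in string operation below replaces the real Scrubber work; names are illustrative, not the actual patch's API):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelScrub {
    // Submit one scrub task per sstable to a fixed-size pool instead of
    // scrubbing sequentially; useful when decompression makes each task
    // CPU-bound, as the ticket notes for compressed sstables.
    public static List<String> scrubAll(List<String> sstables, int threads)
    {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try
        {
            List<Future<String>> futures = new ArrayList<>();
            for (String sstable : sstables)
                futures.add(pool.submit(() -> "scrubbed " + sstable)); // real task: decompress + validate rows
            List<String> results = new ArrayList<>();
            for (Future<String> f : futures)
                results.add(f.get()); // rethrows any per-sstable failure
            return results;
        }
        catch (InterruptedException | ExecutionException e)
        {
            throw new RuntimeException(e);
        }
        finally
        {
            pool.shutdown();
        }
    }

    public static void main(String[] args)
    {
        System.out.println(scrubAll(Arrays.asList("ka-1-Data.db", "ka-2-Data.db"), 2));
        // [scrubbed ka-1-Data.db, scrubbed ka-2-Data.db]
    }
}
```

Collecting the futures in submission order keeps result reporting deterministic even though the tasks run concurrently.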





[jira] [Commented] (CASSANDRA-6831) Updates to COMPACT STORAGE tables via cli drop CQL information

2014-04-24 Thread Mikhail Stepura (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13979947#comment-13979947
 ] 

Mikhail Stepura commented on CASSANDRA-6831:


I'll take a look

> Updates to COMPACT STORAGE tables via cli drop CQL information
> --
>
> Key: CASSANDRA-6831
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6831
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Russell Bradberry
>Assignee: Mikhail Stepura
>Priority: Minor
> Fix For: 1.2.17, 2.0.8, 2.1 beta2
>
> Attachments: cassandra-1.2-6831.patch, cassandra-2.0-6831.patch
>
>
> If a COMPACT STORAGE table is altered using the CLI all information about the 
> column names reverts to the initial "key, column1, column2" namings.  
> Additionally, the changes in the column names will not take effect until the 
> Cassandra service is restarted.  This means that clients using CQL will 
> continue to work properly until the service is restarted, at which time they 
> will start getting errors about non-existent columns in the table.
> When attempting to rename the columns back using ALTER TABLE an error stating 
> the column already exists will be raised.  The only way to get it back is to 
> ALTER TABLE and change the comment or something, which will bring back all 
> the original column names.
> This seems to be related to CASSANDRA-6676 and CASSANDRA-6370
> In cqlsh
> {code}
> Connected to cluster1 at 127.0.0.3:9160.
> [cqlsh 3.1.8 | Cassandra 1.2.15-SNAPSHOT | CQL spec 3.0.0 | Thrift protocol 
> 19.36.2]
> Use HELP for help.
> cqlsh> CREATE KEYSPACE test WITH REPLICATION = { 'class' : 'SimpleStrategy', 
> 'replication_factor' : 3 };
> cqlsh> USE test;
> cqlsh:test> CREATE TABLE foo (bar text, baz text, qux text, PRIMARY KEY(bar, 
> baz) ) WITH COMPACT STORAGE;
> cqlsh:test> describe table foo;
> CREATE TABLE foo (
>   bar text,
>   baz text,
>   qux text,
>   PRIMARY KEY (bar, baz)
> ) WITH COMPACT STORAGE AND
>   bloom_filter_fp_chance=0.01 AND
>   caching='KEYS_ONLY' AND
>   comment='' AND
>   dclocal_read_repair_chance=0.00 AND
>   gc_grace_seconds=864000 AND
>   read_repair_chance=0.10 AND
>   replicate_on_write='true' AND
>   populate_io_cache_on_flush='false' AND
>   compaction={'class': 'SizeTieredCompactionStrategy'} AND
>   compression={'sstable_compression': 'SnappyCompressor'};
> {code}
> Now in cli:
> {code}
>   Connected to: "cluster1" on 127.0.0.3/9160
> Welcome to Cassandra CLI version 1.2.15-SNAPSHOT
> Type 'help;' or '?' for help.
> Type 'quit;' or 'exit;' to quit.
> [default@unknown] use test;
> Authenticated to keyspace: test
> [default@test] UPDATE COLUMN FAMILY foo WITH comment='hey this is a comment';
> 3bf5fa49-5d03-34f0-b46c-6745f7740925
> {code}
> Now back in cqlsh:
> {code}
> cqlsh:test> describe table foo;
> CREATE TABLE foo (
>   bar text,
>   column1 text,
>   value text,
>   PRIMARY KEY (bar, column1)
> ) WITH COMPACT STORAGE AND
>   bloom_filter_fp_chance=0.01 AND
>   caching='KEYS_ONLY' AND
>   comment='hey this is a comment' AND
>   dclocal_read_repair_chance=0.00 AND
>   gc_grace_seconds=864000 AND
>   read_repair_chance=0.10 AND
>   replicate_on_write='true' AND
>   populate_io_cache_on_flush='false' AND
>   compaction={'class': 'SizeTieredCompactionStrategy'} AND
>   compression={'sstable_compression': 'SnappyCompressor'};
> cqlsh:test> ALTER TABLE foo WITH comment='this is a new comment';
> cqlsh:test> describe table foo;
> CREATE TABLE foo (
>   bar text,
>   baz text,
>   qux text,
>   PRIMARY KEY (bar, baz)
> ) WITH COMPACT STORAGE AND
>   bloom_filter_fp_chance=0.01 AND
>   caching='KEYS_ONLY' AND
>   comment='this is a new comment' AND
>   dclocal_read_repair_chance=0.00 AND
>   gc_grace_seconds=864000 AND
>   read_repair_chance=0.10 AND
>   replicate_on_write='true' AND
>   populate_io_cache_on_flush='false' AND
>   compaction={'class': 'SizeTieredCompactionStrategy'} AND
>   compression={'sstable_compression': 'SnappyCompressor'};
> {code}





[jira] [Updated] (CASSANDRA-6551) Rack-aware batchlog replication

2014-04-24 Thread Mikhail Stepura (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Stepura updated CASSANDRA-6551:
---

Fix Version/s: (was: 2.0.8)

> Rack-aware batchlog replication
> ---
>
> Key: CASSANDRA-6551
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6551
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Rick Branson
>Assignee: Mikhail Stepura
>Priority: Minor
> Fix For: 2.1 beta2
>
> Attachments: cassandra-2.0-6551-2.patch
>
>
> Right now the batchlog replication code just randomly picks 2 other nodes in 
> the same DC, regardless of rack. Ideally we'd pick 2 replicas in other racks 
> to achieve higher fault tolerance.
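The rack-aware selection the ticket asks for — prefer endpoints in other racks, and only fall back to same-rack nodes when there aren't enough — can be sketched as below. This is an illustrative model, not the actual BatchlogManager node-picking code; the class and method names are assumptions:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Random;

public class BatchlogEndpointPicker {
    // Pick up to two batchlog replicas, preferring endpoints whose rack
    // differs from the coordinator's localRack.
    public static List<String> pick(String localRack, Map<String, String> endpointRacks, Random random)
    {
        List<String> otherRacks = new ArrayList<>();
        List<String> sameRack = new ArrayList<>();
        for (Map.Entry<String, String> e : endpointRacks.entrySet())
        {
            if (e.getValue().equals(localRack))
                sameRack.add(e.getKey());
            else
                otherRacks.add(e.getKey());
        }
        // sort before shuffling so the choice is deterministic for a given seed
        Collections.sort(otherRacks);
        Collections.sort(sameRack);
        Collections.shuffle(otherRacks, random);
        Collections.shuffle(sameRack, random);

        // other-rack candidates first, same-rack ones only as a fallback
        List<String> candidates = new ArrayList<>(otherRacks);
        candidates.addAll(sameRack);
        return new ArrayList<>(candidates.subList(0, Math.min(2, candidates.size())));
    }

    public static void main(String[] args)
    {
        Map<String, String> racks = Map.of("h1", "rack1", "h2", "rack2", "h3", "rack2", "h4", "rack3");
        // coordinator is in rack1, so both picks land outside rack1
        System.out.println(pick("rack1", racks, new Random(42)));
    }
}
```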





[jira] [Comment Edited] (CASSANDRA-5323) Revisit disabled dtests

2014-04-24 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13979943#comment-13979943
 ] 

Michael Shuler edited comment on CASSANDRA-5323 at 4/24/14 4:44 PM:


thrift_hsha_test.py is now back in the testing cycles for all branches.  
Considering that we have individual tickets open for the last two that are 
being excluded, I'm going to close this ticket.

sstablesplit_test
https://issues.apache.org/jira/browse/CASSANDRA-6987  committed as fixed - need 
to test that one out and include

counter_tests.py:TestCounters.upgrade_test
https://issues.apache.org/jira/browse/CASSANDRA-7036


was (Author: mshuler):
thrift_hsha_test.py is now back in the testing cycles for all branches.  
Considering that we have individual tickets open for the last two that are 
being excluded, I'm going to close this ticket.

sstablesplit_test
https://issues.apache.org/jira/browse/CASSANDRA-7036  committed as fixed - need 
to test that one out and include

counter_tests.py:TestCounters.upgrade_test
https://issues.apache.org/jira/browse/CASSANDRA-7036

> Revisit disabled dtests
> ---
>
> Key: CASSANDRA-5323
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5323
> Project: Cassandra
>  Issue Type: Test
>Reporter: Ryan McGuire
>Assignee: Michael Shuler
>
> The following dtests are disabled in buildbot, if they can be re-enabled 
> great, if they can't can they be fixed? 
> upgrade|decommission|sstable_gen|global_row|putget_2dc|cql3_insert





[jira] [Resolved] (CASSANDRA-5323) Revisit disabled dtests

2014-04-24 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler resolved CASSANDRA-5323.
---

Resolution: Done

thrift_hsha_test.py is now back in the testing cycles for all branches.  
Considering that we have individual tickets open for the last two that are 
being excluded, I'm going to close this ticket.

sstablesplit_test
https://issues.apache.org/jira/browse/CASSANDRA-6987  committed as fixed - need 
to test that one out and include

counter_tests.py:TestCounters.upgrade_test
https://issues.apache.org/jira/browse/CASSANDRA-7036

> Revisit disabled dtests
> ---
>
> Key: CASSANDRA-5323
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5323
> Project: Cassandra
>  Issue Type: Test
>Reporter: Ryan McGuire
>Assignee: Michael Shuler
>
> The following dtests are disabled in buildbot, if they can be re-enabled 
> great, if they can't can they be fixed? 
> upgrade|decommission|sstable_gen|global_row|putget_2dc|cql3_insert





[jira] [Updated] (CASSANDRA-6551) Rack-aware batchlog replication

2014-04-24 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-6551:
-

Reviewer: Rick Branson  (was: Aleksey Yeschenko)

> Rack-aware batchlog replication
> ---
>
> Key: CASSANDRA-6551
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6551
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Rick Branson
>Assignee: Mikhail Stepura
>Priority: Minor
> Fix For: 2.0.8, 2.1 beta2
>
> Attachments: cassandra-2.0-6551-2.patch
>
>
> Right now the batchlog replication code just randomly picks 2 other nodes in 
> the same DC, regardless of rack. Ideally we'd pick 2 replicas in other racks 
> to achieve higher fault tolerance.





[jira] [Resolved] (CASSANDRA-6588) Add a 'NO EMPTY RESULTS' filter to SELECT

2014-04-24 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-6588.
-

   Resolution: Duplicate
Fix Version/s: (was: 2.1 beta2)

No, it wouldn't require any new syntax. Anyway, since that's somewhat 
different from what has been discussed here, I created CASSANDRA-7085 to tackle 
that. New internal filters do mean we'll have to wait for 3.0 at this point, 
however.

> Add a 'NO EMPTY RESULTS' filter to SELECT
> -
>
> Key: CASSANDRA-6588
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6588
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sylvain Lebresne
>Priority: Minor
>
> It is the semantic of CQL that a (CQL) row exists as long as it has one 
> non-null column (including the PK columns, which, given that no PK columns 
> can be null, means that it's enough to have the PK set for a row to exist). 
> This does mean that the result of
> {noformat}
> CREATE TABLE test (k int PRIMARY KEY, v1 int, v2 int);
> INSERT INTO test(k, v1) VALUES (0, 4);
> SELECT v2 FROM test;
> {noformat}
> must be (and is)
> {noformat}
>  v2
> --
>  null
> {noformat}
> That fact does mean, however, that when we only select a few columns of a row, 
> we still need to identify rows that exist but have no values for the selected 
> columns. Long story short, given how the storage engine works, this means we 
> need to query full (CQL) rows even when only some of the columns are selected 
> because that's the only way to distinguish between "the row exists but have 
> no value for the selected columns" and "the row doesn't exist". I'll note in 
> particular that, due to CASSANDRA-5762, we can't unfortunately rely on the 
> row marker to optimize that out.
> Now, when you select only a subset of the columns of a row, there are many 
> cases where you don't care about rows that exist but have no value for the 
> columns you requested and are happy to filter those out. So, for those cases, 
> we could provide a new SELECT filter. Outside the potential convenience (not 
> having to filter empty results client side), one interesting part is that 
> when this filter is provided, we could optimize a bit by only querying the 
> columns selected, since we wouldn't need to return rows that exist but have 
> no values for the selected columns.
> For the exact syntax, there is probably a bunch of options. For instance:
> * {{SELECT NON EMPTY(v2, v3) FROM test}}: the vague rationale for putting it 
> in the SELECT part is that such a filter is kind of in the spirit of DISTINCT.  
> Possibly a bit ugly outside of that.
> * {{SELECT v2, v3 FROM test NO EMPTY RESULTS}} or {{SELECT v2, v3 FROM test 
> NO EMPTY ROWS}} or {{SELECT v2, v3 FROM test NO EMPTY}}: the last one is 
> shorter but maybe a bit less explicit. As for {{RESULTS}} versus {{ROWS}}, 
> the only small objection to {{NO EMPTY ROWS}} could be that it might suggest it 
> is filtering non existing rows (I mean, the fact we never ever return non 
> existing rows should hint that it's not what it does but well...) while we're 
> just filtering empty "resultSet rows".
> Of course, if there is a pre-existing SQL syntax for that, it's even better, 
> though a very quick search didn't turn anything. Other suggestions welcome 
> too.





[jira] [Commented] (CASSANDRA-6551) Rack-aware batchlog replication

2014-04-24 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13979924#comment-13979924
 ] 

Aleksey Yeschenko commented on CASSANDRA-6551:
--

Actually, would rather have [~rbranson] review it, since it's his pony, and 
we've written the previous node-picking implementation together.

Also, should probably target 2.1, at this point (the patch should still apply).

[~rbranson] assigning you to review. Assign back to me if you don't have time 
to review or something.

> Rack-aware batchlog replication
> ---
>
> Key: CASSANDRA-6551
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6551
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Rick Branson
>Assignee: Mikhail Stepura
>Priority: Minor
> Fix For: 2.0.8, 2.1 beta2
>
> Attachments: cassandra-2.0-6551-2.patch
>
>
> Right now the batchlog replication code just randomly picks 2 other nodes in 
> the same DC, regardless of rack. Ideally we'd pick 2 replicas in other racks 
> to achieve higher fault tolerance.





[jira] [Created] (CASSANDRA-7085) Specialized query filters for CQL3

2014-04-24 Thread Sylvain Lebresne (JIRA)
Sylvain Lebresne created CASSANDRA-7085:
---

 Summary: Specialized query filters for CQL3
 Key: CASSANDRA-7085
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7085
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
 Fix For: 3.0


The semantic of CQL makes it so that the current {{NamesQueryFilter}} and 
{{SliceQueryFilter}} are not always as efficient as they could be. Namely, when a 
{{SELECT}} only selects a handful of columns, we still have to query 
all the columns of the selected rows to distinguish between 'live row but with no 
data for the queried columns' and 'no row' (see CASSANDRA-6588 for more 
details).

We can solve that however by adding new filters (name and slice) specialized 
for CQL. The new name filter would be a list of row prefix + a list of CQL 
column names (instead of one list of cell names). The slice filter would still 
take a ColumnSlice[] but would add the list of column names we care about for 
each row.

The new sstable readers that go with those filters would use the list of 
column names to filter out all the cells we don't care about, so we don't have 
to ship those back to the coordinator to skip them there, yet would know to 
still return the row marker when necessary.
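The core of that read-path behavior — drop cells for unselected CQL columns but always keep the row marker, so a live row with no data for the queried columns is still distinguishable from a missing row — can be sketched like this. The model is deliberately simplified (cells as strings, the empty string standing in for the row marker) and does not reflect the real filter classes:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class CqlNameFilter {
    // Stand-in for a CQL row marker cell (empty cell name in the model).
    static final String ROW_MARKER = "";

    // Keep only cells whose CQL column name was selected, plus the row
    // marker, so the coordinator never has to fetch and skip unselected
    // cells just to learn the row exists.
    public static List<String> filter(List<String> cellNames, Set<String> selected)
    {
        List<String> kept = new ArrayList<>();
        for (String cell : cellNames)
            if (cell.equals(ROW_MARKER) || selected.contains(cell))
                kept.add(cell);
        return kept;
    }

    public static void main(String[] args)
    {
        List<String> rowCells = Arrays.asList("", "v1", "v2", "v3");
        // selecting only v2 still returns the marker, proving the row is live
        System.out.println(filter(rowCells, new HashSet<>(Arrays.asList("v2")))); // [, v2]
    }
}
```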





[jira] [Assigned] (CASSANDRA-7084) o.a.c.db.RecoveryManagerTest.testNothingToRecover Unit Test Flaps in 2.0

2014-04-24 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler reassigned CASSANDRA-7084:
-

Assignee: Michael Shuler

> o.a.c.db.RecoveryManagerTest.testNothingToRecover Unit Test Flaps in 2.0
> 
>
> Key: CASSANDRA-7084
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7084
> Project: Cassandra
>  Issue Type: Test
>  Components: Tests
>Reporter: Michael Shuler
>Assignee: Michael Shuler
>Priority: Minor
>
> Example:
>   http://cassci.datastax.com/job/cassandra-2.0_test/326/
> (this test appears to pass consistently in 1.2 and 2.1 with a quick glance - 
> will test out the other branches more thoroughly and bisect)
> {noformat}
> REGRESSION:  org.apache.cassandra.db.RecoveryManagerTest.testNothingToRecover
> Error Message:
> java.io.FileNotFoundException: 
> /var/lib/jenkins/jobs/cassandra-2.0_test/workspace/build/test/cassandra/commitlog/CommitLog-3-1398354429966.log
>  (No such file or directory)
> Stack Trace:
> java.lang.RuntimeException: java.io.FileNotFoundException: 
> /var/lib/jenkins/jobs/cassandra-2.0_test/workspace/build/test/cassandra/commitlog/CommitLog-3-1398354429966.log
>  (No such file or directory)
>   at 
> org.apache.cassandra.io.util.RandomAccessReader.open(RandomAccessReader.java:102)
>   at 
> org.apache.cassandra.io.util.RandomAccessReader.open(RandomAccessReader.java:90)
>   at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer.recover(CommitLogReplayer.java:186)
>   at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer.recover(CommitLogReplayer.java:95)
>   at 
> org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:151)
>   at 
> org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:131)
>   at 
> org.apache.cassandra.db.RecoveryManagerTest.testNothingToRecover(RecoveryManagerTest.java:42)
> Caused by: java.io.FileNotFoundException: 
> /var/lib/jenkins/jobs/cassandra-2.0_test/workspace/build/test/cassandra/commitlog/CommitLog-3-1398354429966.log
>  (No such file or directory)
>   at java.io.RandomAccessFile.open(Native Method)
>   at java.io.RandomAccessFile.<init>(RandomAccessFile.java:241)
>   at 
> org.apache.cassandra.io.util.RandomAccessReader.<init>(RandomAccessReader.java:58)
>   at 
> org.apache.cassandra.io.util.RandomAccessReader.open(RandomAccessReader.java:98)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6449) Tools error out if they can't make ~/.cassandra

2014-04-24 Thread Sucwinder Bassi (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13979895#comment-13979895
 ] 

Sucwinder Bassi commented on CASSANDRA-6449:


Another user has run into this issue and would like this functionality. I can 
reproduce the problem simply by restricting write access to the home directory 
of the user running nodetool.

> Tools error out if they can't make ~/.cassandra
> ---
>
> Key: CASSANDRA-6449
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6449
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Jeremiah Jordan
>  Labels: lhf
>
> We shouldn't error out if we can't make the .cassandra folder for the new 
> history stuff.
> {noformat}
> Exception in thread "main" FSWriteError in 
> /usr/share/opscenter-agent/.cassandra
>   at 
> org.apache.cassandra.io.util.FileUtils.createDirectory(FileUtils.java:261)
>   at 
> org.apache.cassandra.utils.FBUtilities.getToolsOutputDirectory(FBUtilities.java:627)
>   at org.apache.cassandra.tools.NodeCmd.printHistory(NodeCmd.java:1403)
>   at org.apache.cassandra.tools.NodeCmd.main(NodeCmd.java:1122)
> Caused by: java.io.IOException: Failed to mkdirs 
> /usr/share/opscenter-agent/.cassandra
>   ... 4 more
> {noformat}
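A tolerant version of the directory creation would treat a failed mkdir as "no history available" instead of throwing, as the report suggests. A minimal sketch with a hypothetical helper name, not the eventual fix:

```java
import java.io.File;

public final class ToolsHistoryDir
{
    /**
     * Returns the ~/.cassandra history directory, or null when it cannot be
     * created (e.g. a read-only home directory). Callers should simply skip
     * reading and writing command history when null is returned, rather than
     * aborting the whole tool.
     */
    public static File getOrNull(File home)
    {
        File dir = new File(home, ".cassandra");
        if (dir.isDirectory() || dir.mkdirs())
            return dir;
        return null; // degrade gracefully instead of raising FSWriteError
    }
}
```

The key design choice is that command history is a convenience feature, so its storage failing should never be fatal to the tool itself.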





[jira] [Created] (CASSANDRA-7084) o.a.c.db.RecoveryManagerTest.testNothingToRecover Unit Test Flaps in 2.0

2014-04-24 Thread Michael Shuler (JIRA)
Michael Shuler created CASSANDRA-7084:
-

 Summary: o.a.c.db.RecoveryManagerTest.testNothingToRecover Unit 
Test Flaps in 2.0
 Key: CASSANDRA-7084
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7084
 Project: Cassandra
  Issue Type: Test
  Components: Tests
Reporter: Michael Shuler
Priority: Minor


Example:
  http://cassci.datastax.com/job/cassandra-2.0_test/326/

(at a quick glance this test appears to pass consistently in 1.2 and 2.1; I 
will test the other branches more thoroughly and bisect)

{noformat}
REGRESSION:  org.apache.cassandra.db.RecoveryManagerTest.testNothingToRecover

Error Message:
java.io.FileNotFoundException: 
/var/lib/jenkins/jobs/cassandra-2.0_test/workspace/build/test/cassandra/commitlog/CommitLog-3-1398354429966.log
 (No such file or directory)

Stack Trace:
java.lang.RuntimeException: java.io.FileNotFoundException: 
/var/lib/jenkins/jobs/cassandra-2.0_test/workspace/build/test/cassandra/commitlog/CommitLog-3-1398354429966.log
 (No such file or directory)
at 
org.apache.cassandra.io.util.RandomAccessReader.open(RandomAccessReader.java:102)
at 
org.apache.cassandra.io.util.RandomAccessReader.open(RandomAccessReader.java:90)
at 
org.apache.cassandra.db.commitlog.CommitLogReplayer.recover(CommitLogReplayer.java:186)
at 
org.apache.cassandra.db.commitlog.CommitLogReplayer.recover(CommitLogReplayer.java:95)
at 
org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:151)
at 
org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:131)
at 
org.apache.cassandra.db.RecoveryManagerTest.testNothingToRecover(RecoveryManagerTest.java:42)
Caused by: java.io.FileNotFoundException: 
/var/lib/jenkins/jobs/cassandra-2.0_test/workspace/build/test/cassandra/commitlog/CommitLog-3-1398354429966.log
 (No such file or directory)
at java.io.RandomAccessFile.open(Native Method)
at java.io.RandomAccessFile.(RandomAccessFile.java:241)
at 
org.apache.cassandra.io.util.RandomAccessReader.(RandomAccessReader.java:58)
at 
org.apache.cassandra.io.util.RandomAccessReader.open(RandomAccessReader.java:98)
{noformat}





[jira] [Commented] (CASSANDRA-6916) Preemptive opening of compaction result

2014-04-24 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13979872#comment-13979872
 ] 

Aleksey Yeschenko commented on CASSANDRA-6916:
--

bq. if by wide you mean CQL composite keys

By wide we usually mean just partitions with a metric ton of cells, composite 
or not.

> Preemptive opening of compaction result
> ---
>
> Key: CASSANDRA-6916
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6916
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Benedict
>Assignee: Benedict
>Priority: Minor
>  Labels: performance
> Fix For: 2.1 beta2
>
> Attachments: 6916-stock2_1.mixed.cache_tweaks.tar.gz, 
> 6916-stock2_1.mixed.logs.tar.gz, 6916v3-preempive-open-compact.logs.gz, 
> 6916v3-preempive-open-compact.mixed.2.logs.tar.gz, 
> 6916v3-premptive-open-compact.mixed.cache_tweaks.2.tar.gz
>
>
> Related to CASSANDRA-6812, but a little simpler: when compacting, we mess 
> quite badly with the page cache. One thing we can do to mitigate this problem 
> is to use the sstable we're writing before we've finished writing it, and to 
> drop the regions from the old sstables from the page cache as soon as the new 
> sstables have them (even if they're only written to the page cache). This 
> should minimise any page cache churn, as the old sstables must be larger than 
> the new sstable, and since both will be in memory, dropping the old sstables 
> is at least as good as dropping the new.
> The approach is quite straightforward. Every X MB written:
> # grab flushed length of index file;
> # grab second to last index summary record, after excluding those that point 
> to positions after the flushed length;
> # open index file, and check that our last record doesn't occur outside of 
> the flushed length of the data file (pretty unlikely)
> # Open the sstable with the calculated upper bound
> Some complications:
> # must keep running copy of compression metadata for reopening with
> # we need to be able to replace an sstable with itself but a different lower 
> bound
> # we need to drop the old page cache only when readers have finished
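The periodic check in the numbered steps above can be sketched roughly as follows; every name and type here is a hypothetical stand-in for illustration, not code from the attached patch:

```java
// Illustrative sketch of the "every X MB written" early-open check.
// All names are hypothetical; the real change lives in the compaction writer.
final class EarlyOpenSketch
{
    static final long INTERVAL_BYTES = 50L * 1024 * 1024; // "X MB", arbitrary here
    private long lastCheckAt = 0;

    /**
     * Called as the new sstable grows. Returns the index position to use as a
     * conservative upper bound for opening the partially written sstable, or
     * null if it is too early (or no safe bound exists yet).
     *
     * @param dataFlushed      flushed length of the data file so far
     * @param indexFlushed     flushed length of the index file so far
     * @param summaryPositions index-summary record offsets, ascending
     */
    Long maybeUpperBound(long dataFlushed, long indexFlushed, long[] summaryPositions)
    {
        if (dataFlushed - lastCheckAt < INTERVAL_BYTES)
            return null;
        lastCheckAt = dataFlushed;

        // Step 2: take the second-to-last summary record, after excluding
        // records that point past the flushed length of the index file.
        Long secondToLast = null, last = null;
        for (long pos : summaryPositions)
        {
            if (pos > indexFlushed)
                break;
            secondToLast = last;
            last = pos;
        }
        // Step 3 (checking the record against the data file's flushed length)
        // and step 4 (opening the sstable with this bound) are omitted here.
        return secondToLast;
    }
}
```

Using the second-to-last record keeps the bound conservative: the last record within the flushed index region might still point at data that is only partially durable.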





[jira] [Assigned] (CASSANDRA-7081) select writetime(colname) returns 0 for static columns

2014-04-24 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams reassigned CASSANDRA-7081:
---

Assignee: Sylvain Lebresne

> select writetime(colname) returns 0 for static columns
> --
>
> Key: CASSANDRA-7081
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7081
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Nicolas Favre-Felix
>Assignee: Sylvain Lebresne
>
> Selecting the write time for a static column returns 0 in Cassandra 2.0 
> (c3550fe) and an expected timestamp in 2.1 (trunk, acdbbb9). Would it be 
> possible to include this timestamp in a 2.0 release too?
> {code}
> > CREATE TABLE test (partition_key text, cluster_key text, data text, st text 
> > static, PRIMARY KEY(partition_key, cluster_key));
> > INSERT INTO test (partition_key, cluster_key, data, st) VALUES ( 'PK', 
> > 'CK', 'DATA', 'ST');
> > SELECT writetime(st), writetime(data) FROM test where partition_key='PK';
>  writetime(st) | writetime(data)
> ---+--
>  0 | 1398314681729000
> (1 rows)
> {code}





[jira] [Updated] (CASSANDRA-7082) Nodetool status always display only the first token

2014-04-24 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-7082:


Reviewer: Brandon Williams

> Nodetool status always display only the first token
> ---
>
> Key: CASSANDRA-7082
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7082
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Jivko Donev
>Assignee: Vijay
>Priority: Minor
>  Labels: nodetool
>
> The nodetool status command always displays only the first token for a node, 
> even when using vnodes. The defect reproduces only on version 2.0.7.
> With the same configuration 2.0.7 displays:
> Datacenter: DC1
> ===
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  Address   Load   Owns (effective)  Host ID
>TokenRack
> UN  127.0.0.1  156.34 KB  100.0%
> d6629553-d6e9-434d-bf01-54c257b20ea9  -9134643033027010921
>  Rack1
> But 2.0.6 displays:
> Datacenter: DC1
> ===
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  Address   Load   Tokens  Owns   Host ID
> UN  127.0.0.1  210.32 KB  256 100.0%  08208ec9-8976-4ad0-b6bb-ee5dcf0109e
> The problem seems to be the vnode check in NodeCmd.java.
> In the print() method the check is:
> // More tokens then nodes (aka vnodes)?
> if (tokensToEndpoints.values().size() < 
> tokensToEndpoints.keySet().size())
> isTokenPerNode = false;
> while in 2.0.6 the same code was:
> // More tokens then nodes (aka vnodes)?
> if (new HashSet(tokensToEndpoints.values()).size() < 
> tokensToEndpoints.keySet().size())
> isTokenPerNode = false;
> In 2.0.7 this check is never true, since the values collection always has the 
> same size as the key set.
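The difference between the two checks can be demonstrated with a small standalone sketch (hypothetical data, not the actual NodeCmd code): a Map's values view always has exactly one value per key, so only deduplicating the endpoints can reveal that one node owns many tokens.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;

public class VnodeCheckDemo
{
    // 2.0.7-style check: values().size() always equals keySet().size()
    // for a Map, so this can never detect vnodes.
    static boolean isTokenPerNode207(Map<Long, String> tokensToEndpoints)
    {
        return !(tokensToEndpoints.values().size() < tokensToEndpoints.keySet().size());
    }

    // 2.0.6-style check: deduplicating the endpoints shows fewer nodes
    // than tokens, which is exactly the vnode case.
    static boolean isTokenPerNode206(Map<Long, String> tokensToEndpoints)
    {
        return !(new HashSet<>(tokensToEndpoints.values()).size() < tokensToEndpoints.keySet().size());
    }

    public static void main(String[] args)
    {
        // One node owning three tokens, i.e. vnodes.
        Map<Long, String> tokensToEndpoints = new HashMap<>();
        tokensToEndpoints.put(-9134643033027010921L, "127.0.0.1");
        tokensToEndpoints.put(42L, "127.0.0.1");
        tokensToEndpoints.put(100L, "127.0.0.1");

        System.out.println("2.0.7 check, token-per-node? " + isTokenPerNode207(tokensToEndpoints)); // true (wrong)
        System.out.println("2.0.6 check, token-per-node? " + isTokenPerNode206(tokensToEndpoints)); // false (vnodes detected)
    }
}
```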





[jira] [Assigned] (CASSANDRA-7082) Nodetool status always display only the first token

2014-04-24 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams reassigned CASSANDRA-7082:
---

Assignee: Vijay

> Nodetool status always display only the first token
> ---
>
> Key: CASSANDRA-7082
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7082
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Jivko Donev
>Assignee: Vijay
>Priority: Minor
>  Labels: nodetool
>
> The nodetool status command always displays only the first token for a node, 
> even when using vnodes. The defect reproduces only on version 2.0.7.
> With the same configuration 2.0.7 displays:
> Datacenter: DC1
> ===
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  Address   Load   Owns (effective)  Host ID
>TokenRack
> UN  127.0.0.1  156.34 KB  100.0%
> d6629553-d6e9-434d-bf01-54c257b20ea9  -9134643033027010921
>  Rack1
> But 2.0.6 displays:
> Datacenter: DC1
> ===
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  Address   Load   Tokens  Owns   Host ID
> UN  127.0.0.1  210.32 KB  256 100.0%  08208ec9-8976-4ad0-b6bb-ee5dcf0109e
> The problem seems to be the vnode check in NodeCmd.java.
> In the print() method the check is:
> // More tokens then nodes (aka vnodes)?
> if (tokensToEndpoints.values().size() < 
> tokensToEndpoints.keySet().size())
> isTokenPerNode = false;
> while in 2.0.6 the same code was:
> // More tokens then nodes (aka vnodes)?
> if (new HashSet(tokensToEndpoints.values()).size() < 
> tokensToEndpoints.keySet().size())
> isTokenPerNode = false;
> In 2.0.7 this check is never true, since the values collection always has the 
> same size as the key set.





[jira] [Commented] (CASSANDRA-6106) Provide timestamp with true microsecond resolution

2014-04-24 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13979841#comment-13979841
 ] 

Jonathan Ellis commented on CASSANDRA-6106:
---

Do you have time to look at the math, [~xcbsmith]?  The code itself is a single 
class and reasonably straightforward.

> Provide timestamp with true microsecond resolution
> --
>
> Key: CASSANDRA-6106
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6106
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
> Environment: DSE Cassandra 3.1, but also HEAD
>Reporter: Christopher Smith
>Assignee: Benedict
>Priority: Minor
>  Labels: timestamps
> Fix For: 2.1 beta2
>
> Attachments: microtimstamp.patch, microtimstamp_random.patch, 
> microtimstamp_random_rev2.patch
>
>
> I noticed this blog post: http://aphyr.com/posts/294-call-me-maybe-cassandra 
> mentioned issues with millisecond rounding in timestamps and was able to 
> reproduce the issue. If I specify a timestamp in a mutating query, I get 
> microsecond precision, but if I don't, I get timestamps rounded to the 
> nearest millisecond, at least for my first query on a given connection, which 
> substantially increases the possibilities of collision.
> I believe I found the offending code, though I am by no means sure this is 
> comprehensive. I think we probably need a fairly comprehensive replacement of 
> all uses of System.currentTimeMillis() with System.nanoTime().
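The rounding is easy to observe: a microsecond-unit timestamp derived from the millisecond clock always ends in three zero digits, so any two writes landing in the same millisecond collide. A minimal illustration of the problem, not the attached patch:

```java
public final class CoarseTimestamp
{
    // Microsecond-unit timestamp backed by a millisecond clock: the low
    // three digits are always zero, which is the collision window the
    // report describes.
    public static long microsFromMillisClock()
    {
        return System.currentTimeMillis() * 1000;
    }
}
```

`microsFromMillisClock() % 1000` is always 0, whereas a clock with true microsecond resolution would spread timestamps across those digits.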





[jira] [Updated] (CASSANDRA-6476) Assertion error in MessagingService.addCallback

2014-04-24 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-6476:


  Component/s: Core
Since Version: 1.2.11
Fix Version/s: 1.2.17

Committed.

> Assertion error in MessagingService.addCallback
> ---
>
> Key: CASSANDRA-6476
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6476
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Cassandra 2.0.2 DCE, Cassandra 1.2.15
>Reporter: Theo Hultberg
>Assignee: Brandon Williams
> Fix For: 1.2.17
>
> Attachments: 6476.txt
>
>
> Two of the three Cassandra nodes in one of our clusters started behaving very 
> strangely about an hour ago. Within a minute of each other they started 
> logging AssertionErrors (see stack traces here: 
> https://gist.github.com/iconara/7917438) over and over again. The client lost 
> connection with the nodes at roughly the same time. The nodes were still up, 
> and even if no clients were connected to them they continued logging the same 
> errors over and over.
> The errors are in the native transport (specifically 
> MessagingService.addCallback) which makes me suspect that it has something to 
> do with a test that we started running this afternoon. I've just implemented 
> support for frame compression in my CQL driver cql-rb. About two hours before 
> this happened I deployed a version of the application which enabled Snappy 
> compression on all frames larger than 64 bytes. It's not impossible that 
> there is a bug somewhere in the driver or compression library that caused 
> this -- but at the same time, it feels like it shouldn't be possible to make 
> C* a zombie with a bad frame.
> Restarting seems to have got them back running again, but I suspect they will 
> go down again sooner or later.





[04/10] git commit: Don't shut MessagingService down when replacing a node.

2014-04-24 Thread brandonwilliams
Don't shut MessagingService down when replacing a node.

Patch by brandonwilliams, reviewed by Benedict for CASSANDRA-6476


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9359b7a3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9359b7a3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9359b7a3

Branch: refs/heads/trunk
Commit: 9359b7a318884c9d3a052946d50711ce9f8b51e2
Parents: 2890cc5
Author: Brandon Williams 
Authored: Thu Apr 24 10:21:45 2014 -0500
Committer: Brandon Williams 
Committed: Thu Apr 24 10:21:45 2014 -0500

--
 CHANGES.txt  |  1 +
 src/java/org/apache/cassandra/net/MessagingService.java  |  5 +
 .../org/apache/cassandra/service/StorageService.java | 11 ++-
 3 files changed, 12 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/9359b7a3/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 74ddcfd..69e9d37 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -10,6 +10,7 @@
  * Fix CQLSH parsing of functions and BLOB literals (CASSANDRA-7018)
  * Require nodetool rebuild_index to specify index names (CASSANDRA-7038)
  * Ensure that batchlog and hint timeouts do not produce hints (CASSANDRA-7058)
+ * Don't shut MessagingService down when replacing a node (CASSANDRA-6476)
 
 
 1.2.16

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9359b7a3/src/java/org/apache/cassandra/net/MessagingService.java
--
diff --git a/src/java/org/apache/cassandra/net/MessagingService.java 
b/src/java/org/apache/cassandra/net/MessagingService.java
index 3f90d7f..5e4a117 100644
--- a/src/java/org/apache/cassandra/net/MessagingService.java
+++ b/src/java/org/apache/cassandra/net/MessagingService.java
@@ -471,6 +471,11 @@ public final class MessagingService implements 
MessagingServiceMBean
 }
 }
 
+public boolean isListening()
+{
+return listenGate.isSignaled();
+}
+
 public void destroyConnectionPool(InetAddress to)
 {
 OutboundTcpConnectionPool cp = connectionManagers.get(to);

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9359b7a3/src/java/org/apache/cassandra/service/StorageService.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageService.java 
b/src/java/org/apache/cassandra/service/StorageService.java
index 1e7bed4..3b2d945 100644
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@ -390,7 +390,8 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
 public synchronized Collection prepareReplacementInfo() throws 
ConfigurationException
 {
 logger.info("Gathering node replacement information for {}", 
DatabaseDescriptor.getReplaceAddress());
-MessagingService.instance().listen(FBUtilities.getLocalAddress());
+if (!MessagingService.instance().isListening())
+MessagingService.instance().listen(FBUtilities.getLocalAddress());
 
 // make magic happen
 Gossiper.instance.doShadowRound();
@@ -407,7 +408,6 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
 Collection tokens = 
TokenSerializer.deserialize(getPartitioner(), new DataInputStream(new 
ByteArrayInputStream(getApplicationStateValue(DatabaseDescriptor.getReplaceAddress(),
 ApplicationState.TOKENS;
 
 SystemTable.setLocalHostId(hostId); // use the replacee's host Id 
as our own so we receive hints, etc
-MessagingService.instance().shutdown();
 Gossiper.instance.resetEndpointStateMap(); // clean up since we 
have what we need
 return tokens;
 }
@@ -435,7 +435,6 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
 break outer;
 }
 }
-
 // sleep until any schema migrations have finished
 while (!MigrationManager.isReadyForBootstrap())
 {
@@ -464,7 +463,8 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
 Gossiper.instance.start((int) (System.currentTimeMillis() / 1000)); // 
needed for node-ring gathering.
 
Gossiper.instance.addLocalApplicationState(ApplicationState.NET_VERSION, 
valueFactory.networkVersion());
 
-MessagingService.instance().listen(FBUtilities.getLocalAddress());
+if (!MessagingService.instance().isListening())
+MessagingService.instance().listen(FBUtilities.getLocalAddress());

[09/10] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-04-24 Thread brandonwilliams
Merge branch 'cassandra-2.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/11827f0d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/11827f0d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/11827f0d

Branch: refs/heads/trunk
Commit: 11827f0d7e0d50565f276a7aefe9a88873529ba7
Parents: c073fab 205b661
Author: Brandon Williams 
Authored: Thu Apr 24 10:22:37 2014 -0500
Committer: Brandon Williams 
Committed: Thu Apr 24 10:22:37 2014 -0500

--

--




[01/10] git commit: Don't shut MessagingService down when replacing a node.

2014-04-24 Thread brandonwilliams
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-1.2 2890cc5be -> 9359b7a31
  refs/heads/cassandra-2.0 c3550fe40 -> 205b6616e
  refs/heads/cassandra-2.1 c073fab77 -> 11827f0d7
  refs/heads/trunk 417ebf03e -> 7fe5503f2


Don't shut MessagingService down when replacing a node.

Patch by brandonwilliams, reviewed by Benedict for CASSANDRA-6476


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9359b7a3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9359b7a3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9359b7a3

Branch: refs/heads/cassandra-1.2
Commit: 9359b7a318884c9d3a052946d50711ce9f8b51e2
Parents: 2890cc5
Author: Brandon Williams 
Authored: Thu Apr 24 10:21:45 2014 -0500
Committer: Brandon Williams 
Committed: Thu Apr 24 10:21:45 2014 -0500

--
 CHANGES.txt  |  1 +
 src/java/org/apache/cassandra/net/MessagingService.java  |  5 +
 .../org/apache/cassandra/service/StorageService.java | 11 ++-
 3 files changed, 12 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/9359b7a3/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 74ddcfd..69e9d37 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -10,6 +10,7 @@
  * Fix CQLSH parsing of functions and BLOB literals (CASSANDRA-7018)
  * Require nodetool rebuild_index to specify index names (CASSANDRA-7038)
  * Ensure that batchlog and hint timeouts do not produce hints (CASSANDRA-7058)
+ * Don't shut MessagingService down when replacing a node (CASSANDRA-6476)
 
 
 1.2.16

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9359b7a3/src/java/org/apache/cassandra/net/MessagingService.java
--
diff --git a/src/java/org/apache/cassandra/net/MessagingService.java 
b/src/java/org/apache/cassandra/net/MessagingService.java
index 3f90d7f..5e4a117 100644
--- a/src/java/org/apache/cassandra/net/MessagingService.java
+++ b/src/java/org/apache/cassandra/net/MessagingService.java
@@ -471,6 +471,11 @@ public final class MessagingService implements 
MessagingServiceMBean
 }
 }
 
+public boolean isListening()
+{
+return listenGate.isSignaled();
+}
+
 public void destroyConnectionPool(InetAddress to)
 {
 OutboundTcpConnectionPool cp = connectionManagers.get(to);

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9359b7a3/src/java/org/apache/cassandra/service/StorageService.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageService.java 
b/src/java/org/apache/cassandra/service/StorageService.java
index 1e7bed4..3b2d945 100644
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@ -390,7 +390,8 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
 public synchronized Collection prepareReplacementInfo() throws 
ConfigurationException
 {
 logger.info("Gathering node replacement information for {}", 
DatabaseDescriptor.getReplaceAddress());
-MessagingService.instance().listen(FBUtilities.getLocalAddress());
+if (!MessagingService.instance().isListening())
+MessagingService.instance().listen(FBUtilities.getLocalAddress());
 
 // make magic happen
 Gossiper.instance.doShadowRound();
@@ -407,7 +408,6 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
 Collection tokens = 
TokenSerializer.deserialize(getPartitioner(), new DataInputStream(new 
ByteArrayInputStream(getApplicationStateValue(DatabaseDescriptor.getReplaceAddress(),
 ApplicationState.TOKENS;
 
 SystemTable.setLocalHostId(hostId); // use the replacee's host Id 
as our own so we receive hints, etc
-MessagingService.instance().shutdown();
 Gossiper.instance.resetEndpointStateMap(); // clean up since we 
have what we need
 return tokens;
 }
@@ -435,7 +435,6 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
 break outer;
 }
 }
-
 // sleep until any schema migrations have finished
 while (!MigrationManager.isReadyForBootstrap())
 {
@@ -464,7 +463,8 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
 Gossiper.instance.start((int) (System.currentTimeMillis() / 1000)); // 
needed for node-ring gathering.
 
Gossiper.instance.addLocalApplicationState(ApplicationState.NET_VERSION, valueFactory.networkVersion());

[07/10] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2014-04-24 Thread brandonwilliams
Merge branch 'cassandra-1.2' into cassandra-2.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/205b6616
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/205b6616
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/205b6616

Branch: refs/heads/trunk
Commit: 205b6616ead9d7740f59cdd1a3f4d5a2c9bf96b1
Parents: c3550fe 9359b7a
Author: Brandon Williams 
Authored: Thu Apr 24 10:22:24 2014 -0500
Committer: Brandon Williams 
Committed: Thu Apr 24 10:22:24 2014 -0500

--

--




[05/10] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2014-04-24 Thread brandonwilliams
Merge branch 'cassandra-1.2' into cassandra-2.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/205b6616
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/205b6616
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/205b6616

Branch: refs/heads/cassandra-2.1
Commit: 205b6616ead9d7740f59cdd1a3f4d5a2c9bf96b1
Parents: c3550fe 9359b7a
Author: Brandon Williams 
Authored: Thu Apr 24 10:22:24 2014 -0500
Committer: Brandon Williams 
Committed: Thu Apr 24 10:22:24 2014 -0500

--

--




[02/10] git commit: Don't shut MessagingService down when replacing a node.

2014-04-24 Thread brandonwilliams
Don't shut MessagingService down when replacing a node.

Patch by brandonwilliams, reviewed by Benedict for CASSANDRA-6476


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9359b7a3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9359b7a3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9359b7a3

Branch: refs/heads/cassandra-2.0
Commit: 9359b7a318884c9d3a052946d50711ce9f8b51e2
Parents: 2890cc5
Author: Brandon Williams 
Authored: Thu Apr 24 10:21:45 2014 -0500
Committer: Brandon Williams 
Committed: Thu Apr 24 10:21:45 2014 -0500

--
 CHANGES.txt  |  1 +
 src/java/org/apache/cassandra/net/MessagingService.java  |  5 +
 .../org/apache/cassandra/service/StorageService.java | 11 ++-
 3 files changed, 12 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/9359b7a3/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 74ddcfd..69e9d37 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -10,6 +10,7 @@
  * Fix CQLSH parsing of functions and BLOB literals (CASSANDRA-7018)
  * Require nodetool rebuild_index to specify index names (CASSANDRA-7038)
  * Ensure that batchlog and hint timeouts do not produce hints (CASSANDRA-7058)
+ * Don't shut MessagingService down when replacing a node (CASSANDRA-6476)
 
 
 1.2.16

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9359b7a3/src/java/org/apache/cassandra/net/MessagingService.java
--
diff --git a/src/java/org/apache/cassandra/net/MessagingService.java 
b/src/java/org/apache/cassandra/net/MessagingService.java
index 3f90d7f..5e4a117 100644
--- a/src/java/org/apache/cassandra/net/MessagingService.java
+++ b/src/java/org/apache/cassandra/net/MessagingService.java
@@ -471,6 +471,11 @@ public final class MessagingService implements 
MessagingServiceMBean
 }
 }
 
+public boolean isListening()
+{
+return listenGate.isSignaled();
+}
+
 public void destroyConnectionPool(InetAddress to)
 {
 OutboundTcpConnectionPool cp = connectionManagers.get(to);

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9359b7a3/src/java/org/apache/cassandra/service/StorageService.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageService.java 
b/src/java/org/apache/cassandra/service/StorageService.java
index 1e7bed4..3b2d945 100644
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@ -390,7 +390,8 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
 public synchronized Collection prepareReplacementInfo() throws 
ConfigurationException
 {
 logger.info("Gathering node replacement information for {}", 
DatabaseDescriptor.getReplaceAddress());
-MessagingService.instance().listen(FBUtilities.getLocalAddress());
+if (!MessagingService.instance().isListening())
+MessagingService.instance().listen(FBUtilities.getLocalAddress());
 
 // make magic happen
 Gossiper.instance.doShadowRound();
@@ -407,7 +408,6 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
 Collection tokens = 
TokenSerializer.deserialize(getPartitioner(), new DataInputStream(new 
ByteArrayInputStream(getApplicationStateValue(DatabaseDescriptor.getReplaceAddress(),
 ApplicationState.TOKENS;
 
 SystemTable.setLocalHostId(hostId); // use the replacee's host Id 
as our own so we receive hints, etc
-MessagingService.instance().shutdown();
 Gossiper.instance.resetEndpointStateMap(); // clean up since we 
have what we need
 return tokens;
 }
@@ -435,7 +435,6 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
 break outer;
 }
 }
-
 // sleep until any schema migrations have finished
 while (!MigrationManager.isReadyForBootstrap())
 {
@@ -464,7 +463,8 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
 Gossiper.instance.start((int) (System.currentTimeMillis() / 1000)); // 
needed for node-ring gathering.
 
Gossiper.instance.addLocalApplicationState(ApplicationState.NET_VERSION, 
valueFactory.networkVersion());
 
-MessagingService.instance().listen(FBUtilities.getLocalAddress());
+if (!MessagingService.instance().isListening())
+MessagingService.instance().listen(FBUtilities.getLocalAddress());

[06/10] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2014-04-24 Thread brandonwilliams
Merge branch 'cassandra-1.2' into cassandra-2.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/205b6616
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/205b6616
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/205b6616

Branch: refs/heads/cassandra-2.0
Commit: 205b6616ead9d7740f59cdd1a3f4d5a2c9bf96b1
Parents: c3550fe 9359b7a
Author: Brandon Williams 
Authored: Thu Apr 24 10:22:24 2014 -0500
Committer: Brandon Williams 
Committed: Thu Apr 24 10:22:24 2014 -0500

--

--




[10/10] git commit: Merge branch 'cassandra-2.1' into trunk

2014-04-24 Thread brandonwilliams
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7fe5503f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7fe5503f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7fe5503f

Branch: refs/heads/trunk
Commit: 7fe5503f21b478a57625eb98ba3b242619b457b1
Parents: 417ebf0 11827f0
Author: Brandon Williams 
Authored: Thu Apr 24 10:22:46 2014 -0500
Committer: Brandon Williams 
Committed: Thu Apr 24 10:22:46 2014 -0500

--

--




[08/10] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-04-24 Thread brandonwilliams
Merge branch 'cassandra-2.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/11827f0d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/11827f0d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/11827f0d

Branch: refs/heads/cassandra-2.1
Commit: 11827f0d7e0d50565f276a7aefe9a88873529ba7
Parents: c073fab 205b661
Author: Brandon Williams 
Authored: Thu Apr 24 10:22:37 2014 -0500
Committer: Brandon Williams 
Committed: Thu Apr 24 10:22:37 2014 -0500

--

--




[03/10] git commit: Don't shut MessagingService down when replacing a node.

2014-04-24 Thread brandonwilliams
Don't shut MessagingService down when replacing a node.

Patch by brandonwilliams, reviewed by Benedict for CASSANDRA-6476


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9359b7a3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9359b7a3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9359b7a3

Branch: refs/heads/cassandra-2.1
Commit: 9359b7a318884c9d3a052946d50711ce9f8b51e2
Parents: 2890cc5
Author: Brandon Williams 
Authored: Thu Apr 24 10:21:45 2014 -0500
Committer: Brandon Williams 
Committed: Thu Apr 24 10:21:45 2014 -0500

--
 CHANGES.txt  |  1 +
 src/java/org/apache/cassandra/net/MessagingService.java  |  5 +
 .../org/apache/cassandra/service/StorageService.java | 11 ++-
 3 files changed, 12 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/9359b7a3/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 74ddcfd..69e9d37 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -10,6 +10,7 @@
  * Fix CQLSH parsing of functions and BLOB literals (CASSANDRA-7018)
  * Require nodetool rebuild_index to specify index names (CASSANDRA-7038)
  * Ensure that batchlog and hint timeouts do not produce hints (CASSANDRA-7058)
+ * Don't shut MessagingService down when replacing a node (CASSANDRA-6476)
 
 
 1.2.16

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9359b7a3/src/java/org/apache/cassandra/net/MessagingService.java
--
diff --git a/src/java/org/apache/cassandra/net/MessagingService.java 
b/src/java/org/apache/cassandra/net/MessagingService.java
index 3f90d7f..5e4a117 100644
--- a/src/java/org/apache/cassandra/net/MessagingService.java
+++ b/src/java/org/apache/cassandra/net/MessagingService.java
@@ -471,6 +471,11 @@ public final class MessagingService implements 
MessagingServiceMBean
 }
 }
 
+public boolean isListening()
+{
+return listenGate.isSignaled();
+}
+
 public void destroyConnectionPool(InetAddress to)
 {
 OutboundTcpConnectionPool cp = connectionManagers.get(to);

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9359b7a3/src/java/org/apache/cassandra/service/StorageService.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageService.java 
b/src/java/org/apache/cassandra/service/StorageService.java
index 1e7bed4..3b2d945 100644
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@ -390,7 +390,8 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
     public synchronized Collection<Token> prepareReplacementInfo() throws ConfigurationException
 {
 logger.info("Gathering node replacement information for {}", 
DatabaseDescriptor.getReplaceAddress());
-MessagingService.instance().listen(FBUtilities.getLocalAddress());
+if (!MessagingService.instance().isListening())
+MessagingService.instance().listen(FBUtilities.getLocalAddress());
 
 // make magic happen
 Gossiper.instance.doShadowRound();
@@ -407,7 +408,6 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
            Collection<Token> tokens = TokenSerializer.deserialize(getPartitioner(), new DataInputStream(new ByteArrayInputStream(getApplicationStateValue(DatabaseDescriptor.getReplaceAddress(), ApplicationState.TOKENS))));
 
 SystemTable.setLocalHostId(hostId); // use the replacee's host Id 
as our own so we receive hints, etc
-MessagingService.instance().shutdown();
 Gossiper.instance.resetEndpointStateMap(); // clean up since we 
have what we need
 return tokens;
 }
@@ -435,7 +435,6 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
 break outer;
 }
 }
-
 // sleep until any schema migrations have finished
 while (!MigrationManager.isReadyForBootstrap())
 {
@@ -464,7 +463,8 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
 Gossiper.instance.start((int) (System.currentTimeMillis() / 1000)); // 
needed for node-ring gathering.
 
Gossiper.instance.addLocalApplicationState(ApplicationState.NET_VERSION, 
valueFactory.networkVersion());
 
-MessagingService.instance().listen(FBUtilities.getLocalAddress());
+if (!MessagingService.instance().isListening())
+MessagingService.instance().listen(FBUtilities.getLocalAddress());

[jira] [Commented] (CASSANDRA-6950) Secondary index query fails with tc range query when ordered by DESC

2014-04-24 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13979833#comment-13979833
 ] 

Sylvain Lebresne commented on CASSANDRA-6950:
-

It does reproduce on the current cassandra-2.0 branch so I guess not (I've 
pushed the dtest at 
https://github.com/riptano/cassandra-dtest/commit/56a0350eeaa84d01d358724cc915a3c548229c20).
 I'll have a look.

> Secondary index query fails with tc range query when ordered by DESC
> 
>
> Key: CASSANDRA-6950
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6950
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: RHEL 6.3 virtual guest, 
> apache-cassandra-2.0.6-SNAPSHOT-src.tar.gz from build #284 (also tried with 
> 2.0.5 with CASSANDRA- patch custom-applied with same result).
>Reporter: Andre Campeau
>Assignee: Sylvain Lebresne
> Fix For: 2.0.8
>
>
> create table test4 ( name text, lname text, tc bigint, record text, 
> PRIMARY KEY ((name, lname), tc)) WITH CLUSTERING ORDER BY (tc DESC) AND 
> compaction={'class': 'LeveledCompactionStrategy'};
> create index test4_index ON test4(lname);
> Populate it with some data and non-zero tc values, then try:
> select * from test4 where lname='blah' and tc>0 allow filtering;
> And, (0 rows) returned, even though there are rows which should be found.
> When I create the table using CLUSTERING ORDER BY (tc ASC), the above query 
> works. Rows are correctly returned based on the range check.
> Tried various combinations but with descending order on tc nothing works.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-6950) Secondary index query fails with tc range query when ordered by DESC

2014-04-24 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-6950:


Fix Version/s: 2.0.8

> Secondary index query fails with tc range query when ordered by DESC
> 
>
> Key: CASSANDRA-6950
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6950
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: RHEL 6.3 virtual guest, 
> apache-cassandra-2.0.6-SNAPSHOT-src.tar.gz from build #284 (also tried with 
> 2.0.5 with CASSANDRA- patch custom-applied with same result).
>Reporter: Andre Campeau
>Assignee: Sylvain Lebresne
> Fix For: 2.0.8
>
>
> create table test4 ( name text, lname text, tc bigint, record text, 
> PRIMARY KEY ((name, lname), tc)) WITH CLUSTERING ORDER BY (tc DESC) AND 
> compaction={'class': 'LeveledCompactionStrategy'};
> create index test4_index ON test4(lname);
> Populate it with some data and non-zero tc values, then try:
> select * from test4 where lname='blah' and tc>0 allow filtering;
> And, (0 rows) returned, even though there are rows which should be found.
> When I create the table using CLUSTERING ORDER BY (tc ASC), the above query 
> works. Rows are correctly returned based on the range check.
> Tried various combinations but with descending order on tc nothing works.





[jira] [Commented] (CASSANDRA-6106) Provide timestamp with true microsecond resolution

2014-04-24 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13979769#comment-13979769
 ] 

Benedict commented on CASSANDRA-6106:
-

Also, just in case overflow might be considered an issue: per 1 and 2a, we have 
adjustMicros * (micros - adjustFromMicros) <= 10 billion, which is well within 
the limits of safe long values.

> Provide timestamp with true microsecond resolution
> --
>
> Key: CASSANDRA-6106
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6106
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
> Environment: DSE Cassandra 3.1, but also HEAD
>Reporter: Christopher Smith
>Assignee: Benedict
>Priority: Minor
>  Labels: timestamps
> Fix For: 2.1 beta2
>
> Attachments: microtimstamp.patch, microtimstamp_random.patch, 
> microtimstamp_random_rev2.patch
>
>
> I noticed this blog post: http://aphyr.com/posts/294-call-me-maybe-cassandra 
> mentioned issues with millisecond rounding in timestamps and was able to 
> reproduce the issue. If I specify a timestamp in a mutating query, I get 
> microsecond precision, but if I don't, I get timestamps rounded to the 
> nearest millisecond, at least for my first query on a given connection, which 
> substantially increases the possibilities of collision.
> I believe I found the offending code, though I am by no means sure this is 
> comprehensive. I think we probably need a fairly comprehensive replacement of 
> all uses of System.currentTimeMillis() with System.nanoTime().





[jira] [Updated] (CASSANDRA-7083) Authentication Support for CqlRecordWriter

2014-04-24 Thread Henning Kropp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henning Kropp updated CASSANDRA-7083:
-

Description: 
The {{CqlRecordWriter}} does not seem to support authentication. When the keyspace 
in Cassandra is set to require authentication, our Pig job fails even when 
credentials are provided in the URI ({{cql://username:password...}}):
{code}
java.lang.RuntimeException: InvalidRequestException(why:You have not logged in)
        at org.apache.cassandra.hadoop.cql3.CqlRecordWriter.<init>(CqlRecordWriter.java:123)
        at org.apache.cassandra.hadoop.cql3.CqlRecordWriter.<init>(CqlRecordWriter.java:90)
        at org.apache.cassandra.hadoop.cql3.CqlOutputFormat.getRecordWriter(CqlOutputFormat.java:76)
        at org.apache.cassandra.hadoop.cql3.CqlOutputFormat.getRecordWriter(CqlOutputFormat.java:57)
        at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat.getRecordWriter(PigOutputFormat.java:84)
        at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:553)
        at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:408)
        at org.apache.hadoop.mapred.Child.main(Child.java:170)
Caused by: InvalidRequestException(why:You have not logged in)
        at org.apache.cassandra.thrift.Cassandra$execute_cql3_query_result.read(Cassandra.java:38677)
        at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
        at org.apache.cassandra.thrift.Cassandra$Client.recv_execute_cql3_query(Cassandra.java:1597)
        at org.apache.cassandra.thrift.Cassandra$Client.execute_cql3_query(Cassandra.java:1582)
        at org.apache.cassandra.hadoop.cql3.CqlRecordWriter.retrievePartitionKeyValidator(CqlRecordWriter.java:332)
        at org.apache.cassandra.hadoop.cql3.CqlRecordWriter.<init>(CqlRecordWriter.java:108)
        ... 7 more
{code}

If the credentials are not supplied in the URI but only in the {{JobConf}}, the exception is:
{code}
Output Location Validation Failed for: 'cql://...' More info to follow:
InvalidRequestException(why:You have not logged in)
at org.apache.pig.newplan.logical.rules.InputOutputFileValidator$
{code}

Which led to the finding that authentication is correctly supplied for 
{{CqlStorage}} but not for the {{CqlRecordWriter}}.

Maybe it would make sense to put the authentication part into 
{{ConfigHelper.getClientFromAddressList()}}? Then in {{CqlStorage}} the username 
and password in the conf would need to be set from the URI. If so, the 
{{ConfigHelper}} has all the information to authenticate and already returns 
the client.

  was:
The {{CqlRecordWriter}} seems not to support authentication. When the keyspace 
in Cassandra is to set to use authentication our Pig job fails with, when 
credentials are provided using the URI ('cql://username:password...):
{code}
java.lang.RuntimeException: InvalidRequestException(why:You have not logged in)
        at org.apache.cassandra.hadoop.cql3.CqlRecordWriter.<init>(CqlRecordWriter.java:123)
        at org.apache.cassandra.hadoop.cql3.CqlRecordWriter.<init>(CqlRecordWriter.java:90)
        at org.apache.cassandra.hadoop.cql3.CqlOutputFormat.getRecordWriter(CqlOutputFormat.java:76)
        at org.apache.cassandra.hadoop.cql3.CqlOutputFormat.getRecordWriter(CqlOutputFormat.java:57)
        at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat.getRecordWriter(PigOutputFormat.java:84)
        at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:553)
        at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:408)
        at org.apache.hadoop.mapred.Child.main(Child.java:170)
Caused by: InvalidRequestException(why:You have not logged in)
        at org.apache.cassandra.thrift.Cassandra$execute_cql3_query_result.read(Cassandra.java:38677)
        at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
        at org.apache.cassandra.thrift.Cassandra$Client.recv_execute_cql3_query(Cassandra.java:1597)
        at org.apache.cassandra.thrift.Cassandra$Client.execute_cql3_query(Cassandra.java:1582)
        at org.apache.cassandra.hadoop.cql3.CqlRecordWriter.retrievePartitionKeyValidator(CqlRecordWriter.java:332)
        at org.apache.cassandra.hadoop.cql3.CqlRecordWriter.<init>(CqlRecordWriter.java:108)
        ... 7 more
{code}

If not supplied in the URI but as a {{JobConf}} the exception:
{code}
Output Location Validation Failed for: 'cql://...' More info to follow:
InvalidRequestException(why:You have not logged in)
at org.apache.pig.newplan.logical.rules.InputOutputFileValidator$
{code}

Which let to the finding, that authentication is correctly supplied for 
{{CqlStorage}} but not for the {{CqlRecordWriter}}.

May be it would make sense to put the authentication part into 
{{ConfigHelper.getClientFromAddressList()}}? Then in {{CqlStorage} the username 
and password in the conf would need to be set from the URI. If so the 
{{ConfigHelper}} has all the information to authenticate and already returns 
the client.

[jira] [Updated] (CASSANDRA-7083) Authentication Support for CqlRecordWriter

2014-04-24 Thread Henning Kropp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henning Kropp updated CASSANDRA-7083:
-

Description: 
The {{CqlRecordWriter}} does not seem to support authentication. When the keyspace 
in Cassandra is set to require authentication, our Pig job fails even when 
credentials are provided in the URI ({{cql://username:password...}}):
{code}
java.lang.RuntimeException: InvalidRequestException(why:You have not logged in)
        at org.apache.cassandra.hadoop.cql3.CqlRecordWriter.<init>(CqlRecordWriter.java:123)
        at org.apache.cassandra.hadoop.cql3.CqlRecordWriter.<init>(CqlRecordWriter.java:90)
        at org.apache.cassandra.hadoop.cql3.CqlOutputFormat.getRecordWriter(CqlOutputFormat.java:76)
        at org.apache.cassandra.hadoop.cql3.CqlOutputFormat.getRecordWriter(CqlOutputFormat.java:57)
        at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat.getRecordWriter(PigOutputFormat.java:84)
        at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:553)
        at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:408)
        at org.apache.hadoop.mapred.Child.main(Child.java:170)
Caused by: InvalidRequestException(why:You have not logged in)
        at org.apache.cassandra.thrift.Cassandra$execute_cql3_query_result.read(Cassandra.java:38677)
        at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
        at org.apache.cassandra.thrift.Cassandra$Client.recv_execute_cql3_query(Cassandra.java:1597)
        at org.apache.cassandra.thrift.Cassandra$Client.execute_cql3_query(Cassandra.java:1582)
        at org.apache.cassandra.hadoop.cql3.CqlRecordWriter.retrievePartitionKeyValidator(CqlRecordWriter.java:332)
        at org.apache.cassandra.hadoop.cql3.CqlRecordWriter.<init>(CqlRecordWriter.java:108)
        ... 7 more
{code}

If the credentials are not supplied in the URI but only in the {{JobConf}}, the exception is:
{code}
Output Location Validation Failed for: 'cql://...' More info to follow:
InvalidRequestException(why:You have not logged in)
at org.apache.pig.newplan.logical.rules.InputOutputFileValidator$
{code}

Which led to the finding that authentication is correctly supplied for 
{{CqlStorage}} but not for the {{CqlRecordWriter}}.

Maybe it would make sense to put the authentication part into 
{{ConfigHelper.getClientFromAddressList()}}? Then in {{CqlStorage}} the 
username and password in the conf would need to be set from the URI. If so the 
{{ConfigHelper}} has all the information to authenticate and already returns 
the client.

  was:
The {{CqlRecordWriter}} seems not to support authentication. When the keyspace 
in Cassandra is to set to use authentication our Pig job fails with, when 
credentials are provided using the URI ('cql://username:password...):
{code}
java.lang.RuntimeException: InvalidRequestException(why:You have not logged in)
        at org.apache.cassandra.hadoop.cql3.CqlRecordWriter.<init>(CqlRecordWriter.java:123)
        at org.apache.cassandra.hadoop.cql3.CqlRecordWriter.<init>(CqlRecordWriter.java:90)
        at org.apache.cassandra.hadoop.cql3.CqlOutputFormat.getRecordWriter(CqlOutputFormat.java:76)
        at org.apache.cassandra.hadoop.cql3.CqlOutputFormat.getRecordWriter(CqlOutputFormat.java:57)
        at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat.getRecordWriter(PigOutputFormat.java:84)
        at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:553)
        at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:408)
        at org.apache.hadoop.mapred.Child.main(Child.java:170)
Caused by: InvalidRequestException(why:You have not logged in)
        at org.apache.cassandra.thrift.Cassandra$execute_cql3_query_result.read(Cassandra.java:38677)
        at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
        at org.apache.cassandra.thrift.Cassandra$Client.recv_execute_cql3_query(Cassandra.java:1597)
        at org.apache.cassandra.thrift.Cassandra$Client.execute_cql3_query(Cassandra.java:1582)
        at org.apache.cassandra.hadoop.cql3.CqlRecordWriter.retrievePartitionKeyValidator(CqlRecordWriter.java:332)
        at org.apache.cassandra.hadoop.cql3.CqlRecordWriter.<init>(CqlRecordWriter.java:108)
        ... 7 more
{code}

If not supplied in the URI but as only in the {{JobConf}} the exception is:
{code}
Output Location Validation Failed for: 'cql://...' More info to follow:
InvalidRequestException(why:You have not logged in)
at org.apache.pig.newplan.logical.rules.InputOutputFileValidator$
{code}

Which let to the finding, that authentication is correctly supplied for 
{{CqlStorage}} but not for the {{CqlRecordWriter}}.

Maybe it would make sense to put the authentication part into 
{{ConfigHelper.getClientFromAddressList()}}? Then in {{CqlStorage} the username 
and password in the conf would need to be set from the URI. If so the 
{{ConfigHelper}} has all the information to authenticate and already returns 
the client.

[jira] [Commented] (CASSANDRA-6106) Provide timestamp with true microsecond resolution

2014-04-24 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13979758#comment-13979758
 ] 

Benedict commented on CASSANDRA-6106:
-

Well, what I should have made clear is that I am willing to drop the 
monotonicity guarantees; however, I am -1 on your extra thread.

But I still think the monotonicity guarantees are good, and not so difficult to 
prove, so if we can get somebody who doesn't have a newborn to contend with to 
take a look maybe that wouldn't be a bad thing :)

In case it helps, here's a quick proof we can never give a whack value:

{noformat}
1. -10 <= adjustMicros <= 10
2. expire - adjustFrom = 10
2a. expireMicros - adjustFromMicros = 100
3. adjustFromMicros <= micros <= expireMicros
4. delta = (adjustMicros * (micros - adjustFromMicros)) / (expireMicros - adjustFromMicros)
5. 2a ^ 3 ^ 4 -> expireMicros - adjustFromMicros > micros - adjustFromMicros -> |delta| <= |adjustMicros|
{noformat}

i.e. the adjustment is definitely always less than adjustMicros, which is 
itself always less than 100ms per second (per 1 and 2). So we can never give a 
totally whack result. Can do more thorough proofs of other criteria, but I 
think this plus my other statement is enough to demonstrate its safety.
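For illustration, the interpolation in step 4 can be sketched as a few lines of Java. This is a hypothetical restatement using the proof's variable names (adjustMicros, adjustFromMicros, expireMicros), not the actual patch's code:

```java
public class DriftAdjustment {
    // Linearly interpolate the clock adjustment over the window
    // [adjustFromMicros, expireMicros], as in step 4 of the proof above.
    static long delta(long adjustMicros, long micros,
                      long adjustFromMicros, long expireMicros) {
        return (adjustMicros * (micros - adjustFromMicros))
                / (expireMicros - adjustFromMicros);
    }

    public static void main(String[] args) {
        long adjustFrom = 0;
        long expire = 1_000_000;   // a one-second window, in microseconds
        long adjust = 100_000;     // at most 100ms of correction per second
        // Step 5: the applied correction never exceeds adjustMicros.
        for (long micros = adjustFrom; micros <= expire; micros += 100_000) {
            long d = delta(adjust, micros, adjustFrom, expire);
            assert Math.abs(d) <= Math.abs(adjust);
        }
        System.out.println("bounded");
    }
}
```

Since micros - adjustFromMicros never exceeds expireMicros - adjustFromMicros, the quotient can never exceed adjustMicros in magnitude, which is the "never whack" claim.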

> Provide timestamp with true microsecond resolution
> --
>
> Key: CASSANDRA-6106
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6106
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
> Environment: DSE Cassandra 3.1, but also HEAD
>Reporter: Christopher Smith
>Assignee: Benedict
>Priority: Minor
>  Labels: timestamps
> Fix For: 2.1 beta2
>
> Attachments: microtimstamp.patch, microtimstamp_random.patch, 
> microtimstamp_random_rev2.patch
>
>
> I noticed this blog post: http://aphyr.com/posts/294-call-me-maybe-cassandra 
> mentioned issues with millisecond rounding in timestamps and was able to 
> reproduce the issue. If I specify a timestamp in a mutating query, I get 
> microsecond precision, but if I don't, I get timestamps rounded to the 
> nearest millisecond, at least for my first query on a given connection, which 
> substantially increases the possibilities of collision.
> I believe I found the offending code, though I am by no means sure this is 
> comprehensive. I think we probably need a fairly comprehensive replacement of 
> all uses of System.currentTimeMillis() with System.nanoTime().





[jira] [Updated] (CASSANDRA-7083) Authentication Support for CqlRecordWriter

2014-04-24 Thread Henning Kropp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henning Kropp updated CASSANDRA-7083:
-

Description: 
The {{CqlRecordWriter}} does not seem to support authentication. When the keyspace 
in Cassandra is set to require authentication, our Pig job fails even when 
credentials are provided in the URI ({{cql://username:password...}}):
{code}
java.lang.RuntimeException: InvalidRequestException(why:You have not logged in)
        at org.apache.cassandra.hadoop.cql3.CqlRecordWriter.<init>(CqlRecordWriter.java:123)
        at org.apache.cassandra.hadoop.cql3.CqlRecordWriter.<init>(CqlRecordWriter.java:90)
        at org.apache.cassandra.hadoop.cql3.CqlOutputFormat.getRecordWriter(CqlOutputFormat.java:76)
        at org.apache.cassandra.hadoop.cql3.CqlOutputFormat.getRecordWriter(CqlOutputFormat.java:57)
        at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat.getRecordWriter(PigOutputFormat.java:84)
        at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:553)
        at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:408)
        at org.apache.hadoop.mapred.Child.main(Child.java:170)
Caused by: InvalidRequestException(why:You have not logged in)
        at org.apache.cassandra.thrift.Cassandra$execute_cql3_query_result.read(Cassandra.java:38677)
        at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
        at org.apache.cassandra.thrift.Cassandra$Client.recv_execute_cql3_query(Cassandra.java:1597)
        at org.apache.cassandra.thrift.Cassandra$Client.execute_cql3_query(Cassandra.java:1582)
        at org.apache.cassandra.hadoop.cql3.CqlRecordWriter.retrievePartitionKeyValidator(CqlRecordWriter.java:332)
        at org.apache.cassandra.hadoop.cql3.CqlRecordWriter.<init>(CqlRecordWriter.java:108)
        ... 7 more
{code}

If the credentials are not supplied in the URI but only in the {{JobConf}}, the exception is:
{code}
Output Location Validation Failed for: 'cql://...' More info to follow:
InvalidRequestException(why:You have not logged in)
at org.apache.pig.newplan.logical.rules.InputOutputFileValidator$
{code}

Which led to the finding that authentication is correctly supplied for 
{{CqlStorage}} but not for the {{CqlRecordWriter}}.

Maybe it would make sense to put the authentication part into 
{{ConfigHelper.getClientFromAddressList()}}? Then in {{CqlStorage}} the username 
and password in the conf would need to be set from the URI. If so, the 
{{ConfigHelper}} has all the information to authenticate and already returns 
the client.

  was:
The {{CqlRecordWriter}} seems not to support authentication. When the keyspace 
in Cassandra is to set to use authentication our Pig job fails with, when 
credentials are provided using the URI ('cql://username:password...):
{code}
java.lang.RuntimeException: InvalidRequestException(why:You have not logged in)
        at org.apache.cassandra.hadoop.cql3.CqlRecordWriter.<init>(CqlRecordWriter.java:123)
        at org.apache.cassandra.hadoop.cql3.CqlRecordWriter.<init>(CqlRecordWriter.java:90)
        at org.apache.cassandra.hadoop.cql3.CqlOutputFormat.getRecordWriter(CqlOutputFormat.java:76)
        at org.apache.cassandra.hadoop.cql3.CqlOutputFormat.getRecordWriter(CqlOutputFormat.java:57)
        at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat.getRecordWriter(PigOutputFormat.java:84)
        at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:553)
        at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:408)
        at org.apache.hadoop.mapred.Child.main(Child.java:170)
Caused by: InvalidRequestException(why:You have not logged in)
        at org.apache.cassandra.thrift.Cassandra$execute_cql3_query_result.read(Cassandra.java:38677)
        at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
        at org.apache.cassandra.thrift.Cassandra$Client.recv_execute_cql3_query(Cassandra.java:1597)
        at org.apache.cassandra.thrift.Cassandra$Client.execute_cql3_query(Cassandra.java:1582)
        at org.apache.cassandra.hadoop.cql3.CqlRecordWriter.retrievePartitionKeyValidator(CqlRecordWriter.java:332)
        at org.apache.cassandra.hadoop.cql3.CqlRecordWriter.<init>(CqlRecordWriter.java:108)
        ... 7 more
{code}

If not supplied in the URI but as only in the {{JobConf}} the exception is:
{code}
Output Location Validation Failed for: 'cql://...' More info to follow:
InvalidRequestException(why:You have not logged in)
at org.apache.pig.newplan.logical.rules.InputOutputFileValidator$
{code}

Which let to the finding, that authentication is correctly supplied for 
{{CqlStorage}} but not for the {{CqlRecordWriter}}.

May be it would make sense to put the authentication part into 
{{ConfigHelper.getClientFromAddressList()}}? Then in {{CqlStorage} the username 
and password in the conf would need to be set from the URI. If so the 
{{ConfigHelper}} has all the information to authenticate and already returns 
the client.

[jira] [Updated] (CASSANDRA-7083) Authentication Support for CqlRecordWriter

2014-04-24 Thread Henning Kropp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henning Kropp updated CASSANDRA-7083:
-

Attachment: auth_cql.patch

patch/workaround for {{cassandra-1.2.15}}

The patch given here is more a workaround than a real fix, as it just copies 
the code from {{CqlStorage}} and only works if username and password are given 
in the conf. It works for us for now.
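For reference, the login step that {{CqlStorage}} performs (and that this workaround copies into the record writer) looks roughly like the following sketch. {{ThriftClient}} is a hypothetical stand-in interface so the snippet is self-contained; in the real code the call is {{client.login(new AuthenticationRequest(credentials))}} on {{Cassandra.Client}}, issued before any CQL query:

```java
import java.util.HashMap;
import java.util.Map;

public class AuthSketch {
    // Stand-in for org.apache.cassandra.thrift.Cassandra.Client; only the
    // shape of the login call matters for this illustration.
    interface ThriftClient {
        void login(Map<String, String> credentials);
    }

    // Build the credentials map the Thrift AuthenticationRequest expects.
    static Map<String, String> credentials(String username, String password) {
        Map<String, String> creds = new HashMap<>();
        creds.put("username", username);
        creds.put("password", password);
        return creds;
    }

    // Authenticate before issuing any execute_cql3_query calls; skipping
    // this step is what produces "You have not logged in".
    static void login(ThriftClient client, String username, String password) {
        client.login(credentials(username, password));
    }
}
```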

> Authentication Support for CqlRecordWriter
> --
>
> Key: CASSANDRA-7083
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7083
> Project: Cassandra
>  Issue Type: Bug
>  Components: Hadoop
>Reporter: Henning Kropp
>  Labels: authentication, pig
> Attachments: auth_cql.patch
>
>
> The {{CqlRecordWriter}} does not seem to support authentication. When the 
> keyspace in Cassandra is set to require authentication, our Pig job fails even 
> when credentials are provided in the URI ({{cql://username:password...}}):
> {code}
> java.lang.RuntimeException: InvalidRequestException(why:You have not logged in)
>   at org.apache.cassandra.hadoop.cql3.CqlRecordWriter.<init>(CqlRecordWriter.java:123)
>   at org.apache.cassandra.hadoop.cql3.CqlRecordWriter.<init>(CqlRecordWriter.java:90)
>   at org.apache.cassandra.hadoop.cql3.CqlOutputFormat.getRecordWriter(CqlOutputFormat.java:76)
>   at org.apache.cassandra.hadoop.cql3.CqlOutputFormat.getRecordWriter(CqlOutputFormat.java:57)
>   at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat.getRecordWriter(PigOutputFormat.java:84)
>   at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:553)
>   at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:408)
>   at org.apache.hadoop.mapred.Child.main(Child.java:170)
> Caused by: InvalidRequestException(why:You have not logged in)
>   at org.apache.cassandra.thrift.Cassandra$execute_cql3_query_result.read(Cassandra.java:38677)
>   at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
>   at org.apache.cassandra.thrift.Cassandra$Client.recv_execute_cql3_query(Cassandra.java:1597)
>   at org.apache.cassandra.thrift.Cassandra$Client.execute_cql3_query(Cassandra.java:1582)
>   at org.apache.cassandra.hadoop.cql3.CqlRecordWriter.retrievePartitionKeyValidator(CqlRecordWriter.java:332)
>   at org.apache.cassandra.hadoop.cql3.CqlRecordWriter.<init>(CqlRecordWriter.java:108)
>   ... 7 more
> {code}
> If the credentials are not supplied in the URI but only in the {{JobConf}}, the exception is:
> {code}
> Output Location Validation Failed for: 'cql://...' More info to follow:
> InvalidRequestException(why:You have not logged in)
> at org.apache.pig.newplan.logical.rules.InputOutputFileValidator$
> {code}
> Which led to the finding that authentication is correctly supplied for 
> {{CqlStorage}} but not for the {{CqlRecordWriter}}.
> Maybe it would make sense to put the authentication part into 
> {{ConfigHelper.getClientFromAddressList()}}? Then in {{CqlStorage}} the 
> username and password in the conf would need to be set from the URI. If so, 
> the {{ConfigHelper}} has all the information to authenticate and already 
> returns the client.





[jira] [Created] (CASSANDRA-7083) Authentication Support for CqlRecordWriter

2014-04-24 Thread Henning Kropp (JIRA)
Henning Kropp created CASSANDRA-7083:


 Summary: Authentication Support for CqlRecordWriter
 Key: CASSANDRA-7083
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7083
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Reporter: Henning Kropp
 Attachments: auth_cql.patch

The {{CqlRecordWriter}} does not seem to support authentication. When the keyspace 
in Cassandra is set to require authentication, our Pig job fails even when 
credentials are provided in the URI ({{cql://username:password...}}):
{code}
java.lang.RuntimeException: InvalidRequestException(why:You have not logged in)
        at org.apache.cassandra.hadoop.cql3.CqlRecordWriter.<init>(CqlRecordWriter.java:123)
        at org.apache.cassandra.hadoop.cql3.CqlRecordWriter.<init>(CqlRecordWriter.java:90)
        at org.apache.cassandra.hadoop.cql3.CqlOutputFormat.getRecordWriter(CqlOutputFormat.java:76)
        at org.apache.cassandra.hadoop.cql3.CqlOutputFormat.getRecordWriter(CqlOutputFormat.java:57)
        at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat.getRecordWriter(PigOutputFormat.java:84)
        at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:553)
        at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:408)
        at org.apache.hadoop.mapred.Child.main(Child.java:170)
Caused by: InvalidRequestException(why:You have not logged in)
        at org.apache.cassandra.thrift.Cassandra$execute_cql3_query_result.read(Cassandra.java:38677)
        at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
        at org.apache.cassandra.thrift.Cassandra$Client.recv_execute_cql3_query(Cassandra.java:1597)
        at org.apache.cassandra.thrift.Cassandra$Client.execute_cql3_query(Cassandra.java:1582)
        at org.apache.cassandra.hadoop.cql3.CqlRecordWriter.retrievePartitionKeyValidator(CqlRecordWriter.java:332)
        at org.apache.cassandra.hadoop.cql3.CqlRecordWriter.<init>(CqlRecordWriter.java:108)
        ... 7 more
{code}

If the credentials are supplied not in the URI but via the {{JobConf}}, the exception is:
{code}
Output Location Validation Failed for: 'cql://...' More info to follow:
InvalidRequestException(why:You have not logged in)
at org.apache.pig.newplan.logical.rules.InputOutputFileValidator$
{code}

This led to the finding that authentication is correctly handled for 
{{CqlStorage}} but not for the {{CqlRecordWriter}}.

Maybe it would make sense to move the authentication logic into 
{{ConfigHelper.getClientFromAddressList()}}? Then in {{CqlStorage}} the username 
and password from the URI would need to be set in the conf. If so, the 
{{ConfigHelper}} has all the information needed to authenticate and already returns 
the client.
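For illustration, here is a minimal sketch of that idea. The key names and the helper are assumptions for the sketch, not the actual {{ConfigHelper}} constants; the point is that whatever builds the client could also call {{client.login(new AuthenticationRequest(creds))}} whenever the returned map is non-null.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: key names and method are assumptions, not the real
// ConfigHelper constants. The helper that already builds the Thrift client
// could call client.login(new AuthenticationRequest(creds)) whenever this
// returns a non-null credentials map.
class AuthConfigSketch
{
    static final String USERNAME_KEY = "cassandra.input.username";
    static final String PASSWORD_KEY = "cassandra.input.password";

    // Returns the credentials map to pass to AuthenticationRequest,
    // or null when no credentials are configured (no login needed).
    static Map<String, String> credentialsFrom(Map<String, String> conf)
    {
        String user = conf.get(USERNAME_KEY);
        String pass = conf.get(PASSWORD_KEY);
        if (user == null || pass == null)
            return null;
        Map<String, String> creds = new HashMap<String, String>();
        creds.put("username", user);
        creds.put("password", pass);
        return creds;
    }
}
```

With that in place, {{CqlRecordWriter}} would no longer need its own authentication path; both it and {{CqlStorage}} would get a logged-in client from the same helper.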





[jira] [Commented] (CASSANDRA-6826) Query returns different number of results depending on fetchsize

2014-04-24 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13979734#comment-13979734
 ] 

Sylvain Lebresne commented on CASSANDRA-6826:
-

[~wtmitchell3] Did you have time to check whether you can reproduce this on 2.0.7, 
now that it's out?

> Query returns different number of results depending on fetchsize
> 
>
> Key: CASSANDRA-6826
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6826
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: quad-core Windows 7 x64, single node cluster
> Cassandra 2.0.5
>Reporter: Bill Mitchell
>Assignee: Sylvain Lebresne
>
> I issue a query across the set of partitioned wide rows for one logical row, 
> where s, l, and partition specify the composite primary key for the row:
> SELECT ec, ea, rd FROM sr WHERE s = ? and partition IN ? and l = ? ALLOW 
> FILTERING;
> If I set fetchSize to only 1000 when the Cluster is configured, the query 
> sometimes does not return all the results.  In the particular case I am 
> chasing, it returns a total of 98586 rows.  If I increase the fetchsize to 
> 10, all the 9 actual rows are returned.  This suggests there is some 
> problem with fetchsize re-establishing the position on the next segment of 
> the result set, at least when multiple partitions are being accessed.  
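For illustration only (this is not Cassandra's actual paging code), the kind of bookkeeping the description suspects is going wrong can be sketched as a (partition, row) cursor that must survive page boundaries; a cursor that tracks only the row offset loses its place when a page boundary falls at a partition boundary:

```java
// Illustration only -- not Cassandra's paging implementation. When a query
// spans several partitions, the paging cursor must record both the partition
// and the row position within it to resume correctly on the next page.
class PagingSketch
{
    // Pages through rows partition by partition, carrying a (partition, row)
    // cursor across page boundaries; returns the total number of rows seen.
    static int pageAll(int[][] partitions, int fetchSize)
    {
        int seen = 0, p = 0, r = 0; // cursor: partition index, row index
        while (p < partitions.length)
        {
            int inPage = 0;
            while (inPage < fetchSize && p < partitions.length)
            {
                if (r == partitions[p].length) { p++; r = 0; continue; }
                r++; seen++; inPage++; // consume one row into this page
            }
        }
        return seen;
    }
}
```

If the cursor dropped the partition index at a page boundary, the tail of the current partition (or the start of the next) would be skipped, which matches the symptom of a row count that varies with fetch size.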





[jira] [Updated] (CASSANDRA-6106) Provide timestamp with true microsecond resolution

2014-04-24 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-6106:


Reviewer:   (was: Sylvain Lebresne)

> Provide timestamp with true microsecond resolution
> --
>
> Key: CASSANDRA-6106
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6106
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
> Environment: DSE Cassandra 3.1, but also HEAD
>Reporter: Christopher Smith
>Assignee: Benedict
>Priority: Minor
>  Labels: timestamps
> Fix For: 2.1 beta2
>
> Attachments: microtimstamp.patch, microtimstamp_random.patch, 
> microtimstamp_random_rev2.patch
>
>
> I noticed this blog post: http://aphyr.com/posts/294-call-me-maybe-cassandra 
> mentioned issues with millisecond rounding in timestamps and was able to 
> reproduce the issue. If I specify a timestamp in a mutating query, I get 
> microsecond precision, but if I don't, I get timestamps rounded to the 
> nearest millisecond, at least for my first query on a given connection, which 
> substantially increases the possibilities of collision.
> I believe I found the offending code, though I am by no means sure this is 
> comprehensive. I think we probably need a fairly comprehensive replacement of 
> all uses of System.currentTimeMillis() with System.nanoTime().





[jira] [Commented] (CASSANDRA-6106) Provide timestamp with true microsecond resolution

2014-04-24 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13979722#comment-13979722
 ] 

Sylvain Lebresne commented on CASSANDRA-6106:
-

bq. it's still much better than without it, if that's your only concern?

I'm not afraid of a slight microsecond imprecision that wouldn't matter; I'm 
afraid of returning a timestamp that is completely broken in some edge case, 
and the more arithmetic is going on, the more risk there is. Sure, we can double- 
and triple-check the math to convince ourselves; it's just that I don't think 
your solution brings any real benefit in practice "for conflict-resolution 
timestamps" over my proposition. I think my solution is conceptually 
simpler, and I think we should always go for simpler when we can, and I think 
we can.

Now, I've discussed my view enough on the ticket itself (which I still halfway 
think could be closed as won't-fix, since at the end of the day the real problem 
for which it was opened is really CASSANDRA-6123) and on your branch (for which 
I don't see the point of "getting comfortable with the math" when there is a 
simpler solution, imo). I don't see much to add at this point. I'm not 
vetoing your solution; I just can't +1 it when I think my solution is a tad 
better (because simpler). Let's have someone else look at it and formulate an 
opinion; probably I'm just being difficult for lack of sleep.
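For reference, the millis-anchored scheme under discussion can be sketched as follows (an illustration of the idea, not the committed patch): capture {{System.currentTimeMillis()}} once, then derive microseconds by adding the elapsed {{System.nanoTime()}} delta, truncated to microseconds. Only the delta of {{nanoTime()}} is meaningful; its absolute value is arbitrary.

```java
// Sketch of the millis-anchored approach (an illustration of the idea being
// debated, not the committed patch): anchor currentTimeMillis() once at
// construction, then add the elapsed nanoTime() delta in microseconds.
class MicroClockSketch
{
    private final long baseMillis = System.currentTimeMillis();
    private final long baseNanos = System.nanoTime();

    long currentTimeMicros()
    {
        long elapsedMicros = (System.nanoTime() - baseNanos) / 1000;
        return baseMillis * 1000 + elapsedMicros;
    }
}
```

The arithmetic here is deliberately minimal: one subtraction, one division, one addition, so the edge cases to audit are few.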

> Provide timestamp with true microsecond resolution
> --
>
> Key: CASSANDRA-6106
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6106
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
> Environment: DSE Cassandra 3.1, but also HEAD
>Reporter: Christopher Smith
>Assignee: Benedict
>Priority: Minor
>  Labels: timestamps
> Fix For: 2.1 beta2
>
> Attachments: microtimstamp.patch, microtimstamp_random.patch, 
> microtimstamp_random_rev2.patch
>
>
> I noticed this blog post: http://aphyr.com/posts/294-call-me-maybe-cassandra 
> mentioned issues with millisecond rounding in timestamps and was able to 
> reproduce the issue. If I specify a timestamp in a mutating query, I get 
> microsecond precision, but if I don't, I get timestamps rounded to the 
> nearest millisecond, at least for my first query on a given connection, which 
> substantially increases the possibilities of collision.
> I believe I found the offending code, though I am by no means sure this is 
> comprehensive. I think we probably need a fairly comprehensive replacement of 
> all uses of System.currentTimeMillis() with System.nanoTime().





[jira] [Resolved] (CASSANDRA-7037) thrift_hsha_test.py dtest hangs in 2.1

2014-04-24 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler resolved CASSANDRA-7037.
---

Resolution: Fixed

> thrift_hsha_test.py dtest hangs in 2.1
> --
>
> Key: CASSANDRA-7037
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7037
> Project: Cassandra
>  Issue Type: Test
>  Components: Tests
>Reporter: Michael Shuler
>Assignee: Michael Shuler
>
> system.log from node1:
> {noformat}
> INFO  [main] 2014-04-14 19:18:53,829 CassandraDaemon.java:102 - Hostname: 
> buildbot-ccm
> INFO  [main] 2014-04-14 19:18:53,868 YamlConfigurationLoader.java:80 - 
> Loading settings from file:/tmp/dtest-pRNmjg/test/node1/conf/cassandra.yaml
> INFO  [main] 2014-04-14 19:18:54,031 YamlConfigurationLoader.java:123 - Node 
> configuration:[authenticator=AllowAllAuthenticator; 
> authorizer=AllowAllAuthorizer; auto_bootstrap=false; auto_snapshot=true; 
> batchlog_replay_throttle_in_kb=1024; cas_contention_timeout_in_ms=1000; 
> client_encryption_options=; cluster_name=test; 
> column_index_size_in_kb=64; 
> commitlog_directory=/tmp/dtest-pRNmjg/test/node1/commitlogs; 
> commitlog_segment_size_in_mb=32; commitlog_sync=periodic; 
> commitlog_sync_period_in_ms=1; compaction_preheat_key_cache=true; 
> compaction_throughput_mb_per_sec=16; concurrent_counter_writes=32; 
> concurrent_reads=32; concurrent_writes=32; counter_cache_save_period=7200; 
> counter_cache_size_in_mb=null; counter_write_request_timeout_in_ms=5000; 
> cross_node_timeout=false; 
> data_file_directories=[/tmp/dtest-pRNmjg/test/node1/data]; 
> disk_failure_policy=stop; dynamic_snitch_badness_threshold=0.1; 
> dynamic_snitch_reset_interval_in_ms=60; 
> dynamic_snitch_update_interval_in_ms=100; endpoint_snitch=SimpleSnitch; 
> flush_directory=/tmp/dtest-pRNmjg/test/node1/flush; 
> hinted_handoff_enabled=true; hinted_handoff_throttle_in_kb=1024; 
> in_memory_compaction_limit_in_mb=64; incremental_backups=false; 
> index_summary_capacity_in_mb=null; 
> index_summary_resize_interval_in_minutes=60; inter_dc_tcp_nodelay=false; 
> internode_compression=all; key_cache_save_period=14400; 
> key_cache_size_in_mb=null; listen_address=127.0.0.1; 
> max_hint_window_in_ms=1080; max_hints_delivery_threads=2; 
> memtable_allocation_type=heap_buffers; memtable_cleanup_threshold=0.4; 
> native_transport_port=9042; num_tokens=256; 
> partitioner=org.apache.cassandra.dht.Murmur3Partitioner; 
> permissions_validity_in_ms=2000; phi_convict_threshold=5; 
> preheat_kernel_page_cache=false; range_request_timeout_in_ms=1; 
> read_request_timeout_in_ms=1; 
> request_scheduler=org.apache.cassandra.scheduler.NoScheduler; 
> request_timeout_in_ms=1; row_cache_save_period=0; row_cache_size_in_mb=0; 
> rpc_address=127.0.0.1; rpc_keepalive=true; rpc_port=9160; 
> rpc_server_type=hsha; 
> saved_caches_directory=/tmp/dtest-pRNmjg/test/node1/saved_caches; 
> seed_provider=[{class_name=org.apache.cassandra.locator.SimpleSeedProvider, 
> parameters=[{seeds=127.0.0.1}]}]; server_encryption_options=; 
> snapshot_before_compaction=false; ssl_storage_port=7001; 
> start_native_transport=true; start_rpc=true; storage_port=7000; 
> thrift_framed_transport_size_in_mb=15; tombstone_failure_threshold=10; 
> tombstone_warn_threshold=1000; trickle_fsync=false; 
> trickle_fsync_interval_in_kb=10240; truncate_request_timeout_in_ms=1; 
> write_request_timeout_in_ms=1]
> INFO  [main] 2014-04-14 19:18:54,339 DatabaseDescriptor.java:197 - 
> DiskAccessMode 'auto' determined to be mmap, indexAccessMode is mmap
> INFO  [main] 2014-04-14 19:18:54,351 DatabaseDescriptor.java:285 - Global 
> memtable on-heap threshold is enabled at 124MB
> INFO  [main] 2014-04-14 19:18:54,352 DatabaseDescriptor.java:289 - Global 
> memtable off-heap threshold is enabled at 124MB
> INFO  [main] 2014-04-14 19:18:54,813 CassandraDaemon.java:113 - JVM 
> vendor/version: Java HotSpot(TM) 64-Bit Server VM/1.7.0_51
> INFO  [main] 2014-04-14 19:18:54,813 CassandraDaemon.java:141 - Heap size: 
> 523501568/523501568
> INFO  [main] 2014-04-14 19:18:54,813 CassandraDaemon.java:143 - Code Cache 
> Non-heap memory: init = 2555904(2496K) used = 686464(670K) committed = 
> 2555904(2496K) max = 50331648(49152K)
> INFO  [main] 2014-04-14 19:18:54,814 CassandraDaemon.java:143 - Par Eden 
> Space Heap memory: init = 107479040(104960K) used = 71013720(69349K) 
> committed = 107479040(104960K) max = 107479040(104960K)
> INFO  [main] 2014-04-14 19:18:54,814 CassandraDaemon.java:143 - Par Survivor 
> Space Heap memory: init = 13369344(13056K) used = 0(0K) committed = 
> 13369344(13056K) max = 13369344(13056K)
> INFO  [main] 2014-04-14 19:18:54,814 CassandraDaemon.java:143 - CMS Old Gen 
> Heap memory: init = 402653184(393216K) used = 0(0K) committed = 
> 402653184(393216K) max = 402
