[jira] [Commented] (CASSANDRA-4795) replication, compaction, compression? options are not validated

2013-02-13 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13577397#comment-13577397
 ] 

Sylvain Lebresne commented on CASSANDRA-4795:
-

Do we agree on the first 2 patches in the meantime?

 replication, compaction, compression? options are not validated
 ---

 Key: CASSANDRA-4795
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4795
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.0
Reporter: Brandon Williams
Assignee: Dave Brosius
Priority: Minor
 Fix For: 1.2.1

 Attachments: 0001-Reallow-unexpected-strategy-options-for-thrift.txt, 
 0002-Reallow-unexpected-strategy-options-for-thrift.txt, 
 0003-Adds-application_metadata-field-to-ks-metadata.txt, 
 4795.compaction_strategy.txt, 4795_compaction_strategy_v2.txt, 
 4795_compaction_strategy_v3.txt, 4795.replication_strategy.txt


 When creating a keyspace and specifying strategy options, you can pass any 
 k/v pair you like.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


git commit: Fix missing columns in wide rows queries

2013-02-13 Thread slebresne
Updated Branches:
  refs/heads/cassandra-1.2 85cfd383b -> 2fe8133bb


Fix missing columns in wide rows queries

patch by slebresne; reviewed by driftx for CASSANDRA-5225


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2fe8133b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2fe8133b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2fe8133b

Branch: refs/heads/cassandra-1.2
Commit: 2fe8133bbe71d186ef43aeaa3b5a320685441d68
Parents: 85cfd38
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Wed Feb 13 09:19:44 2013 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Wed Feb 13 09:19:44 2013 +0100

--
 CHANGES.txt|1 +
 .../apache/cassandra/io/sstable/IndexHelper.java   |6 +++---
 .../cassandra/io/sstable/IndexHelperTest.java  |8 
 3 files changed, 8 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2fe8133b/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 0621c79..5dd2499 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -17,6 +17,7 @@
  * Fix drop of sstables in some circumstance (CASSANDRA-5232)
  * Implement caching of authorization results (CASSANDRA-4295)
  * Add support for LZ4 compression (CASSANDRA-5038)
+ * Fix missing columns in wide rows queries (CASSANDRA-5225)
 
 
 1.2.1

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2fe8133b/src/java/org/apache/cassandra/io/sstable/IndexHelper.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/IndexHelper.java 
b/src/java/org/apache/cassandra/io/sstable/IndexHelper.java
index 6a2e101..36e972e 100644
--- a/src/java/org/apache/cassandra/io/sstable/IndexHelper.java
+++ b/src/java/org/apache/cassandra/io/sstable/IndexHelper.java
@@ -178,12 +178,12 @@ public class IndexHelper
 {
 if (reversed)
 {
-startIdx = lastIndex;
-toSearch = indexList.subList(lastIndex, indexList.size());
+toSearch = indexList.subList(0, lastIndex + 1);
 }
 else
 {
-toSearch = indexList.subList(0, lastIndex + 1);
+startIdx = lastIndex;
+toSearch = indexList.subList(lastIndex, indexList.size());
 }
 }
 int index = Collections.binarySearch(toSearch, target, 
getComparator(comparator, reversed));

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2fe8133b/test/unit/org/apache/cassandra/io/sstable/IndexHelperTest.java
--
diff --git a/test/unit/org/apache/cassandra/io/sstable/IndexHelperTest.java 
b/test/unit/org/apache/cassandra/io/sstable/IndexHelperTest.java
index d96cab1..eb297da 100644
--- a/test/unit/org/apache/cassandra/io/sstable/IndexHelperTest.java
+++ b/test/unit/org/apache/cassandra/io/sstable/IndexHelperTest.java
@@ -47,8 +47,8 @@ public class IndexHelperTest
 assertEquals(1, IndexHelper.indexFor(bytes(12L), indexes, comp, false, 
-1));
 assertEquals(2, IndexHelper.indexFor(bytes(17L), indexes, comp, false, 
-1));
 assertEquals(3, IndexHelper.indexFor(bytes(100L), indexes, comp, 
false, -1));
-assertEquals(1, IndexHelper.indexFor(bytes(100L), indexes, comp, 
false, 0));
-assertEquals(2, IndexHelper.indexFor(bytes(100L), indexes, comp, 
false, 1));
+assertEquals(3, IndexHelper.indexFor(bytes(100L), indexes, comp, 
false, 0));
+assertEquals(3, IndexHelper.indexFor(bytes(100L), indexes, comp, 
false, 1));
 assertEquals(3, IndexHelper.indexFor(bytes(100L), indexes, comp, 
false, 2));
 assertEquals(-1, IndexHelper.indexFor(bytes(100L), indexes, comp, 
false, 3));
 
@@ -56,9 +56,9 @@ public class IndexHelperTest
 assertEquals(0, IndexHelper.indexFor(bytes(5L), indexes, comp, true, 
-1));
 assertEquals(1, IndexHelper.indexFor(bytes(17L), indexes, comp, true, 
-1));
 assertEquals(2, IndexHelper.indexFor(bytes(100L), indexes, comp, true, 
-1));
-assertEquals(2, IndexHelper.indexFor(bytes(100L), indexes, comp, true, 
0));
+assertEquals(0, IndexHelper.indexFor(bytes(100L), indexes, comp, true, 
0));
 assertEquals(1, IndexHelper.indexFor(bytes(12L), indexes, comp, true, 
-1));
-assertEquals(2, IndexHelper.indexFor(bytes(100L), indexes, comp, true, 
1));
+assertEquals(1, IndexHelper.indexFor(bytes(100L), indexes, comp, true, 
1));
 assertEquals(2, IndexHelper.indexFor(bytes(100L), indexes, comp, true, 
2));
 assertEquals(-1, IndexHelper.indexFor(bytes(100L), indexes, comp, 
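The change above swaps the reversed and forward branches: when a previous call already located index block lastIndex, a forward query only needs to binary-search the blocks from lastIndex to the end, while a reversed query only needs blocks 0 through lastIndex. A minimal, self-contained sketch of that sub-list restriction (simplified types and result mapping, not the Cassandra implementation):

{code}
import java.util.Collections;
import java.util.List;

public class IndexForSketch
{
    // blocks holds the last column name covered by each index block, in comparator order;
    // lastIndex is the block returned by the previous lookup, or -1 if there was none.
    static int indexFor(long name, List<Long> blocks, boolean reversed, int lastIndex)
    {
        int startIdx = 0;
        List<Long> toSearch = blocks;
        if (lastIndex >= 0)
        {
            if (reversed)
            {
                // a reversed iteration only moves toward smaller names: blocks 0..lastIndex
                toSearch = blocks.subList(0, lastIndex + 1);
            }
            else
            {
                // a forward iteration only moves toward larger names: blocks lastIndex..end
                startIdx = lastIndex;
                toSearch = blocks.subList(lastIndex, blocks.size());
            }
        }
        int index = Collections.binarySearch(toSearch, name);
        return startIdx + (index < 0 ? -index - 1 : index); // simplified negative-result mapping
    }
}
{code}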

[2/2] git commit: Merge branch 'cassandra-1.2' into trunk

2013-02-13 Thread slebresne
Updated Branches:
  refs/heads/trunk 7d81f8cb5 -> 5df067418


Merge branch 'cassandra-1.2' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5df06741
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5df06741
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5df06741

Branch: refs/heads/trunk
Commit: 5df067418aafa2bcb47bdb0f23fb2cb8123f33a4
Parents: 7d81f8c 2fe8133
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Wed Feb 13 09:20:47 2013 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Wed Feb 13 09:20:47 2013 +0100

--
 CHANGES.txt|1 +
 .../apache/cassandra/io/sstable/IndexHelper.java   |6 +++---
 .../cassandra/io/sstable/IndexHelperTest.java  |8 
 3 files changed, 8 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5df06741/CHANGES.txt
--



[1/2] git commit: Fix missing columns in wide rows queries

2013-02-13 Thread slebresne
Fix missing columns in wide rows queries

patch by slebresne; reviewed by driftx for CASSANDRA-5225


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2fe8133b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2fe8133b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2fe8133b

Branch: refs/heads/trunk
Commit: 2fe8133bbe71d186ef43aeaa3b5a320685441d68
Parents: 85cfd38
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Wed Feb 13 09:19:44 2013 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Wed Feb 13 09:19:44 2013 +0100

--
 CHANGES.txt|1 +
 .../apache/cassandra/io/sstable/IndexHelper.java   |6 +++---
 .../cassandra/io/sstable/IndexHelperTest.java  |8 
 3 files changed, 8 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2fe8133b/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 0621c79..5dd2499 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -17,6 +17,7 @@
  * Fix drop of sstables in some circumstance (CASSANDRA-5232)
  * Implement caching of authorization results (CASSANDRA-4295)
  * Add support for LZ4 compression (CASSANDRA-5038)
+ * Fix missing columns in wide rows queries (CASSANDRA-5225)
 
 
 1.2.1

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2fe8133b/src/java/org/apache/cassandra/io/sstable/IndexHelper.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/IndexHelper.java 
b/src/java/org/apache/cassandra/io/sstable/IndexHelper.java
index 6a2e101..36e972e 100644
--- a/src/java/org/apache/cassandra/io/sstable/IndexHelper.java
+++ b/src/java/org/apache/cassandra/io/sstable/IndexHelper.java
@@ -178,12 +178,12 @@ public class IndexHelper
 {
 if (reversed)
 {
-startIdx = lastIndex;
-toSearch = indexList.subList(lastIndex, indexList.size());
+toSearch = indexList.subList(0, lastIndex + 1);
 }
 else
 {
-toSearch = indexList.subList(0, lastIndex + 1);
+startIdx = lastIndex;
+toSearch = indexList.subList(lastIndex, indexList.size());
 }
 }
 int index = Collections.binarySearch(toSearch, target, 
getComparator(comparator, reversed));

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2fe8133b/test/unit/org/apache/cassandra/io/sstable/IndexHelperTest.java
--
diff --git a/test/unit/org/apache/cassandra/io/sstable/IndexHelperTest.java 
b/test/unit/org/apache/cassandra/io/sstable/IndexHelperTest.java
index d96cab1..eb297da 100644
--- a/test/unit/org/apache/cassandra/io/sstable/IndexHelperTest.java
+++ b/test/unit/org/apache/cassandra/io/sstable/IndexHelperTest.java
@@ -47,8 +47,8 @@ public class IndexHelperTest
 assertEquals(1, IndexHelper.indexFor(bytes(12L), indexes, comp, false, 
-1));
 assertEquals(2, IndexHelper.indexFor(bytes(17L), indexes, comp, false, 
-1));
 assertEquals(3, IndexHelper.indexFor(bytes(100L), indexes, comp, 
false, -1));
-assertEquals(1, IndexHelper.indexFor(bytes(100L), indexes, comp, 
false, 0));
-assertEquals(2, IndexHelper.indexFor(bytes(100L), indexes, comp, 
false, 1));
+assertEquals(3, IndexHelper.indexFor(bytes(100L), indexes, comp, 
false, 0));
+assertEquals(3, IndexHelper.indexFor(bytes(100L), indexes, comp, 
false, 1));
 assertEquals(3, IndexHelper.indexFor(bytes(100L), indexes, comp, 
false, 2));
 assertEquals(-1, IndexHelper.indexFor(bytes(100L), indexes, comp, 
false, 3));
 
@@ -56,9 +56,9 @@ public class IndexHelperTest
 assertEquals(0, IndexHelper.indexFor(bytes(5L), indexes, comp, true, 
-1));
 assertEquals(1, IndexHelper.indexFor(bytes(17L), indexes, comp, true, 
-1));
 assertEquals(2, IndexHelper.indexFor(bytes(100L), indexes, comp, true, 
-1));
-assertEquals(2, IndexHelper.indexFor(bytes(100L), indexes, comp, true, 
0));
+assertEquals(0, IndexHelper.indexFor(bytes(100L), indexes, comp, true, 
0));
 assertEquals(1, IndexHelper.indexFor(bytes(12L), indexes, comp, true, 
-1));
-assertEquals(2, IndexHelper.indexFor(bytes(100L), indexes, comp, true, 
1));
+assertEquals(1, IndexHelper.indexFor(bytes(100L), indexes, comp, true, 
1));
 assertEquals(2, IndexHelper.indexFor(bytes(100L), indexes, comp, true, 
2));
 assertEquals(-1, IndexHelper.indexFor(bytes(100L), indexes, comp, 
true, 4));
 }



[jira] [Resolved] (CASSANDRA-5225) Missing columns, errors when requesting specific columns from wide rows

2013-02-13 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-5225.
-

Resolution: Fixed
  Reviewer: brandon.williams

Committed, thanks.

 Missing columns, errors when requesting specific columns from wide rows
 ---

 Key: CASSANDRA-5225
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5225
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.1
Reporter: Tyler Hobbs
Assignee: Sylvain Lebresne
Priority: Critical
 Fix For: 1.2.2

 Attachments: 5225.txt, pycassa-repro.py


 With Cassandra 1.2.1 (and probably 1.2.0), I'm seeing some problems with 
 Thrift queries that request a set of specific column names when the row is 
 very wide.
 To reproduce, I'm inserting 10 million columns into a single row and then 
 randomly requesting three columns by name in a loop.  It's common for only 
 one or two of the three columns to be returned.  I'm also seeing stack traces 
 like the following in the Cassandra log:
 {noformat}
 ERROR 13:12:01,017 Exception in thread Thread[ReadStage:76,5,main]
 java.lang.RuntimeException: 
 org.apache.cassandra.io.sstable.CorruptSSTableException: 
 org.apache.cassandra.db.ColumnSerializer$CorruptColumnException: invalid 
 column name length 0 
 (/var/lib/cassandra/data/Keyspace1/CF1/Keyspace1-CF1-ib-5-Data.db, 14035168 
 bytes remaining)
   at 
 org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1576)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 Caused by: org.apache.cassandra.io.sstable.CorruptSSTableException: 
 org.apache.cassandra.db.ColumnSerializer$CorruptColumnException: invalid 
 column name length 0 
 (/var/lib/cassandra/data/Keyspace1/CF1/Keyspace1-CF1-ib-5-Data.db, 14035168 
 bytes remaining)
   at 
 org.apache.cassandra.db.columniterator.SSTableNamesIterator.<init>(SSTableNamesIterator.java:69)
   at 
 org.apache.cassandra.db.filter.NamesQueryFilter.getSSTableColumnIterator(NamesQueryFilter.java:81)
   at 
 org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:68)
   at 
 org.apache.cassandra.db.CollationController.collectTimeOrderedData(CollationController.java:133)
   at 
 org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:65)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1358)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1215)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1127)
   at org.apache.cassandra.db.Table.getRow(Table.java:355)
   at 
 org.apache.cassandra.db.SliceByNamesReadCommand.getRow(SliceByNamesReadCommand.java:64)
   at 
 org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1052)
   at 
 org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1572)
   ... 3 more
 {noformat}
 This doesn't seem to happen when the row is smaller, so it might have 
 something to do with incremental large row compaction.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


buildbot failure in ASF Buildbot on cassandra-trunk

2013-02-13 Thread buildbot
The Buildbot has detected a new failure on builder cassandra-trunk while 
building cassandra.
Full details are available at:
 http://ci.apache.org/builders/cassandra-trunk/builds/2343

Buildbot URL: http://ci.apache.org/

Buildslave for this Build: portunus_ubuntu

Build Reason: scheduler
Build Source Stamp: [branch trunk] 5df067418aafa2bcb47bdb0f23fb2cb8123f33a4
Blamelist: Sylvain Lebresne sylv...@datastax.com

BUILD FAILED: failed shell

sincerely,
 -The Buildbot





[jira] [Commented] (CASSANDRA-5222) OOM Exception during repair session with LeveledCompactionStrategy

2013-02-13 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13577420#comment-13577420
 ] 

Sylvain Lebresne commented on CASSANDRA-5222:
-

bq. I noticed that Validator.add already asserts that the row is in the range 
being validated, so we must be doing a range check somewhere.

The validation is intrinsically limited to a range, so 
ValidationCompactionIterable makes sure we only scan said range through
{noformat}
cfs.getCompactionStrategy().getScanners(sstables, range)
{noformat}
However, looking at it, it's not very efficient, because currently we end up 
creating an SSTableBoundedScanner object even if the range is not covered at 
all by the sstable. And while said SSTableBoundedScanner will be correct in 
that it will be exhausted right away, the way the code works we still open the 
data file and so we still allocate the reader buffer(s). So, probably a 
separate issue, but we should optimize that nonetheless. And while looking at 
that, I realized that LeveledScanner computes the getLengthInBytes info poorly, 
as it assumes we cover the whole ring (we will still end up at 100% because 
getCurrentPosition() will also return a value assuming the whole ring, but it's 
still a poor indicator of the actual work we do/have to do).
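A minimal sketch of the optimization suggested above (all names here are hypothetical, not the eventual patch): test whether the requested range intersects the sstable's own range before constructing a bounded scanner, and return an empty scanner otherwise, so the data file is never opened and no reader buffers are allocated.

{code}
final class ScannerSelectionSketch
{
    // Simplified token range, closed on both ends (real ranges are (left, right] and can wrap).
    static final class Range
    {
        final long left, right;
        Range(long left, long right) { this.left = left; this.right = right; }
        boolean intersects(Range other) { return left <= other.right && other.left <= right; }
    }

    interface Scanner { /* iteration elided */ }

    static final class EmptyScanner implements Scanner { }

    static final class BoundedScanner implements Scanner
    {
        BoundedScanner(String dataFile, Range range) { /* this is where the file-open / buffer-allocation cost lives */ }
    }

    // Only pay the cost of opening the data file when the sstable can actually cover the range.
    static Scanner scannerFor(String dataFile, Range sstableRange, Range requested)
    {
        return sstableRange.intersects(requested)
             ? new BoundedScanner(dataFile, requested)
             : new EmptyScanner();
    }
}
{code}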



 OOM Exception during repair session with LeveledCompactionStrategy
 --

 Key: CASSANDRA-5222
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5222
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.0.0
 Environment: 3Gb Heap(12Gb per node RAM)
 36 nodes, 0.9 Tb of data per node, Leveled compaction strategy, SSTable size 
 =100Mb
Reporter: Ivan Sobolev
Assignee: Jonathan Ellis
 Fix For: 1.1.11

 Attachments: 5222.txt, chunks.json, sstablescanner.png


 1.8 Gb of heap is consumed with 12k SSTableBoundedScanner * 140kbytes

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5182) Deletable rows are sometimes not removed during compaction

2013-02-13 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13577446#comment-13577446
 ] 

Sylvain Lebresne commented on CASSANDRA-5182:
-

bq. If our goal is to throw out the maximum possible amount of obsolete data

I kind of agree with Bryan, this doesn't have to be black and white. What we want 
is to do the best we can to remove obsolete rows without impacting compaction 
too much. Now if you do have active bloom filters, then I think just checking 
the bloom filters as we do now is the right trade-off: it maximizes, with a very 
high probability, the amount of removed data at a relatively cheap cost. Using 
getPosition in that case would be a bad idea, because the reward (a tiny 
fraction of additional data potentially removed) is not worth the cost (hitting 
disk each time a row we compact is also in a non-compacted sstable) imo, hence 
my opposition to the idea.

But if you deactivate bloom filters, you also fully destroy our bloom filter 
trade-off. So using getPosition does now provide a substantial benefit, as it 
allows going from 'no deletion' to 'maximize deletion'. The reward is, in that 
case, likely worth the cost, especially since people shouldn't deactivate 
bloom filters unless their index files fit in memory, in which case 
getPosition's cost won't be that big.

So overall I do like the last patch attached by Yuki. Of course, the solution 
of just saying you shouldn't disable bloom filters on workloads that perform 
deletes works too, and I wouldn't oppose it, but it doesn't have my preference, 
because I'm always a bit afraid of solving an issue by saying "don't do this", 
as it usually ends up in people getting bitten first and hearing they shouldn't 
have done it second. 
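To make the trade-off concrete, here is a minimal, self-contained sketch (SSTableLike is invented shorthand, not Cassandra's API, and this is not Yuki's patch): with real bloom filters the membership test is cheap and almost always right, but with bloom_filter_fp_ratio=1.0 it always answers "maybe", so an index lookup in the spirit of getPosition is the only check that can ever let a tombstone be purged.

{code}
interface SSTableLike
{
    boolean bloomFilterEnabled();                 // false when bloom_filter_fp_ratio = 1.0
    boolean bloomFilterMightContain(byte[] key);  // always true when the filter is disabled
    boolean indexContainsKey(byte[] key);         // the getPosition-style fallback: may hit disk
    long minTimestamp();
}

final class PurgeCheckSketch
{
    static boolean shouldPurge(Iterable<SSTableLike> overlapping, byte[] key, long maxDeletionTimestamp)
    {
        for (SSTableLike sstable : overlapping)
        {
            if (sstable.minTimestamp() > maxDeletionTimestamp)
                continue; // nothing in this sstable is old enough to be shadowed by the tombstone

            boolean mightContain = sstable.bloomFilterEnabled()
                                 ? sstable.bloomFilterMightContain(key) // cheap, rarely wrong
                                 : sstable.indexContainsKey(key);       // costlier, but the only useful check left
            if (mightContain)
                return false; // older data for this key may exist elsewhere: keep the tombstone
        }
        return true;
    }
}
{code}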

 Deletable rows are sometimes not removed during compaction
 --

 Key: CASSANDRA-5182
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5182
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.7
Reporter: Binh Van Nguyen
Assignee: Yuki Morishita
 Fix For: 1.2.2

 Attachments: 5182-1.1.txt, 5182-1.2.txt, test_ttl.tar.gz


 Our use case is write-heavy and reads are seldom.  To optimize the space used, 
 we've set bloom_filter_fp_ratio=1.0. That, along with the fact that each 
 row is only written to one time and that there are more than 20 SSTables, 
 keeps the rows from ever being compacted. Here is the code:
 https://github.com/apache/cassandra/blob/cassandra-1.1/src/java/org/apache/cassandra/db/compaction/CompactionController.java#L162
 We hit this corner case, and because of it C* keeps consuming more and more 
 space on disk when it should not.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5149) Respect slice count even if column expire mid-request

2013-02-13 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13577457#comment-13577457
 ] 

Sylvain Lebresne commented on CASSANDRA-5149:
-

That's a good idea, I think that would work (at least I don't see why it 
wouldn't right away).

 Respect slice count even if column expire mid-request
 -

 Key: CASSANDRA-5149
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5149
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 0.7.0
Reporter: Sylvain Lebresne
 Fix For: 2.0


 This is a follow-up of CASSANDRA-5099.
 If a column expires just while a slice query is performed, it is possible for 
 replicas to count said column as live but for the coordinator to see it 
 as dead when building the final result. The effect is that the query might 
 return strictly fewer columns than the requested slice count even though there 
 are some live columns matching the slice predicate that are not returned in the 
 result.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


git commit: Make CompactionsTest.testDontPurgeAccidentaly more reliable with gcgrace=0

2013-02-13 Thread slebresne
Updated Branches:
  refs/heads/cassandra-1.2 2fe8133bb -> e531be77a


Make CompactionsTest.testDontPurgeAccidentaly more reliable with gcgrace=0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e531be77
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e531be77
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e531be77

Branch: refs/heads/cassandra-1.2
Commit: e531be77a417e45d5a4f8fe7149b489d4e6cf3b1
Parents: 2fe8133
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Wed Feb 13 11:54:59 2013 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Wed Feb 13 11:54:59 2013 +0100

--
 .../cassandra/db/compaction/CompactionsTest.java   |3 +++
 1 files changed, 3 insertions(+), 0 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e531be77/test/unit/org/apache/cassandra/db/compaction/CompactionsTest.java
--
diff --git a/test/unit/org/apache/cassandra/db/compaction/CompactionsTest.java 
b/test/unit/org/apache/cassandra/db/compaction/CompactionsTest.java
index e543b00..b41bf19 100644
--- a/test/unit/org/apache/cassandra/db/compaction/CompactionsTest.java
+++ b/test/unit/org/apache/cassandra/db/compaction/CompactionsTest.java
@@ -340,6 +340,9 @@ public class CompactionsTest extends SchemaLoader
 ColumnFamily cf = cfs.getColumnFamily(filter);
 assert cf == null || cf.isEmpty() : "should be empty: " + cf;
 
+// Sleep one second so that the removal is indeed purgeable even with 
gcgrace == 0
+Thread.sleep(1000);
+
 cfs.forceBlockingFlush();
 
 Collection<SSTableReader> sstablesAfter = cfs.getSSTables();
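Why the one-second sleep is needed, as a small self-contained sketch (assumed names, not the production purge path): deletion times are tracked in whole seconds, and a tombstone is only purgeable when its local deletion time is strictly before gcBefore = now - gc_grace_seconds, so with gc_grace_seconds == 0 a removal performed in the same wall-clock second as the compaction is not yet purgeable.

{code}
final class GcGraceSketch
{
    static boolean isPurgeable(int localDeletionTimeSeconds, int nowSeconds, int gcGraceSeconds)
    {
        int gcBefore = nowSeconds - gcGraceSeconds;
        return localDeletionTimeSeconds < gcBefore;
    }

    public static void main(String[] args)
    {
        int now = (int) (System.currentTimeMillis() / 1000);
        System.out.println(isPurgeable(now, now, 0));     // false: same second, not yet purgeable
        System.out.println(isPurgeable(now, now + 1, 0)); // true: one second later, which is what the sleep buys
    }
}
{code}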



[jira] [Created] (CASSANDRA-5248) Fix timestamp-based tombstone removal logic

2013-02-13 Thread Sylvain Lebresne (JIRA)
Sylvain Lebresne created CASSANDRA-5248:
---

 Summary: Fix timestamp-based tombstone removal logic
 Key: CASSANDRA-5248
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5248
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.1
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
 Fix For: 1.2.2


Quoting the description of CASSANDRA-4671:
{quote}
In other words, we should force CompactionController.shouldPurge() to return 
true if min_timestamp(non-compacted-overlapping-sstables) > max_timestamp(compacted-sstables)
{quote}
but somehow this was translating in the code to:
{noformat}
if (sstable.getBloomFilter().isPresent(key.key) && sstable.getMinTimestamp() >= maxDeletionTimestamp)
    return false;
{noformat}
which, well, is reversed.

Attaching the trivial patch to fix. I note that we already had a test that 
caught this (CompactionsTest.testDontPurgeAccidentaly), but that test was racy 
in that most of the time the compaction ran in the same second as the removal 
done prior to it, and thus the compaction wasn't considering the tombstone 
gcable even though gcgrace was 0. I've already pushed the addition of a 
1 second delay to make sure this bug is reliably caught.
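For clarity, the non-reversed form of the check would then be (my reading of the intended fix; the attached patch is authoritative):
{noformat}
if (sstable.getBloomFilter().isPresent(key.key) && sstable.getMinTimestamp() <= maxDeletionTimestamp)
    return false;
{noformat}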


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5248) Fix timestamp-based tombstone removal logic

2013-02-13 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-5248:


Attachment: 5248.txt

 Fix timestamp-based tombstone removal logic
 --

 Key: CASSANDRA-5248
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5248
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.1
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
 Fix For: 1.2.2

 Attachments: 5248.txt


 Quoting the description of CASSANDRA-4671:
 {quote}
 In other words, we should force CompactionController.shouldPurge() to return 
 true if min_timestamp(non-compacted-overlapping-sstables) > max_timestamp(compacted-sstables)
 {quote}
 but somehow this was translating in the code to:
 {noformat}
 if (sstable.getBloomFilter().isPresent(key.key) && sstable.getMinTimestamp() >= maxDeletionTimestamp)
     return false;
 {noformat}
 which, well, is reversed.
 Attaching the trivial patch to fix. I note that we already had a test that 
 caught this (CompactionsTest.testDontPurgeAccidentaly), but that test was 
 racy in that most of the time the compaction ran in the same second as the 
 removal done prior to it, and thus the compaction wasn't considering the 
 tombstone gcable even though gcgrace was 0. I've already pushed the addition 
 of a 1 second delay to make sure this bug is reliably caught.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5248) Fix timestamp-based tombstone removal logic

2013-02-13 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13577586#comment-13577586
 ] 

Jonathan Ellis commented on CASSANDRA-5248:
---

+1

 Fix timestamp-based tombstone removal logic
 --

 Key: CASSANDRA-5248
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5248
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.1
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
 Fix For: 1.2.2

 Attachments: 5248.txt


 Quoting the description of CASSANDRA-4671:
 {quote}
 In other words, we should force CompactionController.shouldPurge() to return 
 true if min_timestamp(non-compacted-overlapping-sstables) > max_timestamp(compacted-sstables)
 {quote}
 but somehow this was translating in the code to:
 {noformat}
 if (sstable.getBloomFilter().isPresent(key.key) && sstable.getMinTimestamp() >= maxDeletionTimestamp)
     return false;
 {noformat}
 which, well, is reversed.
 Attaching the trivial patch to fix. I note that we already had a test that 
 caught this (CompactionsTest.testDontPurgeAccidentaly), but that test was 
 racy in that most of the time the compaction ran in the same second as the 
 removal done prior to it, and thus the compaction wasn't considering the 
 tombstone gcable even though gcgrace was 0. I've already pushed the addition 
 of a 1 second delay to make sure this bug is reliably caught.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5249) Avoid allocating SSTableBoundedScanner when the range does not intersect the sstable

2013-02-13 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-5249:
--

Labels: repair  (was: )

 Avoid allocating SSTableBoundedScanner when the range does not intersect 
 the sstable
 --

 Key: CASSANDRA-5249
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5249
 Project: Cassandra
  Issue Type: Improvement
Reporter: Jonathan Ellis
Assignee: Jonathan Ellis
  Labels: repair
 Fix For: 1.2.2


 See 
 https://issues.apache.org/jira/browse/CASSANDRA-5222?focusedCommentId=13577420&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13577420

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-5249) Avoid allocating SSTableBoundedScanner when the range does not intersect the sstable

2013-02-13 Thread Jonathan Ellis (JIRA)
Jonathan Ellis created CASSANDRA-5249:
-

 Summary: Avoid allocating SSTableBoundedScanner when the range 
does not intersect the sstable
 Key: CASSANDRA-5249
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5249
 Project: Cassandra
  Issue Type: Improvement
Reporter: Jonathan Ellis
Assignee: Jonathan Ellis
 Fix For: 1.2.2


See 
https://issues.apache.org/jira/browse/CASSANDRA-5222?focusedCommentId=13577420&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13577420

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5250) Improve LeveledScanner work estimation

2013-02-13 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-5250:
--

Labels: compaction  (was: )

 Improve LeveledScanner work estimation
 --

 Key: CASSANDRA-5250
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5250
 Project: Cassandra
  Issue Type: Improvement
Reporter: Jonathan Ellis
  Labels: compaction

 See 
 https://issues.apache.org/jira/browse/CASSANDRA-5222?focusedCommentId=13577420&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13577420

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-5250) Improve LeveledScanner work estimation

2013-02-13 Thread Jonathan Ellis (JIRA)
Jonathan Ellis created CASSANDRA-5250:
-

 Summary: Improve LeveledScanner work estimation
 Key: CASSANDRA-5250
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5250
 Project: Cassandra
  Issue Type: Improvement
Reporter: Jonathan Ellis


See 
https://issues.apache.org/jira/browse/CASSANDRA-5222?focusedCommentId=13577420&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13577420

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5222) OOM Exception during repair session with LeveledCompactionStrategy

2013-02-13 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13577592#comment-13577592
 ] 

Jonathan Ellis commented on CASSANDRA-5222:
---

Created CASSANDRA-5249 and CASSANDRA-5250 for followup.

 OOM Exception during repair session with LeveledCompactionStrategy
 --

 Key: CASSANDRA-5222
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5222
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.0.0
 Environment: 3Gb Heap(12Gb per node RAM)
 36 nodes, 0.9 Tb of data per node, Leveled compaction strategy, SSTable size 
 =100Mb
Reporter: Ivan Sobolev
Assignee: Jonathan Ellis
 Fix For: 1.1.11

 Attachments: 5222.txt, chunks.json, sstablescanner.png


 1.8 Gb of heap is consumed with 12k SSTableBoundedScanner * 140kbytes

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5249) Avoid allocating SSTableBoundedScanner when the range does not intersect the sstable

2013-02-13 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13577594#comment-13577594
 ] 

Jonathan Ellis commented on CASSANDRA-5249:
---

Patch to create EmptyCompactionScanner if there are no range intersections.

 Avoid allocating SSTableBoundedScanner when the range does not intersect 
 the sstable
 --

 Key: CASSANDRA-5249
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5249
 Project: Cassandra
  Issue Type: Improvement
Reporter: Jonathan Ellis
Assignee: Jonathan Ellis
  Labels: repair
 Fix For: 1.2.2

 Attachments: 5249.txt


 See 
 https://issues.apache.org/jira/browse/CASSANDRA-5222?focusedCommentId=13577420&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13577420

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5249) Avoid allocating SSTableBoundedScanner when the range does not intersect the sstable

2013-02-13 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-5249:
--

Attachment: 5249.txt

 Avoid allocating SSTableBoundedScanner when the range does not intersect 
 the sstable
 --

 Key: CASSANDRA-5249
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5249
 Project: Cassandra
  Issue Type: Improvement
Reporter: Jonathan Ellis
Assignee: Jonathan Ellis
  Labels: repair
 Fix For: 1.2.2

 Attachments: 5249.txt


 See 
 https://issues.apache.org/jira/browse/CASSANDRA-5222?focusedCommentId=13577420&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13577420

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-5251) Hadoop support should be able to work with multiple column families

2013-02-13 Thread Illarion Kovalchuk (JIRA)
Illarion Kovalchuk created CASSANDRA-5251:
-

 Summary: Hadoop support should be able to work with multiple 
column families
 Key: CASSANDRA-5251
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5251
 Project: Cassandra
  Issue Type: Improvement
  Components: Hadoop
Affects Versions: 1.2.1
Reporter: Illarion Kovalchuk
Priority: Minor




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[Cassandra Wiki] Update of ClientOptions by Max Penet

2013-02-13 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on Cassandra Wiki for 
change notification.

The ClientOptions page has been changed by Max Penet:
http://wiki.apache.org/cassandra/ClientOptions?action=diff&rev1=164&rev2=165

   * Clojure
* clj-hector: https://github.com/pingles/clj-hector
* casyn: https://github.com/mpenet/casyn
+   * alia: https://github.com/mpenet/alia (datastax/java-driver wrapper)
   * .NET
* Aquiles: http://aquiles.codeplex.com/
* Cassandraemon: http://cassandraemon.codeplex.com/


[jira] [Commented] (CASSANDRA-5248) Fix timestamp-based tombstone removal logic

2013-02-13 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13577633#comment-13577633
 ] 

Yuki Morishita commented on CASSANDRA-5248:
---

Wait, we do want to delete even if the key exists in another sstable but that 
sstable's timestamp is older than the max deletion time, don't we?
The patch changes the behavior to not delete even if such a key exists.

 Fix timestamp-based tombstone removal logic
 --

 Key: CASSANDRA-5248
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5248
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.1
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
 Fix For: 1.2.2

 Attachments: 5248.txt


 Quoting the description of CASSANDRA-4671:
 {quote}
 In other words, we should force CompactionController.shouldPurge() to return 
 true if min_timestamp(non-compacted-overlapping-sstables) > max_timestamp(compacted-sstables)
 {quote}
 but somehow this was translating in the code to:
 {noformat}
 if (sstable.getBloomFilter().isPresent(key.key) && sstable.getMinTimestamp() >= maxDeletionTimestamp)
     return false;
 {noformat}
 which, well, is reversed.
 Attaching the trivial patch to fix. I note that we already had a test that 
 caught this (CompactionsTest.testDontPurgeAccidentaly), but that test was 
 racy in that most of the time the compaction ran in the same second as the 
 removal done prior to it, and thus the compaction wasn't considering the 
 tombstone gcable even though gcgrace was 0. I've already pushed the addition 
 of a 1 second delay to make sure this bug is reliably caught.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4785) Secondary Index Sporadically Doesn't Return Rows

2013-02-13 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13577677#comment-13577677
 ] 

André Cruz commented on CASSANDRA-4785:
---

This happens to us as well. Cassandra 1.1.5, 6 nodes, RF3. We will try to drop 
and rebuild the index, since we have tried everything else.

 Secondary Index Sporadically Doesn't Return Rows
 

 Key: CASSANDRA-4785
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4785
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.5, 1.1.6
 Environment: Ubuntu 10.04
 Java 6 Sun
 Cassandra 1.1.5 upgraded from 1.1.2 -> 1.1.3 -> 1.1.5
Reporter: Arya Goudarzi

 I have a ColumnFamily with caching = ALL. I have 2 secondary indexes on it. I 
 have noticed if I query using the secondary index in the where clause, 
 sometimes I get the results and sometimes I don't. Until 2 weeks ago, the 
 caching option on this CF was set to NONE. So, I suspect something happened 
 in secondary index caching scheme. 
 Here are things I tried:
 1. I rebuild indexes for that CF on all nodes;
 2. I set the caching to KEYS_ONLY and rebuild the index again;
 3. I set the caching to NONE and rebuild the index again;
 None of the above helped. I suppose the caching still exists, as this behavior 
 looks like a cache mismatch.
 I did a bit of research and found CASSANDRA-4197, which could be related.
 Please advise.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5151) Implement better way of eliminating compaction left overs.

2013-02-13 Thread Michael Kjellman (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13577680#comment-13577680
 ] 

Michael Kjellman commented on CASSANDRA-5151:
-

The patch seems to have resolved the FileNotFoundException, but I'm 
still able to reproduce the IllegalStateException.

Also, it seems that once a node throws this exception, it will keep throwing it 
on startup every time, even after deleting the 
sstables in system/compactions_in_process.

 Implement better way of eliminating compaction left overs.
 --

 Key: CASSANDRA-5151
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5151
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.1.3
Reporter: Yuki Morishita
Assignee: Yuki Morishita
 Fix For: 1.2.2

 Attachments: 
 0001-move-scheduling-MeteredFlusher-to-CassandraDaemon.patch, 5151-1.2.txt, 
 5151-v2.txt


 This is from discussion in CASSANDRA-5137. Currently we skip loading SSTables 
 that are left over from incomplete compaction to not over-count counter, but 
 the way we track compaction completion is not secure.
 One possible solution is to create system CF like:
 {code}
 create table compaction_log (
   id uuid primary key,
   inputs set<int>,
   outputs set<int>
 );
 {code}
 to track incomplete compaction.
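A hypothetical illustration of how such a log could be consumed at startup (assumed shape, illustration only, not the 5151 patch): any outputs recorded in a log row that still exists belong to a compaction that never completed, so they are discarded before sstables are loaded.

{code}
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.UUID;

final class CompactionLogSketch
{
    // compactionLog: id -> output sstable generations, mirroring the proposed table above.
    static Set<Integer> unfinishedOutputs(Map<UUID, Set<Integer>> compactionLog)
    {
        Set<Integer> discard = new HashSet<Integer>();
        for (Set<Integer> outputs : compactionLog.values())
            discard.addAll(outputs); // a row is only removed once its compaction fully completed
        return discard;
    }
}
{code}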

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4937) CRAR improvements (object cache + CompressionMetadata chunk offset storage moved off-heap).

2013-02-13 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13577683#comment-13577683
 ] 

Jonathan Ellis commented on CASSANDRA-4937:
---

Thanks, looking good.

v3 attached that cleans up file i/o to rethrow as FSReadError and only preheats 
row data if 90% of the rows in the sstable are under the page size that we're 
fadvising.
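A hypothetical standalone reading of that heuristic (the 90% threshold and page-size input come from the comment above; the method name and inputs are assumptions, not the v3 patch):

{code}
final class PreheatHeuristicSketch
{
    // Only preheat (fadvise) row data when at least 90% of rows fit within a single page.
    static boolean worthPreheating(long[] rowSizesInBytes, long pageSizeInBytes)
    {
        if (rowSizesInBytes.length == 0)
            return false;
        long underPageSize = 0;
        for (long size : rowSizesInBytes)
            if (size <= pageSizeInBytes)
                underPageSize++;
        return underPageSize >= 0.9 * rowSizesInBytes.length;
    }
}
{code}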

 CRAR improvements (object cache + CompressionMetadata chunk offset storage 
 moved off-heap).
 ---

 Key: CASSANDRA-4937
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4937
 Project: Cassandra
  Issue Type: Improvement
Reporter: Pavel Yaskevich
Assignee: Pavel Yaskevich
  Labels: core
 Fix For: 1.2.2

 Attachments: 4937-v3.txt, CASSANDRA-4937.patch, 
 CASSANDRA-4937-trunk.patch


 After a good amount of testing on one of the clusters, it was found that in 
 order to improve read latency we need to minimize the allocation rate that 
 compression involves; that minimizes GC (as well as heap usage) and 
 substantially decreases latency on read-heavy workloads. 
 I have also discovered that the RAR skip-cache behavior harms performance in 
 situations where reads are done in parallel with compaction working on 
 relatively big SSTable files (a few GB and more). The attached patch removes 
 the possibility to skip the cache for compressed files (I can also add changes 
 to RAR to remove the skip-cache functionality as a separate patch). 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-4937) CRAR improvements (object cache + CompressionMetadata chunk offset storage moved off-heap).

2013-02-13 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4937:
--

Attachment: 4937-v3.txt

 CRAR improvements (object cache + CompressionMetadata chunk offset storage 
 moved off-heap).
 ---

 Key: CASSANDRA-4937
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4937
 Project: Cassandra
  Issue Type: Improvement
Reporter: Pavel Yaskevich
Assignee: Pavel Yaskevich
  Labels: core
 Fix For: 1.2.2

 Attachments: 4937-v3.txt, CASSANDRA-4937.patch, 
 CASSANDRA-4937-trunk.patch


 After a good amount of testing on one of the clusters, it was found that in 
 order to improve read latency we need to minimize the allocation rate that 
 compression involves; that minimizes GC (as well as heap usage) and 
 substantially decreases latency on read-heavy workloads. 
 I have also discovered that the RAR skip-cache behavior harms performance in 
 situations where reads are done in parallel with compaction working on 
 relatively big SSTable files (a few GB and more). The attached patch removes 
 the possibility to skip the cache for compressed files (I can also add changes 
 to RAR to remove the skip-cache functionality as a separate patch). 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4785) Secondary Index Sporadically Doesn't Return Rows

2013-02-13 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13577687#comment-13577687
 ] 

Brandon Williams commented on CASSANDRA-4785:
-

Please try this after CASSANDRA-5225 and see if it reproduces.

 Secondary Index Sporadically Doesn't Return Rows
 

 Key: CASSANDRA-4785
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4785
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.5, 1.1.6
 Environment: Ubuntu 10.04
 Java 6 Sun
 Cassandra 1.1.5 upgraded from 1.1.2 -> 1.1.3 -> 1.1.5
Reporter: Arya Goudarzi

 I have a ColumnFamily with caching = ALL. I have 2 secondary indexes on it. I 
 have noticed if I query using the secondary index in the where clause, 
 sometimes I get the results and sometimes I don't. Until 2 weeks ago, the 
 caching option on this CF was set to NONE. So, I suspect something happened 
 in secondary index caching scheme. 
 Here are things I tried:
 1. I rebuild indexes for that CF on all nodes;
 2. I set the caching to KEYS_ONLY and rebuild the index again;
 3. I set the caching to NONE and rebuild the index again;
 None of the above helped. I suppose the caching still exists, as this behavior 
 looks like a cache mismatch.
 I did a bit of research and found CASSANDRA-4197, which could be related.
 Please advise.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4785) Secondary Index Sporadically Doesn't Return Rows

2013-02-13 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13577705#comment-13577705
 ] 

André Cruz commented on CASSANDRA-4785:
---

Bug #5225 mentions RuntimeExceptions and CorruptSSTableExceptions, which do 
not happen to me. This CF in particular is smallish (20k rows, 6MB total), so 
wide rows seem unlikely. 

Also, it seems related to 1.2, and my cluster is still on 1.1.5.

 Secondary Index Sporadically Doesn't Return Rows
 

 Key: CASSANDRA-4785
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4785
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.5, 1.1.6
 Environment: Ubuntu 10.04
 Java 6 Sun
 Cassandra 1.1.5 upgraded from 1.1.2 - 1.1.3 - 1.1.5
Reporter: Arya Goudarzi

 I have a ColumnFamily with caching = ALL. I have 2 secondary indexes on it. I 
 have noticed if I query using the secondary index in the where clause, 
 sometimes I get the results and sometimes I don't. Until 2 weeks ago, the 
 caching option on this CF was set to NONE. So, I suspect something happened 
 in secondary index caching scheme. 
 Here are things I tried:
 1. I rebuild indexes for that CF on all nodes;
 2. I set the caching to KEYS_ONLY and rebuild the index again;
 3. I set the caching to NONE and rebuild the index again;
 None of the above helped. I suppose the caching still exists, as this behavior 
 looks like a cache mismatch.
 I did a bit of research and found CASSANDRA-4197, which could be related.
 Please advise.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5151) Implement better way of eliminating compaction left overs.

2013-02-13 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13577712#comment-13577712
 ] 

Brandon Williams commented on CASSANDRA-5151:
-

I move to revert the non-bugfix portion of this patch from 1.2 and push it to 
trunk, given the fallout we've seen thus far.

 Implement better way of eliminating compaction left overs.
 --

 Key: CASSANDRA-5151
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5151
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.1.3
Reporter: Yuki Morishita
Assignee: Yuki Morishita
 Fix For: 1.2.2

 Attachments: 
 0001-move-scheduling-MeteredFlusher-to-CassandraDaemon.patch, 5151-1.2.txt, 
 5151-v2.txt


 This is from discussion in CASSANDRA-5137. Currently we skip loading SSTables 
 that are left over from incomplete compaction to not over-count counter, but 
 the way we track compaction completion is not secure.
 One possible solution is to create system CF like:
 {code}
 create table compaction_log (
   id uuid primary key,
   inputs set<int>,
   outputs set<int>
 );
 {code}
 to track incomplete compaction.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5151) Implement better way of eliminating compaction left overs.

2013-02-13 Thread Michael Kjellman (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13577718#comment-13577718
 ] 

Michael Kjellman commented on CASSANDRA-5151:
-

[~brandon.williams] is the fallout due to bugs in the patch/new implementation 
or is it exposing unrelated bugs that were just being skipped before?

 Implement better way of eliminating compaction left overs.
 --

 Key: CASSANDRA-5151
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5151
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.1.3
Reporter: Yuki Morishita
Assignee: Yuki Morishita
 Fix For: 1.2.2

 Attachments: 
 0001-move-scheduling-MeteredFlusher-to-CassandraDaemon.patch, 5151-1.2.txt, 
 5151-v2.txt


 This is from discussion in CASSANDRA-5137. Currently we skip loading SSTables 
 that are left over from incomplete compaction to not over-count counter, but 
 the way we track compaction completion is not secure.
 One possible solution is to create system CF like:
 {code}
 create table compaction_log (
   id uuid primary key,
   inputs set<int>,
   outputs set<int>
 );
 {code}
 to track incomplete compaction.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5251) Hadoop support should be able to work with multiple column families

2013-02-13 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13577727#comment-13577727
 ] 

Brandon Williams commented on CASSANDRA-5251:
-

CASSANDRA-4208?

 Hadoop support should be able to work with multiple column families
 ---

 Key: CASSANDRA-5251
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5251
 Project: Cassandra
  Issue Type: Improvement
  Components: Hadoop
Affects Versions: 1.2.1
Reporter: Illarion Kovalchuk
Priority: Minor



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5151) Implement better way of eliminating compaction left overs.

2013-02-13 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13577732#comment-13577732
 ] 

Brandon Williams commented on CASSANDRA-5151:
-

I'll let Yuki decide, but the fact that we failed a dtest, a utest, and most 
importantly the [~mkjellman] test is worrisome. ;)

 Implement better way of eliminating compaction left overs.
 --

 Key: CASSANDRA-5151
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5151
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.1.3
Reporter: Yuki Morishita
Assignee: Yuki Morishita
 Fix For: 1.2.2

 Attachments: 
 0001-move-scheduling-MeteredFlusher-to-CassandraDaemon.patch, 5151-1.2.txt, 
 5151-v2.txt


 This is from discussion in CASSANDRA-5137. Currently we skip loading SSTables 
 that are left over from incomplete compaction to not over-count counter, but 
 the way we track compaction completion is not secure.
 One possible solution is to create system CF like:
 {code}
 create table compaction_log (
   id uuid primary key,
   inputs set<int>,
   outputs set<int>
 );
 {code}
 to track incomplete compaction.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5151) Implement better way of eliminating compaction left overs.

2013-02-13 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13577747#comment-13577747
 ] 

Yuki Morishita commented on CASSANDRA-5151:
---

I'm thinking the cause of this could be CASSANDRA-5241, since this function 
relies on a lot of concurrent forceBlockingFlush calls. So there is a chance that 
the compaction_in_progress flush is not complete at the end of the compaction.

If that is the case, we should wait until we fix CASSANDRA-5241.

 Implement better way of eliminating compaction left overs.
 --

 Key: CASSANDRA-5151
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5151
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.1.3
Reporter: Yuki Morishita
Assignee: Yuki Morishita
 Fix For: 1.2.2

 Attachments: 
 0001-move-scheduling-MeteredFlusher-to-CassandraDaemon.patch, 5151-1.2.txt, 
 5151-v2.txt


 This is from discussion in CASSANDRA-5137. Currently we skip loading SSTables 
 that are left over from incomplete compaction to not over-count counter, but 
 the way we track compaction completion is not secure.
 One possible solution is to create system CF like:
 {code}
 create table compaction_log (
   id uuid primary key,
   inputs set<int>,
   outputs set<int>
 );
 {code}
 to track incomplete compaction.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5151) Implement better way of eliminating compaction left overs.

2013-02-13 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13577757#comment-13577757
 ] 

Brandon Williams commented on CASSANDRA-5151:
-

bq. If that is the case, we should wait until we fix CASSANDRA-5241.

+1, that is a troublesome bug in many ways.

 Implement better way of eliminating compaction left overs.
 --

 Key: CASSANDRA-5151
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5151
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.1.3
Reporter: Yuki Morishita
Assignee: Yuki Morishita
 Fix For: 1.2.2

 Attachments: 
 0001-move-scheduling-MeteredFlusher-to-CassandraDaemon.patch, 5151-1.2.txt, 
 5151-v2.txt


 This is from discussion in CASSANDRA-5137. Currently we skip loading SSTables 
 that are left over from an incomplete compaction, so as not to over-count counters, 
 but the way we track compaction completion is not reliable.
 One possible solution is to create a system CF like:
 {code}
 create table compaction_log (
   id uuid primary key,
   inputs set<int>,
   outputs set<int>
 );
 {code}
 to track incomplete compaction.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4785) Secondary Index Sporadically Doesn't Return Rows

2013-02-13 Thread Steve Hodecker (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13577810#comment-13577810
 ] 

Steve Hodecker commented on CASSANDRA-4785:
---

I am seeing this behavior as well, with version 1.1.5.  Following a restart of 
a Cassandra node, queries that use secondary indexes started failing for all 
column families that had caching set to 'all'. My workaround was to change the 
caching on these column families to 'keys_only'.


 Secondary Index Sporadically Doesn't Return Rows
 

 Key: CASSANDRA-4785
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4785
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.5, 1.1.6
 Environment: Ubuntu 10.04
 Java 6 Sun
 Cassandra 1.1.5 upgraded from 1.1.2 - 1.1.3 - 1.1.5
Reporter: Arya Goudarzi

 I have a ColumnFamily with caching = ALL. I have 2 secondary indexes on it. I 
 have noticed that if I query using the secondary index in the where clause, 
 sometimes I get the results and sometimes I don't. Until 2 weeks ago, the 
 caching option on this CF was set to NONE. So, I suspect something happened 
 in the secondary index caching scheme. 
 Here are the things I tried:
 1. I rebuilt indexes for that CF on all nodes;
 2. I set the caching to KEYS_ONLY and rebuilt the index again;
 3. I set the caching to NONE and rebuilt the index again;
 None of the above helped. I suppose the caching still exists, as this behavior 
 looks like a cache mismatch.
 I did a bit of research and found CASSANDRA-4197, which could be related.
 Please advise.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4785) Secondary Index Sporadically Doesn't Return Rows

2013-02-13 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13577813#comment-13577813
 ] 

André Cruz commented on CASSANDRA-4785:
---

I also have caching=ALL on this CF. Nothing else was needed?

Because I've already tried invalidating the key and row caches and the problem 
remains. Did you restart the nodes, run a repair, or do anything else?

 Secondary Index Sporadically Doesn't Return Rows
 

 Key: CASSANDRA-4785
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4785
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.5, 1.1.6
 Environment: Ubuntu 10.04
 Java 6 Sun
 Cassandra 1.1.5 upgraded from 1.1.2 - 1.1.3 - 1.1.5
Reporter: Arya Goudarzi

 I have a ColumnFamily with caching = ALL. I have 2 secondary indexes on it. I 
 have noticed that if I query using the secondary index in the where clause, 
 sometimes I get the results and sometimes I don't. Until 2 weeks ago, the 
 caching option on this CF was set to NONE. So, I suspect something happened 
 in the secondary index caching scheme. 
 Here are the things I tried:
 1. I rebuilt indexes for that CF on all nodes;
 2. I set the caching to KEYS_ONLY and rebuilt the index again;
 3. I set the caching to NONE and rebuilt the index again;
 None of the above helped. I suppose the caching still exists, as this behavior 
 looks like a cache mismatch.
 I did a bit of research and found CASSANDRA-4197, which could be related.
 Please advise.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5245) AntiEntropy/MerkleTree Error

2013-02-13 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-5245:
--

Assignee: Sylvain Lebresne

 AntiEntropy/MerkleTree Error
 

 Key: CASSANDRA-5245
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5245
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.0, 1.2.1
Reporter: David Röhr
Assignee: Sylvain Lebresne
Priority: Minor

 We are seeing AntiEntropy errors when performing repair jobs in one of our 
 Cassandra clusters. It seems to have started with 1.2 (maybe an issue with 
 vnodes). The exceptions occur almost every time we try to do a repair on all 
 column families in the cluster. Doing the same task on 1.1 does not trigger 
 this.
 6-node cluster (vnodes, murmur3, rf:3)
 very low activity
 running a nodetool repair -pr loop on the cluster nodes
 nodetool hangs, and the same big stack trace appears in the logs.
 root 11025 0.0 0.0 106100 1436 pts/0 S+ Feb11 0:00 _ /bin/sh 
 /usr/bin/nodetool -h HOST -p 7199 -pr repair KEYSPACE COLUMN_FAMILY
 ERROR [AntiEntropyStage:3] 2013-02-11 17:08:12,630 CassandraDaemon.java (line 
 133) Exception in thread Thread[AntiEntropyStage:3,5,main]
 java.lang.AssertionError
   at org.apache.cassandra.utils.MerkleTree.inc(MerkleTree.java:137)
   at 
 org.apache.cassandra.utils.MerkleTree.differenceHelper(MerkleTree.java:245)
   at 
 org.apache.cassandra.utils.MerkleTree.differenceHelper(MerkleTree.java:256)
   at 
 org.apache.cassandra.utils.MerkleTree.differenceHelper(MerkleTree.java:267)
   at 
 org.apache.cassandra.utils.MerkleTree.differenceHelper(MerkleTree.java:267)
   ... (the same differenceHelper frame repeats for the remainder of the trace, 
 which is truncated in the archived message)

[jira] [Comment Edited] (CASSANDRA-4785) Secondary Index Sporadically Doesn't Return Rows

2013-02-13 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13577813#comment-13577813
 ] 

André Cruz edited comment on CASSANDRA-4785 at 2/13/13 6:54 PM:


I also have caching=ALL on this CF. Nothing else was needed?

Because I've already tried invalidating the key and row caches and the problem 
remains. Did you restart the nodes, run a repair, or do anything else?

FYI, I've tried removing the row cache, but the problem remains. I didn't 
restart the node, however, because this is a production cluster.

  was (Author: edevil):
I also have caching=ALL on this CF. Nothing else was needed?

Because I've already tried invalidating the key and row caches and the problem 
remains. Did you restart the nodes, run a repair, or do anything else?
  
 Secondary Index Sporadically Doesn't Return Rows
 

 Key: CASSANDRA-4785
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4785
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.5, 1.1.6
 Environment: Ubuntu 10.04
 Java 6 Sun
 Cassandra 1.1.5 upgraded from 1.1.2 - 1.1.3 - 1.1.5
Reporter: Arya Goudarzi

 I have a ColumnFamily with caching = ALL. I have 2 secondary indexes on it. I 
 have noticed that if I query using the secondary index in the where clause, 
 sometimes I get the results and sometimes I don't. Until 2 weeks ago, the 
 caching option on this CF was set to NONE. So, I suspect something happened 
 in the secondary index caching scheme. 
 Here are the things I tried:
 1. I rebuilt indexes for that CF on all nodes;
 2. I set the caching to KEYS_ONLY and rebuilt the index again;
 3. I set the caching to NONE and rebuilt the index again;
 None of the above helped. I suppose the caching still exists, as this behavior 
 looks like a cache mismatch.
 I did a bit of research and found CASSANDRA-4197, which could be related.
 Please advise.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4785) Secondary Index Sporadically Doesn't Return Rows

2013-02-13 Thread Steve Hodecker (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13577827#comment-13577827
 ] 

Steve Hodecker commented on CASSANDRA-4785:
---

I updated the column family's caching level to 'keys_only' via the command-line 
interface, on the single-node system on which I had the problem.  Perhaps I 
restarted the node to rebuild the index, but I don't recall that being 
required.

 Secondary Index Sporadically Doesn't Return Rows
 

 Key: CASSANDRA-4785
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4785
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.5, 1.1.6
 Environment: Ubuntu 10.04
 Java 6 Sun
 Cassandra 1.1.5 upgraded from 1.1.2 - 1.1.3 - 1.1.5
Reporter: Arya Goudarzi

 I have a ColumnFamily with caching = ALL. I have 2 secondary indexes on it. I 
 have noticed that if I query using the secondary index in the where clause, 
 sometimes I get the results and sometimes I don't. Until 2 weeks ago, the 
 caching option on this CF was set to NONE. So, I suspect something happened 
 in the secondary index caching scheme. 
 Here are the things I tried:
 1. I rebuilt indexes for that CF on all nodes;
 2. I set the caching to KEYS_ONLY and rebuilt the index again;
 3. I set the caching to NONE and rebuilt the index again;
 None of the above helped. I suppose the caching still exists, as this behavior 
 looks like a cache mismatch.
 I did a bit of research and found CASSANDRA-4197, which could be related.
 Please advise.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5228) Track maximum ttl and use to expire entire sstables

2013-02-13 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-5228:
--

Summary: Track maximum ttl and use to expire entire sstables  (was: Track 
minimum ttl and use to expire entire sstables)

 Track maximum ttl and use to expire entire sstables
 ---

 Key: CASSANDRA-5228
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5228
 Project: Cassandra
  Issue Type: Bug
Reporter: Jonathan Ellis
Priority: Minor

 It would be nice to be able to throw away entire sstables worth of data when 
 we know that it's all expired.
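 As a rough illustration of the check this would enable (the field names below are 
 invented for the sketch; a real implementation would also have to account for 
 overlapping sstables and gc_grace):
 {code}
 // Sketch only: decide whether a whole sstable is droppable because everything in it has expired.
 public class FullyExpiredSketch
 {
     static class SSTableStats
     {
         long maxTimestamp;          // newest write in the sstable (microseconds)
         int maxLocalDeletionTime;   // latest expiration time across all cells (seconds since epoch)
     }

     static boolean fullyExpired(SSTableStats stats, int gcBeforeSeconds)
     {
         // If even the latest-expiring cell is past gc_grace, nothing in the file is live.
         return stats.maxLocalDeletionTime < gcBeforeSeconds;
     }
 }
 {code}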

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5228) Track maximum ttl and use to expire entire sstables

2013-02-13 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13577829#comment-13577829
 ] 

Jonathan Ellis commented on CASSANDRA-5228:
---

You're right.  Edited.

 Track maximum ttl and use to expire entire sstables
 ---

 Key: CASSANDRA-5228
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5228
 Project: Cassandra
  Issue Type: Bug
Reporter: Jonathan Ellis
Priority: Minor

 It would be nice to be able to throw away entire sstables worth of data when 
 we know that it's all expired.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (CASSANDRA-5228) Track maximum ttl and use to expire entire sstables

2013-02-13 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13577829#comment-13577829
 ] 

Jonathan Ellis edited comment on CASSANDRA-5228 at 2/13/13 7:03 PM:


Hints are a different case; I'd rather re-code the delivery mechanism to operate 
sstable-at-a-time than keep them around until the TTL expires (which is 
usually much, much longer than it takes us to deliver them).

You're right about wanting to track max ttl; edited.

  was (Author: jbellis):
You're right.  Edited.
  
 Track maximum ttl and use to expire entire sstables
 ---

 Key: CASSANDRA-5228
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5228
 Project: Cassandra
  Issue Type: Bug
Reporter: Jonathan Ellis
Priority: Minor

 It would be nice to be able to throw away entire sstables worth of data when 
 we know that it's all expired.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4785) Secondary Index Sporadically Doesn't Return Rows

2013-02-13 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13577832#comment-13577832
 ] 

André Cruz commented on CASSANDRA-4785:
---

So after setting the caching level, did you then use nodetool to rebuild the index 
(with a possible restart in the middle)? Or did you drop the index and re-add 
it?

 Secondary Index Sporadically Doesn't Return Rows
 

 Key: CASSANDRA-4785
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4785
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.5, 1.1.6
 Environment: Ubuntu 10.04
 Java 6 Sun
 Cassandra 1.1.5 upgraded from 1.1.2 - 1.1.3 - 1.1.5
Reporter: Arya Goudarzi

 I have a ColumnFamily with caching = ALL. I have 2 secondary indexes on it. I 
 have noticed that if I query using the secondary index in the where clause, 
 sometimes I get the results and sometimes I don't. Until 2 weeks ago, the 
 caching option on this CF was set to NONE. So, I suspect something happened 
 in the secondary index caching scheme. 
 Here are the things I tried:
 1. I rebuilt indexes for that CF on all nodes;
 2. I set the caching to KEYS_ONLY and rebuilt the index again;
 3. I set the caching to NONE and rebuilt the index again;
 None of the above helped. I suppose the caching still exists, as this behavior 
 looks like a cache mismatch.
 I did a bit of research and found CASSANDRA-4197, which could be related.
 Please advise.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-4872) Move manifest into sstable metadata

2013-02-13 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-4872:
---

Attachment: 
0001-CASSANDRA-4872-move-sstable-level-into-sstable-metad-v1.patch

 Move manifest into sstable metadata
 ---

 Key: CASSANDRA-4872
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4872
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Jonathan Ellis
Assignee: Marcus Eriksson
Priority: Minor
 Fix For: 2.0

 Attachments: 
 0001-CASSANDRA-4872-move-sstable-level-into-sstable-metad-v1.patch


 Now that we have a metadata component, it would be better to keep the sstable 
 level there rather than in a separate manifest.  With per-sstable information we 
 don't need to do a full re-level if there is corruption.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (CASSANDRA-4872) Move manifest into sstable metadata

2013-02-13 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13577835#comment-13577835
 ] 

Marcus Eriksson edited comment on CASSANDRA-4872 at 2/13/13 7:13 PM:
-

Moves the sstable level into SSTableMetadata.

Makes SSTableMetadata files mutable, so that files can be sent back to L0 and so 
that information from the old json file can be migrated into the metadata file 
without a full compaction/scrub of the sstable.
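As a rough illustration of what mutable level metadata means here (all names are 
invented for the sketch, not the patch's actual classes):
{code}
// Sketch: per-sstable level stored in its own metadata, mutable so files can be demoted to L0.
public class LevelMetadataSketch
{
    static class SSTableMetadata
    {
        int sstableLevel;
    }

    // Migrating from the old json manifest: copy the level into the per-sstable metadata.
    static void migrate(SSTableMetadata metadata, int levelFromJsonManifest)
    {
        metadata.sstableLevel = levelFromJsonManifest;
    }

    // Sending a file back to L0 (e.g. after streaming or a manual drop-in).
    static void demoteToL0(SSTableMetadata metadata)
    {
        metadata.sstableLevel = 0;
    }
}
{code}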

  was (Author: krummas):
Moves the sstable level into SSTableMetadata.

Makes SSTableMetadata files mutable, so that files can be sent back to L0 and so 
that information from the old json file can be migrated into the metadata file.
  
 Move manifest into sstable metadata
 ---

 Key: CASSANDRA-4872
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4872
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Jonathan Ellis
Assignee: Marcus Eriksson
Priority: Minor
 Fix For: 2.0

 Attachments: 
 0001-CASSANDRA-4872-move-sstable-level-into-sstable-metad-v1.patch


 Now that we have a metadata component, it would be better to keep the sstable 
 level there rather than in a separate manifest.  With per-sstable information we 
 don't need to do a full re-level if there is corruption.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4872) Move manifest into sstable metadata

2013-02-13 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13577839#comment-13577839
 ] 

Marcus Eriksson commented on CASSANDRA-4872:


There should be a tool to offline-drop all files back to L0; that is a common 
thing we have done a few times in production (by removing the .json file).

I'll do that as a separate ticket if this gets committed.

 Move manifest into sstable metadata
 ---

 Key: CASSANDRA-4872
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4872
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Jonathan Ellis
Assignee: Marcus Eriksson
Priority: Minor
 Fix For: 2.0

 Attachments: 
 0001-CASSANDRA-4872-move-sstable-level-into-sstable-metad-v1.patch


 Now that we have a metadata component, it would be better to keep the sstable 
 level there rather than in a separate manifest.  With per-sstable information we 
 don't need to do a full re-level if there is corruption.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4937) CRAR improvements (object cache + CompressionMetadata chunk offset storage moved off-heap).

2013-02-13 Thread Pavel Yaskevich (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13577878#comment-13577878
 ] 

Pavel Yaskevich commented on CASSANDRA-4937:


bq. preheats row data if 90% of the rows in the sstable are under the page size 
that we're fadvising.

I see the reason to do that if we have big rows (the index is promoted to the Index 
component, so we don't touch the first page of a row) and we don't know where we 
would be hitting them, but this is why I don't think that 90% is a good idea:

  - We don't know the distribution of those big rows, so if a small row was 
sharing a page with a big row it's still good to preheat, as we read on a page basis.

  - If we still preheat a first page that we didn't need, it would actually be 
migrated by the kernel automatically, with adaptive read-ahead for example.

  - If rows grow over time it would be a sudden change (flip-flop) in 
behavior/latencies.

  - Even if 90% of rows are bigger than the page size, it's quite possible that the 
keys we actually migrated into the cache are in the other 10%.

 CRAR improvements (object cache + CompressionMetadata chunk offset storage 
 moved off-heap).
 ---

 Key: CASSANDRA-4937
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4937
 Project: Cassandra
  Issue Type: Improvement
Reporter: Pavel Yaskevich
Assignee: Pavel Yaskevich
  Labels: core
 Fix For: 1.2.2

 Attachments: 4937-v3.txt, CASSANDRA-4937.patch, 
 CASSANDRA-4937-trunk.patch


 After a good amount of testing on one of the clusters, it was found that in 
 order to improve read latency we need to minimize the allocation rate that 
 compression involves; that minimizes GC (as well as heap usage) and 
 substantially decreases latency on read-heavy workloads. 
 I have also discovered that the RAR skip cache harms performance in situations 
 where reads are done in parallel with compaction working on relatively big 
 SSTable files (a few GB and more). The attached patch removes the possibility to 
 skip cache for compressed files (I can also add changes to RAR to remove the 
 skip cache functionality as a separate patch). 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4937) CRAR improvements (object cache + CompressionMetadata chunk offset storage moved off-heap).

2013-02-13 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13577889#comment-13577889
 ] 

Jonathan Ellis commented on CASSANDRA-4937:
---

90% is much more likely to be useful than blindly WILLNEEDing everything, 
though, which would be a shot in the foot for wide-row HDD deployments (of 
which there are many).

In my mind the sane options are:
- Don't bother preheating
- Attempt to preheat only rows where it will do some good
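A rough sketch of the second option, deciding from per-sstable statistics whether 
preheating is likely to pay off (the histogram shape, the 0.9 threshold and all 
names here are assumptions for illustration only):
{code}
import java.util.Map;

// Sketch: preheat only when most rows in the sstable fit within the page being fadvised.
public class PreheatHeuristicSketch
{
    // rowSizeHistogram maps a bucket's upper bound (bytes) to the number of rows in that bucket.
    static boolean shouldPreheat(Map<Long, Long> rowSizeHistogram, long pageSize, double threshold)
    {
        long total = 0, underPage = 0;
        for (Map.Entry<Long, Long> bucket : rowSizeHistogram.entrySet())
        {
            total += bucket.getValue();
            if (bucket.getKey() <= pageSize)
                underPage += bucket.getValue();
        }
        return total > 0 && (double) underPage / total >= threshold; // e.g. threshold = 0.9
    }
}
{code}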

 CRAR improvements (object cache + CompressionMetadata chunk offset storage 
 moved off-heap).
 ---

 Key: CASSANDRA-4937
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4937
 Project: Cassandra
  Issue Type: Improvement
Reporter: Pavel Yaskevich
Assignee: Pavel Yaskevich
  Labels: core
 Fix For: 1.2.2

 Attachments: 4937-v3.txt, CASSANDRA-4937.patch, 
 CASSANDRA-4937-trunk.patch


 After a good amount of testing on one of the clusters, it was found that in 
 order to improve read latency we need to minimize the allocation rate that 
 compression involves; that minimizes GC (as well as heap usage) and 
 substantially decreases latency on read-heavy workloads. 
 I have also discovered that the RAR skip cache harms performance in situations 
 where reads are done in parallel with compaction working on relatively big 
 SSTable files (a few GB and more). The attached patch removes the possibility to 
 skip cache for compressed files (I can also add changes to RAR to remove the 
 skip cache functionality as a separate patch). 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5248) Fix timestamp-based tombstone removal logic

2013-02-13 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13577904#comment-13577904
 ] 

Yuki Morishita commented on CASSANDRA-5248:
---

Sorry, I was wrong. I just got confused. :(
You're right that the comparison should be the opposite.


 Fix timestamp-based tombstone removal logic
 --

 Key: CASSANDRA-5248
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5248
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.1
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
 Fix For: 1.2.2

 Attachments: 5248.txt


 Quoting the description of CASSANDRA-4671:
 {quote}
 In other words, we should force CompactionController.shouldPurge() to return 
 true if min_timestamp(non-compacted-overlapping-sstables) > 
 max_timestamp(compacted-sstables)
 {quote}
 but somehow this was translating in the code to:
 {noformat}
 if (sstable.getBloomFilter().isPresent(key.key) && sstable.getMinTimestamp() 
 >= maxDeletionTimestamp)
 return false;
 {noformat}
 which, well, is reversed.
 Attaching the trivial patch to fix this. I note that we already had a test that 
 caught this (CompactionsTest.testDontPurgeAccidentaly), but that test was 
 racy in that most of the time the compaction was done in the same second as 
 the removal done prior to it, and thus the compaction wasn't considering the 
 tombstone gcable even though gcgrace was 0. I've already pushed the addition 
 of a 1-second delay so that the test reliably catches this bug.
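 In sketch form, the intended direction of the check is the following (simplified, 
 with plain parameters instead of the real method signature):
 {code}
 // Sketch: a tombstone being compacted can only be purged if every overlapping,
 // non-compacted sstable that might contain the key holds strictly newer data.
 public class ShouldPurgeSketch
 {
     static boolean shouldPurge(boolean keyMaybePresent, long overlappingMinTimestamp, long maxDeletionTimestamp)
     {
         if (keyMaybePresent && overlappingMinTimestamp <= maxDeletionTimestamp)
             return false; // older (or equally old) data may still exist outside the compaction: keep the tombstone
         return true;
     }
 }
 {code}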

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-4872) Move manifest into sstable metadata

2013-02-13 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-4872:
---

Attachment: 
0001-CASSANDRA-4872-move-sstable-level-into-sstable-metad-v2.patch

v2 sends files back to L0 when loadNewSSTables is called.

I also realized that people might drop in sstables manually and then restart 
Cassandra; this could make sstables within levels overlap.

I guess we might need to check for overlapping sstables on startup.

 Move manifest into sstable metadata
 ---

 Key: CASSANDRA-4872
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4872
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Jonathan Ellis
Assignee: Marcus Eriksson
Priority: Minor
 Fix For: 2.0

 Attachments: 
 0001-CASSANDRA-4872-move-sstable-level-into-sstable-metad-v1.patch, 
 0001-CASSANDRA-4872-move-sstable-level-into-sstable-metad-v2.patch


 Now that we have a metadata component, it would be better to keep the sstable 
 level there rather than in a separate manifest.  With per-sstable information we 
 don't need to do a full re-level if there is corruption.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-4872) Move manifest into sstable metadata

2013-02-13 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-4872:
---

Attachment: 
0001-CASSANDRA-4872-move-sstable-level-into-sstable-metad-v3.patch

v3 checks that the metadata file actually exists before trying to change the level.

 Move manifest into sstable metadata
 ---

 Key: CASSANDRA-4872
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4872
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Jonathan Ellis
Assignee: Marcus Eriksson
Priority: Minor
 Fix For: 2.0

 Attachments: 
 0001-CASSANDRA-4872-move-sstable-level-into-sstable-metad-v1.patch, 
 0001-CASSANDRA-4872-move-sstable-level-into-sstable-metad-v2.patch, 
 0001-CASSANDRA-4872-move-sstable-level-into-sstable-metad-v3.patch


 Now that we have a metadata component, it would be better to keep the sstable 
 level there rather than in a separate manifest.  With per-sstable information we 
 don't need to do a full re-level if there is corruption.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4872) Move manifest into sstable metadata

2013-02-13 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13577930#comment-13577930
 ] 

Jonathan Ellis commented on CASSANDRA-4872:
---

bq. guess we might need to check for overlapping sstables on startup

If we use the first/last keys from the metadata this should be tractable.  (Of 
course we only have to check for overlaps w/in the claimed level.)
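The startup check would amount to something like this (a sketch with simplified 
types; decorated keys are modelled as plain strings for illustration):
{code}
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Sketch: detect overlapping sstables within one level using only the first/last
// keys recorded in each sstable's metadata.
public class LevelOverlapCheckSketch
{
    static class SSTable
    {
        final String first, last;
        SSTable(String first, String last) { this.first = first; this.last = last; }
    }

    static boolean hasOverlap(List<SSTable> level)
    {
        List<SSTable> sorted = new ArrayList<>(level);
        sorted.sort(Comparator.comparing((SSTable s) -> s.first));
        for (int i = 1; i < sorted.size(); i++)
            if (sorted.get(i).first.compareTo(sorted.get(i - 1).last) <= 0)
                return true; // ranges touch or overlap within the claimed level
        return false;
    }
}
{code}
Any offenders could then be demoted to L0 rather than forcing a full re-level.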

 Move manifest into sstable metadata
 ---

 Key: CASSANDRA-4872
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4872
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Jonathan Ellis
Assignee: Marcus Eriksson
Priority: Minor
 Fix For: 2.0

 Attachments: 
 0001-CASSANDRA-4872-move-sstable-level-into-sstable-metad-v1.patch, 
 0001-CASSANDRA-4872-move-sstable-level-into-sstable-metad-v2.patch, 
 0001-CASSANDRA-4872-move-sstable-level-into-sstable-metad-v3.patch


 Now that we have a metadata component, it would be better to keep the sstable 
 level there rather than in a separate manifest.  With per-sstable information we 
 don't need to do a full re-level if there is corruption.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4937) CRAR improvements (object cache + CompressionMetadata chunk offset storage moved off-heap).

2013-02-13 Thread Pavel Yaskevich (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13577931#comment-13577931
 ] 

Pavel Yaskevich commented on CASSANDRA-4937:


90% by itself doesn't give the heuristic enough information, so I see two options 
here: 1) make it a config option, disabled by default; 2) don't bother doing 
that at all.

 CRAR improvements (object cache + CompressionMetadata chunk offset storage 
 moved off-heap).
 ---

 Key: CASSANDRA-4937
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4937
 Project: Cassandra
  Issue Type: Improvement
Reporter: Pavel Yaskevich
Assignee: Pavel Yaskevich
  Labels: core
 Fix For: 1.2.2

 Attachments: 4937-v3.txt, CASSANDRA-4937.patch, 
 CASSANDRA-4937-trunk.patch


 After a good amount of testing on one of the clusters, it was found that in 
 order to improve read latency we need to minimize the allocation rate that 
 compression involves; that minimizes GC (as well as heap usage) and 
 substantially decreases latency on read-heavy workloads. 
 I have also discovered that the RAR skip cache harms performance in situations 
 where reads are done in parallel with compaction working on relatively big 
 SSTable files (a few GB and more). The attached patch removes the possibility to 
 skip cache for compressed files (I can also add changes to RAR to remove the 
 skip cache functionality as a separate patch). 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4937) CRAR improvements (object cache + CompressionMetadata chunk offset storage moved off-heap).

2013-02-13 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13577936#comment-13577936
 ] 

Jonathan Ellis commented on CASSANDRA-4937:
---

bq. 90% by itself doesn't give the heuristic enough information

I don't think I buy that.  We make similar assumptions all over the place 
(bloom filters, tombstone compaction, redundant requests...)

Of course it's not going to be right all the time; it just needs to be right 
10x as often as it's wrong, for us to win.

 CRAR improvements (object cache + CompressionMetadata chunk offset storage 
 moved off-heap).
 ---

 Key: CASSANDRA-4937
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4937
 Project: Cassandra
  Issue Type: Improvement
Reporter: Pavel Yaskevich
Assignee: Pavel Yaskevich
  Labels: core
 Fix For: 1.2.2

 Attachments: 4937-v3.txt, CASSANDRA-4937.patch, 
 CASSANDRA-4937-trunk.patch


 After a good amount of testing on one of the clusters, it was found that in 
 order to improve read latency we need to minimize the allocation rate that 
 compression involves; that minimizes GC (as well as heap usage) and 
 substantially decreases latency on read-heavy workloads. 
 I have also discovered that the RAR skip cache harms performance in situations 
 where reads are done in parallel with compaction working on relatively big 
 SSTable files (a few GB and more). The attached patch removes the possibility to 
 skip cache for compressed files (I can also add changes to RAR to remove the 
 skip cache functionality as a separate patch). 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4937) CRAR improvements (object cache + CompressionMetadata chunk offset storage moved off-heap).

2013-02-13 Thread Pavel Yaskevich (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13577942#comment-13577942
 ] 

Pavel Yaskevich commented on CASSANDRA-4937:


If that is an optimization for big rows, then it renders the feature useless when 
we have a combination of big and small rows mixed together, because it doesn't take 
into account what is actually in the key cache. It also doesn't take into account 
how big rows are distributed inside the SSTable, so if they share a page with a few 
small rows (which are in the page cache) then we are effectively missing out on the 
benefits that the preheat feature gives us. 

I'm -1 on committing that with that option because it's bad from an operations 
perspective: once the 90% switch flips it could result in a sudden degradation of 
latencies that would be impossible to understand without actually knowing the code, 
even if the subset of data actually read is still smaller than the page size.

 CRAR improvements (object cache + CompressionMetadata chunk offset storage 
 moved off-heap).
 ---

 Key: CASSANDRA-4937
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4937
 Project: Cassandra
  Issue Type: Improvement
Reporter: Pavel Yaskevich
Assignee: Pavel Yaskevich
  Labels: core
 Fix For: 1.2.2

 Attachments: 4937-v3.txt, CASSANDRA-4937.patch, 
 CASSANDRA-4937-trunk.patch


 After a good amount of testing on one of the clusters, it was found that in 
 order to improve read latency we need to minimize the allocation rate that 
 compression involves; that minimizes GC (as well as heap usage) and 
 substantially decreases latency on read-heavy workloads. 
 I have also discovered that the RAR skip cache harms performance in situations 
 where reads are done in parallel with compaction working on relatively big 
 SSTable files (a few GB and more). The attached patch removes the possibility to 
 skip cache for compressed files (I can also add changes to RAR to remove the 
 skip cache functionality as a separate patch). 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4937) CRAR improvements (object cache + CompressionMetadata chunk offset storage moved off-heap).

2013-02-13 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13577945#comment-13577945
 ] 

Jonathan Ellis commented on CASSANDRA-4937:
---

This is useless for wide rows, so what I am trying to do is keep us from 
wasting seeks in that case.

 CRAR improvements (object cache + CompressionMetadata chunk offset storage 
 moved off-heap).
 ---

 Key: CASSANDRA-4937
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4937
 Project: Cassandra
  Issue Type: Improvement
Reporter: Pavel Yaskevich
Assignee: Pavel Yaskevich
  Labels: core
 Fix For: 1.2.2

 Attachments: 4937-v3.txt, CASSANDRA-4937.patch, 
 CASSANDRA-4937-trunk.patch


 After a good amount of testing on one of the clusters, it was found that in 
 order to improve read latency we need to minimize the allocation rate that 
 compression involves; that minimizes GC (as well as heap usage) and 
 substantially decreases latency on read-heavy workloads. 
 I have also discovered that the RAR skip cache harms performance in situations 
 where reads are done in parallel with compaction working on relatively big 
 SSTable files (a few GB and more). The attached patch removes the possibility to 
 skip cache for compressed files (I can also add changes to RAR to remove the 
 skip cache functionality as a separate patch). 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4937) CRAR improvements (object cache + CompressionMetadata chunk offset storage moved off-heap).

2013-02-13 Thread Pavel Yaskevich (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13577955#comment-13577955
 ] 

Pavel Yaskevich commented on CASSANDRA-4937:


I understand that, and I'm suggesting to make it an option so that people are aware 
of the trade-off. Also, the main idea of preheating the first page was to minimize 
the latency of the random-to-sequential I/O inside the row, even if the row is 
bigger than one page (also pretty useful in 1.1, where there are no promoted 
indexes), for rows that we know we were hitting before.

 CRAR improvements (object cache + CompressionMetadata chunk offset storage 
 moved off-heap).
 ---

 Key: CASSANDRA-4937
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4937
 Project: Cassandra
  Issue Type: Improvement
Reporter: Pavel Yaskevich
Assignee: Pavel Yaskevich
  Labels: core
 Fix For: 1.2.2

 Attachments: 4937-v3.txt, CASSANDRA-4937.patch, 
 CASSANDRA-4937-trunk.patch


 After a good amount of testing on one of the clusters, it was found that in 
 order to improve read latency we need to minimize the allocation rate that 
 compression involves; that minimizes GC (as well as heap usage) and 
 substantially decreases latency on read-heavy workloads. 
 I have also discovered that the RAR skip cache harms performance in situations 
 where reads are done in parallel with compaction working on relatively big 
 SSTable files (a few GB and more). The attached patch removes the possibility to 
 skip cache for compressed files (I can also add changes to RAR to remove the 
 skip cache functionality as a separate patch). 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-5252) Starting Cassandra throws EOF while reading saved cache

2013-02-13 Thread Drew Kutcharian (JIRA)
Drew Kutcharian created CASSANDRA-5252:
--

 Summary: Starting Cassandra throws EOF while reading saved cache
 Key: CASSANDRA-5252
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5252
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Drew Kutcharian
Assignee: Dave Brosius
Priority: Minor
 Fix For: 1.2.1
 Attachments: data.zip

Currently seeing nodes throw an EOF while reading a saved cache on the system 
schema when starting Cassandra:

 WARN 14:25:54,896 error reading saved cache 
/ssd/saved_caches/system-schema_columns-KeyCache-b.db
java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:392)
at 
org.apache.cassandra.utils.ByteBufferUtil.readWithLength(ByteBufferUtil.java:349)
at 
org.apache.cassandra.service.CacheService$KeyCacheSerializer.deserialize(CacheService.java:378)
at 
org.apache.cassandra.cache.AutoSavingCache.loadSaved(AutoSavingCache.java:144)
at 
org.apache.cassandra.db.ColumnFamilyStore.init(ColumnFamilyStore.java:278)
at 
org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:393)
at 
org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:365)
at org.apache.cassandra.db.Table.initCf(Table.java:334)
at org.apache.cassandra.db.Table.init(Table.java:272)
at org.apache.cassandra.db.Table.open(Table.java:102)
at org.apache.cassandra.db.Table.open(Table.java:80)
at org.apache.cassandra.db.SystemTable.checkHealth(SystemTable.java:320)
at 
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:203)
at 
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:395)
at 
org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:438)


To reproduce: delete all data files, start a cluster, and leave the cluster up long 
enough to build a cache. Run nodetool drain and kill the Cassandra process. Start the 
Cassandra process in the foreground and note the EOF thrown (see above for the stack 
trace).
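The failure mode is a partially written cache file, so the defensive reading pattern 
looks roughly like this (a sketch only, not the AutoSavingCache code):
{code}
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Sketch: a saved key cache is best-effort, so a truncated file should degrade to
// "fewer preloaded keys", never to a failed startup.
public class SavedCacheReaderSketch
{
    static List<byte[]> readKeys(DataInputStream in)
    {
        List<byte[]> keys = new ArrayList<>();
        try
        {
            while (true)
            {
                int length = in.readInt();   // throws EOFException on a truncated file
                if (length < 0)
                    break;                   // corrupt entry: stop and keep what we have
                byte[] key = new byte[length];
                in.readFully(key);
                keys.add(key);
            }
        }
        catch (EOFException eof)
        {
            // End of file (or truncation): keep whatever was read so far.
        }
        catch (IOException e)
        {
            // Corrupt entry: log and fall back to a cold cache.
        }
        return keys;
    }
}
{code}
Since the saved cache is only an optimization, a truncated file should result in a 
colder cache rather than a startup failure.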

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5252) Starting Cassandra throws EOF while reading saved cache

2013-02-13 Thread Drew Kutcharian (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Drew Kutcharian updated CASSANDRA-5252:
---

Attachment: (was: 4916.txt)

 Starting Cassandra throws EOF while reading saved cache
 ---

 Key: CASSANDRA-5252
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5252
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Drew Kutcharian
Assignee: Dave Brosius
Priority: Minor
 Fix For: 1.2.1

 Attachments: data.zip


 Currently seeing nodes throw an EOF while reading a saved cache on the system 
 schema when starting Cassandra:
  WARN 14:25:54,896 error reading saved cache 
 /ssd/saved_caches/system-schema_columns-KeyCache-b.db
 java.io.EOFException
   at java.io.DataInputStream.readInt(DataInputStream.java:392)
   at 
 org.apache.cassandra.utils.ByteBufferUtil.readWithLength(ByteBufferUtil.java:349)
   at 
 org.apache.cassandra.service.CacheService$KeyCacheSerializer.deserialize(CacheService.java:378)
   at 
 org.apache.cassandra.cache.AutoSavingCache.loadSaved(AutoSavingCache.java:144)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.init(ColumnFamilyStore.java:278)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:393)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:365)
   at org.apache.cassandra.db.Table.initCf(Table.java:334)
   at org.apache.cassandra.db.Table.init(Table.java:272)
   at org.apache.cassandra.db.Table.open(Table.java:102)
   at org.apache.cassandra.db.Table.open(Table.java:80)
   at org.apache.cassandra.db.SystemTable.checkHealth(SystemTable.java:320)
   at 
 org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:203)
   at 
 org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:395)
   at 
 org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:438)
 To reproduce: delete all data files, start a cluster, and leave the cluster up long 
 enough to build a cache. Run nodetool drain and kill the Cassandra process. Start 
 the Cassandra process in the foreground and note the EOF thrown (see above for the 
 stack trace).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4937) CRAR improvements (object cache + CompressionMetadata chunk offset storage moved off-heap).

2013-02-13 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13577978#comment-13577978
 ] 

Jonathan Ellis commented on CASSANDRA-4937:
---

I don't see any point in making it an option.  Users are typically going to 
have a less accurate idea of what is going on than we do with sstable stats.

 CRAR improvements (object cache + CompressionMetadata chunk offset storage 
 moved off-heap).
 ---

 Key: CASSANDRA-4937
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4937
 Project: Cassandra
  Issue Type: Improvement
Reporter: Pavel Yaskevich
Assignee: Pavel Yaskevich
  Labels: core
 Fix For: 1.2.2

 Attachments: 4937-v3.txt, CASSANDRA-4937.patch, 
 CASSANDRA-4937-trunk.patch


 After a good amount of testing on one of the clusters, it was found that in 
 order to improve read latency we need to minimize the allocation rate that 
 compression involves; that minimizes GC (as well as heap usage) and 
 substantially decreases latency on read-heavy workloads. 
 I have also discovered that the RAR skip cache harms performance in situations 
 where reads are done in parallel with compaction working on relatively big 
 SSTable files (a few GB and more). The attached patch removes the possibility to 
 skip cache for compressed files (I can also add changes to RAR to remove the 
 skip cache functionality as a separate patch). 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5252) Starting Cassandra throws EOF while reading saved cache

2013-02-13 Thread Drew Kutcharian (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Drew Kutcharian updated CASSANDRA-5252:
---

Fix Version/s: (was: 1.2.1)
  Description: 
I just saw this exception happen on Cassandra 1.2.1. I thought this was fixed 
by CASSANDRA-4916. Was this part of the 1.2.1 release?

I'm on Mac OS X 10.8.2, Oracle JDK 1.7.0_11, using snappy-java 1.0.5-M3 from 
Maven (not sure if that's the cause).
I'm attaching my data and log directory as data.zip.

{code}
 WARN [main] 2013-02-12 17:50:11,714 AutoSavingCache.java (line 160) error 
reading saved cache /Users/services/cassandra/data/saved_caches/system-schema
_columnfamilies-KeyCache-b.db
java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:392)
at 
org.apache.cassandra.utils.ByteBufferUtil.readWithLength(ByteBufferUtil.java:349)
at 
org.apache.cassandra.service.CacheService$KeyCacheSerializer.deserialize(CacheService.java:378)
at 
org.apache.cassandra.cache.AutoSavingCache.loadSaved(AutoSavingCache.java:144)
at 
org.apache.cassandra.db.ColumnFamilyStore.init(ColumnFamilyStore.java:277)
at 
org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:392)
at 
org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:364)
at org.apache.cassandra.db.Table.initCf(Table.java:337)
at org.apache.cassandra.db.Table.init(Table.java:280)
at org.apache.cassandra.db.Table.open(Table.java:110)
at org.apache.cassandra.db.Table.open(Table.java:88)
at org.apache.cassandra.db.SystemTable.checkHealth(SystemTable.java:421)
at 
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:177)
at 
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:370)
at 
org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:413)
 INFO [SSTableBatchOpen:1] 2013-02-12 17:50:11,722 SSTableReader.java (line 
164) Opening /Users/services/cassandra/data/data/system/schema_columns/syste
m-schema_columns-ib-6 (193 bytes)
 INFO [SSTableBatchOpen:2] 2013-02-12 17:50:11,722 SSTableReader.java (line 
164) Opening /Users/services/cassandra/data/data/system/schema_columns/syste
m-schema_columns-ib-5 (3840 bytes)
 INFO [main] 2013-02-12 17:50:11,725 AutoSavingCache.java (line 139) reading 
saved cache /Users/services/cassandra/data/saved_caches/system-schema_colum
ns-KeyCache-b.db
 WARN [main] 2013-02-12 17:50:11,725 AutoSavingCache.java (line 160) error 
reading saved cache /Users/services/cassandra/data/saved_caches/system-schema
_columns-KeyCache-b.db
java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:392)
at 
org.apache.cassandra.utils.ByteBufferUtil.readWithLength(ByteBufferUtil.java:349)
at 
org.apache.cassandra.service.CacheService$KeyCacheSerializer.deserialize(CacheService.java:378)
at 
org.apache.cassandra.cache.AutoSavingCache.loadSaved(AutoSavingCache.java:144)
at 
org.apache.cassandra.db.ColumnFamilyStore.init(ColumnFamilyStore.java:277)
at 
org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:392)
at 
org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:364)
at org.apache.cassandra.db.Table.initCf(Table.java:337)
at org.apache.cassandra.db.Table.init(Table.java:280)
at org.apache.cassandra.db.Table.open(Table.java:110)
at org.apache.cassandra.db.Table.open(Table.java:88)
at org.apache.cassandra.db.SystemTable.checkHealth(SystemTable.java:421)
at 
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:177)
at 
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:370)
at 
org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:413)
 INFO [SSTableBatchOpen:1] 2013-02-12 17:50:11,736 SSTableReader.java (line 
164) Opening /Users/services/cassandra/data/data/system/local/system-local-i
b-14 (458 bytes)
 INFO [main] 2013-02-12 17:50:11,738 AutoSavingCache.java (line 139) reading 
saved cache /Users/services/cassandra/data/saved_caches/system-local-KeyCac
he-b.db
 WARN [main] 2013-02-12 17:50:11,739 AutoSavingCache.java (line 160) error 
reading saved cache /Users/services/cassandra/data/saved_caches/system-local-
KeyCache-b.db
java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:392)
at 
org.apache.cassandra.utils.ByteBufferUtil.readWithLength(ByteBufferUtil.java:349)
at 
org.apache.cassandra.service.CacheService$KeyCacheSerializer.deserialize(CacheService.java:378)
at 
org.apache.cassandra.cache.AutoSavingCache.loadSaved(AutoSavingCache.java:144)
at 

[jira] [Updated] (CASSANDRA-5252) Starting Cassandra throws EOF while reading saved cache

2013-02-13 Thread Drew Kutcharian (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Drew Kutcharian updated CASSANDRA-5252:
---

Description: 
I just saw this exception happen on Cassandra 1.2.1. I thought this was fixed 
by CASSANDRA-4916. Wasn't CASSANDRA-4916 part of the 1.2.1 release?

I'm on Mac OS X 10.8.2, Oracle JDK 1.7.0_11, using snappy-java 1.0.5-M3 from 
Maven (not sure if that's the cause).
I'm attaching my data and log directory as data.zip.


{code}
 WARN [main] 2013-02-12 17:50:11,714 AutoSavingCache.java (line 160) error 
reading saved cache /Users/services/cassandra/data/saved_caches/system-schema
_columnfamilies-KeyCache-b.db
java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:392)
at 
org.apache.cassandra.utils.ByteBufferUtil.readWithLength(ByteBufferUtil.java:349)
at 
org.apache.cassandra.service.CacheService$KeyCacheSerializer.deserialize(CacheService.java:378)
at 
org.apache.cassandra.cache.AutoSavingCache.loadSaved(AutoSavingCache.java:144)
at 
org.apache.cassandra.db.ColumnFamilyStore.init(ColumnFamilyStore.java:277)
at 
org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:392)
at 
org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:364)
at org.apache.cassandra.db.Table.initCf(Table.java:337)
at org.apache.cassandra.db.Table.init(Table.java:280)
at org.apache.cassandra.db.Table.open(Table.java:110)
at org.apache.cassandra.db.Table.open(Table.java:88)
at org.apache.cassandra.db.SystemTable.checkHealth(SystemTable.java:421)
at 
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:177)
at 
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:370)
at 
org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:413)
 INFO [SSTableBatchOpen:1] 2013-02-12 17:50:11,722 SSTableReader.java (line 
164) Opening /Users/services/cassandra/data/data/system/schema_columns/syste
m-schema_columns-ib-6 (193 bytes)
 INFO [SSTableBatchOpen:2] 2013-02-12 17:50:11,722 SSTableReader.java (line 
164) Opening /Users/services/cassandra/data/data/system/schema_columns/syste
m-schema_columns-ib-5 (3840 bytes)
 INFO [main] 2013-02-12 17:50:11,725 AutoSavingCache.java (line 139) reading 
saved cache /Users/services/cassandra/data/saved_caches/system-schema_colum
ns-KeyCache-b.db
 WARN [main] 2013-02-12 17:50:11,725 AutoSavingCache.java (line 160) error 
reading saved cache /Users/services/cassandra/data/saved_caches/system-schema
_columns-KeyCache-b.db
java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:392)
at 
org.apache.cassandra.utils.ByteBufferUtil.readWithLength(ByteBufferUtil.java:349)
at 
org.apache.cassandra.service.CacheService$KeyCacheSerializer.deserialize(CacheService.java:378)
at 
org.apache.cassandra.cache.AutoSavingCache.loadSaved(AutoSavingCache.java:144)
at 
org.apache.cassandra.db.ColumnFamilyStore.init(ColumnFamilyStore.java:277)
at 
org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:392)
at 
org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:364)
at org.apache.cassandra.db.Table.initCf(Table.java:337)
at org.apache.cassandra.db.Table.init(Table.java:280)
at org.apache.cassandra.db.Table.open(Table.java:110)
at org.apache.cassandra.db.Table.open(Table.java:88)
at org.apache.cassandra.db.SystemTable.checkHealth(SystemTable.java:421)
at 
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:177)
at 
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:370)
at 
org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:413)
 INFO [SSTableBatchOpen:1] 2013-02-12 17:50:11,736 SSTableReader.java (line 
164) Opening /Users/services/cassandra/data/data/system/local/system-local-i
b-14 (458 bytes)
 INFO [main] 2013-02-12 17:50:11,738 AutoSavingCache.java (line 139) reading 
saved cache /Users/services/cassandra/data/saved_caches/system-local-KeyCac
he-b.db
 WARN [main] 2013-02-12 17:50:11,739 AutoSavingCache.java (line 160) error 
reading saved cache /Users/services/cassandra/data/saved_caches/system-local-
KeyCache-b.db
java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:392)
at 
org.apache.cassandra.utils.ByteBufferUtil.readWithLength(ByteBufferUtil.java:349)
at 
org.apache.cassandra.service.CacheService$KeyCacheSerializer.deserialize(CacheService.java:378)
at 
org.apache.cassandra.cache.AutoSavingCache.loadSaved(AutoSavingCache.java:144)
at 
org.apache.cassandra.db.ColumnFamilyStore.init(ColumnFamilyStore.java:277)

git commit: Simplify auth setup and make system_auth ks alterable

2013-02-13 Thread aleksey
Updated Branches:
  refs/heads/cassandra-1.2 e531be77a -> 265964064


Simplify auth setup and make system_auth ks alterable

Patch by Aleksey Yeschenko; reviewed by Jonathan Ellis for
CASSANDRA-5112


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/26596406
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/26596406
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/26596406

Branch: refs/heads/cassandra-1.2
Commit: 265964064bd5012b871101e884d7e2032a44e32a
Parents: e531be7
Author: Aleksey Yeschenko alek...@apache.org
Authored: Thu Feb 14 01:19:04 2013 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Thu Feb 14 01:19:04 2013 +0300

--
 CHANGES.txt|1 +
 bin/cqlsh  |2 +-
 pylib/cqlshlib/cql3handling.py |   12 +-
 src/java/org/apache/cassandra/auth/Auth.java   |  132 +--
 .../org/apache/cassandra/config/CFMetaData.java|8 +-
 .../cassandra/config/DatabaseDescriptor.java   |4 +-
 .../org/apache/cassandra/config/KSMetaData.java|7 -
 src/java/org/apache/cassandra/config/Schema.java   |3 +-
 .../org/apache/cassandra/cql3/QueryProcessor.java  |4 +-
 .../cql3/statements/ListUsersStatement.java|2 +-
 .../apache/cassandra/service/CassandraDaemon.java  |4 -
 .../org/apache/cassandra/service/ClientState.java  |   14 ++-
 .../apache/cassandra/service/MigrationManager.java |7 +-
 .../apache/cassandra/service/StorageService.java   |   10 +-
 14 files changed, 153 insertions(+), 57 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/26596406/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 5dd2499..3d0f633 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -18,6 +18,7 @@
  * Implement caching of authorization results (CASSANDRA-4295)
  * Add support for LZ4 compression (CASSANDRA-5038)
  * Fix missing columns in wide rows queries (CASSANDRA-5225)
+ * Simplify auth setup and make system_auth ks alterable (CASSANDRA-5112)
 
 
 1.2.1

http://git-wip-us.apache.org/repos/asf/cassandra/blob/26596406/bin/cqlsh
--
diff --git a/bin/cqlsh b/bin/cqlsh
index 4f58bdc..6db59a3 100755
--- a/bin/cqlsh
+++ b/bin/cqlsh
@@ -173,7 +173,7 @@ else:
 
 debug_completion = bool(os.environ.get('CQLSH_DEBUG_COMPLETION', '') == 'YES')
 
-SYSTEM_KEYSPACES = ('system', 'system_traces', 'system_auth')
+SYSTEM_KEYSPACES = ('system', 'system_traces')
 
 # we want the cql parser to understand our cqlsh-specific commands too
 my_commands_ending_with_newline = (

http://git-wip-us.apache.org/repos/asf/cassandra/blob/26596406/pylib/cqlshlib/cql3handling.py
--
diff --git a/pylib/cqlshlib/cql3handling.py b/pylib/cqlshlib/cql3handling.py
index 27bd67b..def573e 100644
--- a/pylib/cqlshlib/cql3handling.py
+++ b/pylib/cqlshlib/cql3handling.py
@@ -39,6 +39,7 @@ class UnexpectedTableStructure(UserWarning):
 return 'Unexpected table structure; may not translate correctly to 
CQL. ' + self.msg
 
 SYSTEM_KEYSPACES = ('system', 'system_traces', 'system_auth')
+NONALTERBALE_KEYSPACES = ('system', 'system_traces')
 
 class Cql3ParsingRuleSet(CqlParsingRuleSet):
 keywords = set((
@@ -306,6 +307,8 @@ JUNK ::= /([ 
\t\r\f\v]+|(--|[/][/])[^\n\r]*([\n\r]|$)|[/][*].*?[*][/])/ ;
 
 nonSystemKeyspaceName ::= ksname=cfOrKsName ;
 
+alterableKeyspaceName ::= ksname=cfOrKsName ;
+
 cfOrKsName ::= identifier
| quotedName
| unreservedKeyword;
@@ -686,6 +689,11 @@ def ks_name_completer(ctxt, cass):
     ksnames = [n for n in cass.get_keyspace_names() if n not in SYSTEM_KEYSPACES]
     return map(maybe_escape_name, ksnames)
 
+@completer_for('alterableKeyspaceName', 'ksname')
+def ks_name_completer(ctxt, cass):
+    ksnames = [n for n in cass.get_keyspace_names() if n not in NONALTERBALE_KEYSPACES]
+    return map(maybe_escape_name, ksnames)
+
 @completer_for('columnFamilyName', 'ksname')
 def cf_ks_name_completer(ctxt, cass):
     return [maybe_escape_name(ks) + '.' for ks in cass.get_keyspace_names()]
@@ -1242,7 +1250,7 @@ def alter_table_col_completer(ctxt, cass):
 explain_completion('alterInstructions', 'newcol', 'new_column_name')
 
 syntax_rules += r'''
-alterKeyspaceStatement ::= ALTER ( KEYSPACE | SCHEMA ) 
ks=nonSystemKeyspaceName
+alterKeyspaceStatement ::= ALTER ( KEYSPACE | SCHEMA ) 
ks=alterableKeyspaceName
  WITH newPropSpec ( AND newPropSpec )*
;
 '''
@@ -1295,7 +1303,7 @@ syntax_rules += r'''
  ;
 
 dataResource ::= 

[1/3] git commit: Make CompactionsTest.testDontPurgeAccidentaly more reliable with gcgrace=0

2013-02-13 Thread aleksey
Make CompactionsTest.testDontPurgeAccidentaly more reliable with gcgrace=0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e531be77
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e531be77
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e531be77

Branch: refs/heads/trunk
Commit: e531be77a417e45d5a4f8fe7149b489d4e6cf3b1
Parents: 2fe8133
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Wed Feb 13 11:54:59 2013 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Wed Feb 13 11:54:59 2013 +0100

--
 .../cassandra/db/compaction/CompactionsTest.java   |3 +++
 1 files changed, 3 insertions(+), 0 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e531be77/test/unit/org/apache/cassandra/db/compaction/CompactionsTest.java
--
diff --git a/test/unit/org/apache/cassandra/db/compaction/CompactionsTest.java 
b/test/unit/org/apache/cassandra/db/compaction/CompactionsTest.java
index e543b00..b41bf19 100644
--- a/test/unit/org/apache/cassandra/db/compaction/CompactionsTest.java
+++ b/test/unit/org/apache/cassandra/db/compaction/CompactionsTest.java
@@ -340,6 +340,9 @@ public class CompactionsTest extends SchemaLoader
         ColumnFamily cf = cfs.getColumnFamily(filter);
         assert cf == null || cf.isEmpty() : "should be empty: " + cf;
 
+        // Sleep one second so that the removal is indeed purgeable even with gcgrace == 0
+        Thread.sleep(1000);
+
         cfs.forceBlockingFlush();
 
         Collection<SSTableReader> sstablesAfter = cfs.getSSTables();



[2/3] git commit: Simplify auth setup and make system_auth ks alterable

2013-02-13 Thread aleksey
Simplify auth setup and make system_auth ks alterable

Patch by Aleksey Yeschenko; reviewed by Jonathan Ellis for
CASSANDRA-5112


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/26596406
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/26596406
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/26596406

Branch: refs/heads/trunk
Commit: 265964064bd5012b871101e884d7e2032a44e32a
Parents: e531be7
Author: Aleksey Yeschenko alek...@apache.org
Authored: Thu Feb 14 01:19:04 2013 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Thu Feb 14 01:19:04 2013 +0300

--
 CHANGES.txt|1 +
 bin/cqlsh  |2 +-
 pylib/cqlshlib/cql3handling.py |   12 +-
 src/java/org/apache/cassandra/auth/Auth.java   |  132 +--
 .../org/apache/cassandra/config/CFMetaData.java|8 +-
 .../cassandra/config/DatabaseDescriptor.java   |4 +-
 .../org/apache/cassandra/config/KSMetaData.java|7 -
 src/java/org/apache/cassandra/config/Schema.java   |3 +-
 .../org/apache/cassandra/cql3/QueryProcessor.java  |4 +-
 .../cql3/statements/ListUsersStatement.java|2 +-
 .../apache/cassandra/service/CassandraDaemon.java  |4 -
 .../org/apache/cassandra/service/ClientState.java  |   14 ++-
 .../apache/cassandra/service/MigrationManager.java |7 +-
 .../apache/cassandra/service/StorageService.java   |   10 +-
 14 files changed, 153 insertions(+), 57 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/26596406/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 5dd2499..3d0f633 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -18,6 +18,7 @@
  * Implement caching of authorization results (CASSANDRA-4295)
  * Add support for LZ4 compression (CASSANDRA-5038)
  * Fix missing columns in wide rows queries (CASSANDRA-5225)
+ * Simplify auth setup and make system_auth ks alterable (CASSANDRA-5112)
 
 
 1.2.1

http://git-wip-us.apache.org/repos/asf/cassandra/blob/26596406/bin/cqlsh
--
diff --git a/bin/cqlsh b/bin/cqlsh
index 4f58bdc..6db59a3 100755
--- a/bin/cqlsh
+++ b/bin/cqlsh
@@ -173,7 +173,7 @@ else:
 
 debug_completion = bool(os.environ.get('CQLSH_DEBUG_COMPLETION', '') == 'YES')
 
-SYSTEM_KEYSPACES = ('system', 'system_traces', 'system_auth')
+SYSTEM_KEYSPACES = ('system', 'system_traces')
 
 # we want the cql parser to understand our cqlsh-specific commands too
 my_commands_ending_with_newline = (

http://git-wip-us.apache.org/repos/asf/cassandra/blob/26596406/pylib/cqlshlib/cql3handling.py
--
diff --git a/pylib/cqlshlib/cql3handling.py b/pylib/cqlshlib/cql3handling.py
index 27bd67b..def573e 100644
--- a/pylib/cqlshlib/cql3handling.py
+++ b/pylib/cqlshlib/cql3handling.py
@@ -39,6 +39,7 @@ class UnexpectedTableStructure(UserWarning):
 return 'Unexpected table structure; may not translate correctly to 
CQL. ' + self.msg
 
 SYSTEM_KEYSPACES = ('system', 'system_traces', 'system_auth')
+NONALTERBALE_KEYSPACES = ('system', 'system_traces')
 
 class Cql3ParsingRuleSet(CqlParsingRuleSet):
 keywords = set((
@@ -306,6 +307,8 @@ JUNK ::= /([ 
\t\r\f\v]+|(--|[/][/])[^\n\r]*([\n\r]|$)|[/][*].*?[*][/])/ ;
 
 nonSystemKeyspaceName ::= ksname=cfOrKsName ;
 
+alterableKeyspaceName ::= ksname=cfOrKsName ;
+
 cfOrKsName ::= identifier
| quotedName
| unreservedKeyword;
@@ -686,6 +689,11 @@ def ks_name_completer(ctxt, cass):
     ksnames = [n for n in cass.get_keyspace_names() if n not in SYSTEM_KEYSPACES]
     return map(maybe_escape_name, ksnames)
 
+@completer_for('alterableKeyspaceName', 'ksname')
+def ks_name_completer(ctxt, cass):
+    ksnames = [n for n in cass.get_keyspace_names() if n not in NONALTERBALE_KEYSPACES]
+    return map(maybe_escape_name, ksnames)
+
 @completer_for('columnFamilyName', 'ksname')
 def cf_ks_name_completer(ctxt, cass):
     return [maybe_escape_name(ks) + '.' for ks in cass.get_keyspace_names()]
@@ -1242,7 +1250,7 @@ def alter_table_col_completer(ctxt, cass):
 explain_completion('alterInstructions', 'newcol', 'new_column_name')
 
 syntax_rules += r'''
-alterKeyspaceStatement ::= ALTER ( KEYSPACE | SCHEMA ) 
ks=nonSystemKeyspaceName
+alterKeyspaceStatement ::= ALTER ( KEYSPACE | SCHEMA ) 
ks=alterableKeyspaceName
  WITH newPropSpec ( AND newPropSpec )*
;
 '''
@@ -1295,7 +1303,7 @@ syntax_rules += r'''
  ;
 
 dataResource ::= ( ALL KEYSPACES )
- | ( KEYSPACE nonSystemKeyspaceName )
+   

[3/3] git commit: Merge branch 'cassandra-1.2' into trunk

2013-02-13 Thread aleksey
Updated Branches:
  refs/heads/trunk 5df067418 -> c6204b595


Merge branch 'cassandra-1.2' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c6204b59
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c6204b59
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c6204b59

Branch: refs/heads/trunk
Commit: c6204b595dc8bdc8484f97d8aca0e235cafeaa89
Parents: 5df0674 2659640
Author: Aleksey Yeschenko alek...@apache.org
Authored: Thu Feb 14 01:20:47 2013 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Thu Feb 14 01:20:47 2013 +0300

--
 CHANGES.txt|1 +
 bin/cqlsh  |2 +-
 pylib/cqlshlib/cql3handling.py |   12 +-
 src/java/org/apache/cassandra/auth/Auth.java   |  132 +--
 .../org/apache/cassandra/config/CFMetaData.java|8 +-
 .../cassandra/config/DatabaseDescriptor.java   |4 +-
 .../org/apache/cassandra/config/KSMetaData.java|7 -
 src/java/org/apache/cassandra/config/Schema.java   |3 +-
 .../org/apache/cassandra/cql3/QueryProcessor.java  |4 +-
 .../cql3/statements/ListUsersStatement.java|2 +-
 .../apache/cassandra/service/CassandraDaemon.java  |4 -
 .../org/apache/cassandra/service/ClientState.java  |   14 ++-
 .../apache/cassandra/service/MigrationManager.java |7 +-
 .../apache/cassandra/service/StorageService.java   |   10 +-
 .../cassandra/db/compaction/CompactionsTest.java   |3 +
 15 files changed, 156 insertions(+), 57 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c6204b59/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c6204b59/pylib/cqlshlib/cql3handling.py
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c6204b59/src/java/org/apache/cassandra/config/CFMetaData.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c6204b59/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c6204b59/src/java/org/apache/cassandra/config/KSMetaData.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c6204b59/src/java/org/apache/cassandra/config/Schema.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c6204b59/src/java/org/apache/cassandra/cql3/QueryProcessor.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c6204b59/src/java/org/apache/cassandra/service/MigrationManager.java
--
diff --cc src/java/org/apache/cassandra/service/MigrationManager.java
index 2f41743,82d56e3..967ea44
--- a/src/java/org/apache/cassandra/service/MigrationManager.java
+++ b/src/java/org/apache/cassandra/service/MigrationManager.java
@@@ -197,9 -198,14 +197,14 @@@ public class MigrationManager implement
  
  public static void announceNewKeyspace(KSMetaData ksm) throws 
ConfigurationException
  {
+ announceNewKeyspace(ksm, FBUtilities.timestampMicros());
+ }
+ 
+ public static void announceNewKeyspace(KSMetaData ksm, long timestamp) 
throws ConfigurationException
+ {
  ksm.validate();
  
 -if (Schema.instance.getTableDefinition(ksm.name) != null)
 +if (Schema.instance.getKSMetaData(ksm.name) != null)
  throw new AlreadyExistsException(ksm.name);
  
logger.info(String.format("Create new Keyspace: %s", ksm));
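The hunk above merges in a second announceNewKeyspace() overload: the one-argument form now just delegates with FBUtilities.timestampMicros(), while callers that need to control the schema mutation's timestamp can pass one explicitly. A minimal, self-contained sketch of that overload-as-default pattern (illustrative names only, not the real KSMetaData/MigrationManager types):

{code}
public final class AnnouncePattern
{
    // Illustrative stand-in for FBUtilities.timestampMicros().
    static long timestampMicros()
    {
        return System.currentTimeMillis() * 1000;
    }

    // Convenience overload: existing callers keep getting a "now" timestamp.
    static void announceNewKeyspace(String ksName)
    {
        announceNewKeyspace(ksName, timestampMicros());
    }

    // New entry point: a caller can pin the mutation timestamp explicitly.
    static void announceNewKeyspace(String ksName, long timestamp)
    {
        System.out.println("Create new Keyspace: " + ksName + " @ " + timestamp);
    }

    public static void main(String[] args)
    {
        announceNewKeyspace("demo_ks");        // generated timestamp
        announceNewKeyspace("demo_ks", 42L);   // explicit, fixed timestamp
    }
}
{code}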

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c6204b59/src/java/org/apache/cassandra/service/StorageService.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c6204b59/test/unit/org/apache/cassandra/db/compaction/CompactionsTest.java
--



[jira] [Resolved] (CASSANDRA-5112) Setting up authentication tables with custom authentication plugin

2013-02-13 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko resolved CASSANDRA-5112.
--

Resolution: Fixed
  Reviewer: jbellis

The rolling upgrade issue turned out to be a non-issue - it was caused by another 
experiment that involved removing the commitlog.

Committed.

 Setting up authentication tables with custom authentication plugin
 --

 Key: CASSANDRA-5112
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5112
 Project: Cassandra
  Issue Type: Improvement
  Components: API
Affects Versions: 1.2.0
Reporter: Dirkjan Bussink
Assignee: Aleksey Yeschenko
Priority: Minor
 Fix For: 1.2.2


 I'm working on updating https://github.com/nedap/cassandra-auth with the new 
 authentication APIs in Cassandra 1.2.0. I have stumbled on an issue and I'm 
 not really sure how to handle it.
 For the authentication I want to set up additional column families for the 
 passwords and permissions. As recommended in the documentation of 
 IAuthorizer, I'm trying to create these tables during setup(): "Setup is 
 called once upon system startup to initialize the IAuthorizer. For example, 
 use this method to create any required keyspaces/column families."
 The problem is that doing this seems to be a lot harder than I would think, 
 or I'm perhaps missing something obvious. I've tried various approaches, but 
 all have failed:
 - CQL and QueryProcessor.processInternal to set up additional column families. 
 This fails, since processInternal will throw an UnsupportedOperationException 
 because the statement is a SchemaAlteringStatement.
 - CQL and QueryProcessor.process. This works after the system has 
 successfully started, but because of the point in the Cassandra boot process 
 at which setup() is called, it will fail: it throws an AssertionError in 
 MigrationManager.java:320, because the gossiper hasn't been started yet.
 - Internal APIs. Mimicking how other column families are set up, using 
 CFMetadata and Schema.load. This seems to get the system into an inconsistent 
 state where some parts do see the additional column family, but others don't.
 Does anyone have a recommendation for the path to follow here? What would be 
 the recommended approach for actually setting up those column families at 
 startup for authentication?
 From working on this, I also have another question. I see the default 
 system_auth keyspace is created with SimpleStrategy and a replication 
 factor of 1. Is this a deliberate choice? I can imagine that if a node in a 
 cluster dies, losing the authentication information that happens to be 
 available on that node could be very problematic. If I'm missing any 
 reasoning here, please let me know, but it struck me as something that could 
 cause potential problems. I also don't see a way to reconfigure this at 
 the moment, and APIs such as CREATE USER do seem to depend on this keyspace.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4937) CRAR improvements (object cache + CompressionMetadata chunk offset storage moved off-heap).

2013-02-13 Thread Pavel Yaskevich (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13577990#comment-13577990
 ] 

Pavel Yaskevich commented on CASSANDRA-4937:


I just don't want a situation where latencies suddenly spike and stay in that 
state while there is no indication of what is going on (the explanation being 
that rows exceeded the page threshold and preheat was turned off).

 CRAR improvements (object cache + CompressionMetadata chunk offset storage 
 moved off-heap).
 ---

 Key: CASSANDRA-4937
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4937
 Project: Cassandra
  Issue Type: Improvement
Reporter: Pavel Yaskevich
Assignee: Pavel Yaskevich
  Labels: core
 Fix For: 1.2.2

 Attachments: 4937-v3.txt, CASSANDRA-4937.patch, 
 CASSANDRA-4937-trunk.patch


 After a good amount of testing on one of the clusters, it was found that to 
 improve read latency we need to minimize the allocation rate that 
 compression involves; doing so reduces GC (as well as heap usage) and 
 substantially decreases latency on read-heavy workloads. 
 I have also discovered that the RAR skip-cache behavior harms performance 
 when reads are done in parallel with compaction on relatively big 
 SSTable files (a few GB and more). The attached patch removes the ability to 
 skip the cache for compressed files (I can also add changes to RAR to remove 
 the skip-cache functionality as a separate patch). 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4937) CRAR improvements (object cache + CompressionMetadata chunk offset storage moved off-heap).

2013-02-13 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13577998#comment-13577998
 ] 

Jonathan Ellis commented on CASSANDRA-4937:
---

What do you think, [~yukim]?

 CRAR improvements (object cache + CompressionMetadata chunk offset storage 
 moved off-heap).
 ---

 Key: CASSANDRA-4937
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4937
 Project: Cassandra
  Issue Type: Improvement
Reporter: Pavel Yaskevich
Assignee: Pavel Yaskevich
  Labels: core
 Fix For: 1.2.2

 Attachments: 4937-v3.txt, CASSANDRA-4937.patch, 
 CASSANDRA-4937-trunk.patch


 After a good amount of testing on one of the clusters, it was found that to 
 improve read latency we need to minimize the allocation rate that 
 compression involves; doing so reduces GC (as well as heap usage) and 
 substantially decreases latency on read-heavy workloads. 
 I have also discovered that the RAR skip-cache behavior harms performance 
 when reads are done in parallel with compaction on relatively big 
 SSTable files (a few GB and more). The attached patch removes the ability to 
 skip the cache for compressed files (I can also add changes to RAR to remove 
 the skip-cache functionality as a separate patch). 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5197) Loading persisted ring state in a mixed cluster can throw AE

2013-02-13 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-5197:
-

Reviewer: iamaleksey  (was: slebresne)

 Loading persisted ring state in a mixed cluster can throw AE
 

 Key: CASSANDRA-5197
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5197
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.1
Reporter: Brandon Williams
Assignee: Brandon Williams
 Fix For: 1.2.2

 Attachments: 5197.txt


 {noformat}
  INFO 02:07:16,263 Loading persisted ring state
 java.lang.AssertionError
 at 
 org.apache.cassandra.locator.TokenMetadata.updateHostId(TokenMetadata.java:221)
 at 
 org.apache.cassandra.service.StorageService.initServer(StorageService.java:451)
 at 
 org.apache.cassandra.service.StorageService.initServer(StorageService.java:406)
 at 
 org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:282)
 at 
 org.apache.cassandra.service.CassandraDaemon.init(CassandraDaemon.java:315)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 org.apache.commons.daemon.support.DaemonLoader.load(DaemonLoader.java:212)
 {noformat}
 We assume every host has a hostid, but this is not always true.
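 A minimal sketch of the failure mode and of the guard used in the attached patch (plain Java; updateHostId() below is an illustrative stand-in for TokenMetadata.updateHostId, and the assert only fires with -ea):
 {code}
 import java.net.InetAddress;
 import java.util.HashMap;
 import java.util.Map;
 import java.util.UUID;
 
 public class HostIdGuard
 {
     // Stand-in for TokenMetadata.updateHostId(), which asserts a non-null id.
     static void updateHostId(UUID hostId, InetAddress ep)
     {
         assert hostId != null;
         System.out.println(ep + " -> " + hostId);
     }
 
     public static void main(String[] args) throws Exception
     {
         // Saved ring state from a mixed cluster: the pre-1.2 peer has tokens
         // persisted but no host id entry at all.
         Map<InetAddress, UUID> loadedHostIds = new HashMap<>();
         InetAddress oldPeer = InetAddress.getByName("10.0.0.1");
 
         // Naive: get() returns null and the callee's assert blows up (the AE above).
         // updateHostId(loadedHostIds.get(oldPeer), oldPeer);
 
         // Guarded, as in 5197.txt: only update when an id was actually loaded.
         if (loadedHostIds.containsKey(oldPeer))
             updateHostId(loadedHostIds.get(oldPeer), oldPeer);
         else
             System.out.println("no host id for " + oldPeer + "; skipped");
     }
 }
 {code}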

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5197) Loading persisted ring state in a mixed cluster can throw AE

2013-02-13 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13578009#comment-13578009
 ] 

Aleksey Yeschenko commented on CASSANDRA-5197:
--

+1

 Loading persisted ring state in a mixed cluster can throw AE
 

 Key: CASSANDRA-5197
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5197
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.1
Reporter: Brandon Williams
Assignee: Brandon Williams
 Fix For: 1.2.2

 Attachments: 5197.txt


 {noformat}
  INFO 02:07:16,263 Loading persisted ring state
 java.lang.AssertionError
 at 
 org.apache.cassandra.locator.TokenMetadata.updateHostId(TokenMetadata.java:221)
 at 
 org.apache.cassandra.service.StorageService.initServer(StorageService.java:451)
 at 
 org.apache.cassandra.service.StorageService.initServer(StorageService.java:406)
 at 
 org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:282)
 at 
 org.apache.cassandra.service.CassandraDaemon.init(CassandraDaemon.java:315)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 org.apache.commons.daemon.support.DaemonLoader.load(DaemonLoader.java:212)
 {noformat}
 We assume every host has a hostid, but this is not always true.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[3/3] git commit: Merge branch 'cassandra-1.2' into trunk

2013-02-13 Thread brandonwilliams
Updated Branches:
  refs/heads/cassandra-1.2 265964064 -> 828572acd
  refs/heads/trunk c6204b595 -> e9777087f


Merge branch 'cassandra-1.2' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e9777087
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e9777087
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e9777087

Branch: refs/heads/trunk
Commit: e9777087fa2244e98c5dbf9681e558ec404a9e85
Parents: c6204b5 828572a
Author: Brandon Williams brandonwilli...@apache.org
Authored: Wed Feb 13 17:01:14 2013 -0600
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Wed Feb 13 17:01:14 2013 -0600

--
 .../apache/cassandra/service/StorageService.java   |3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e9777087/src/java/org/apache/cassandra/service/StorageService.java
--



[2/3] git commit: Avoid throwing AE when hosts don't have a hostId Patch by brandonwilliams, reviewed by iamaleksey for CASSANDRA-5197

2013-02-13 Thread brandonwilliams
Avoid throwing AE when hosts don't have a hostId
Patch by brandonwilliams, reviewed by iamaleksey for CASSANDRA-5197


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/828572ac
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/828572ac
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/828572ac

Branch: refs/heads/trunk
Commit: 828572acdb5bff31f58362425b0c80ce6d606bf8
Parents: 2659640
Author: Brandon Williams brandonwilli...@apache.org
Authored: Wed Feb 13 16:59:41 2013 -0600
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Wed Feb 13 16:59:41 2013 -0600

--
 .../apache/cassandra/service/StorageService.java   |3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/828572ac/src/java/org/apache/cassandra/service/StorageService.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageService.java 
b/src/java/org/apache/cassandra/service/StorageService.java
index 0f3a331..8c1d053 100644
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@ -450,7 +450,8 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
 else
 {
 tokenMetadata.updateNormalTokens(loadedTokens.get(ep), ep);
-tokenMetadata.updateHostId(loadedHostIds.get(ep), ep);
+if (loadedHostIds.containsKey(ep))
+tokenMetadata.updateHostId(loadedHostIds.get(ep), ep);
 Gossiper.instance.addSavedEndpoint(ep);
 }
 }



[1/3] git commit: Avoid throwing AE when hosts don't have a hostId Patch by brandonwilliams, reviewed by iamaleksey for CASSANDRA-5197

2013-02-13 Thread brandonwilliams
Avoid throwing AE when hosts don't have a hostId
Patch by brandonwilliams, reviewed by iamaleksey for CASSANDRA-5197


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/828572ac
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/828572ac
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/828572ac

Branch: refs/heads/cassandra-1.2
Commit: 828572acdb5bff31f58362425b0c80ce6d606bf8
Parents: 2659640
Author: Brandon Williams brandonwilli...@apache.org
Authored: Wed Feb 13 16:59:41 2013 -0600
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Wed Feb 13 16:59:41 2013 -0600

--
 .../apache/cassandra/service/StorageService.java   |3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/828572ac/src/java/org/apache/cassandra/service/StorageService.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageService.java 
b/src/java/org/apache/cassandra/service/StorageService.java
index 0f3a331..8c1d053 100644
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@ -450,7 +450,8 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
 else
 {
 tokenMetadata.updateNormalTokens(loadedTokens.get(ep), ep);
-tokenMetadata.updateHostId(loadedHostIds.get(ep), ep);
+if (loadedHostIds.containsKey(ep))
+tokenMetadata.updateHostId(loadedHostIds.get(ep), ep);
 Gossiper.instance.addSavedEndpoint(ep);
 }
 }



git commit: fix format string specifier

2013-02-13 Thread dbrosius
Updated Branches:
  refs/heads/cassandra-1.2 828572acd -> 99b3963b7


fix format string specifier


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/99b3963b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/99b3963b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/99b3963b

Branch: refs/heads/cassandra-1.2
Commit: 99b3963b721601303859a3a87bce6984178892ab
Parents: 828572a
Author: Dave Brosius dbros...@apache.org
Authored: Wed Feb 13 20:36:11 2013 -0500
Committer: Dave Brosius dbros...@apache.org
Committed: Wed Feb 13 20:36:11 2013 -0500

--
 .../apache/cassandra/cql3/functions/Functions.java |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/99b3963b/src/java/org/apache/cassandra/cql3/functions/Functions.java
--
diff --git a/src/java/org/apache/cassandra/cql3/functions/Functions.java 
b/src/java/org/apache/cassandra/cql3/functions/Functions.java
index 5b5e721..3660f5d 100644
--- a/src/java/org/apache/cassandra/cql3/functions/Functions.java
+++ b/src/java/org/apache/cassandra/cql3/functions/Functions.java
@@ -116,7 +116,7 @@ public abstract class Functions
             throw new InvalidRequestException(String.format("Type error: cannot assign result of function %s (type %s) to %s (type %s)", fun.name(), fun.returnType().asCQL3Type(), receiver, receiver.type.asCQL3Type()));
 
         if (providedArgs.size() != fun.argsType().size())
-            throw new InvalidRequestException(String.format("Invalid number of arguments in call to function %s: %d required but % provided", fun.name(), fun.argsType().size(), providedArgs.size()));
+            throw new InvalidRequestException(String.format("Invalid number of arguments in call to function %s: %d required but %d provided", fun.name(), fun.argsType().size(), providedArgs.size()));
 
         for (int i = 0; i < providedArgs.size(); i++)
         {
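A quick self-contained illustration of why the one-character change matters: the lone % never reaches a valid conversion character, so String.format throws an IllegalFormatException (UnknownFormatConversionException in practice) instead of reporting the argument count:

{code}
import java.util.IllegalFormatException;

public class FormatSpecifierDemo
{
    public static void main(String[] args)
    {
        String broken = "Invalid number of arguments in call to function %s: %d required but % provided";
        String fixed  = "Invalid number of arguments in call to function %s: %d required but %d provided";

        try
        {
            // The bare '%' is parsed as a malformed specifier, so this throws.
            System.out.println(String.format(broken, "token", 2, 3));
        }
        catch (IllegalFormatException e)
        {
            System.out.println("broken specifier: " + e);
        }

        // With %d the message comes out as intended.
        System.out.println(String.format(fixed, "token", 2, 3));
    }
}
{code}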



[2/2] git commit: Merge branch 'cassandra-1.2' into trunk

2013-02-13 Thread dbrosius
Updated Branches:
  refs/heads/trunk e9777087f -> 28bdddef3


Merge branch 'cassandra-1.2' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/28bdddef
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/28bdddef
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/28bdddef

Branch: refs/heads/trunk
Commit: 28bdddef3bfbb40711f7117e11a1699eb1076b56
Parents: e977708 99b3963
Author: Dave Brosius dbros...@apache.org
Authored: Wed Feb 13 20:38:27 2013 -0500
Committer: Dave Brosius dbros...@apache.org
Committed: Wed Feb 13 20:38:27 2013 -0500

--
 .../apache/cassandra/cql3/functions/Functions.java |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)
--




[1/2] git commit: fix format string specifier

2013-02-13 Thread dbrosius
fix format string specifier


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/99b3963b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/99b3963b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/99b3963b

Branch: refs/heads/trunk
Commit: 99b3963b721601303859a3a87bce6984178892ab
Parents: 828572a
Author: Dave Brosius dbros...@apache.org
Authored: Wed Feb 13 20:36:11 2013 -0500
Committer: Dave Brosius dbros...@apache.org
Committed: Wed Feb 13 20:36:11 2013 -0500

--
 .../apache/cassandra/cql3/functions/Functions.java |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/99b3963b/src/java/org/apache/cassandra/cql3/functions/Functions.java
--
diff --git a/src/java/org/apache/cassandra/cql3/functions/Functions.java 
b/src/java/org/apache/cassandra/cql3/functions/Functions.java
index 5b5e721..3660f5d 100644
--- a/src/java/org/apache/cassandra/cql3/functions/Functions.java
+++ b/src/java/org/apache/cassandra/cql3/functions/Functions.java
@@ -116,7 +116,7 @@ public abstract class Functions
             throw new InvalidRequestException(String.format("Type error: cannot assign result of function %s (type %s) to %s (type %s)", fun.name(), fun.returnType().asCQL3Type(), receiver, receiver.type.asCQL3Type()));
 
         if (providedArgs.size() != fun.argsType().size())
-            throw new InvalidRequestException(String.format("Invalid number of arguments in call to function %s: %d required but % provided", fun.name(), fun.argsType().size(), providedArgs.size()));
+            throw new InvalidRequestException(String.format("Invalid number of arguments in call to function %s: %d required but %d provided", fun.name(), fun.argsType().size(), providedArgs.size()));
 
         for (int i = 0; i < providedArgs.size(); i++)
         {



git commit: drop keyspace from user-defined compaction API; patch by yukim reviewed by jbellis for CASSANDRA-5139

2013-02-13 Thread yukim
Updated Branches:
  refs/heads/trunk 28bdddef3 -> 59af0b9c4


drop keyspace from user-defined compaction API; patch by yukim reviewed by 
jbellis for CASSANDRA-5139


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/59af0b9c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/59af0b9c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/59af0b9c

Branch: refs/heads/trunk
Commit: 59af0b9c4d4cd00ea742e197b2b3cb2f384feec3
Parents: 28bddde
Author: Yuki Morishita yu...@apache.org
Authored: Wed Feb 13 21:27:59 2013 -0600
Committer: Yuki Morishita yu...@apache.org
Committed: Wed Feb 13 21:27:59 2013 -0600

--
 CHANGES.txt|1 +
 .../cassandra/db/compaction/CompactionManager.java |   43 ++-
 .../db/compaction/CompactionManagerMBean.java  |6 ++-
 .../cassandra/db/compaction/CompactionsTest.java   |2 +-
 4 files changed, 22 insertions(+), 30 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/59af0b9c/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 455aa5d..9a4c475 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -5,6 +5,7 @@
  * add memtable_flush_period_in_ms (CASSANDRA-4237)
  * replace supercolumns internally by composites (CASSANDRA-3237, 5123)
  * upgrade thrift to 0.9.0 (CASSANDRA-3719)
+ * drop unnecessary keyspace from user-defined compaction API (CASSANDRA-5139)
 
 
 1.2.2

http://git-wip-us.apache.org/repos/asf/cassandra/blob/59af0b9c/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
index 9c5bbe0..01cee9d 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
@@ -29,9 +29,7 @@ import javax.management.ObjectName;
 
 import com.google.common.base.Predicates;
 import com.google.common.base.Throwables;
-import com.google.common.collect.ConcurrentHashMultiset;
-import com.google.common.collect.Iterators;
-import com.google.common.collect.Multiset;
+import com.google.common.collect.*;
 import com.google.common.primitives.Longs;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -44,6 +42,7 @@ import org.apache.cassandra.config.CFMetaData;
 import org.apache.cassandra.config.DatabaseDescriptor;
 import org.apache.cassandra.config.Schema;
 import org.apache.cassandra.db.*;
+import org.apache.cassandra.db.Table;
 import org.apache.cassandra.db.commitlog.ReplayPosition;
 import org.apache.cassandra.db.compaction.CompactionInfo.Holder;
 import org.apache.cassandra.db.index.SecondaryIndex;
@@ -361,42 +360,32 @@ public class CompactionManager implements CompactionManagerMBean
         return executor.submit(runnable);
     }
 
-    public void forceUserDefinedCompaction(String ksname, String dataFiles)
+    public void forceUserDefinedCompaction(String dataFiles)
     {
-        if (!Schema.instance.getTables().contains(ksname))
-            throw new IllegalArgumentException("Unknown keyspace " + ksname);
-
         String[] filenames = dataFiles.split(",");
-        Collection<Descriptor> descriptors = new ArrayList<Descriptor>(filenames.length);
+        Multimap<Pair<String, String>, Descriptor> descriptors = ArrayListMultimap.create();
 
-        String cfname = null;
         for (String filename : filenames)
         {
             // extract keyspace and columnfamily name from filename
             Descriptor desc = Descriptor.fromFilename(filename.trim());
-            if (!desc.ksname.equals(ksname))
-            {
-                throw new IllegalArgumentException("Given keyspace " + ksname + " does not match with file " + filename);
-            }
-            if (cfname == null)
-            {
-                cfname = desc.cfname;
-            }
-            else if (!cfname.equals(desc.cfname))
+            if (Schema.instance.getCFMetaData(desc) == null)
             {
-                throw new IllegalArgumentException("All provided sstables should be for the same column family");
+                logger.warn("Schema does not exist for file {}. Skipping.", filename);
+                continue;
             }
-            File directory = new File(ksname + File.separator + cfname);
+            File directory = new File(desc.ksname + File.separator + desc.cfname);
+            // group by keyspace/columnfamily
             Pair<Descriptor, String> p = Descriptor.fromFilename(directory, filename.trim());
-            if (!p.right.equals(Component.DATA.name()))
-            {
-   
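The (truncated) grouping logic above buckets each provided data file under its keyspace/columnfamily pair before submitting compactions. A small self-contained sketch of the same idea with Guava's multimap, using naive filename parsing where the real code uses Descriptor.fromFilename():

{code}
import com.google.common.collect.ArrayListMultimap;
import com.google.common.collect.Multimap;

import java.util.AbstractMap.SimpleImmutableEntry;
import java.util.Map.Entry;

public class GroupSSTablesByKsCf
{
    public static void main(String[] args)
    {
        String[] filenames = {
            "Keyspace1-Standard1-ib-1-Data.db",
            "Keyspace1-Standard1-ib-2-Data.db",
            "Keyspace1-Super1-ib-7-Data.db"
        };

        // One bucket per (keyspace, columnfamily) pair.
        Multimap<Entry<String, String>, String> byKsCf = ArrayListMultimap.create();
        for (String filename : filenames)
        {
            // Naive parsing of "ks-cf-version-generation-Data.db"; files with an
            // unknown schema would be warned about and skipped at this point.
            String[] parts = filename.split("-");
            byKsCf.put(new SimpleImmutableEntry<>(parts[0], parts[1]), filename);
        }

        // Submit (here: print) one user-defined compaction per bucket.
        for (Entry<String, String> kscf : byKsCf.keySet())
            System.out.println(kscf.getKey() + "/" + kscf.getValue() + " -> " + byKsCf.get(kscf));
    }
}
{code}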

[Cassandra Wiki] Update of Committers by mkjellman

2013-02-13 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on Cassandra Wiki for 
change notification.

The Committers page has been changed by mkjellman:
http://wiki.apache.org/cassandra/Committers?action=diffrev1=30rev2=31

  ||Dave Brosius||May 2012||Independent||Also a 
[[http://commons.apache.org|Commons]] committer||
  ||Yuki Morishita||May 2012||Datastax
  ||Aleksey Yeschenko||Nov 2012||Datastax|| ||
+ ||Jason Brown||Feb 2012||Netflix|| ||
  


[jira] [Commented] (CASSANDRA-5129) newly bootstrapping nodes hang indefinitely in STATUS:BOOT while JOINING cluster

2013-02-13 Thread Michael Kjellman (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13578124#comment-13578124
 ] 

Michael Kjellman commented on CASSANDRA-5129:
-

it appears this is related to secondary indexes. after the bootstrapping node 
finishes streaming it submits an index build. This gets submitted but never 
makes any progress and hangs indefinitely.

{code}
 INFO [Thread-382] 2013-02-13 18:02:57,205 StreamInSession.java (line 199) 
Finished streaming session 4ae0be23-75fb-11e2-ba65-8f73c0b9d93d from 
/10.138.12.10
 INFO [Thread-540] 2013-02-13 18:17:42,526 SecondaryIndexManager.java (line 
137) Submitting index build of [domain_metadata.classificationIdx, 
domain_metadata.domaintypeIdx] for data in 
SSTableReader(path='/data/cassandra/brts/domain_metadata/brts-domain_metadata-ib-1-Data.db'),
 
SSTableReader(path='/data2/cassandra/brts/domain_metadata/brts-domain_metadata-ib-2-Data.db'),
 
SSTableReader(path='/data2/cassandra/brts/domain_metadata/brts-domain_metadata-ib-3-Data.db'),
 
SSTableReader(path='/data2/cassandra/brts/domain_metadata/brts-domain_metadata-ib-4-Data.db'),
 
SSTableReader(path='/data2/cassandra/brts/domain_metadata/brts-domain_metadata-ib-5-Data.db'),
 
SSTableReader(path='/data2/cassandra/brts/domain_metadata/brts-domain_metadata-ib-6-Data.db'),
 
SSTableReader(path='/data2/cassandra/brts/domain_metadata/brts-domain_metadata-ib-7-Data.db'),
 
SSTableReader(path='/data2/cassandra/brts/domain_metadata/brts-domain_metadata-ib-8-Data.db'),
 
SSTableReader(path='/data2/cassandra/brts/domain_metadata/brts-domain_metadata-ib-9-Data.db'),
 
SSTableReader(path='/data2/cassandra/brts/domain_metadata/brts-domain_metadata-ib-10-Data.db')
{code}

 newly bootstrapping nodes hang indefinitely in STATUS:BOOT while JOINING 
 cluster  
 --

 Key: CASSANDRA-5129
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5129
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.0
 Environment: Ubuntu 12.04
Reporter: Michael Kjellman

 bootstrapping a new node causes it to hang indefinitely in STATUS:BOOT
 Nodes streaming to the new node report 
 {code}
 Mode: NORMAL
  Nothing streaming to /10.8.30.16
 Not receiving any streams.
 Pool Name                    Active   Pending      Completed
 Commands                        n/a         0        1843990
 Responses                       n/a         2         661750
 {code}
 the node being streamed to, stuck in the JOINING state, reports:
 {code}
 Mode: JOINING
 Not sending any streams.
  Nothing streaming from /10.8.30.103
  Nothing streaming from /10.8.30.102
 Pool Name                    Active   Pending      Completed
 Commands                        n/a         0             10
 Responses                       n/a         0         613577
 {code}
 it appears that the nodes in the nothing streaming state never sends a 
 finished streaming to the joining node.
 no exceptions are thrown during the streaming on either node while the node 
 is in this state.
 {code:name=full gossip state of bootstrapping node}
 /10.8.30.16
   NET_VERSION:6
   RELEASE_VERSION:1.2.0
   STATUS:BOOT,127605887595351923798765477786913079289
   RACK:RAC1
   RPC_ADDRESS:0.0.0.0
   DC:DC1
   SCHEMA:5cd8420d-ce3c-3625-8293-67558a24816b
   HOST_ID:e20817ce-7454-4dc4-a1c6-b1dec35c4491
   LOAD:1.11824041581E11
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (CASSANDRA-5129) newly bootstrapping nodes hang indefinitely in STATUS:BOOT while JOINING cluster

2013-02-13 Thread Michael Kjellman (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13578124#comment-13578124
 ] 

Michael Kjellman edited comment on CASSANDRA-5129 at 2/14/13 3:37 AM:
--

it appears this is related to secondary indexes. after the bootstrapping node 
finishes streaming it submits an index build. This gets submitted but never 
makes any progress and hangs indefinitely.

{code}
 INFO [Thread-382] 2013-02-13 18:02:57,205 StreamInSession.java (line 199) 
Finished streaming session 4ae0be23-75fb-11e2-ba65-8f73c0b9d93d from 
/10.138.12.10
 INFO [Thread-540] 2013-02-13 18:17:42,526 SecondaryIndexManager.java (line 
137) Submitting index build of [domain_metadata.classificationIdx, 
domain_metadata.domaintypeIdx] for data in 
SSTableReader(path='/data/cassandra/brts/domain_metadata/brts-domain_metadata-ib-1-Data.db'),
 
SSTableReader(path='/data2/cassandra/brts/domain_metadata/brts-domain_metadata-ib-2-Data.db'),
 
SSTableReader(path='/data2/cassandra/brts/domain_metadata/brts-domain_metadata-ib-3-Data.db'),
 
SSTableReader(path='/data2/cassandra/brts/domain_metadata/brts-domain_metadata-ib-4-Data.db'),
 
SSTableReader(path='/data2/cassandra/brts/domain_metadata/brts-domain_metadata-ib-5-Data.db'),
 
SSTableReader(path='/data2/cassandra/brts/domain_metadata/brts-domain_metadata-ib-6-Data.db'),
 
SSTableReader(path='/data2/cassandra/brts/domain_metadata/brts-domain_metadata-ib-7-Data.db'),
 
SSTableReader(path='/data2/cassandra/brts/domain_metadata/brts-domain_metadata-ib-8-Data.db'),
 
SSTableReader(path='/data2/cassandra/brts/domain_metadata/brts-domain_metadata-ib-9-Data.db'),
 
SSTableReader(path='/data2/cassandra/brts/domain_metadata/brts-domain_metadata-ib-10-Data.db')
{code}

{code}
#nodetool compactionstats
pending tasks: 23
Active compaction remaining time :n/a
{code}

  was (Author: mkjellman):
it appears this is related to secondary indexes. after the bootstrapping 
node finishes streaming it submits an index build. This gets submitted but 
never makes any progress and hangs indefinitely.

{code}
 INFO [Thread-382] 2013-02-13 18:02:57,205 StreamInSession.java (line 199) 
Finished streaming session 4ae0be23-75fb-11e2-ba65-8f73c0b9d93d from 
/10.138.12.10
 INFO [Thread-540] 2013-02-13 18:17:42,526 SecondaryIndexManager.java (line 
137) Submitting index build of [domain_metadata.classificationIdx, 
domain_metadata.domaintypeIdx] for data in 
SSTableReader(path='/data/cassandra/brts/domain_metadata/brts-domain_metadata-ib-1-Data.db'),
 
SSTableReader(path='/data2/cassandra/brts/domain_metadata/brts-domain_metadata-ib-2-Data.db'),
 
SSTableReader(path='/data2/cassandra/brts/domain_metadata/brts-domain_metadata-ib-3-Data.db'),
 
SSTableReader(path='/data2/cassandra/brts/domain_metadata/brts-domain_metadata-ib-4-Data.db'),
 
SSTableReader(path='/data2/cassandra/brts/domain_metadata/brts-domain_metadata-ib-5-Data.db'),
 
SSTableReader(path='/data2/cassandra/brts/domain_metadata/brts-domain_metadata-ib-6-Data.db'),
 
SSTableReader(path='/data2/cassandra/brts/domain_metadata/brts-domain_metadata-ib-7-Data.db'),
 
SSTableReader(path='/data2/cassandra/brts/domain_metadata/brts-domain_metadata-ib-8-Data.db'),
 
SSTableReader(path='/data2/cassandra/brts/domain_metadata/brts-domain_metadata-ib-9-Data.db'),
 
SSTableReader(path='/data2/cassandra/brts/domain_metadata/brts-domain_metadata-ib-10-Data.db')
{code}
  
 newly bootstrapping nodes hang indefinitely in STATUS:BOOT while JOINING 
 cluster  
 --

 Key: CASSANDRA-5129
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5129
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.0
 Environment: Ubuntu 12.04
Reporter: Michael Kjellman

 bootstrapping a new node causes it to hang indefinitely in STATUS:BOOT
 Nodes streaming to the new node report 
 {code}
 Mode: NORMAL
  Nothing streaming to /10.8.30.16
 Not receiving any streams.
 Pool Name                    Active   Pending      Completed
 Commands                        n/a         0        1843990
 Responses                       n/a         2         661750
 {code}
 the node being streamed to, stuck in the JOINING state, reports:
 {code}
 Mode: JOINING
 Not sending any streams.
  Nothing streaming from /10.8.30.103
  Nothing streaming from /10.8.30.102
 Pool Name                    Active   Pending      Completed
 Commands                        n/a         0             10
 Responses                       n/a         0         613577
 {code}
 it appears that the nodes in the nothing streaming state never sends a 
 finished streaming to the joining node.
 no exceptions are thrown during the streaming on either node while the node 
 is 

[jira] [Commented] (CASSANDRA-5129) newly bootstrapping nodes hang indefinitely in STATUS:BOOT while JOINING cluster

2013-02-13 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13578126#comment-13578126
 ] 

Brandon Williams commented on CASSANDRA-5129:
-

Easily repros with toy data from stress:

{noformat}
 INFO 03:30:47,313 JOINING: Starting to bootstrap...
 INFO 03:30:48,522 Submitting index build of [Standard1.Idx1] for data in 
SSTableReader(path='/var/lib/cassandra/data/Keyspace1/Standard1/Keyspace1-Standard1-ib-1-Data.db'),
 
SSTableReader(path='/var/lib/cassandra/data/Keyspace1/Standard1/Keyspace1-Standard1-ib-2-Data.db'),
 
SSTableReader(path='/var/lib/cassandra/data/Keyspace1/Standard1/Keyspace1-Standard1-ib-3-Data.db'),
 
SSTableReader(path='/var/lib/cassandra/data/Keyspace1/Standard1/Keyspace1-Standard1-ib-4-Data.db')
 INFO 03:30:48,526 Enqueuing flush of 
Memtable-compactions_in_progress@893461718(177/177 serialized/live bytes, 7 ops)
 INFO 03:30:48,527 Writing Memtable-compactions_in_progress@893461718(177/177 
serialized/live bytes, 7 ops)
 INFO 03:30:48,546 Completed flushing 
/var/lib/cassandra/data/system/compactions_in_progress/system-compactions_in_progress-ib-1-Data.db
 (176 bytes) for commitlog position ReplayPosition(segmentId=1360812614633, 
position=75619)
 INFO 03:30:48,547 Compacting 
[SSTableReader(path='/var/lib/cassandra/data/Keyspace1/Standard1/Keyspace1-Standard1-ib-3-Data.db'),
 
SSTableReader(path='/var/lib/cassandra/data/Keyspace1/Standard1/Keyspace1-Standard1-ib-1-Data.db'),
 
SSTableReader(path='/var/lib/cassandra/data/Keyspace1/Standard1/Keyspace1-Standard1-ib-4-Data.db'),
 
SSTableReader(path='/var/lib/cassandra/data/Keyspace1/Standard1/Keyspace1-Standard1-ib-2-Data.db')]
{noformat}

and stays like that forever.

 newly bootstrapping nodes hang indefinitely in STATUS:BOOT while JOINING 
 cluster  
 --

 Key: CASSANDRA-5129
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5129
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.0
 Environment: Ubuntu 12.04
Reporter: Michael Kjellman

 bootstrapping a new node causes it to hang indefinitely in STATUS:BOOT
 Nodes streaming to the new node report 
 {code}
 Mode: NORMAL
  Nothing streaming to /10.8.30.16
 Not receiving any streams.
 Pool Name                    Active   Pending      Completed
 Commands                        n/a         0        1843990
 Responses                       n/a         2         661750
 {code}
 the node being streamed to, stuck in the JOINING state, reports:
 {code}
 Mode: JOINING
 Not sending any streams.
  Nothing streaming from /10.8.30.103
  Nothing streaming from /10.8.30.102
 Pool Name                    Active   Pending      Completed
 Commands                        n/a         0             10
 Responses                       n/a         0         613577
 {code}
 it appears that the nodes in the nothing streaming state never sends a 
 finished streaming to the joining node.
 no exceptions are thrown during the streaming on either node while the node 
 is in this state.
 {code:name=full gossip state of bootstrapping node}
 /10.8.30.16
   NET_VERSION:6
   RELEASE_VERSION:1.2.0
   STATUS:BOOT,127605887595351923798765477786913079289
   RACK:RAC1
   RPC_ADDRESS:0.0.0.0
   DC:DC1
   SCHEMA:5cd8420d-ce3c-3625-8293-67558a24816b
   HOST_ID:e20817ce-7454-4dc4-a1c6-b1dec35c4491
   LOAD:1.11824041581E11
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (CASSANDRA-5129) newly bootstrapping nodes hang indefinitely in STATUS:BOOT while JOINING cluster

2013-02-13 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams reassigned CASSANDRA-5129:
---

Assignee: Yuki Morishita

 newly bootstrapping nodes hang indefinitely in STATUS:BOOT while JOINING 
 cluster  
 --

 Key: CASSANDRA-5129
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5129
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.0
 Environment: Ubuntu 12.04
Reporter: Michael Kjellman
Assignee: Yuki Morishita

 bootstrapping a new node causes it to hang indefinitely in STATUS:BOOT
 Nodes streaming to the new node report 
 {code}
 Mode: NORMAL
  Nothing streaming to /10.8.30.16
 Not receiving any streams.
 Pool NameActive   Pending  Completed
 Commandsn/a 01843990
 Responses   n/a 2 661750
 {code}
 the node being streamed to, stuck in the JOINING state, reports:
 {code}
 Mode: JOINING
 Not sending any streams.
  Nothing streaming from /10.8.30.103
  Nothing streaming from /10.8.30.102
 Pool NameActive   Pending  Completed
 Commandsn/a 0 10
 Responses   n/a 0 613577
 {code}
 it appears that the nodes in the "nothing streaming" state never send a 
 "finished streaming" message to the joining node.
 no exceptions are thrown during the streaming on either node while the node 
 is in this state.
 {code:name=full gossip state of bootstrapping node}
 /10.8.30.16
   NET_VERSION:6
   RELEASE_VERSION:1.2.0
   STATUS:BOOT,127605887595351923798765477786913079289
   RACK:RAC1
   RPC_ADDRESS:0.0.0.0
   DC:DC1
   SCHEMA:5cd8420d-ce3c-3625-8293-67558a24816b
   HOST_ID:e20817ce-7454-4dc4-a1c6-b1dec35c4491
   LOAD:1.11824041581E11
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (CASSANDRA-5129) newly bootstrapping nodes hang indefinitely in STATUS:BOOT while JOINING cluster

2013-02-13 Thread Michael Kjellman (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13578124#comment-13578124
 ] 

Michael Kjellman edited comment on CASSANDRA-5129 at 2/14/13 3:39 AM:
--

It appears this is related to secondary indexes. After the bootstrapping node 
finishes streaming, it submits an index build. This gets submitted but never 
makes any progress and hangs indefinitely.

{code}
 INFO [Thread-382] 2013-02-13 18:02:57,205 StreamInSession.java (line 199) 
Finished streaming session 4ae0be23-75fb-11e2-ba65-8f73c0b9d93d from 
/10.138.12.10
 INFO [Thread-540] 2013-02-13 18:17:42,526 SecondaryIndexManager.java (line 
137) Submitting index build of [domain_metadata.classificationIdx, 
domain_metadata.domaintypeIdx] for data in 
SSTableReader(path='/data/cassandra/brts/domain_metadata/brts-domain_metadata-ib-1-Data.db'),
 
SSTableReader(path='/data2/cassandra/brts/domain_metadata/brts-domain_metadata-ib-2-Data.db'),
 
SSTableReader(path='/data2/cassandra/brts/domain_metadata/brts-domain_metadata-ib-3-Data.db'),
 
SSTableReader(path='/data2/cassandra/brts/domain_metadata/brts-domain_metadata-ib-4-Data.db'),
 
SSTableReader(path='/data2/cassandra/brts/domain_metadata/brts-domain_metadata-ib-5-Data.db'),
 
SSTableReader(path='/data2/cassandra/brts/domain_metadata/brts-domain_metadata-ib-6-Data.db'),
 
SSTableReader(path='/data2/cassandra/brts/domain_metadata/brts-domain_metadata-ib-7-Data.db'),
 
SSTableReader(path='/data2/cassandra/brts/domain_metadata/brts-domain_metadata-ib-8-Data.db'),
 
SSTableReader(path='/data2/cassandra/brts/domain_metadata/brts-domain_metadata-ib-9-Data.db'),
 
SSTableReader(path='/data2/cassandra/brts/domain_metadata/brts-domain_metadata-ib-10-Data.db')
{code}

{code}
#nodetool compactionstats
pending tasks: 23
Active compaction remaining time :n/a
{code}

also, when C* is killed, the node that hung with "nothing streaming" logs:
{code}
ERROR 19:37:42,274 Exception in thread Thread[Streaming to 
/10.138.12.11:1,5,main]
java.lang.RuntimeException: java.io.EOFException
at com.google.common.base.Throwables.propagate(Throwables.java:160)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
Caused by: java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:392)
at 
org.apache.cassandra.streaming.FileStreamTask.receiveReply(FileStreamTask.java:193)
at 
org.apache.cassandra.streaming.FileStreamTask.runMayThrow(FileStreamTask.java:101)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
... 3 more
{code}

  was (Author: mkjellman):
It appears this is related to secondary indexes. After the bootstrapping 
node finishes streaming, it submits an index build. This gets submitted but 
never makes any progress and hangs indefinitely.

{code}
 INFO [Thread-382] 2013-02-13 18:02:57,205 StreamInSession.java (line 199) 
Finished streaming session 4ae0be23-75fb-11e2-ba65-8f73c0b9d93d from 
/10.138.12.10
 INFO [Thread-540] 2013-02-13 18:17:42,526 SecondaryIndexManager.java (line 
137) Submitting index build of [domain_metadata.classificationIdx, 
domain_metadata.domaintypeIdx] for data in 
SSTableReader(path='/data/cassandra/brts/domain_metadata/brts-domain_metadata-ib-1-Data.db'),
 
SSTableReader(path='/data2/cassandra/brts/domain_metadata/brts-domain_metadata-ib-2-Data.db'),
 
SSTableReader(path='/data2/cassandra/brts/domain_metadata/brts-domain_metadata-ib-3-Data.db'),
 
SSTableReader(path='/data2/cassandra/brts/domain_metadata/brts-domain_metadata-ib-4-Data.db'),
 
SSTableReader(path='/data2/cassandra/brts/domain_metadata/brts-domain_metadata-ib-5-Data.db'),
 
SSTableReader(path='/data2/cassandra/brts/domain_metadata/brts-domain_metadata-ib-6-Data.db'),
 
SSTableReader(path='/data2/cassandra/brts/domain_metadata/brts-domain_metadata-ib-7-Data.db'),
 
SSTableReader(path='/data2/cassandra/brts/domain_metadata/brts-domain_metadata-ib-8-Data.db'),
 
SSTableReader(path='/data2/cassandra/brts/domain_metadata/brts-domain_metadata-ib-9-Data.db'),
 
SSTableReader(path='/data2/cassandra/brts/domain_metadata/brts-domain_metadata-ib-10-Data.db')
{code}

{code}
#nodetool compactionstats
pending tasks: 23
Active compaction remaining time :n/a
{code}
  
 newly bootstrapping nodes hang indefinitely in STATUS:BOOT while JOINING 
 cluster  
 --

 Key: CASSANDRA-5129
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5129
 Project: Cassandra
  Issue 

[jira] [Commented] (CASSANDRA-5129) newly bootstrapping nodes hang indefinitely in STATUS:BOOT while JOINING cluster

2013-02-13 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13578127#comment-13578127
 ] 

Brandon Williams commented on CASSANDRA-5129:
-

Thread dump indicates this is actually CASSANDRA-5244 which has a good 
analysis, but is more severe than we thought.

 newly bootstrapping nodes hang indefinitely in STATUS:BOOT while JOINING 
 cluster  
 --

 Key: CASSANDRA-5129
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5129
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.0
 Environment: Ubuntu 12.04
Reporter: Michael Kjellman
Assignee: Yuki Morishita

 bootstrapping a new node causes it to hang indefinitely in STATUS:BOOT
 Nodes streaming to the new node report 
 {code}
 Mode: NORMAL
  Nothing streaming to /10.8.30.16
 Not receiving any streams.
 Pool NameActive   Pending  Completed
 Commandsn/a 01843990
 Responses   n/a 2 661750
 {code}
 the node being streamed to, stuck in the JOINING state, reports:
 {code}
 Mode: JOINING
 Not sending any streams.
  Nothing streaming from /10.8.30.103
  Nothing streaming from /10.8.30.102
 Pool NameActive   Pending  Completed
 Commandsn/a 0 10
 Responses   n/a 0 613577
 {code}
 it appears that the nodes in the "nothing streaming" state never send a 
 "finished streaming" message to the joining node.
 no exceptions are thrown during the streaming on either node while the node 
 is in this state.
 {code:name=full gossip state of bootstrapping node}
 /10.8.30.16
   NET_VERSION:6
   RELEASE_VERSION:1.2.0
   STATUS:BOOT,127605887595351923798765477786913079289
   RACK:RAC1
   RPC_ADDRESS:0.0.0.0
   DC:DC1
   SCHEMA:5cd8420d-ce3c-3625-8293-67558a24816b
   HOST_ID:e20817ce-7454-4dc4-a1c6-b1dec35c4491
   LOAD:1.11824041581E11
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (CASSANDRA-5129) newly bootstrapping nodes hang indefinitely in STATUS:BOOT while JOINING cluster

2013-02-13 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams reassigned CASSANDRA-5129:
---

Assignee: Brandon Williams  (was: Yuki Morishita)

 newly bootstrapping nodes hang indefinitely in STATUS:BOOT while JOINING 
 cluster  
 --

 Key: CASSANDRA-5129
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5129
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.0
 Environment: Ubuntu 12.04
Reporter: Michael Kjellman
Assignee: Brandon Williams

 bootstrapping a new node causes it to hang indefinitely in STATUS:BOOT
 Nodes streaming to the new node report 
 {code}
 Mode: NORMAL
  Nothing streaming to /10.8.30.16
 Not receiving any streams.
 Pool NameActive   Pending  Completed
 Commandsn/a 01843990
 Responses   n/a 2 661750
 {code}
 the node being streamed to, stuck in the JOINING state, reports:
 {code}
 Mode: JOINING
 Not sending any streams.
  Nothing streaming from /10.8.30.103
  Nothing streaming from /10.8.30.102
 Pool NameActive   Pending  Completed
 Commandsn/a 0 10
 Responses   n/a 0 613577
 {code}
 it appears that the nodes in the "nothing streaming" state never send a 
 "finished streaming" message to the joining node.
 no exceptions are thrown during the streaming on either node while the node 
 is in this state.
 {code:name=full gossip state of bootstrapping node}
 /10.8.30.16
   NET_VERSION:6
   RELEASE_VERSION:1.2.0
   STATUS:BOOT,127605887595351923798765477786913079289
   RACK:RAC1
   RPC_ADDRESS:0.0.0.0
   DC:DC1
   SCHEMA:5cd8420d-ce3c-3625-8293-67558a24816b
   HOST_ID:e20817ce-7454-4dc4-a1c6-b1dec35c4491
   LOAD:1.11824041581E11
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5244) Compactions don't work while node is bootstrapping

2013-02-13 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-5244:


Priority: Critical  (was: Minor)

 Compactions don't work while node is bootstrapping
 --

 Key: CASSANDRA-5244
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5244
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.0 beta 1
Reporter: Jouni Hartikainen
Assignee: Brandon Williams
Priority: Critical
  Labels: gossip
 Fix For: 1.2.2


 It seems that there is a race condition in StorageService that prevents 
 compactions from completing while the node is in a bootstrap state.
 I have been able to reproduce this multiple times by throttling streaming 
 throughput to extend the bootstrap time while simultaneously inserting data 
 into the cluster.
 The problem lies in the synchronization of the initServer(int delay) and 
 reportSeverity(double incr) methods, as they both try to acquire the instance 
 lock of StorageService through the synchronized keyword. As initServer 
 does not return until the bootstrap has completed, all calls to 
 reportSeverity block until then. However, reportSeverity is called when 
 starting compactions in CompactionInfo, and thus all compactions block until 
 bootstrap completes.
 This might severely degrade the node's performance after bootstrap, as it might 
 have lots of compactions pending while simultaneously starting to serve reads.
 I have been able to solve the issue by adding a separate lock for 
 reportSeverity and removing its class-level synchronization. This, of course, 
 is not a valid approach if we must assume that any of Gossiper's 
 IEndpointStateChangeSubscribers could potentially end up calling back into 
 StorageService's synchronized methods. However, at least at the moment, that 
 does not seem to be the case.
 Maybe somebody with more experience with the codebase can come up with a better 
 solution?
 (This might affect DynamicEndpointSnitch as well, as it also calls 
 reportSeverity in its setSeverity method.)
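
To make the blocking easier to see, here is a minimal, self-contained sketch of the 
pattern described above: two synchronized instance methods share one object monitor, 
so while the long-running one holds it, the other blocks. The class name and timings 
are purely illustrative and are not the actual StorageService code.

{code}
// Hypothetical illustration only: models initServer() holding the instance
// monitor for the whole bootstrap while reportSeverity() waits on it.
public class MonitorContentionDemo
{
    public synchronized void initServer() throws InterruptedException
    {
        Thread.sleep(60_000); // stands in for waiting out the bootstrap
    }

    public synchronized void reportSeverity(double incr)
    {
        // cannot be entered while initServer() holds the instance monitor
    }

    public static void main(String[] args) throws Exception
    {
        MonitorContentionDemo demo = new MonitorContentionDemo();
        new Thread(() -> {
            try { demo.initServer(); } catch (InterruptedException ignored) {}
        }).start();
        Thread.sleep(100);         // let initServer() acquire the monitor first
        demo.reportSeverity(1.0);  // blocks for the remaining ~60 seconds
    }
}
{code}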

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5244) Compactions don't work while node is bootstrapping

2013-02-13 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13578128#comment-13578128
 ] 

Brandon Williams commented on CASSANDRA-5244:
-

This is more severe than we originally thought, and causes CASSANDRA-5129 when 
there is a secondary index:

{noformat}
CompactionExecutor:1 daemon prio=10 tid=0x7effbc03c800 nid=0x7abf waiting 
for monitor entry [0x7effc843a000]
   java.lang.Thread.State: BLOCKED (on object monitor)
at 
org.apache.cassandra.service.StorageService.reportSeverity(StorageService.java:905)
- waiting to lock 0xca576ac8 (a 
org.apache.cassandra.service.StorageService)
at 
org.apache.cassandra.db.compaction.CompactionInfo$Holder.started(CompactionInfo.java:141)
at 
org.apache.cassandra.metrics.CompactionMetrics.beginCompaction(CompactionMetrics.java:90)
at 
org.apache.cassandra.db.compaction.CompactionManager$9.run(CompactionManager.java:813)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
{noformat}
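
For reference, a BLOCKED-thread summary like the dump above can also be captured 
in-process with the standard java.lang.management API; the following is a minimal 
sketch (the class name is illustrative and not part of Cassandra):

{code}
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

// Prints every BLOCKED thread together with the monitor it is waiting on,
// similar to the stack quoted above.
public class BlockedThreadReport
{
    public static void main(String[] args)
    {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        for (ThreadInfo info : mx.dumpAllThreads(true, true))
        {
            if (info.getThreadState() == Thread.State.BLOCKED)
                System.out.printf("%s BLOCKED on %s held by %s%n",
                                  info.getThreadName(),
                                  info.getLockName(),
                                  info.getLockOwnerName());
        }
    }
}
{code}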

 Compactions don't work while node is bootstrapping
 --

 Key: CASSANDRA-5244
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5244
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.0 beta 1
Reporter: Jouni Hartikainen
Assignee: Brandon Williams
Priority: Critical
  Labels: gossip
 Fix For: 1.2.2


 It seems that there is a race condition in StorageService that prevents 
 compactions from completing while the node is in a bootstrap state.
 I have been able to reproduce this multiple times by throttling streaming 
 throughput to extend the bootstrap time while simultaneously inserting data 
 into the cluster.
 The problem lies in the synchronization of the initServer(int delay) and 
 reportSeverity(double incr) methods, as they both try to acquire the instance 
 lock of StorageService through the synchronized keyword. As initServer 
 does not return until the bootstrap has completed, all calls to 
 reportSeverity block until then. However, reportSeverity is called when 
 starting compactions in CompactionInfo, and thus all compactions block until 
 bootstrap completes.
 This might severely degrade the node's performance after bootstrap, as it might 
 have lots of compactions pending while simultaneously starting to serve reads.
 I have been able to solve the issue by adding a separate lock for 
 reportSeverity and removing its class-level synchronization. This, of course, 
 is not a valid approach if we must assume that any of Gossiper's 
 IEndpointStateChangeSubscribers could potentially end up calling back into 
 StorageService's synchronized methods. However, at least at the moment, that 
 does not seem to be the case.
 Maybe somebody with more experience with the codebase can come up with a better 
 solution?
 (This might affect DynamicEndpointSnitch as well, as it also calls 
 reportSeverity in its setSeverity method.)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (CASSANDRA-5244) Compactions don't work while node is bootstrapping

2013-02-13 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13578128#comment-13578128
 ] 

Brandon Williams edited comment on CASSANDRA-5244 at 2/14/13 3:43 AM:
--

This is more severe than we originally thought, and causes CASSANDRA-5129 when 
there is a secondary index:

{noformat}
CompactionExecutor:1 daemon prio=10 tid=0x7effbc03c800 nid=0x7abf waiting 
for monitor entry [0x7effc843a000]
   java.lang.Thread.State: BLOCKED (on object monitor)
at 
org.apache.cassandra.service.StorageService.reportSeverity(StorageService.java:905)
- waiting to lock 0xca576ac8 (a 
org.apache.cassandra.service.StorageService)
at 
org.apache.cassandra.db.compaction.CompactionInfo$Holder.started(CompactionInfo.java:141)
at 
org.apache.cassandra.metrics.CompactionMetrics.beginCompaction(CompactionMetrics.java:90)
at 
org.apache.cassandra.db.compaction.CompactionManager$9.run(CompactionManager.java:813)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
{noformat}

  was (Author: brandon.williams):
This is more severe than we originally though, and causes CASSANDRA-5129 
when there is a secondary index:

{noformat}
CompactionExecutor:1 daemon prio=10 tid=0x7effbc03c800 nid=0x7abf waiting 
for monitor entry [0x7effc843a000]
   java.lang.Thread.State: BLOCKED (on object monitor)
at 
org.apache.cassandra.service.StorageService.reportSeverity(StorageService.java:905)
- waiting to lock 0xca576ac8 (a 
org.apache.cassandra.service.StorageService)
at 
org.apache.cassandra.db.compaction.CompactionInfo$Holder.started(CompactionInfo.java:141)
at 
org.apache.cassandra.metrics.CompactionMetrics.beginCompaction(CompactionMetrics.java:90)
at 
org.apache.cassandra.db.compaction.CompactionManager$9.run(CompactionManager.java:813)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
{noformat}
  
 Compactions don't work while node is bootstrapping
 --

 Key: CASSANDRA-5244
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5244
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.0 beta 1
Reporter: Jouni Hartikainen
Assignee: Brandon Williams
Priority: Critical
  Labels: gossip
 Fix For: 1.2.2


 It seems that there is a race condition in StorageService that prevents 
 compactions from completing while the node is in a bootstrap state.
 I have been able to reproduce this multiple times by throttling streaming 
 throughput to extend the bootstrap time while simultaneously inserting data 
 into the cluster.
 The problem lies in the synchronization of the initServer(int delay) and 
 reportSeverity(double incr) methods, as they both try to acquire the instance 
 lock of StorageService through the synchronized keyword. As initServer 
 does not return until the bootstrap has completed, all calls to 
 reportSeverity block until then. However, reportSeverity is called when 
 starting compactions in CompactionInfo, and thus all compactions block until 
 bootstrap completes.
 This might severely degrade the node's performance after bootstrap, as it might 
 have lots of compactions pending while simultaneously starting to serve reads.
 I have been able to solve the issue by adding a separate lock for 
 reportSeverity and removing its class-level synchronization. This, of course, 
 is not a valid approach if we must assume that any of Gossiper's 
 IEndpointStateChangeSubscribers could potentially end up calling back into 
 StorageService's synchronized methods. However, at least at the moment, that 
 does not seem to be the case.
 Maybe somebody with more experience with the codebase can come up with a better 
 solution?
 (This might affect DynamicEndpointSnitch as well, as it also calls 
 reportSeverity in its setSeverity method.)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact 

[Cassandra Wiki] Trivial Update of MartinDer by MartinDer

2013-02-13 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on Cassandra Wiki for 
change notification.

The MartinDer page has been changed by MartinDer:
http://wiki.apache.org/cassandra/MartinDer?action=diffrev1=2rev2=3



[jira] [Updated] (CASSANDRA-5244) Compactions don't work while node is bootstrapping

2013-02-13 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-5244:


Attachment: 5244.txt

It seems to me the only reason we're synchronizing here is for the increment, 
and we don't need to get our own severity out of gossip, so we can just track a 
local AtomicDouble instead.
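
For reference, a minimal sketch of that approach, using Guava's AtomicDouble to 
accumulate the local severity without taking the StorageService monitor 
(simplified; the gossip publication step is omitted):

{code}
import com.google.common.util.concurrent.AtomicDouble;

// Simplified model of the proposed fix: severity is tracked locally in an
// AtomicDouble, so reporting it no longer contends with initServer().
public class SeverityTracker
{
    private final AtomicDouble severity = new AtomicDouble();

    // Returns the new accumulated severity; the caller can then publish it
    // via gossip.
    public double reportSeverity(double incr)
    {
        return severity.addAndGet(incr);
    }
}
{code}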

 Compactions don't work while node is bootstrapping
 --

 Key: CASSANDRA-5244
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5244
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.0 beta 1
Reporter: Jouni Hartikainen
Assignee: Brandon Williams
Priority: Critical
  Labels: gossip
 Fix For: 1.2.2

 Attachments: 5244.txt


 It seems that there is a race condition in StorageService that prevents 
 compactions from completing while the node is in a bootstrap state.
 I have been able to reproduce this multiple times by throttling streaming 
 throughput to extend the bootstrap time while simultaneously inserting data 
 into the cluster.
 The problem lies in the synchronization of the initServer(int delay) and 
 reportSeverity(double incr) methods, as they both try to acquire the instance 
 lock of StorageService through the synchronized keyword. As initServer 
 does not return until the bootstrap has completed, all calls to 
 reportSeverity block until then. However, reportSeverity is called when 
 starting compactions in CompactionInfo, and thus all compactions block until 
 bootstrap completes.
 This might severely degrade the node's performance after bootstrap, as it might 
 have lots of compactions pending while simultaneously starting to serve reads.
 I have been able to solve the issue by adding a separate lock for 
 reportSeverity and removing its class-level synchronization. This, of course, 
 is not a valid approach if we must assume that any of Gossiper's 
 IEndpointStateChangeSubscribers could potentially end up calling back into 
 StorageService's synchronized methods. However, at least at the moment, that 
 does not seem to be the case.
 Maybe somebody with more experience with the codebase can come up with a better 
 solution?
 (This might affect DynamicEndpointSnitch as well, as it also calls 
 reportSeverity in its setSeverity method.)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5244) Compactions don't work while node is bootstrapping

2013-02-13 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-5244:


Reviewer: vijay2...@yahoo.com

 Compactions don't work while node is bootstrapping
 --

 Key: CASSANDRA-5244
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5244
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.0 beta 1
Reporter: Jouni Hartikainen
Assignee: Brandon Williams
Priority: Critical
  Labels: gossip
 Fix For: 1.2.2

 Attachments: 5244.txt


 It seems that there is a race condition in StorageService that prevents 
 compactions from completing while the node is in a bootstrap state.
 I have been able to reproduce this multiple times by throttling streaming 
 throughput to extend the bootstrap time while simultaneously inserting data 
 into the cluster.
 The problem lies in the synchronization of the initServer(int delay) and 
 reportSeverity(double incr) methods, as they both try to acquire the instance 
 lock of StorageService through the synchronized keyword. As initServer 
 does not return until the bootstrap has completed, all calls to 
 reportSeverity block until then. However, reportSeverity is called when 
 starting compactions in CompactionInfo, and thus all compactions block until 
 bootstrap completes.
 This might severely degrade the node's performance after bootstrap, as it might 
 have lots of compactions pending while simultaneously starting to serve reads.
 I have been able to solve the issue by adding a separate lock for 
 reportSeverity and removing its class-level synchronization. This, of course, 
 is not a valid approach if we must assume that any of Gossiper's 
 IEndpointStateChangeSubscribers could potentially end up calling back into 
 StorageService's synchronized methods. However, at least at the moment, that 
 does not seem to be the case.
 Maybe somebody with more experience with the codebase can come up with a better 
 solution?
 (This might affect DynamicEndpointSnitch as well, as it also calls 
 reportSeverity in its setSeverity method.)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5244) Compactions don't work while node is bootstrapping

2013-02-13 Thread Vijay (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13578147#comment-13578147
 ] 

Vijay commented on CASSANDRA-5244:
--

+1

 Compactions don't work while node is bootstrapping
 --

 Key: CASSANDRA-5244
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5244
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.0 beta 1
Reporter: Jouni Hartikainen
Assignee: Brandon Williams
Priority: Critical
  Labels: gossip
 Fix For: 1.2.2

 Attachments: 5244.txt


 It seems that there is a race condition in StorageService that prevents 
 compactions from completing while the node is in a bootstrap state.
 I have been able to reproduce this multiple times by throttling streaming 
 throughput to extend the bootstrap time while simultaneously inserting data 
 into the cluster.
 The problem lies in the synchronization of the initServer(int delay) and 
 reportSeverity(double incr) methods, as they both try to acquire the instance 
 lock of StorageService through the synchronized keyword. As initServer 
 does not return until the bootstrap has completed, all calls to 
 reportSeverity block until then. However, reportSeverity is called when 
 starting compactions in CompactionInfo, and thus all compactions block until 
 bootstrap completes.
 This might severely degrade the node's performance after bootstrap, as it might 
 have lots of compactions pending while simultaneously starting to serve reads.
 I have been able to solve the issue by adding a separate lock for 
 reportSeverity and removing its class-level synchronization. This, of course, 
 is not a valid approach if we must assume that any of Gossiper's 
 IEndpointStateChangeSubscribers could potentially end up calling back into 
 StorageService's synchronized methods. However, at least at the moment, that 
 does not seem to be the case.
 Maybe somebody with more experience with the codebase can come up with a better 
 solution?
 (This might affect DynamicEndpointSnitch as well, as it also calls 
 reportSeverity in its setSeverity method.)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5244) Compactions don't work while node is bootstrapping

2013-02-13 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-5244:


Attachment: 5244.txt

 Compactions don't work while node is bootstrapping
 --

 Key: CASSANDRA-5244
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5244
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.0 beta 1
Reporter: Jouni Hartikainen
Assignee: Brandon Williams
Priority: Critical
  Labels: gossip
 Fix For: 1.2.2

 Attachments: 5244.txt


 It seems that there is a race condition in StorageService that prevents 
 compactions from completing while the node is in a bootstrap state.
 I have been able to reproduce this multiple times by throttling streaming 
 throughput to extend the bootstrap time while simultaneously inserting data 
 into the cluster.
 The problem lies in the synchronization of the initServer(int delay) and 
 reportSeverity(double incr) methods, as they both try to acquire the instance 
 lock of StorageService through the synchronized keyword. As initServer 
 does not return until the bootstrap has completed, all calls to 
 reportSeverity block until then. However, reportSeverity is called when 
 starting compactions in CompactionInfo, and thus all compactions block until 
 bootstrap completes.
 This might severely degrade the node's performance after bootstrap, as it might 
 have lots of compactions pending while simultaneously starting to serve reads.
 I have been able to solve the issue by adding a separate lock for 
 reportSeverity and removing its class-level synchronization. This, of course, 
 is not a valid approach if we must assume that any of Gossiper's 
 IEndpointStateChangeSubscribers could potentially end up calling back into 
 StorageService's synchronized methods. However, at least at the moment, that 
 does not seem to be the case.
 Maybe somebody with more experience with the codebase can come up with a better 
 solution?
 (This might affect DynamicEndpointSnitch as well, as it also calls 
 reportSeverity in its setSeverity method.)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5244) Compactions don't work while node is bootstrapping

2013-02-13 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-5244:


Attachment: (was: 5244.txt)

 Compactions don't work while node is bootstrapping
 --

 Key: CASSANDRA-5244
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5244
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.0 beta 1
Reporter: Jouni Hartikainen
Assignee: Brandon Williams
Priority: Critical
  Labels: gossip
 Fix For: 1.2.2

 Attachments: 5244.txt


 It seems that there is a race condition in StorageService that prevents 
 compactions from completing while the node is in a bootstrap state.
 I have been able to reproduce this multiple times by throttling streaming 
 throughput to extend the bootstrap time while simultaneously inserting data 
 into the cluster.
 The problem lies in the synchronization of the initServer(int delay) and 
 reportSeverity(double incr) methods, as they both try to acquire the instance 
 lock of StorageService through the synchronized keyword. As initServer 
 does not return until the bootstrap has completed, all calls to 
 reportSeverity block until then. However, reportSeverity is called when 
 starting compactions in CompactionInfo, and thus all compactions block until 
 bootstrap completes.
 This might severely degrade the node's performance after bootstrap, as it might 
 have lots of compactions pending while simultaneously starting to serve reads.
 I have been able to solve the issue by adding a separate lock for 
 reportSeverity and removing its class-level synchronization. This, of course, 
 is not a valid approach if we must assume that any of Gossiper's 
 IEndpointStateChangeSubscribers could potentially end up calling back into 
 StorageService's synchronized methods. However, at least at the moment, that 
 does not seem to be the case.
 Maybe somebody with more experience with the codebase can come up with a better 
 solution?
 (This might affect DynamicEndpointSnitch as well, as it also calls 
 reportSeverity in its setSeverity method.)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[2/3] git commit: Stop compactions from hanging during bootstrap. Patch by brandonwilliams, reviewed by Vijay for CASSANDRA-5244

2013-02-13 Thread brandonwilliams
Stop compactions from hanging during bootstrap.
Patch by brandonwilliams, reviewed by Vijay for CASSANDRA-5244


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3925f560
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3925f560
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3925f560

Branch: refs/heads/trunk
Commit: 3925f56061307971008fdd0db48c4e29e0700443
Parents: 99b3963
Author: Brandon Williams brandonwilli...@apache.org
Authored: Wed Feb 13 22:55:52 2013 -0600
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Wed Feb 13 22:56:40 2013 -0600

--
 CHANGES.txt|1 +
 .../apache/cassandra/service/StorageService.java   |8 +---
 2 files changed, 6 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3925f560/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 3d0f633..9281b5e 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -19,6 +19,7 @@
  * Add support for LZ4 compression (CASSANDRA-5038)
  * Fix missing columns in wide rows queries (CASSANDRA-5225)
  * Simplify auth setup and make system_auth ks alterable (CASSANDRA-5112)
+ * Stop compactions from hanging during bootstrap (CASSANDRA-5244)
 
 
 1.2.1

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3925f560/src/java/org/apache/cassandra/service/StorageService.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageService.java 
b/src/java/org/apache/cassandra/service/StorageService.java
index 8c1d053..9ce4bf0 100644
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@ -36,6 +36,7 @@ import javax.management.ObjectName;
 
 import com.google.common.collect.*;
 
+import com.google.common.util.concurrent.AtomicDouble;
 import org.apache.cassandra.db.index.SecondaryIndex;
 import org.apache.log4j.Level;
 import org.apache.commons.lang.StringUtils;
@@ -93,6 +94,8 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
 /* JMX notification serial number counter */
 private final AtomicLong notificationSerialNumber = new AtomicLong();
 
+private final AtomicDouble severity = new AtomicDouble();
+
 private static int getRingDelay()
 {
 String newdelay = System.getProperty(cassandra.ring_delay_ms);
@@ -901,12 +904,11 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
 /**
  * Gossip about the known severity of the events in this node
  */
-public synchronized boolean reportSeverity(double incr)
+public boolean reportSeverity(double incr)
 {
 if (!Gossiper.instance.isEnabled())
 return false;
-double update = getSeverity(FBUtilities.getBroadcastAddress()) + incr;
-VersionedValue updated = 
StorageService.instance.valueFactory.severity(update);
+VersionedValue updated = 
StorageService.instance.valueFactory.severity(severity.addAndGet(incr));
 Gossiper.instance.addLocalApplicationState(ApplicationState.SEVERITY, 
updated);
 return true;
 }


