[jira] [Commented] (CASSANDRA-8692) Coalesce intra-cluster network messages

2015-03-13 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14360796#comment-14360796
 ] 

Benedict commented on CASSANDRA-8692:
-

OK, I'm going to put us out of our misery and commit this to 3.0, since I think 
the only thing we're waiting on is consensus for a release target?

 Coalesce intra-cluster network messages
 ---

 Key: CASSANDRA-8692
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8692
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Ariel Weisberg
Assignee: Ariel Weisberg
 Fix For: 2.1.4

 Attachments: batching-benchmark.png


 While researching CASSANDRA-8457 we found that it is effective and can be 
 done without introducing additional latency at low concurrency/throughput.
 The patch from that was used and found to be useful in a real life scenario 
 so I propose we implement this in 2.1 in addition to 3.0.
 The change set is a single file and is small enough to be reviewable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[1/2] cassandra git commit: Fix SSTableRewriter when early re-open disabled

2015-03-13 Thread jmckenzie
Repository: cassandra
Updated Branches:
  refs/heads/trunk 994d8f503 -> 5d6f9284f


Fix SSTableRewriter when early re-open disabled

Patch by jmckenzie; reviewed by marcuse for CASSANDRA-8535


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d97e7cb6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d97e7cb6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d97e7cb6

Branch: refs/heads/trunk
Commit: d97e7cb69ebe8794adeb5be00b58a1b828bffd26
Parents: 9caf045
Author: Joshua McKenzie jmcken...@apache.org
Authored: Fri Mar 13 13:01:04 2015 -0500
Committer: Joshua McKenzie jmcken...@apache.org
Committed: Fri Mar 13 13:01:04 2015 -0500

--
 CHANGES.txt |  1 +
 .../cassandra/io/sstable/SSTableRewriter.java   | 34 +++-
 2 files changed, 27 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d97e7cb6/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 04861f0..d7ab277 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.4
+ * Fix SSTableRewriter with disabled early open (CASSANDRA-8535)
  * Allow invalidating permissions and cache time (CASSANDRA-8722)
  * Log warning when queries that will require ALLOW FILTERING in Cassandra 3.0
are executed (CASSANDRA-8418)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/d97e7cb6/src/java/org/apache/cassandra/io/sstable/SSTableRewriter.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/SSTableRewriter.java 
b/src/java/org/apache/cassandra/io/sstable/SSTableRewriter.java
index 914ce1f..641dd7c 100644
--- a/src/java/org/apache/cassandra/io/sstable/SSTableRewriter.java
+++ b/src/java/org/apache/cassandra/io/sstable/SSTableRewriter.java
@@ -20,7 +20,6 @@ package org.apache.cassandra.io.sstable;
 import java.util.*;
 
 import com.google.common.annotations.VisibleForTesting;
-import com.google.common.base.Functions;
 import com.google.common.base.Throwables;
 import com.google.common.collect.ImmutableList;
 
@@ -56,7 +55,7 @@ public class SSTableRewriter
     static
     {
         long interval = DatabaseDescriptor.getSSTablePreempiveOpenIntervalInMB() * (1L << 20);
-        if (interval < 0)
+        if (interval < 0 || FBUtilities.isWindows())
             interval = Long.MAX_VALUE;
         preemptiveOpenInterval = interval;
     }
@@ -79,6 +78,7 @@ public class SSTableRewriter
     private SSTableReader currentlyOpenedEarly; // the reader for the most recent (re)opening of the target file
     private long currentlyOpenedEarlyAt; // the position (in MB) in the target file we last (re)opened at
 
+    private final List<SSTableReader> finishedReaders = new ArrayList<>();
     private final Queue<Finished> finishedEarly = new ArrayDeque<>();
     // as writers are closed from finishedEarly, their last readers are moved
     // into discard, so that abort can cleanup after us safely
@@ -159,7 +159,7 @@ public class SSTableRewriter
 
     private void maybeReopenEarly(DecoratedKey key)
     {
-        if (!FBUtilities.isWindows() && writer.getFilePointer() - currentlyOpenedEarlyAt > preemptiveOpenInterval)
+        if (writer.getFilePointer() - currentlyOpenedEarlyAt > preemptiveOpenInterval)
         {
             if (isOffline)
             {
@@ -365,13 +365,22 @@ public class SSTableRewriter
             return;
         }
 
-        // we leave it as a tmp file, but we open it and add it to the dataTracker
         if (writer.getFilePointer() != 0)
         {
-            SSTableReader reader = writer.finish(SSTableWriter.FinishType.EARLY, maxAge, -1);
-            replaceEarlyOpenedFile(currentlyOpenedEarly, reader);
-            moveStarts(reader, reader.last, false);
-            finishedEarly.add(new Finished(writer, reader));
+            // If early re-open is disabled, simply finalize the writer and store it
+            if (preemptiveOpenInterval == Long.MAX_VALUE)
+            {
+                SSTableReader reader = writer.finish(SSTableWriter.FinishType.NORMAL, maxAge, -1);
+                finishedReaders.add(reader);
+            }
+            else
+            {
+                // we leave it as a tmp file, but we open it and add it to the dataTracker
+                SSTableReader reader = writer.finish(SSTableWriter.FinishType.EARLY, maxAge, -1);
+                replaceEarlyOpenedFile(currentlyOpenedEarly, reader);
+                moveStarts(reader, reader.last, false);
+                finishedEarly.add(new Finished(writer, reader));
+            }
         }
         else
         {
@@ -427,6 +436,15 @@ public class SSTableRewriter
 if (throwEarly)
 

cassandra git commit: Fix SSTableRewriter when early re-open disabled

2015-03-13 Thread jmckenzie
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 9caf0457a -> d97e7cb69


Fix SSTableRewriter when early re-open disabled

Patch by jmckenzie; reviewed by marcuse for CASSANDRA-8535


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d97e7cb6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d97e7cb6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d97e7cb6

Branch: refs/heads/cassandra-2.1
Commit: d97e7cb69ebe8794adeb5be00b58a1b828bffd26
Parents: 9caf045
Author: Joshua McKenzie jmcken...@apache.org
Authored: Fri Mar 13 13:01:04 2015 -0500
Committer: Joshua McKenzie jmcken...@apache.org
Committed: Fri Mar 13 13:01:04 2015 -0500

--
 CHANGES.txt |  1 +
 .../cassandra/io/sstable/SSTableRewriter.java   | 34 +++-
 2 files changed, 27 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d97e7cb6/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 04861f0..d7ab277 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.4
+ * Fix SSTableRewriter with disabled early open (CASSANDRA-8535)
  * Allow invalidating permissions and cache time (CASSANDRA-8722)
  * Log warning when queries that will require ALLOW FILTERING in Cassandra 3.0
are executed (CASSANDRA-8418)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/d97e7cb6/src/java/org/apache/cassandra/io/sstable/SSTableRewriter.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/SSTableRewriter.java 
b/src/java/org/apache/cassandra/io/sstable/SSTableRewriter.java
index 914ce1f..641dd7c 100644
--- a/src/java/org/apache/cassandra/io/sstable/SSTableRewriter.java
+++ b/src/java/org/apache/cassandra/io/sstable/SSTableRewriter.java
@@ -20,7 +20,6 @@ package org.apache.cassandra.io.sstable;
 import java.util.*;
 
 import com.google.common.annotations.VisibleForTesting;
-import com.google.common.base.Functions;
 import com.google.common.base.Throwables;
 import com.google.common.collect.ImmutableList;
 
@@ -56,7 +55,7 @@ public class SSTableRewriter
     static
     {
         long interval = DatabaseDescriptor.getSSTablePreempiveOpenIntervalInMB() * (1L << 20);
-        if (interval < 0)
+        if (interval < 0 || FBUtilities.isWindows())
             interval = Long.MAX_VALUE;
         preemptiveOpenInterval = interval;
     }
@@ -79,6 +78,7 @@ public class SSTableRewriter
     private SSTableReader currentlyOpenedEarly; // the reader for the most recent (re)opening of the target file
     private long currentlyOpenedEarlyAt; // the position (in MB) in the target file we last (re)opened at
 
+    private final List<SSTableReader> finishedReaders = new ArrayList<>();
     private final Queue<Finished> finishedEarly = new ArrayDeque<>();
     // as writers are closed from finishedEarly, their last readers are moved
     // into discard, so that abort can cleanup after us safely
@@ -159,7 +159,7 @@ public class SSTableRewriter
 
     private void maybeReopenEarly(DecoratedKey key)
    {
-        if (!FBUtilities.isWindows() && writer.getFilePointer() - currentlyOpenedEarlyAt > preemptiveOpenInterval)
+        if (writer.getFilePointer() - currentlyOpenedEarlyAt > preemptiveOpenInterval)
         {
             if (isOffline)
             {
@@ -365,13 +365,22 @@ public class SSTableRewriter
             return;
         }
 
-        // we leave it as a tmp file, but we open it and add it to the dataTracker
         if (writer.getFilePointer() != 0)
        {
-            SSTableReader reader = writer.finish(SSTableWriter.FinishType.EARLY, maxAge, -1);
-            replaceEarlyOpenedFile(currentlyOpenedEarly, reader);
-            moveStarts(reader, reader.last, false);
-            finishedEarly.add(new Finished(writer, reader));
+            // If early re-open is disabled, simply finalize the writer and store it
+            if (preemptiveOpenInterval == Long.MAX_VALUE)
+            {
+                SSTableReader reader = writer.finish(SSTableWriter.FinishType.NORMAL, maxAge, -1);
+                finishedReaders.add(reader);
+            }
+            else
+            {
+                // we leave it as a tmp file, but we open it and add it to the dataTracker
+                SSTableReader reader = writer.finish(SSTableWriter.FinishType.EARLY, maxAge, -1);
+                replaceEarlyOpenedFile(currentlyOpenedEarly, reader);
+                moveStarts(reader, reader.last, false);
+                finishedEarly.add(new Finished(writer, reader));
+            }
         }
         else
         {
@@ -427,6 +436,15 @@ public class SSTableRewriter
 if 

[2/2] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2015-03-13 Thread jmckenzie
Merge branch 'cassandra-2.1' into trunk

Conflicts:
CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5d6f9284
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5d6f9284
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5d6f9284

Branch: refs/heads/trunk
Commit: 5d6f9284fcf7eb893de2520b1f983ffa9b3ee5a7
Parents: 994d8f5 d97e7cb
Author: Joshua McKenzie jmcken...@apache.org
Authored: Fri Mar 13 13:02:30 2015 -0500
Committer: Joshua McKenzie jmcken...@apache.org
Committed: Fri Mar 13 13:02:30 2015 -0500

--
 CHANGES.txt |  1 +
 .../cassandra/io/sstable/SSTableRewriter.java   | 34 +++-
 2 files changed, 27 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5d6f9284/CHANGES.txt
--
diff --cc CHANGES.txt
index 9caa127,d7ab277..7c0191e
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,76 -1,8 +1,77 @@@
 +3.0
 + * Add WriteFailureException to native protocol, notify coordinator of
 +   write failures (CASSANDRA-8592)
 + * Convert SequentialWriter to nio (CASSANDRA-8709)
 + * Add role based access control (CASSANDRA-7653, 8650, 7216, 8760, 8849, 
8761, 8850)
 + * Record client ip address in tracing sessions (CASSANDRA-8162)
 + * Indicate partition key columns in response metadata for prepared
 +   statements (CASSANDRA-7660)
 + * Merge UUIDType and TimeUUIDType parse logic (CASSANDRA-8759)
 + * Avoid memory allocation when searching index summary (CASSANDRA-8793)
 + * Optimise (Time)?UUIDType Comparisons (CASSANDRA-8730)
 + * Make CRC32Ex into a separate maven dependency (CASSANDRA-8836)
 + * Use preloaded jemalloc w/ Unsafe (CASSANDRA-8714)
 + * Avoid accessing partitioner through StorageProxy (CASSANDRA-8244, 8268)
 + * Upgrade Metrics library and remove depricated metrics (CASSANDRA-5657)
 + * Serializing Row cache alternative, fully off heap (CASSANDRA-7438)
 + * Duplicate rows returned when in clause has repeated values (CASSANDRA-6707)
 + * Make CassandraException unchecked, extend RuntimeException (CASSANDRA-8560)
 + * Support direct buffer decompression for reads (CASSANDRA-8464)
 + * DirectByteBuffer compatible LZ4 methods (CASSANDRA-7039)
 + * Group sstables for anticompaction correctly (CASSANDRA-8578)
 + * Add ReadFailureException to native protocol, respond
 +   immediately when replicas encounter errors while handling
 +   a read request (CASSANDRA-7886)
 + * Switch CommitLogSegment from RandomAccessFile to nio (CASSANDRA-8308)
 + * Allow mixing token and partition key restrictions (CASSANDRA-7016)
 + * Support index key/value entries on map collections (CASSANDRA-8473)
 + * Modernize schema tables (CASSANDRA-8261)
 + * Support for user-defined aggregation functions (CASSANDRA-8053)
 + * Fix NPE in SelectStatement with empty IN values (CASSANDRA-8419)
 + * Refactor SelectStatement, return IN results in natural order instead
 +   of IN value list order and ignore duplicate values in partition key IN 
restrictions (CASSANDRA-7981)
 + * Support UDTs, tuples, and collections in user-defined
 +   functions (CASSANDRA-7563)
 + * Fix aggregate fn results on empty selection, result column name,
 +   and cqlsh parsing (CASSANDRA-8229)
 + * Mark sstables as repaired after full repair (CASSANDRA-7586)
 + * Extend Descriptor to include a format value and refactor reader/writer
 +   APIs (CASSANDRA-7443)
 + * Integrate JMH for microbenchmarks (CASSANDRA-8151)
 + * Keep sstable levels when bootstrapping (CASSANDRA-7460)
 + * Add Sigar library and perform basic OS settings check on startup 
(CASSANDRA-7838)
 + * Support for aggregation functions (CASSANDRA-4914)
 + * Remove cassandra-cli (CASSANDRA-7920)
 + * Accept dollar quoted strings in CQL (CASSANDRA-7769)
 + * Make assassinate a first class command (CASSANDRA-7935)
 + * Support IN clause on any partition key column (CASSANDRA-7855)
 + * Support IN clause on any clustering column (CASSANDRA-4762)
 + * Improve compaction logging (CASSANDRA-7818)
 + * Remove YamlFileNetworkTopologySnitch (CASSANDRA-7917)
 + * Do anticompaction in groups (CASSANDRA-6851)
 + * Support user-defined functions (CASSANDRA-7395, 7526, 7562, 7740, 7781, 
7929,
 +   7924, 7812, 8063, 7813, 7708)
 + * Permit configurable timestamps with cassandra-stress (CASSANDRA-7416)
 + * Move sstable RandomAccessReader to nio2, which allows using the
 +   FILE_SHARE_DELETE flag on Windows (CASSANDRA-4050)
 + * Remove CQL2 (CASSANDRA-5918)
 + * Add Thrift get_multi_slice call (CASSANDRA-6757)
 + * Optimize fetching multiple cells by name (CASSANDRA-6933)
 + * Allow compilation in java 8 (CASSANDRA-7028)
 + * Make incremental repair default (CASSANDRA-7250)
 + * Enable code coverage thru JaCoCo 

[jira] [Commented] (CASSANDRA-8692) Coalesce intra-cluster network messages

2015-03-13 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14360886#comment-14360886
 ] 

Ariel Weisberg commented on CASSANDRA-8692:
---

I think we have consensus for including in 2.1? Now that we have an accepted 
version for 3.0 I will rebase for 2.1.

 Coalesce intra-cluster network messages
 ---

 Key: CASSANDRA-8692
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8692
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Ariel Weisberg
Assignee: Ariel Weisberg
 Fix For: 2.1.4

 Attachments: batching-benchmark.png


 While researching CASSANDRA-8457 we found that it is effective and can be 
 done without introducing additional latency at low concurrency/throughput.
 The patch from that was used and found to be useful in a real life scenario 
 so I propose we implement this in 2.1 in addition to 3.0.
 The change set is a single file and is small enough to be reviewable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-6809) Compressed Commit Log

2015-03-13 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14360969#comment-14360969
 ] 

Ariel Weisberg commented on CASSANDRA-6809:
---

I don't think it works as a hard limit. Filesystems can hiccup for a long time 
and if you buffer to private memory you avoid seeing the hiccups.

A high watermark isn't great either because you commit memory that isn't needed 
most of the time. Maybe I am not following what you are suggesting.

When we have ponies we will be writing to private memory, probably around 128 
megabytes, to avoid being at the mercy of the filesystem.

Once compression is asynchronous to the filesystem and parallel the # of 
buffers can be small because compression will tear through fast enough to make 
the buffers available again. So you would have memory waiting to drain to the 
filesystem (128 megabytes) and a small number of buffers to aggregate log 
records until they are sent for compression.
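For illustration only, a minimal sketch of that buffering shape (hypothetical class, not the actual CommitLog code): writer threads fill fixed ~64K chunks drawn from a small pool, and a background thread compresses each filled chunk and hands the buffer back so it becomes available to writers again.

{code}
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.zip.Deflater;

// hypothetical sketch of the chunked, asynchronous compression discussed above
class ChunkedLogCompressor implements Runnable
{
    static final int CHUNK_SIZE = 64 * 1024;
    static final int POOL_SIZE = 8;

    private final BlockingQueue<byte[]> filled = new ArrayBlockingQueue<byte[]>(POOL_SIZE);
    private final BlockingQueue<byte[]> free = new ArrayBlockingQueue<byte[]>(POOL_SIZE);

    ChunkedLogCompressor()
    {
        for (int i = 0; i < POOL_SIZE; i++)
            free.add(new byte[CHUNK_SIZE]);
    }

    // writer thread: take an empty chunk to append serialized log records into
    byte[] borrow() throws InterruptedException
    {
        return free.take();
    }

    // writer thread: hand a filled chunk over for compression
    void submit(byte[] chunk) throws InterruptedException
    {
        filled.put(chunk);
    }

    public void run()
    {
        byte[] out = new byte[CHUNK_SIZE * 2];
        Deflater deflater = new Deflater();
        try
        {
            while (!Thread.currentThread().isInterrupted())
            {
                byte[] chunk = filled.take();
                deflater.reset();
                deflater.setInput(chunk);
                deflater.finish();
                int compressedLength = deflater.deflate(out);
                // a real implementation would pack 'compressedLength' bytes of
                // 'out' into the current commit log segment here
                free.put(chunk); // chunk becomes available to writers again
            }
        }
        catch (InterruptedException e)
        {
            Thread.currentThread().interrupt();
        }
        finally
        {
            deflater.end();
        }
    }
}
{code}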

 Compressed Commit Log
 -

 Key: CASSANDRA-6809
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6809
 Project: Cassandra
  Issue Type: Improvement
Reporter: Benedict
Assignee: Branimir Lambov
Priority: Minor
  Labels: docs-impacting, performance
 Fix For: 3.0

 Attachments: ComitLogStress.java, logtest.txt


 It seems an unnecessary oversight that we don't compress the commit log. 
 Doing so should improve throughput, but some care will need to be taken to 
 ensure we use as much of a segment as possible. I propose decoupling the 
 writing of the records from the segments. Basically write into a (queue of) 
 DirectByteBuffer, and have the sync thread compress, say, ~64K chunks every X 
 MB written to the CL (where X is ordinarily CLS size), and then pack as many 
 of the compressed chunks into a CLS as possible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-8746) SSTableReader.cloneWithNewStart can drop too much page cache for compressed files

2015-03-13 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14360221#comment-14360221
 ] 

Benedict edited comment on CASSANDRA-8746 at 3/13/15 5:53 PM:
--

Patch available [here|https://github.com/belliottsmith/cassandra/tree/8746]

Basically I just move the dropPageCache() call inside of SegmentedFile (which 
is better encapsulation anyway); the compressed versions override this to 
look up the start position of the relevant segment and only drop data prior to 
this.


was (Author: benedict):
Patch available [here|github.com/belliottsmith/cassandra/tree/8746]

Basically I just move the dropPageCache() call inside of SegmentedFile (which 
is better encapsulation anyway); the compressed versions override this to 
lookup the start position of the relevant segment and only drop data prior to 
this
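Roughly, the shape of that override is sketched below (hypothetical types and method names, not the real SegmentedFile hierarchy): the compressed variant maps the uncompressed position to the on-disk offset of the chunk containing it, so nothing a reader may still need is evicted.

{code}
// hypothetical sketch; the real classes and signatures in SegmentedFile differ
abstract class PageCacheDroppingFile
{
    void dropPageCache(long beforeOffset)
    {
        // would advise the OS (e.g. posix_fadvise DONTNEED) to drop [0, beforeOffset)
    }
}

class CompressedPageCacheDroppingFile extends PageCacheDroppingFile
{
    private final long[] chunkOffsets; // on-disk offset of each compressed chunk
    private final int chunkLength;     // uncompressed length of each chunk

    CompressedPageCacheDroppingFile(long[] chunkOffsets, int chunkLength)
    {
        this.chunkOffsets = chunkOffsets;
        this.chunkLength = chunkLength;
    }

    @Override
    void dropPageCache(long beforeUncompressedPosition)
    {
        int chunk = (int) (beforeUncompressedPosition / chunkLength);
        if (chunk >= chunkOffsets.length)
            chunk = chunkOffsets.length - 1;
        // only drop data strictly before the chunk containing the position
        super.dropPageCache(chunkOffsets[chunk]);
    }
}
{code}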

 SSTableReader.cloneWithNewStart can drop too much page cache for compressed 
 files
 -

 Key: CASSANDRA-8746
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8746
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Benedict
Assignee: Benedict
Priority: Trivial
 Fix For: 2.1.4






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8968) Cassandra cqlsh query return different result randomly

2015-03-13 Thread Jeff Liu (JIRA)
Jeff Liu created CASSANDRA-8968:
---

 Summary: Cassandra cqlsh query return different result randomly
 Key: CASSANDRA-8968
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8968
 Project: Cassandra
  Issue Type: Wish
 Environment: Cassandra 2.0
Reporter: Jeff Liu


Noticed that a select query in cqlsh returns different results randomly. It 
would be nice to get a consistent result when the same query is performed.

{noformat}

cqlsh> select * from cass_dc.cass_dc ;

 key | value
-+---
   c |   ccc
   b |   bbb


Tracing session: d20490f0-c9ac-11e4-a527-23b8e6fcc12b

 activity| timestamp| 
source   | source_elapsed
-+--+--+
  execute_cql3_query | 18:14:42,305 | 
54.92.168.12 |  0
 Parsing select * from cass_dc.cass_dc  LIMIT 1; | 18:14:42,306 | 
54.92.168.12 |855
 Preparing statement | 18:14:42,307 | 
54.92.168.12 |   1950
   Determining replicas to query | 18:14:42,307 | 
54.92.168.12 |   2101
 Message received from /54.92.168.12 | 18:14:42,308 | 
54.82.42.121 | 56
  Enqueuing request to /54.82.42.121 | 18:14:42,308 | 
54.92.168.12 |   2685
Sending message to /54.82.42.121 | 18:14:42,308 | 
54.92.168.12 |   2825
 Executing seq scan across 0 sstables for [min(-1), min(-1)] | 18:14:42,309 | 
54.82.42.121 |556
  Read 1 live and 0 tombstoned cells | 18:14:42,309 | 
54.82.42.121 |868
  Read 1 live and 0 tombstoned cells | 18:14:42,309 | 
54.82.42.121 |956
Scanned 2 rows and matched 2 | 18:14:42,309 | 
54.82.42.121 |989
 Enqueuing response to /54.92.168.12 | 18:14:42,309 | 
54.82.42.121 |   1007
Sending message to /54.92.168.12 | 18:14:42,310 | 
54.82.42.121 |   1287
 Message received from /54.82.42.121 | 18:14:42,319 | 
54.92.168.12 |  13656
  Processing response from /54.82.42.121 | 18:14:42,319 | 
54.92.168.12 |  14133
Request complete | 18:14:42,319 | 
54.92.168.12 |  14808

cqlsh> select * from cass_dc.cass_dc ;

 key | value
-+---
   a |   aaa
   c |   ccc
   b |   bbb


Tracing session: d4ecbcc0-c9ac-11e4-a527-23b8e6fcc12b

 activity| timestamp| 
source   | source_elapsed
-+--+--+
  execute_cql3_query | 18:14:47,180 | 
54.92.168.12 |  0
 Parsing select * from cass_dc.cass_dc  LIMIT 1; | 18:14:47,180 | 
54.92.168.12 | 81
 Preparing statement | 18:14:47,181 | 
54.92.168.12 |224
   Determining replicas to query | 18:14:47,181 | 
54.92.168.12 |383
 Executing seq scan across 0 sstables for [min(-1), min(-1)] | 18:14:47,184 | 
54.92.168.12 |   3611
  Read 1 live and 0 tombstoned cells | 18:14:47,184 | 
54.92.168.12 |   4073
  Read 1 live and 0 tombstoned cells | 18:14:47,185 | 
54.92.168.12 |   4239
  Read 1 live and 0 tombstoned cells | 18:14:47,185 | 
54.92.168.12 |   4559
Scanned 3 rows and matched 3 | 18:14:47,185 | 
54.92.168.12 |   4601
Request complete | 18:14:47,185 | 
54.92.168.12 |   5812

cqlsh> select * from cass_dc.cass_dc ;

 key | value
-+---
   a |   aaa
   c |   ccc
   b |   bbb


Tracing session: 10247f30-c9ad-11e4-a527-23b8e6fcc12b

 activity| timestamp| 
source   | source_elapsed
-+--+--+
  execute_cql3_query | 18:16:26,531 | 
54.92.168.12 |  0
 Parsing select * from cass_dc.cass_dc  LIMIT 1; | 18:16:26,531 | 
54.92.168.12 |116
 Preparing statement | 18:16:26,531 | 
54.92.168.12 |237
   Determining replicas to query | 18:16:26,531 | 
54.92.168.12 |386
 Executing seq scan across 0 sstables for [min(-1), 

[jira] [Commented] (CASSANDRA-8968) Cassandra cqlsh query return different result randomly

2015-03-13 Thread Jeff Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14360948#comment-14360948
 ] 

Jeff Liu commented on CASSANDRA-8968:
-

That works.  Thanks.

 Cassandra cqlsh query return different result randomly
 --

 Key: CASSANDRA-8968
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8968
 Project: Cassandra
  Issue Type: Wish
 Environment: Cassandra 2.0
Reporter: Jeff Liu

 Noticed that a select query in cqlsh returns different result randomly. It 
 would be nice to get consistent result when same query is performed.
 {noformat}
 cqlsh select * from cass_dc.cass_dc ;
  key | value
 -+---
c |   ccc
b |   bbb
 Tracing session: d20490f0-c9ac-11e4-a527-23b8e6fcc12b
  activity| timestamp| 
 source   | source_elapsed
 -+--+--+
   execute_cql3_query | 18:14:42,305 | 
 54.92.168.12 |  0
  Parsing select * from cass_dc.cass_dc  LIMIT 1; | 18:14:42,306 | 
 54.92.168.12 |855
  Preparing statement | 18:14:42,307 | 
 54.92.168.12 |   1950
Determining replicas to query | 18:14:42,307 | 
 54.92.168.12 |   2101
  Message received from /x.x.x.12 | 18:14:42,308 | 
 54.82.42.121 | 56
   Enqueuing request to /x.x.x.121 | 18:14:42,308 | 
 54.92.168.12 |   2685
 Sending message to /x.x.x.121 | 18:14:42,308 | 
 54.92.168.12 |   2825
  Executing seq scan across 0 sstables for [min(-1), min(-1)] | 18:14:42,309 | 
 54.82.42.121 |556
   Read 1 live and 0 tombstoned cells | 18:14:42,309 | 
 54.82.42.121 |868
   Read 1 live and 0 tombstoned cells | 18:14:42,309 | 
 54.82.42.121 |956
 Scanned 2 rows and matched 2 | 18:14:42,309 | 
 54.82.42.121 |989
  Enqueuing response to /x.x.x.12 | 18:14:42,309 | 
 54.82.42.121 |   1007
 Sending message to /x.x.x.12 | 18:14:42,310 | 
 54.82.42.121 |   1287
  Message received from /x.x.x.121 | 18:14:42,319 | 
 54.92.168.12 |  13656
   Processing response from /x.x.x.121 | 18:14:42,319 | 
 54.92.168.12 |  14133
 Request complete | 18:14:42,319 | 
 54.92.168.12 |  14808
 cqlsh select * from cass_dc.cass_dc ;
  key | value
 -+---
a |   aaa
c |   ccc
b |   bbb
 Tracing session: d4ecbcc0-c9ac-11e4-a527-23b8e6fcc12b
  activity| timestamp| 
 source   | source_elapsed
 -+--+--+
   execute_cql3_query | 18:14:47,180 | 
 54.92.168.12 |  0
  Parsing select * from cass_dc.cass_dc  LIMIT 1; | 18:14:47,180 | 
 54.92.168.12 | 81
  Preparing statement | 18:14:47,181 | 
 54.92.168.12 |224
Determining replicas to query | 18:14:47,181 | 
 54.92.168.12 |383
  Executing seq scan across 0 sstables for [min(-1), min(-1)] | 18:14:47,184 | 
 54.92.168.12 |   3611
   Read 1 live and 0 tombstoned cells | 18:14:47,184 | 
 54.92.168.12 |   4073
   Read 1 live and 0 tombstoned cells | 18:14:47,185 | 
 54.92.168.12 |   4239
   Read 1 live and 0 tombstoned cells | 18:14:47,185 | 
 54.92.168.12 |   4559
 Scanned 3 rows and matched 3 | 18:14:47,185 | 
 54.92.168.12 |   4601
 Request complete | 18:14:47,185 | 
 54.92.168.12 |   5812
 cqlsh select * from cass_dc.cass_dc ;
  key | value
 -+---
a |   aaa
c |   ccc
b |   bbb
 Tracing session: 10247f30-c9ad-11e4-a527-23b8e6fcc12b
  activity| timestamp| 
 source   | source_elapsed
 -+--+--+
   execute_cql3_query | 18:16:26,531 | 
 54.92.168.12 |  0
  Parsing select * from cass_dc.cass_dc  LIMIT 1; | 18:16:26,531 | 
 54.92.168.12 

[jira] [Commented] (CASSANDRA-8838) Resumable bootstrap streaming

2015-03-13 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14360691#comment-14360691
 ] 

Yuki Morishita commented on CASSANDRA-8838:
---

You are right, I need the reset code for the replace case as well. I updated the 
branch with the fix; this time I put the reset check right before starting bootstrap.

I tested manually with ccm and manual intervention.
I also ran dtest's {{replace_address_test.py}} and {{bootstrap_test.py}}, and 
both ran successfully on my local machine.

I think I can add some dtests about the feature, similar to what I've done in 
CASSANDRA-8942.

 Resumable bootstrap streaming
 -

 Key: CASSANDRA-8838
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8838
 Project: Cassandra
  Issue Type: Sub-task
Reporter: Yuki Morishita
Assignee: Yuki Morishita
Priority: Minor
  Labels: dense-storage
 Fix For: 3.0


 This allows the bootstrapping node to avoid re-streaming data it has already 
 received. The bootstrapping node records the received keyspace/ranges as each 
 stream session completes. If some sessions with other nodes fail, the bootstrap 
 fails as well, but on the next bootstrap attempt the already received 
 keyspace/ranges are skipped rather than streamed again.
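A minimal sketch of that bookkeeping (hypothetical helper, not the actual streaming code; ranges are treated here as opaque strings):

{code}
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// hypothetical sketch: remember which keyspace/ranges completed so a restarted
// bootstrap only requests what is still missing
class ReceivedRangeTracker
{
    private final Map<String, Set<String>> received = new HashMap<String, Set<String>>();

    // called when a stream session for a keyspace completes successfully
    void markReceived(String keyspace, Collection<String> ranges)
    {
        Set<String> done = received.get(keyspace);
        if (done == null)
        {
            done = new HashSet<String>();
            received.put(keyspace, done);
        }
        done.addAll(ranges);
    }

    // on re-bootstrap, request only the ranges that were not already received
    List<String> rangesToRequest(String keyspace, Collection<String> allRanges)
    {
        Set<String> done = received.get(keyspace);
        List<String> remaining = new ArrayList<String>();
        for (String range : allRanges)
            if (done == null || !done.contains(range))
                remaining.add(range);
        return remaining;
    }
}
{code}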



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-5310) New authentication module does not wok in multi datacenters in case of network outage

2015-03-13 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe reassigned CASSANDRA-5310:
--

Assignee: Sam Tunnicliffe  (was: Aleksey Yeschenko)

 New authentication module does not wok in multi datacenters in case of 
 network outage
 -

 Key: CASSANDRA-5310
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5310
 Project: Cassandra
  Issue Type: Improvement
Affects Versions: 1.2.2
 Environment: Ubuntu 12.04
 Cluster of 16 nodes in 2 datacenters (8 nodes in each datacenter)
Reporter: jal
Assignee: Sam Tunnicliffe
Priority: Minor
 Fix For: 1.2.3

 Attachments: auth_fix_consistency.patch


 With 1.2.2, I am using the new authentication backend PasswordAuthenticator 
 with the authorizer CassandraAuthorizer
 In case of network outage, we are no more able to connect to Cassandra.
 Here is the error message we get when I want to connect through cqlsh:
 Traceback (most recent call last):
   File ./cqlsh, line 2262, in <module>
 main(*read_options(sys.argv[1:], os.environ))
   File ./cqlsh, line 2248, in main
 display_float_precision=options.float_precision)
   File ./cqlsh, line 483, in __init__
 cql_version=cqlver, transport=transport)
 File ./../lib/cql-internal-only-1.4.0.zip/cql-1.4.0/cql/connection.py, line 
 143, in connect
   File ./../lib/cql-internal-only-1.4.0.zip/cql-1.4.0/cql/connection.py, 
 line 59, in __init__
   File ./../lib/cql-internal-only-1.4.0.zip/cql-1.4.0/cql/thrifteries.py, 
 line 157, in establish_connection
   File 
 ./../lib/cql-internal-only-1.4.0.zip/cql-1.4.0/cql/cassandra/Cassandra.py, 
 line 455, in login
   File 
 ./../lib/cql-internal-only-1.4.0.zip/cql-1.4.0/cql/cassandra/Cassandra.py, 
 line 476, in recv_login
 cql.cassandra.ttypes.AuthenticationException: 
 AuthenticationException(why='org.apache.cassandra.exceptions.UnavailableException:
  Cannot achieve consistency level QUORUM')



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-5310) New authentication module does not wok in multi datacenters in case of network outage

2015-03-13 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-5310:
---
Assignee: Aleksey Yeschenko  (was: Sam Tunnicliffe)

 New authentication module does not wok in multi datacenters in case of 
 network outage
 -

 Key: CASSANDRA-5310
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5310
 Project: Cassandra
  Issue Type: Improvement
Affects Versions: 1.2.2
 Environment: Ubuntu 12.04
 Cluster of 16 nodes in 2 datacenters (8 nodes in each datacenter)
Reporter: jal
Assignee: Aleksey Yeschenko
Priority: Minor
 Fix For: 1.2.3

 Attachments: auth_fix_consistency.patch


 With 1.2.2, I am using the new authentication backend PasswordAuthenticator 
 with the authorizer CassandraAuthorizer
 In case of network outage, we are no more able to connect to Cassandra.
 Here is the error message we get when I want to connect through cqlsh:
 Traceback (most recent call last):
   File ./cqlsh, line 2262, in <module>
 main(*read_options(sys.argv[1:], os.environ))
   File ./cqlsh, line 2248, in main
 display_float_precision=options.float_precision)
   File ./cqlsh, line 483, in __init__
 cql_version=cqlver, transport=transport)
 File ./../lib/cql-internal-only-1.4.0.zip/cql-1.4.0/cql/connection.py, line 
 143, in connect
   File ./../lib/cql-internal-only-1.4.0.zip/cql-1.4.0/cql/connection.py, 
 line 59, in __init__
   File ./../lib/cql-internal-only-1.4.0.zip/cql-1.4.0/cql/thrifteries.py, 
 line 157, in establish_connection
   File 
 ./../lib/cql-internal-only-1.4.0.zip/cql-1.4.0/cql/cassandra/Cassandra.py, 
 line 455, in login
   File 
 ./../lib/cql-internal-only-1.4.0.zip/cql-1.4.0/cql/cassandra/Cassandra.py, 
 line 476, in recv_login
 cql.cassandra.ttypes.AuthenticationException: 
 AuthenticationException(why='org.apache.cassandra.exceptions.UnavailableException:
  Cannot achieve consistency level QUORUM')



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8692) Coalesce intra-cluster network messages

2015-03-13 Thread Tupshin Harper (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14360966#comment-14360966
 ] 

Tupshin Harper commented on CASSANDRA-8692:
---

I commented on CASSANDRA-7032 that it seems like there might be a way to 
constrain vnode RDF (replication distribution factor) in the general scope of 
this ticket as well.

I feel like there are some very compelling availability arguments (in addition 
to these possible performance optimizations) in favor of being able to 
constrain how many other nodes (within a DC) a given vnode-enabled node 
actually replicates with. 

e.g. you could have 256 vnodes, but guarantee that those 256 would only 
replicate to 32 (out of possibly thousands) of other nodes.

 Coalesce intra-cluster network messages
 ---

 Key: CASSANDRA-8692
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8692
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Ariel Weisberg
Assignee: Ariel Weisberg
 Fix For: 2.1.4

 Attachments: batching-benchmark.png


 While researching CASSANDRA-8457 we found that it is effective and can be 
 done without introducing additional latency at low concurrency/throughput.
 The patch from that was used and found to be useful in a real life scenario 
 so I propose we implement this in 2.1 in addition to 3.0.
 The change set is a single file and is small enough to be reviewable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8969) Add indication in cassandra.yaml that rpc timeouts going too high will cause memory build up

2015-03-13 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-8969:

Summary: Add indication in cassandra.yaml that rpc timeouts going too high 
will cause memory build up  (was: Add indication in cassandra.yaml that rpc 
timeouts going to high will cause memory build up)

 Add indication in cassandra.yaml that rpc timeouts going too high will cause 
 memory build up
 

 Key: CASSANDRA-8969
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8969
 Project: Cassandra
  Issue Type: Improvement
  Components: Config
Reporter: Jeremy Hanna
Assignee: Jeremy Hanna

 It would be helpful to communicate that setting the rpc timeouts too high may 
 cause memory problems on the server as it can become overloaded and has to 
 retain the in flight requests in memory.  I'll get this done but just adding 
 the ticket as a placeholder for memory.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-8968) Cassandra cqlsh query return different result randomly

2015-03-13 Thread Jeff Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Liu resolved CASSANDRA-8968.
-
Resolution: Fixed

 Cassandra cqlsh query return different result randomly
 --

 Key: CASSANDRA-8968
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8968
 Project: Cassandra
  Issue Type: Wish
 Environment: Cassandra 2.0
Reporter: Jeff Liu

 Noticed that a select query in cqlsh returns different result randomly. It 
 would be nice to get consistent result when same query is performed.
 {noformat}
 cqlsh select * from cass_dc.cass_dc ;
  key | value
 -+---
c |   ccc
b |   bbb
 Tracing session: d20490f0-c9ac-11e4-a527-23b8e6fcc12b
  activity| timestamp| 
 source   | source_elapsed
 -+--+--+
   execute_cql3_query | 18:14:42,305 | 
 54.92.168.12 |  0
  Parsing select * from cass_dc.cass_dc  LIMIT 1; | 18:14:42,306 | 
 54.92.168.12 |855
  Preparing statement | 18:14:42,307 | 
 54.92.168.12 |   1950
Determining replicas to query | 18:14:42,307 | 
 54.92.168.12 |   2101
  Message received from /x.x.x.12 | 18:14:42,308 | 
 54.82.42.121 | 56
   Enqueuing request to /x.x.x.121 | 18:14:42,308 | 
 54.92.168.12 |   2685
 Sending message to /x.x.x.121 | 18:14:42,308 | 
 54.92.168.12 |   2825
  Executing seq scan across 0 sstables for [min(-1), min(-1)] | 18:14:42,309 | 
 54.82.42.121 |556
   Read 1 live and 0 tombstoned cells | 18:14:42,309 | 
 54.82.42.121 |868
   Read 1 live and 0 tombstoned cells | 18:14:42,309 | 
 54.82.42.121 |956
 Scanned 2 rows and matched 2 | 18:14:42,309 | 
 54.82.42.121 |989
  Enqueuing response to /x.x.x.12 | 18:14:42,309 | 
 54.82.42.121 |   1007
 Sending message to /x.x.x.12 | 18:14:42,310 | 
 54.82.42.121 |   1287
  Message received from /x.x.x.121 | 18:14:42,319 | 
 54.92.168.12 |  13656
   Processing response from /x.x.x.121 | 18:14:42,319 | 
 54.92.168.12 |  14133
 Request complete | 18:14:42,319 | 
 54.92.168.12 |  14808
 cqlsh select * from cass_dc.cass_dc ;
  key | value
 -+---
a |   aaa
c |   ccc
b |   bbb
 Tracing session: d4ecbcc0-c9ac-11e4-a527-23b8e6fcc12b
  activity| timestamp| 
 source   | source_elapsed
 -+--+--+
   execute_cql3_query | 18:14:47,180 | 
 54.92.168.12 |  0
  Parsing select * from cass_dc.cass_dc  LIMIT 1; | 18:14:47,180 | 
 54.92.168.12 | 81
  Preparing statement | 18:14:47,181 | 
 54.92.168.12 |224
Determining replicas to query | 18:14:47,181 | 
 54.92.168.12 |383
  Executing seq scan across 0 sstables for [min(-1), min(-1)] | 18:14:47,184 | 
 54.92.168.12 |   3611
   Read 1 live and 0 tombstoned cells | 18:14:47,184 | 
 54.92.168.12 |   4073
   Read 1 live and 0 tombstoned cells | 18:14:47,185 | 
 54.92.168.12 |   4239
   Read 1 live and 0 tombstoned cells | 18:14:47,185 | 
 54.92.168.12 |   4559
 Scanned 3 rows and matched 3 | 18:14:47,185 | 
 54.92.168.12 |   4601
 Request complete | 18:14:47,185 | 
 54.92.168.12 |   5812
 cqlsh select * from cass_dc.cass_dc ;
  key | value
 -+---
a |   aaa
c |   ccc
b |   bbb
 Tracing session: 10247f30-c9ad-11e4-a527-23b8e6fcc12b
  activity| timestamp| 
 source   | source_elapsed
 -+--+--+
   execute_cql3_query | 18:16:26,531 | 
 54.92.168.12 |  0
  Parsing select * from cass_dc.cass_dc  LIMIT 1; | 18:16:26,531 | 
 54.92.168.12 |116
 

[jira] [Updated] (CASSANDRA-8968) Cassandra cqlsh query return different result randomly

2015-03-13 Thread Jeff Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Liu updated CASSANDRA-8968:

Description: 
Noticed that a select query in cqlsh returns different result randomly. It 
would be nice to get consistent result when same query is performed.

{noformat}

cqlsh select * from cass_dc.cass_dc ;

 key | value
-+---
   c |   ccc
   b |   bbb


Tracing session: d20490f0-c9ac-11e4-a527-23b8e6fcc12b

 activity| timestamp| 
source   | source_elapsed
-+--+--+
  execute_cql3_query | 18:14:42,305 | 
54.92.168.12 |  0
 Parsing select * from cass_dc.cass_dc  LIMIT 1; | 18:14:42,306 | 
54.92.168.12 |855
 Preparing statement | 18:14:42,307 | 
54.92.168.12 |   1950
   Determining replicas to query | 18:14:42,307 | 
54.92.168.12 |   2101
 Message received from /x.x.x.12 | 18:14:42,308 | 
54.82.42.121 | 56
  Enqueuing request to /x.x.x.121 | 18:14:42,308 | 
54.92.168.12 |   2685
Sending message to /x.x.x.121 | 18:14:42,308 | 
54.92.168.12 |   2825
 Executing seq scan across 0 sstables for [min(-1), min(-1)] | 18:14:42,309 | 
54.82.42.121 |556
  Read 1 live and 0 tombstoned cells | 18:14:42,309 | 
54.82.42.121 |868
  Read 1 live and 0 tombstoned cells | 18:14:42,309 | 
54.82.42.121 |956
Scanned 2 rows and matched 2 | 18:14:42,309 | 
54.82.42.121 |989
 Enqueuing response to /x.x.x.12 | 18:14:42,309 | 
54.82.42.121 |   1007
Sending message to /x.x.x.12 | 18:14:42,310 | 
54.82.42.121 |   1287
 Message received from /x.x.x.121 | 18:14:42,319 | 
54.92.168.12 |  13656
  Processing response from /x.x.x.121 | 18:14:42,319 | 
54.92.168.12 |  14133
Request complete | 18:14:42,319 | 
54.92.168.12 |  14808

cqlsh select * from cass_dc.cass_dc ;

 key | value
-+---
   a |   aaa
   c |   ccc
   b |   bbb


Tracing session: d4ecbcc0-c9ac-11e4-a527-23b8e6fcc12b

 activity| timestamp| 
source   | source_elapsed
-+--+--+
  execute_cql3_query | 18:14:47,180 | 
54.92.168.12 |  0
 Parsing select * from cass_dc.cass_dc  LIMIT 1; | 18:14:47,180 | 
54.92.168.12 | 81
 Preparing statement | 18:14:47,181 | 
54.92.168.12 |224
   Determining replicas to query | 18:14:47,181 | 
54.92.168.12 |383
 Executing seq scan across 0 sstables for [min(-1), min(-1)] | 18:14:47,184 | 
54.92.168.12 |   3611
  Read 1 live and 0 tombstoned cells | 18:14:47,184 | 
54.92.168.12 |   4073
  Read 1 live and 0 tombstoned cells | 18:14:47,185 | 
54.92.168.12 |   4239
  Read 1 live and 0 tombstoned cells | 18:14:47,185 | 
54.92.168.12 |   4559
Scanned 3 rows and matched 3 | 18:14:47,185 | 
54.92.168.12 |   4601
Request complete | 18:14:47,185 | 
54.92.168.12 |   5812

cqlsh select * from cass_dc.cass_dc ;

 key | value
-+---
   a |   aaa
   c |   ccc
   b |   bbb


Tracing session: 10247f30-c9ad-11e4-a527-23b8e6fcc12b

 activity| timestamp| 
source   | source_elapsed
-+--+--+
  execute_cql3_query | 18:16:26,531 | 
54.92.168.12 |  0
 Parsing select * from cass_dc.cass_dc  LIMIT 1; | 18:16:26,531 | 
54.92.168.12 |116
 Preparing statement | 18:16:26,531 | 
54.92.168.12 |237
   Determining replicas to query | 18:16:26,531 | 
54.92.168.12 |386
 Executing seq scan across 0 sstables for [min(-1), min(-1)] | 18:16:26,532 | 
54.92.168.12 |   1280
  Read 1 live and 0 tombstoned cells | 18:16:26,533 | 
54.92.168.12 |   1447
  Read 

[jira] [Created] (CASSANDRA-8969) Add indication in cassandra.yaml that rpc timeouts going to high will cause memory build up

2015-03-13 Thread Jeremy Hanna (JIRA)
Jeremy Hanna created CASSANDRA-8969:
---

 Summary: Add indication in cassandra.yaml that rpc timeouts going 
to high will cause memory build up
 Key: CASSANDRA-8969
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8969
 Project: Cassandra
  Issue Type: Improvement
  Components: Config
Reporter: Jeremy Hanna
Assignee: Jeremy Hanna


It would be helpful to communicate that setting the rpc timeouts too high may 
cause memory problems on the server as it can become overloaded and has to 
retain the in flight requests in memory.  I'll get this done but just adding 
the ticket as a placeholder for memory.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8968) Cassandra cqlsh query return different result randomly

2015-03-13 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14360915#comment-14360915
 ] 

Brandon Williams commented on CASSANDRA-8968:
-

The default consistency level in cqlsh is ONE; see if increasing that helps.

 Cassandra cqlsh query return different result randomly
 --

 Key: CASSANDRA-8968
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8968
 Project: Cassandra
  Issue Type: Wish
 Environment: Cassandra 2.0
Reporter: Jeff Liu

 Noticed that a select query in cqlsh returns different result randomly. It 
 would be nice to get consistent result when same query is performed.
 {noformat}
 cqlsh select * from cass_dc.cass_dc ;
  key | value
 -+---
c |   ccc
b |   bbb
 Tracing session: d20490f0-c9ac-11e4-a527-23b8e6fcc12b
  activity| timestamp| 
 source   | source_elapsed
 -+--+--+
   execute_cql3_query | 18:14:42,305 | 
 54.92.168.12 |  0
  Parsing select * from cass_dc.cass_dc  LIMIT 1; | 18:14:42,306 | 
 54.92.168.12 |855
  Preparing statement | 18:14:42,307 | 
 54.92.168.12 |   1950
Determining replicas to query | 18:14:42,307 | 
 54.92.168.12 |   2101
  Message received from /x.x.x.12 | 18:14:42,308 | 
 54.82.42.121 | 56
   Enqueuing request to /x.x.x.121 | 18:14:42,308 | 
 54.92.168.12 |   2685
 Sending message to /x.x.x.121 | 18:14:42,308 | 
 54.92.168.12 |   2825
  Executing seq scan across 0 sstables for [min(-1), min(-1)] | 18:14:42,309 | 
 54.82.42.121 |556
   Read 1 live and 0 tombstoned cells | 18:14:42,309 | 
 54.82.42.121 |868
   Read 1 live and 0 tombstoned cells | 18:14:42,309 | 
 54.82.42.121 |956
 Scanned 2 rows and matched 2 | 18:14:42,309 | 
 54.82.42.121 |989
  Enqueuing response to /x.x.x.12 | 18:14:42,309 | 
 54.82.42.121 |   1007
 Sending message to /x.x.x.12 | 18:14:42,310 | 
 54.82.42.121 |   1287
  Message received from /x.x.x.121 | 18:14:42,319 | 
 54.92.168.12 |  13656
   Processing response from /x.x.x.121 | 18:14:42,319 | 
 54.92.168.12 |  14133
 Request complete | 18:14:42,319 | 
 54.92.168.12 |  14808
 cqlsh select * from cass_dc.cass_dc ;
  key | value
 -+---
a |   aaa
c |   ccc
b |   bbb
 Tracing session: d4ecbcc0-c9ac-11e4-a527-23b8e6fcc12b
  activity| timestamp| 
 source   | source_elapsed
 -+--+--+
   execute_cql3_query | 18:14:47,180 | 
 54.92.168.12 |  0
  Parsing select * from cass_dc.cass_dc  LIMIT 1; | 18:14:47,180 | 
 54.92.168.12 | 81
  Preparing statement | 18:14:47,181 | 
 54.92.168.12 |224
Determining replicas to query | 18:14:47,181 | 
 54.92.168.12 |383
  Executing seq scan across 0 sstables for [min(-1), min(-1)] | 18:14:47,184 | 
 54.92.168.12 |   3611
   Read 1 live and 0 tombstoned cells | 18:14:47,184 | 
 54.92.168.12 |   4073
   Read 1 live and 0 tombstoned cells | 18:14:47,185 | 
 54.92.168.12 |   4239
   Read 1 live and 0 tombstoned cells | 18:14:47,185 | 
 54.92.168.12 |   4559
 Scanned 3 rows and matched 3 | 18:14:47,185 | 
 54.92.168.12 |   4601
 Request complete | 18:14:47,185 | 
 54.92.168.12 |   5812
 cqlsh select * from cass_dc.cass_dc ;
  key | value
 -+---
a |   aaa
c |   ccc
b |   bbb
 Tracing session: 10247f30-c9ad-11e4-a527-23b8e6fcc12b
  activity| timestamp| 
 source   | source_elapsed
 -+--+--+
   execute_cql3_query | 18:16:26,531 | 
 54.92.168.12 |  0
  Parsing 

[jira] [Updated] (CASSANDRA-8934) COPY command has inherent 128KB field size limit

2015-03-13 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8934:
---
Fix Version/s: (was: 2.0.13)
   2.0.14

 COPY command has inherent 128KB field size limit
 

 Key: CASSANDRA-8934
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8934
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter:  Brian Hess
Assignee: Philip Thompson
  Labels: cqlsh
 Fix For: 2.1.4, 2.0.14


 In using the COPY command as follows:
 {{cqlsh -e COPY test.test1mb(pkey, ccol, data) FROM 
 'in/data1MB/data1MB_9.csv'}}
 the following error is thrown:
 {{stdin:1:field larger than field limit (131072)}}
 The data file contains a field that is greater than 128KB (it's more like 
 almost 1MB).
 A work-around (thanks to [~jjordan] and [~thobbs] is to modify the cqlsh 
 script and add the line
 {{csv.field_size_limit(10)}}
 anywhere after the line
 {{import csv}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8839) DatabaseDescriptor throws NPE when rpc_interface is used

2015-03-13 Thread Carl Yeksigian (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14361002#comment-14361002
 ] 

Carl Yeksigian commented on CASSANDRA-8839:
---

The changes to select the interface, as well as the test, look good.

For the IPv6 changes, I think it would make sense to use the option 
{{-Djava.net.preferIPv4Stack}} instead. I tried that with the unit tests, and 
they succeeded without having to worry about IPv4 and v6 addresses. There is 
[an issue|https://github.com/netty/netty/pull/3473] with Netty and that flag, 
but if we are specifying the address already, we probably won't hit it.

Also, when testing the class we get for {{InetAddress}}, I think it would be 
better to use {{instanceof}}.
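For example (standalone illustration, not part of the patch), the instanceof form reads:

{code}
import java.net.Inet4Address;
import java.net.InetAddress;

class AddressKind
{
    // instanceof also matches subclasses, unlike address.getClass() == Inet4Address.class
    static boolean isIpv4(InetAddress address)
    {
        return address instanceof Inet4Address;
    }
}
{code}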

 DatabaseDescriptor throws NPE when rpc_interface is used
 

 Key: CASSANDRA-8839
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8839
 Project: Cassandra
  Issue Type: Bug
  Components: Config
 Environment: 2.1.3
Reporter: Jan Kesten
Assignee: Ariel Weisberg
 Fix For: 2.1.4


 Copy from mail to dev mailinglist. 
 When using
 - listen_interface instead of listen_address
 - rpc_interface instead of rpc_address
 starting 2.1.3 throws an NPE:
 {code}
 ERROR [main] 2015-02-20 07:50:09,661 DatabaseDescriptor.java:144 - Fatal 
 error during configuration loading
 java.lang.NullPointerException: null
 at 
 org.apache.cassandra.config.DatabaseDescriptor.applyConfig(DatabaseDescriptor.java:411)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
 at 
 org.apache.cassandra.config.DatabaseDescriptor.<clinit>(DatabaseDescriptor.java:133)
  ~[apache-cassandra-2.1.3.jar:2.1.3]
 at 
 org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:110) 
 [apache-cassandra-2.1.3.jar:2.1.3]
 at 
 org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:465)
  [apache-cassandra-2.1.3.jar:2.1.3]
 at 
 org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:554) 
 [apache-cassandra-2.1.3.jar:2.1.3]
 {code}
 Occurs on debian package as well as in tar.gz distribution. 
 {code}
 /* Local IP, hostname or interface to bind RPC server to */
 if (conf.rpc_address != null && conf.rpc_interface != null)
 {
     throw new ConfigurationException("Set rpc_address OR rpc_interface, not both");
 }
 else if (conf.rpc_address != null)
 {
     try
     {
         rpcAddress = InetAddress.getByName(conf.rpc_address);
     }
     catch (UnknownHostException e)
     {
         throw new ConfigurationException("Unknown host in rpc_address " + conf.rpc_address);
     }
 }
 else if (conf.rpc_interface != null)
 {
     listenAddress = getNetworkInterfaceAddress(conf.rpc_interface, "rpc_interface");
 }
 else
 {
     rpcAddress = FBUtilities.getLocalAddress();
 }
 {code}
 I think that listenAddress in the second else block is an error. In my case 
 rpc_interface is eth0, so listenAddress gets set, and rpcAddress remains 
 unset. The result is NPE in line 411:
 {code}
 if(rpcAddress.isAnyLocalAddress())
 {code}
 After changing rpc_interface to rpc_address everything works as expected.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8449) Allow zero-copy reads again

2015-03-13 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14361107#comment-14361107
 ] 

T Jake Luciani commented on CASSANDRA-8449:
---

I've updated my branch with the OpOrder approach for netty and message service 
requests.  There are still edges like dropped messages and internal cql 
requests I need to cover.  As well as hints and batchlog replay (though those 
are compressed so shouldn't be an issue regardless).  

I've added a unit test using ByteMan http://byteman.jboss.org/ to inject pauses 
in the pipeline that cause the JVM to crash without the correct OpOrder code.

 Allow zero-copy reads again
 ---

 Key: CASSANDRA-8449
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8449
 Project: Cassandra
  Issue Type: Improvement
Reporter: T Jake Luciani
Assignee: T Jake Luciani
Priority: Minor
  Labels: performance
 Fix For: 3.0


 We disabled zero-copy reads in CASSANDRA-3179 due to in flight reads 
 accessing a ByteBuffer when the data was unmapped by compaction.  Currently 
 this code path is only used for uncompressed reads.
 The actual bytes are in fact copied to the client output buffers for both 
 netty and thrift before being sent over the wire, so the only issue really is 
 the time it takes to process the read internally.  
 This patch adds a slow network read test and changes the tidy() method to 
 actually delete a sstable once the readTimeout has elapsed giving plenty of 
 time to serialize the read.
 Removing this copy causes significantly less GC on the read path and improves 
 the tail latencies:
 http://cstar.datastax.com/graph?stats=c0c8ce16-7fea-11e4-959d-42010af0688f&metric=gc_count&operation=2_read&smoothing=1&show_aggregates=true&xmin=0&xmax=109.34&ymin=0&ymax=5.5
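A sketch of the deferred deletion described above (hypothetical names, not the actual tidy() code): the data file is removed only after the read timeout has elapsed, so a read that started before the sstable became obsolete has had time to serialize its response.

{code}
import java.io.File;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// hypothetical sketch: defer deletion of an obsolete sstable's data file by the
// read timeout instead of deleting it as soon as its last reference is released
class DeferredTidy
{
    private static final ScheduledExecutorService DELETER =
            Executors.newSingleThreadScheduledExecutor();

    static void scheduleDelete(final File dataFile, long readTimeoutMillis)
    {
        Runnable delete = new Runnable()
        {
            public void run()
            {
                // any read that began before the sstable was obsoleted has
                // either completed or timed out by now
                dataFile.delete();
            }
        };
        DELETER.schedule(delete, readTimeoutMillis, TimeUnit.MILLISECONDS);
    }
}
{code}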



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-6809) Compressed Commit Log

2015-03-13 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14360980#comment-14360980
 ] 

Benedict edited comment on CASSANDRA-6809 at 3/13/15 7:31 PM:
--

The idea of making it a hard limit instead of the concrete number is to fix 
it as something much larger than you would like it to be, but no larger than 
you really must expect it to go, so that it can scale gracefully with some 
blips and avoid those blips having severe negative repercussions. My 
expectation is that whatever strategy we use here we will also use for 
non-compressed once we migrate to manually managed memory buffers, and it seems 
to me always having deallocations trail utilisation by 1 (so the first to 
become unused we do not deallocate, but if another becomes unused we release 
one of the two) probably gives us pretty good behaviour without any extra 
tuning knobs


was (Author: benedict):
The idea of making it a hard limit instead of the concrete number is to fix 
it as something much larger than you would like it to be, but no larger than 
you really must expect it to go, so that it can scale gracefully with some 
blips and avoid those blips having severe negative repercussions. My 
expectation is that whatever strategy we use here we will also use for 
non-compressed once we migrate to manually managed memory buffers, and it seems 
to me always having deallocations trail utilisation by 1 (so the first to 
become unused we do not deallocate, but if another becomes unused we release 
one of the two)

 Compressed Commit Log
 -

 Key: CASSANDRA-6809
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6809
 Project: Cassandra
  Issue Type: Improvement
Reporter: Benedict
Assignee: Branimir Lambov
Priority: Minor
  Labels: docs-impacting, performance
 Fix For: 3.0

 Attachments: ComitLogStress.java, logtest.txt


 It seems an unnecessary oversight that we don't compress the commit log. 
 Doing so should improve throughput, but some care will need to be taken to 
 ensure we use as much of a segment as possible. I propose decoupling the 
 writing of the records from the segments. Basically write into a (queue of) 
 DirectByteBuffer, and have the sync thread compress, say, ~64K chunks every X 
 MB written to the CL (where X is ordinarily CLS size), and then pack as many 
 of the compressed chunks into a CLS as possible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8692) Coalesce intra-cluster network messages

2015-03-13 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14360976#comment-14360976
 ] 

Ariel Weisberg commented on CASSANDRA-8692:
---

Maybe a naive question. Why not just have 32 vnodes?

 Coalesce intra-cluster network messages
 ---

 Key: CASSANDRA-8692
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8692
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Ariel Weisberg
Assignee: Ariel Weisberg
 Fix For: 2.1.4

 Attachments: batching-benchmark.png


 While researching CASSANDRA-8457 we found that it is effective and can be 
 done without introducing additional latency at low concurrency/throughput.
 The patch from that was used and found to be useful in a real life scenario 
 so I propose we implement this in 2.1 in addition to 3.0.
 The change set is a single file and is small enough to be reviewable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-6809) Compressed Commit Log

2015-03-13 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14360980#comment-14360980
 ] 

Benedict commented on CASSANDRA-6809:
-

The idea of making it a hard limit instead of the concrete number is to fix 
it as something much larger than you would like it to be, but no larger than 
you really must expect it to go, so that it can scale gracefully with some 
blips and avoid those blips having severe negative repercussions. My 
expectation is that whatever strategy we use here we will also use for 
non-compressed once we migrate to manually managed memory buffers, and it seems 
to me always having deallocations trail utilisation by 1 (so the first to 
become unused we do not deallocate, but if another becomes unused we release 
one of the two)
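A minimal sketch of that policy (hypothetical pool, not the actual allocator): a returned buffer is kept as the single spare, and returning another while a spare is already held simply lets the extra one go.

{code}
import java.nio.ByteBuffer;
import java.util.concurrent.atomic.AtomicReference;

// hypothetical sketch: deallocation trails utilisation by one buffer
class TrailingBufferPool
{
    private final int bufferSize;
    private final AtomicReference<ByteBuffer> spare = new AtomicReference<ByteBuffer>();

    TrailingBufferPool(int bufferSize)
    {
        this.bufferSize = bufferSize;
    }

    ByteBuffer acquire()
    {
        ByteBuffer reused = spare.getAndSet(null);
        return reused != null ? reused : ByteBuffer.allocateDirect(bufferSize);
    }

    void release(ByteBuffer buffer)
    {
        buffer.clear();
        // keep exactly one unused buffer; a second unused buffer is dropped
        // (a manually managed pool would free it explicitly)
        spare.compareAndSet(null, buffer);
    }
}
{code}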

 Compressed Commit Log
 -

 Key: CASSANDRA-6809
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6809
 Project: Cassandra
  Issue Type: Improvement
Reporter: Benedict
Assignee: Branimir Lambov
Priority: Minor
  Labels: docs-impacting, performance
 Fix For: 3.0

 Attachments: ComitLogStress.java, logtest.txt


 It seems an unnecessary oversight that we don't compress the commit log. 
 Doing so should improve throughput, but some care will need to be taken to 
 ensure we use as much of a segment as possible. I propose decoupling the 
 writing of the records from the segments. Basically write into a (queue of) 
 DirectByteBuffer, and have the sync thread compress, say, ~64K chunks every X 
 MB written to the CL (where X is ordinarily CLS size), and then pack as many 
 of the compressed chunks into a CLS as possible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (CASSANDRA-8968) Cassandra cqlsh query return different result randomly

2015-03-13 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson reopened CASSANDRA-8968:


 Cassandra cqlsh query return different result randomly
 --

 Key: CASSANDRA-8968
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8968
 Project: Cassandra
  Issue Type: Wish
 Environment: Cassandra 2.0
Reporter: Jeff Liu

 Noticed that a select query in cqlsh returns different results randomly. It 
 would be nice to get consistent results when the same query is performed.
 {noformat}
 cqlsh select * from cass_dc.cass_dc ;
  key | value
 -+---
c |   ccc
b |   bbb
 Tracing session: d20490f0-c9ac-11e4-a527-23b8e6fcc12b
  activity| timestamp| 
 source   | source_elapsed
 -+--+--+
   execute_cql3_query | 18:14:42,305 | 
 54.92.168.12 |  0
  Parsing select * from cass_dc.cass_dc  LIMIT 1; | 18:14:42,306 | 
 54.92.168.12 |855
  Preparing statement | 18:14:42,307 | 
 54.92.168.12 |   1950
Determining replicas to query | 18:14:42,307 | 
 54.92.168.12 |   2101
  Message received from /x.x.x.12 | 18:14:42,308 | 
 54.82.42.121 | 56
   Enqueuing request to /x.x.x.121 | 18:14:42,308 | 
 54.92.168.12 |   2685
 Sending message to /x.x.x.121 | 18:14:42,308 | 
 54.92.168.12 |   2825
  Executing seq scan across 0 sstables for [min(-1), min(-1)] | 18:14:42,309 | 
 54.82.42.121 |556
   Read 1 live and 0 tombstoned cells | 18:14:42,309 | 
 54.82.42.121 |868
   Read 1 live and 0 tombstoned cells | 18:14:42,309 | 
 54.82.42.121 |956
 Scanned 2 rows and matched 2 | 18:14:42,309 | 
 54.82.42.121 |989
  Enqueuing response to /x.x.x.12 | 18:14:42,309 | 
 54.82.42.121 |   1007
 Sending message to /x.x.x.12 | 18:14:42,310 | 
 54.82.42.121 |   1287
  Message received from /x.x.x.121 | 18:14:42,319 | 
 54.92.168.12 |  13656
   Processing response from /x.x.x.121 | 18:14:42,319 | 
 54.92.168.12 |  14133
 Request complete | 18:14:42,319 | 
 54.92.168.12 |  14808
 cqlsh select * from cass_dc.cass_dc ;
  key | value
 -+---
a |   aaa
c |   ccc
b |   bbb
 Tracing session: d4ecbcc0-c9ac-11e4-a527-23b8e6fcc12b
  activity| timestamp| 
 source   | source_elapsed
 -+--+--+
   execute_cql3_query | 18:14:47,180 | 
 54.92.168.12 |  0
  Parsing select * from cass_dc.cass_dc  LIMIT 1; | 18:14:47,180 | 
 54.92.168.12 | 81
  Preparing statement | 18:14:47,181 | 
 54.92.168.12 |224
Determining replicas to query | 18:14:47,181 | 
 54.92.168.12 |383
  Executing seq scan across 0 sstables for [min(-1), min(-1)] | 18:14:47,184 | 
 54.92.168.12 |   3611
   Read 1 live and 0 tombstoned cells | 18:14:47,184 | 
 54.92.168.12 |   4073
   Read 1 live and 0 tombstoned cells | 18:14:47,185 | 
 54.92.168.12 |   4239
   Read 1 live and 0 tombstoned cells | 18:14:47,185 | 
 54.92.168.12 |   4559
 Scanned 3 rows and matched 3 | 18:14:47,185 | 
 54.92.168.12 |   4601
 Request complete | 18:14:47,185 | 
 54.92.168.12 |   5812
 cqlsh select * from cass_dc.cass_dc ;
  key | value
 -+---
a |   aaa
c |   ccc
b |   bbb
 Tracing session: 10247f30-c9ad-11e4-a527-23b8e6fcc12b
  activity| timestamp| 
 source   | source_elapsed
 -+--+--+
   execute_cql3_query | 18:16:26,531 | 
 54.92.168.12 |  0
  Parsing select * from cass_dc.cass_dc  LIMIT 1; | 18:16:26,531 | 
 54.92.168.12 |116
  

[jira] [Resolved] (CASSANDRA-8968) Cassandra cqlsh query return different result randomly

2015-03-13 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson resolved CASSANDRA-8968.

Resolution: Not a Problem

 Cassandra cqlsh query return different result randomly
 --

 Key: CASSANDRA-8968
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8968
 Project: Cassandra
  Issue Type: Wish
 Environment: Cassandra 2.0
Reporter: Jeff Liu

 Noticed that a select query in cqlsh returns different results randomly. It 
 would be nice to get consistent results when the same query is performed.
 {noformat}
 cqlsh select * from cass_dc.cass_dc ;
  key | value
 -+---
c |   ccc
b |   bbb
 Tracing session: d20490f0-c9ac-11e4-a527-23b8e6fcc12b
  activity| timestamp| 
 source   | source_elapsed
 -+--+--+
   execute_cql3_query | 18:14:42,305 | 
 54.92.168.12 |  0
  Parsing select * from cass_dc.cass_dc  LIMIT 1; | 18:14:42,306 | 
 54.92.168.12 |855
  Preparing statement | 18:14:42,307 | 
 54.92.168.12 |   1950
Determining replicas to query | 18:14:42,307 | 
 54.92.168.12 |   2101
  Message received from /x.x.x.12 | 18:14:42,308 | 
 54.82.42.121 | 56
   Enqueuing request to /x.x.x.121 | 18:14:42,308 | 
 54.92.168.12 |   2685
 Sending message to /x.x.x.121 | 18:14:42,308 | 
 54.92.168.12 |   2825
  Executing seq scan across 0 sstables for [min(-1), min(-1)] | 18:14:42,309 | 
 54.82.42.121 |556
   Read 1 live and 0 tombstoned cells | 18:14:42,309 | 
 54.82.42.121 |868
   Read 1 live and 0 tombstoned cells | 18:14:42,309 | 
 54.82.42.121 |956
 Scanned 2 rows and matched 2 | 18:14:42,309 | 
 54.82.42.121 |989
  Enqueuing response to /x.x.x.12 | 18:14:42,309 | 
 54.82.42.121 |   1007
 Sending message to /x.x.x.12 | 18:14:42,310 | 
 54.82.42.121 |   1287
  Message received from /x.x.x.121 | 18:14:42,319 | 
 54.92.168.12 |  13656
   Processing response from /x.x.x.121 | 18:14:42,319 | 
 54.92.168.12 |  14133
 Request complete | 18:14:42,319 | 
 54.92.168.12 |  14808
 cqlsh select * from cass_dc.cass_dc ;
  key | value
 -+---
a |   aaa
c |   ccc
b |   bbb
 Tracing session: d4ecbcc0-c9ac-11e4-a527-23b8e6fcc12b
  activity| timestamp| 
 source   | source_elapsed
 -+--+--+
   execute_cql3_query | 18:14:47,180 | 
 54.92.168.12 |  0
  Parsing select * from cass_dc.cass_dc  LIMIT 1; | 18:14:47,180 | 
 54.92.168.12 | 81
  Preparing statement | 18:14:47,181 | 
 54.92.168.12 |224
Determining replicas to query | 18:14:47,181 | 
 54.92.168.12 |383
  Executing seq scan across 0 sstables for [min(-1), min(-1)] | 18:14:47,184 | 
 54.92.168.12 |   3611
   Read 1 live and 0 tombstoned cells | 18:14:47,184 | 
 54.92.168.12 |   4073
   Read 1 live and 0 tombstoned cells | 18:14:47,185 | 
 54.92.168.12 |   4239
   Read 1 live and 0 tombstoned cells | 18:14:47,185 | 
 54.92.168.12 |   4559
 Scanned 3 rows and matched 3 | 18:14:47,185 | 
 54.92.168.12 |   4601
 Request complete | 18:14:47,185 | 
 54.92.168.12 |   5812
 cqlsh select * from cass_dc.cass_dc ;
  key | value
 -+---
a |   aaa
c |   ccc
b |   bbb
 Tracing session: 10247f30-c9ad-11e4-a527-23b8e6fcc12b
  activity| timestamp| 
 source   | source_elapsed
 -+--+--+
   execute_cql3_query | 18:16:26,531 | 
 54.92.168.12 |  0
  Parsing select * from cass_dc.cass_dc  LIMIT 1; | 18:16:26,531 | 
 54.92.168.12 |116
   

cassandra git commit: Merge 8722

2015-03-13 Thread brandonwilliams
Repository: cassandra
Updated Branches:
  refs/heads/trunk 5d6f9284f -> d919cc998


Merge 8722


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d919cc99
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d919cc99
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d919cc99

Branch: refs/heads/trunk
Commit: d919cc998e80eacec09ba374ee0b92248eb8bad1
Parents: 5d6f928
Author: Brandon Williams brandonwilli...@apache.org
Authored: Fri Mar 13 18:32:30 2015 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Fri Mar 13 18:32:30 2015 -0500

--
 .../cassandra/auth/AuthenticatedUser.java   |  5 +-
 .../apache/cassandra/auth/PermissionsCache.java | 69 
 .../cassandra/auth/PermissionsCacheMBean.java   | 31 +
 .../org/apache/cassandra/config/Config.java |  4 +-
 .../cassandra/config/DatabaseDescriptor.java| 10 +++
 5 files changed, 101 insertions(+), 18 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d919cc99/src/java/org/apache/cassandra/auth/AuthenticatedUser.java
--
diff --git a/src/java/org/apache/cassandra/auth/AuthenticatedUser.java 
b/src/java/org/apache/cassandra/auth/AuthenticatedUser.java
index ee62503..5e57308 100644
--- a/src/java/org/apache/cassandra/auth/AuthenticatedUser.java
+++ b/src/java/org/apache/cassandra/auth/AuthenticatedUser.java
@@ -38,10 +38,7 @@ public class AuthenticatedUser
 public static final AuthenticatedUser ANONYMOUS_USER = new 
AuthenticatedUser(ANONYMOUS_USERNAME);
 
 // User-level permissions cache.
-private static final PermissionsCache permissionsCache = new 
PermissionsCache(DatabaseDescriptor.getPermissionsValidity(),
-   
   DatabaseDescriptor.getPermissionsUpdateInterval(),
-   
   DatabaseDescriptor.getPermissionsCacheMaxEntries(),
-   
   DatabaseDescriptor.getAuthorizer());
+private static final PermissionsCache permissionsCache = new 
PermissionsCache(DatabaseDescriptor.getAuthorizer());
 
 private final String name;
 // primary Role of the logged in user

http://git-wip-us.apache.org/repos/asf/cassandra/blob/d919cc99/src/java/org/apache/cassandra/auth/PermissionsCache.java
--
diff --git a/src/java/org/apache/cassandra/auth/PermissionsCache.java 
b/src/java/org/apache/cassandra/auth/PermissionsCache.java
index 9e0dfa9..bc96d82 100644
--- a/src/java/org/apache/cassandra/auth/PermissionsCache.java
+++ b/src/java/org/apache/cassandra/auth/PermissionsCache.java
@@ -17,9 +17,11 @@
  */
 package org.apache.cassandra.auth;
 
+import java.lang.management.ManagementFactory;
 import java.util.Set;
 import java.util.concurrent.*;
 
+import org.apache.cassandra.config.DatabaseDescriptor;
 import com.google.common.cache.CacheBuilder;
 import com.google.common.cache.CacheLoader;
 import com.google.common.cache.LoadingCache;
@@ -31,19 +33,33 @@ import org.slf4j.LoggerFactory;
 import org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor;
 import org.apache.cassandra.utils.Pair;
 
-public class PermissionsCache
+import javax.management.MBeanServer;
+import javax.management.ObjectName;
+
+public class PermissionsCache implements PermissionsCacheMBean
 {
 private static final Logger logger = 
LoggerFactory.getLogger(PermissionsCache.class);
 
+private final String MBEAN_NAME = "org.apache.cassandra.auth:type=PermissionsCache";
+
 private final ThreadPoolExecutor cacheRefreshExecutor = new DebuggableThreadPoolExecutor("PermissionsCacheRefresh",
   Thread.NORM_PRIORITY);
 private final IAuthorizer authorizer;
-private final LoadingCache<Pair<AuthenticatedUser, IResource>, Set<Permission>> cache;
+private volatile LoadingCache<Pair<AuthenticatedUser, IResource>, Set<Permission>> cache;
 
-public PermissionsCache(int validityPeriod, int updateInterval, int 
maxEntries, IAuthorizer authorizer)
+public PermissionsCache(IAuthorizer authorizer)
 {
 this.authorizer = authorizer;
-this.cache = initCache(validityPeriod, updateInterval, maxEntries);
+this.cache = initCache(null);
+try
+{
+MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
+mbs.registerMBean(this, new ObjectName(MBEAN_NAME));
+}
+catch (Exception e)
+{
+throw new RuntimeException(e);
+}
 }
 
 public 
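
(For readers following along: the diff above is cut off, but a LoadingCache with a validity period, refresh interval and entry cap is conventionally assembled roughly as in the sketch below. This only illustrates the Guava API, not the patch's actual initCache; the parameter names are assumptions taken from the old constructor signature.)

{code}
// Illustrative only: a LoadingCache wired roughly the way the old constructor
// parameters (validityPeriod, updateInterval, maxEntries) suggest.
import java.util.concurrent.TimeUnit;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

public class CacheSketch
{
    static LoadingCache<String, Integer> build(int validityMs, int updateMs, int maxEntries)
    {
        return CacheBuilder.newBuilder()
                           .expireAfterWrite(validityMs, TimeUnit.MILLISECONDS)  // hard validity
                           .refreshAfterWrite(updateMs, TimeUnit.MILLISECONDS)   // async refresh window
                           .maximumSize(maxEntries)
                           .build(new CacheLoader<String, Integer>()
                           {
                               public Integer load(String key)
                               {
                                   return lookup(key); // hit the backing store / authorizer
                               }
                           });
    }

    static Integer lookup(String key) { return key.length(); } // stand-in for the real loader
}
{code}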

[jira] [Commented] (CASSANDRA-8099) Refactor and modernize the storage engine

2015-03-13 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14361275#comment-14361275
 ] 

Benedict commented on CASSANDRA-8099:
-

Like I said, it's a shame; I lament not having longer to criticise the less 
optimal decisions. That's not at all to suggest they will cumulatively sabotage 
this patch to worse than the status quo. But the bar for improvement is much 
higher once a round of changes goes in (not least because of the effort of 
maintaining compatibility each time, but also because it has to be justified 
afresh, and be worth the risk, argumentation, redevelopment, etc.), and so we 
will find ourselves settling more readily than had we considered our options 
more carefully up front, especially when there are so many aspects to discuss. 
I don't think there is much to be done about it now, though, given the time 
constraints, and we will simply have to do our best.

Anyway, I'll try to properly digest the patch over the next week or so, so I 
can give some actual concrete feedback. On the whole I _do_ think it is a huge 
step forward (well, perhaps not the naming :)). I just wish we weren't rushing 
this part after waiting so long for it, and that we had at least discussed some 
of the more concrete aspects of the design in advance.

The concern I have about the scope being too large to vet effectively is 
somewhat uncorrelated, but I don't have a good answer for that either. My 
experience is that review's capacity for finding problems doesn't scale 
linearly with the scope and complexity of a patch, and I don't think we've ever 
had a patch as large as this (it's basically a whole version jump on its own). 
Of course, if you're planning to break 3.0 just to make me feel better about 
breaking 2.1, I'm cool with that :)

 Refactor and modernize the storage engine
 -

 Key: CASSANDRA-8099
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8099
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
 Fix For: 3.0

 Attachments: 8099-nit


 The current storage engine (which for this ticket I'll loosely define as the 
 code implementing the read/write path) is suffering from old age. One of the 
 main problems is that the only structure it deals with is the cell, which 
 completely ignores the higher-level CQL structure that groups cells into 
 (CQL) rows.
 This leads to many inefficiencies, like the fact that during a read we have 
 to group cells multiple times (to count on replica, then to count on the 
 coordinator, then to produce the CQL resultset) because we forget about the 
 grouping right away each time (so lots of useless cell names comparisons in 
 particular). But outside inefficiencies, having to manually recreate the CQL 
 structure every time we need it for something is hindering new features and 
 makes the code more complex than it should be.
 Said storage engine also has tons of technical debt. To pick an example, the 
 fact that during range queries we update {{SliceQueryFilter.count}} is pretty 
 hacky and error prone. Or the overly complex ways {{AbstractQueryPager}} has 
 to go into to simply remove the last query result.
 So I want to bite the bullet and modernize this storage engine. I propose to 
 do 2 main things:
 # Make the storage engine more aware of the CQL structure. In practice, 
 instead of having partitions be a simple iterable map of cells, it should be 
 an iterable list of row (each being itself composed of per-column cells, 
 though obviously not exactly the same kind of cell we have today).
 # Make the engine more iterative. What I mean here is that in the read path, 
 we end up reading all cells in memory (we put them in a ColumnFamily object), 
 but there is really no reason to. If instead we were working with iterators 
 all the way through, we could get to a point where we're basically 
 transferring data from disk to the network, and we should be able to reduce 
 GC substantially.
 Please note that such a refactor should provide some performance improvements 
 right off the bat, but that's not its primary goal either. Its primary goal is 
 to simplify the storage engine and add abstractions that are better suited to 
 further optimizations.
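
To make point 2 above concrete, a toy sketch of an iterator-shaped partition abstraction follows; these interfaces are illustrative only, not the actual 8099 API:

{code}
// Toy sketch only: a partition exposed as an iterator of CQL rows, so data can
// stream from disk to the network without materialising a ColumnFamily.
import java.util.Iterator;

interface Cell { String column(); Object value(); }

interface Row
{
    Object clusteringKey();
    Iterable<Cell> cells();
}

interface PartitionIterator extends Iterator<Row>, AutoCloseable
{
    Object partitionKey();
    void close();   // release the underlying sstable/file resources
}

final class Pipelines
{
    // A consumer can merge, filter and serialise rows one at a time.
    static void writeToNetwork(PartitionIterator partition)
    {
        while (partition.hasNext())
        {
            Row row = partition.next();
            // serialise 'row' straight to the outgoing buffer here
        }
        partition.close();
    }
}
{code}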



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8099) Refactor and modernize the storage engine

2015-03-13 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14361197#comment-14361197
 ] 

Sylvain Lebresne commented on CASSANDRA-8099:
-

I've just rebased and (force) pushed the current version of the patch to [the 
usual branch|https://github.com/pcmanus/cassandra/tree/8099_engine_refactor].  
It still doesn't handle thrift and misses backward compatibility code for the 
internal messages (and I'll start working on those) but it's basically complete 
otherwise. It passes all the CQL tests (unit and dtests) we have in particular. 
 Also seems to be passing other dtests (that don't use thrift) but I'll admit I 
haven't had the patience to run them all locally, and jenkins seems to be in a 
bad mood recently, so a couple might require attention but that's likely minor. 
 Also, I haven't taken the time to upgrade most of our unit tests and this will 
be done next with the help of some others, but hopefully the CQL tests and 
dtests exercise the changed code enough that there shouldn't be major surprises.

Overall the missing parts are sufficiently isolated that I think initial review 
can be started. I've actually written 
[here|https://github.com/pcmanus/cassandra/blob/8099_engine_refactor/guide_8099.md]
 some kind of overview/guide for the sake of making diving into the patch 
easier.  I'll be happy to update it if there is something missing that would 
help.


 Refactor and modernize the storage engine
 -

 Key: CASSANDRA-8099
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8099
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
 Fix For: 3.0

 Attachments: 8099-nit


 The current storage engine (which for this ticket I'll loosely define as the 
 code implementing the read/write path) is suffering from old age. One of the 
 main problems is that the only structure it deals with is the cell, which 
 completely ignores the higher-level CQL structure that groups cells into 
 (CQL) rows.
 This leads to many inefficiencies, like the fact that during a read we have 
 to group cells multiple times (to count on replica, then to count on the 
 coordinator, then to produce the CQL resultset) because we forget about the 
 grouping right away each time (so lots of useless cell names comparisons in 
 particular). But outside inefficiencies, having to manually recreate the CQL 
 structure every time we need it for something is hindering new features and 
 makes the code more complex than it should be.
 Said storage engine also has tons of technical debt. To pick an example, the 
 fact that during range queries we update {{SliceQueryFilter.count}} is pretty 
 hacky and error prone. Or the overly complex ways {{AbstractQueryPager}} has 
 to go into to simply remove the last query result.
 So I want to bite the bullet and modernize this storage engine. I propose to 
 do 2 main things:
 # Make the storage engine more aware of the CQL structure. In practice, 
 instead of having partitions be a simple iterable map of cells, it should be 
 an iterable list of row (each being itself composed of per-column cells, 
 though obviously not exactly the same kind of cell we have today).
 # Make the engine more iterative. What I mean here is that in the read path, 
 we end up reading all cells in memory (we put them in a ColumnFamily object), 
 but there is really no reason to. If instead we were working with iterators 
 all the way through, we could get to a point where we're basically 
 transferring data from disk to the network, and we should be able to reduce 
 GC substantially.
 Please note that such a refactor should provide some performance improvements 
 right off the bat, but that's not its primary goal either. Its primary goal is 
 to simplify the storage engine and add abstractions that are better suited to 
 further optimizations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8099) Refactor and modernize the storage engine

2015-03-13 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14361195#comment-14361195
 ] 

Sylvain Lebresne commented on CASSANDRA-8099:
-

bq. I think it's a shame this patch wasn't attempted at least a little more 
incrementally.

I certainly understand the criticism, this is definitively not as incremental 
as it should be. My lame defence is that since this structurally changes the 
main abstraction used by the storage engine, it quickly trickles down to 
everything else, so that I just wasn't sure how to attack this more 
incrementally in practice. For the serialization formats, I could indeed have 
stuck to serializing to the old format, but given the mismatch between the old 
format and the new abstractions, it was actually simpler to just write in a 
meaningful format right away (it allowed me to get something working faster).  
And since the new serialization format details are fairly well encapsulated 
(mostly in {{AtomSerializer.java}}), I'll admit it didn't feel like a huge deal 
overall. But in any case, I probably haven't tried hard enough and/or I'm not 
smart enough to have figured out how to make that happen more incrementally and 
for that, I apologize.

bq. I'm also worried I'm finding myself saying it's too close to release to 
question this decision

I agree that not questioning a decision that you think is worth questioning 
should be avoided, but I also don't think that this needs to be the case. If 
you think a decision makes things worse than they are in current trunk, then by 
all means, let's bring it up. If enough such concerns are voiced to make us 
think this patch won't be a net improvement over the status quo and there is no 
time to address those concerns, then I'll be the first to suggest that, as sad 
as that would make me, we should consider pushing it after 3.0 (but I do have 
the weakness to think that the patch is a net improvement).

Now, I don't pretend that every choice made here is absolutely optimal (I'm 
afraid I'm not that smart), so there will be things that can be improved (and 
maybe some will require subsequent changes). But as long as something doesn't 
make things worse than they currently are, I'd suggest it's probably ok to just 
create tickets for those improvements. After all, this isn't meant at all to be 
the definitive version of Cassandra code, it just pretends to be cleaner ground 
to improve upon than we currently have.

Don't get me wrong, I'm not trying to say that such a big patch is ideal, it's 
not. I just didn't figure out how to do better.

 Refactor and modernize the storage engine
 -

 Key: CASSANDRA-8099
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8099
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
 Fix For: 3.0

 Attachments: 8099-nit


 The current storage engine (which for this ticket I'll loosely define as the 
 code implementing the read/write path) is suffering from old age. One of the 
 main problems is that the only structure it deals with is the cell, which 
 completely ignores the higher-level CQL structure that groups cells into 
 (CQL) rows.
 This leads to many inefficiencies, like the fact that during a read we have 
 to group cells multiple times (to count on replica, then to count on the 
 coordinator, then to produce the CQL resultset) because we forget about the 
 grouping right away each time (so lots of useless cell names comparisons in 
 particular). But outside inefficiencies, having to manually recreate the CQL 
 structure every time we need it for something is hindering new features and 
 makes the code more complex than it should be.
 Said storage engine also has tons of technical debt. To pick an example, the 
 fact that during range queries we update {{SliceQueryFilter.count}} is pretty 
 hacky and error prone. Or the overly complex ways {{AbstractQueryPager}} has 
 to go into to simply remove the last query result.
 So I want to bite the bullet and modernize this storage engine. I propose to 
 do 2 main things:
 # Make the storage engine more aware of the CQL structure. In practice, 
 instead of having partitions be a simple iterable map of cells, it should be 
 an iterable list of row (each being itself composed of per-column cells, 
 though obviously not exactly the same kind of cell we have today).
 # Make the engine more iterative. What I mean here is that in the read path, 
 we end up reading all cells in memory (we put them in a ColumnFamily object), 
 but there is really no reason to. If instead we were working with iterators 
 all the way through, we could get to a point where we're basically 
 transferring data from disk to the network, and we should be able to reduce 
 GC 

[jira] [Commented] (CASSANDRA-8961) Data rewrite case causes almost non-functional compaction

2015-03-13 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14359995#comment-14359995
 ] 

Aleksey Yeschenko commented on CASSANDRA-8961:
--

CASSANDRA-8099 will make that query not use range tombstones, but you'll have 
to wait for 3.0 to get that.

 Data rewrite case causes almost non-functional compaction
 -

 Key: CASSANDRA-8961
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8961
 Project: Cassandra
  Issue Type: Bug
 Environment: Centos 6.6, Cassandra 2.0.12 (Also seen in Cassandra 2.1)
Reporter: Dan Kinder
Priority: Minor

 There seems to be a bug of some kind where compaction grinds to a halt in 
 this use case: from time to time we have a set of rows we need to migrate, 
 changing their primary key by deleting the row and inserting a new row with 
 the same partition key and different cluster key. The python script below 
 demonstrates this; it takes a bit of time to run (didn't try to optimize it) 
 but when it's done it will be trying to compact a few hundred megs of data 
 for a long time... on the order of days, or it will never finish.
 Not verified by this sandboxed experiment but it seems that compression 
 settings do not matter and that this seems to happen to STCS as well, not 
 just LCS. I am still testing if other patterns cause this terrible compaction 
 performance, like deleting all rows then inserting or vice versa.
 Even if it isn't a bug per se, is there a way to fix or work around this 
 behavior?
 {code}
 import string
 import random
 from cassandra.cluster import Cluster
 cluster = Cluster(['localhost'])
 db = cluster.connect('walker')
 db.execute("DROP KEYSPACE IF EXISTS trial")
 db.execute("""CREATE KEYSPACE trial
               WITH REPLICATION = { 'class': 'SimpleStrategy', 'replication_factor': 1 }""")
 db.execute("""CREATE TABLE trial.tbl (
                 pk text,
                 data text,
                 PRIMARY KEY(pk, data)
               ) WITH compaction = { 'class' : 'LeveledCompactionStrategy' }
                 AND compression = {'sstable_compression': ''}""")
 # Number of rows to insert and move
 n = 20

 # Insert n rows with the same partition key, 1KB of unique data in cluster key
 for i in range(n):
     db.execute("INSERT INTO trial.tbl (pk, data) VALUES ('thepk', %s)",
                [str(i).zfill(1024)])
 # Update those n rows, deleting each and replacing with a very similar row
 for i in range(n):
     val = str(i).zfill(1024)
     db.execute("DELETE FROM trial.tbl WHERE pk = 'thepk' AND data = %s", [val])
     db.execute("INSERT INTO trial.tbl (pk, data) VALUES ('thepk', %s), ["1" + val])
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8833) Stop opening compaction results early

2015-03-13 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14360030#comment-14360030
 ] 

Marcus Eriksson commented on CASSANDRA-8833:


[~thobbs] do we have any statistics on how much we gain in real life by the 
redistribution?

 Stop opening compaction results early
 -

 Key: CASSANDRA-8833
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8833
 Project: Cassandra
  Issue Type: Improvement
Reporter: Marcus Eriksson
 Fix For: 2.1.4


 We should simplify the code base by not doing early opening of compaction 
 results. It makes it very hard to reason about sstable life cycles since they 
 can be in many different states, opened early, starts moved, shadowed, 
 final, instead of as before, basically just one (tmp files are not really 
 'live' yet so I don't count those). The ref counting of shared resources 
 between sstables in these different states is also hard to reason about. This 
 has caused quite a few issues since we released 2.1
 I think it all boils down to a performance vs code complexity issue, is 
 opening compaction results early really 'worth it' wrt the performance gain? 
 The results in CASSANDRA-6916 sure look like the benefits are big enough, but 
 the difference should not be as big for people on SSDs (which most people who 
 care about latencies are)
 WDYT [~benedict] [~jbellis] [~iamaleksey] [~JoshuaMcKenzie]?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8900) AssertionError when binding nested collection in a DELETE

2015-03-13 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14360130#comment-14360130
 ] 

Benjamin Lerer commented on CASSANDRA-8900:
---

Hi Stefania,

The {{@Test}} annotations are missing on your unit test methods.
When I run them, {{testFrozenListInMap}}, {{testFrozenSetInMap}} and 
{{testFrozenSetInSet}} are failing.
I had a quick look at the first two and it looks like it is a problem with 
the tests themselves. Be careful with {{UPDATE}} statements: they behave in 
reality like an {{UPSERT}} (i.e. if the row does not exist Cassandra will 
insert it with the values specified in the statement).
I did not look at the last failing test.

Otherwise I agree with you, you can remove the DTest.
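
For illustration, a minimal Java-driver snippet showing the upsert behaviour mentioned above; the keyspace and table names are made up:

{code}
// Illustrative only: UPDATE creates the row if it does not exist, so a test
// that only runs an UPDATE cannot prove the row was there beforehand.
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class UpsertDemo
{
    public static void main(String[] args)
    {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect();
        session.execute("CREATE KEYSPACE IF NOT EXISTS demo WITH replication = " +
                        "{'class': 'SimpleStrategy', 'replication_factor': 1}");
        session.execute("CREATE TABLE IF NOT EXISTS demo.t (k int PRIMARY KEY, v int)");
        // No prior INSERT for k = 42: the UPDATE below still creates the row.
        session.execute("UPDATE demo.t SET v = 1 WHERE k = 42");
        System.out.println(session.execute("SELECT * FROM demo.t WHERE k = 42").one());
        cluster.close();
    }
}
{code}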

 AssertionError when binding nested collection in a DELETE
 -

 Key: CASSANDRA-8900
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8900
 Project: Cassandra
  Issue Type: Bug
Reporter: Olivier Michallat
Assignee: Stefania
Priority: Minor
 Fix For: 2.1.4


 Running this with the Java driver:
 {code}
 session.execute("create table if not exists foo2(k int primary key, m map<frozen<list<int>>, int>);");
 PreparedStatement pst = session.prepare("delete m[?] from foo2 where k = 1");
 session.execute(pst.bind(ImmutableList.of(1)));
 {code}
 Produces a server error. Server-side stack trace:
 {code}
 ERROR [SharedPool-Worker-4] 2015-03-03 13:33:24,740 Message.java:538 - 
 Unexpected exception during request; channel = [id: 0xf9e92e61, 
 /127.0.0.1:58163 => /127.0.0.1:9042]
 java.lang.AssertionError: null
 at 
 org.apache.cassandra.cql3.Maps$DiscarderByKey.execute(Maps.java:381) 
 ~[main/:na]
 at 
 org.apache.cassandra.cql3.statements.DeleteStatement.addUpdateForKey(DeleteStatement.java:85)
  ~[main/:na]
 at 
 org.apache.cassandra.cql3.statements.ModificationStatement.getMutations(ModificationStatement.java:654)
  ~[main/:na]
 at 
 org.apache.cassandra.cql3.statements.ModificationStatement.executeWithoutCondition(ModificationStatement.java:487)
  ~[main/:na]
 at 
 org.apache.cassandra.cql3.statements.ModificationStatement.execute(ModificationStatement.java:473)
  ~[main/:na]
 at 
 org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:238)
  ~[main/:na]
 at 
 org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:493)
  ~[main/:na]
 at 
 org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:134)
  ~[main/:na]
 at 
 org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:439)
  [main/:na]
 at 
 org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:335)
  [main/:na]
 at 
 io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
  [netty-all-4.0.23.Final.jar:4.0.23.Final]
 at 
 io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
  [netty-all-4.0.23.Final.jar:4.0.23.Final]
 at 
 io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
  [netty-all-4.0.23.Final.jar:4.0.23.Final]
 at 
 io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324)
  [netty-all-4.0.23.Final.jar:4.0.23.Final]
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 [na:1.7.0_60]
 at 
 org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
  [main/:na]
 at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
 [main/:na]
 at java.lang.Thread.run(Thread.java:745) [na:1.7.0_60]
 {code}
 A simple statement (i.e. QUERY message with values) produces the same result:
 {code}
 session.execute("delete m[?] from foo2 where k = 1", ImmutableList.of(1));
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8964) SSTable count rises during compactions and max open files exceeded

2015-03-13 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14360195#comment-14360195
 ] 

Benedict commented on CASSANDRA-8964:
-

Were there any other exceptions in the log? It looks to me like it may be a 
problem with cleanup of the compaction writer, that is fixed in 2.1.3 to my 
knowledge, but really 2.1.4 will be the correct solution if that's the case, 
and should be coming very soon. 

If not, it would seem like some kind of major compaction was happening and a 
single compaction workload was writing a very large number of files. In this 
case I would suggest raising your max file handle count, since you seem to have 
it configured very low anyway. 

It could be viewed that having multiple instances of the directory file handle 
open is itself a bug. But a minor one. I'll file a ticket separately for that.
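
(If the limit does need raising, the usual knob is the nofile entry in /etc/security/limits.conf for the user running Cassandra; the values below are examples only, not a recommendation:)

{noformat}
# /etc/security/limits.conf -- example values only
cassandra  soft  nofile  100000
cassandra  hard  nofile  100000
# verify the effective limit from the Cassandra user's shell with: ulimit -n
{noformat}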

 SSTable count rises during compactions and max open files exceeded
 --

 Key: CASSANDRA-8964
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8964
 Project: Cassandra
  Issue Type: Bug
 Environment: Apache Cassandra 2.1.2
 Centos 6
 AWS EC2
 i2.2xlarge
Reporter: Anthony Fisk
Priority: Critical
 Attachments: lsof_with_tmp.txt, lsof_without_tmp.txt, 
 nodetool_cfstats.zip


 LCS compaction was not able to keep up with the prolonged insert load on one 
 of our tables called log, resulting in 2,185 SSTables for that table and 
 1,779 pending compactions all together during a test we were running.
 We stopped our load, unthrottled compaction throughput, increased the 
 concurrent compactors from 2 to 8, and let it compact the SSTables.
 All was going well until the SSTable count for our log table got 
 down to around 97, then began rising again until it had reached 758 SSTables 
 1.5 hours later... (we've been recording the cfstats output every half hour, 
 [attached|^nodetool_cfstats.zip])
 Eventually we exceeded the number of open files:
 {code}
 ERROR [MemtableFlushWriter:286] 2015-03-12 13:44:36,748 
 CassandraDaemon.java:153 - Exception in thread 
 Thread[MemtableFlushWriter:286,5,main]
 java.lang.RuntimeException: java.io.FileNotFoundException: 
 /mnt/cassandra/data/system/compactions_in_progress-55080ab05d9c388690a4acb25fe1f77b/system-compactions_in_progress-tmp-ka-6618-Index.db
  (Too many open files)
 at 
 org.apache.cassandra.io.util.SequentialWriter.<init>(SequentialWriter.java:75)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.io.util.SequentialWriter.open(SequentialWriter.java:104) 
 ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.io.util.SequentialWriter.open(SequentialWriter.java:99) 
 ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.io.sstable.SSTableWriter$IndexWriter.<init>(SSTableWriter.java:552)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.io.sstable.SSTableWriter.<init>(SSTableWriter.java:134) 
 ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.db.Memtable$FlushRunnable.createFlushWriter(Memtable.java:390)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.db.Memtable$FlushRunnable.writeSortedContents(Memtable.java:329)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.db.Memtable$FlushRunnable.runWith(Memtable.java:313) 
 ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
 ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297)
  ~[guava-16.0.jar:na]
 at 
 org.apache.cassandra.db.ColumnFamilyStore$Flush.run(ColumnFamilyStore.java:1037)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_51]
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  ~[na:1.7.0_51]
 at java.lang.Thread.run(Thread.java:744) ~[na:1.7.0_51]
 Caused by: java.io.FileNotFoundException: 
 /mnt/cassandra/data/system/compactions_in_progress-55080ab05d9c388690a4acb25fe1f77b/system-compactions_in_progress-tmp-ka-6618-Index.db
  (Too many open files)
 at java.io.RandomAccessFile.open(Native Method) ~[na:1.7.0_51]
 at java.io.RandomAccessFile.<init>(RandomAccessFile.java:241) 
 ~[na:1.7.0_51]
 at 
 org.apache.cassandra.io.util.SequentialWriter.<init>(SequentialWriter.java:71)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 ... 14 common frames omitted
 ERROR [MemtableFlushWriter:286] 

[jira] [Updated] (CASSANDRA-8915) Improve MergeIterator performance

2015-03-13 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-8915:

Reviewer: Benedict

 Improve MergeIterator performance
 -

 Key: CASSANDRA-8915
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8915
 Project: Cassandra
  Issue Type: Improvement
Reporter: Branimir Lambov
Assignee: Branimir Lambov
Priority: Minor

 The implementation of {{MergeIterator}} uses a priority queue and applies a 
 pair of {{poll}}+{{add}} operations for every item in the resulting sequence. 
 This is quite inefficient as {{poll}} necessarily applies at least {{log N}} 
 comparisons (up to {{2log N}}), and {{add}} often requires another {{log N}}, 
 for example in the case where the inputs largely don't overlap (where {{N}} 
 is the number of iterators being merged).
 This can easily be replaced with a simple custom structure that can perform 
 replacement of the top of the queue in a single step, which will very often 
 complete after a couple of comparisons and in the worst case scenarios will 
 match the complexity of the current implementation.
 This should significantly improve merge performance for iterators with 
 limited overlap (e.g. levelled compaction).
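
For illustration, a compact sketch of the single-step "replace the top" operation described above, against a plain binary min-heap; this is a toy, not the eventual MergeIterator code:

{code}
// Illustrative only: instead of poll() followed by add(), the head is replaced
// in place and sifted down once -- often just one or two comparisons when the
// merged inputs barely overlap.
import java.util.Comparator;

@SuppressWarnings("unchecked")
public class ReplaceTopHeap<T>
{
    private final Object[] heap;
    private int size;
    private final Comparator<? super T> cmp;

    public ReplaceTopHeap(int capacity, Comparator<? super T> cmp)
    {
        this.heap = new Object[capacity];
        this.cmp = cmp;
    }

    public void add(T item)
    {
        heap[size] = item;
        for (int i = size++; i > 0; )
        {
            int parent = (i - 1) / 2;
            if (cmp.compare((T) heap[i], (T) heap[parent]) >= 0)
                break;
            Object tmp = heap[i]; heap[i] = heap[parent]; heap[parent] = tmp;
            i = parent;
        }
    }

    public T peek() { return (T) heap[0]; }

    /** Replace the current minimum with 'item' and restore heap order in one pass. */
    public void replaceTop(T item)
    {
        heap[0] = item;
        int i = 0;
        while (true)
        {
            int left = 2 * i + 1, right = left + 1, smallest = i;
            if (left < size && cmp.compare((T) heap[left], (T) heap[smallest]) < 0) smallest = left;
            if (right < size && cmp.compare((T) heap[right], (T) heap[smallest]) < 0) smallest = right;
            if (smallest == i) break;
            Object tmp = heap[i]; heap[i] = heap[smallest]; heap[smallest] = tmp;
            i = smallest;
        }
    }
}
{code}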



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8833) Stop opening compaction results early

2015-03-13 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14360188#comment-14360188
 ] 

Benedict commented on CASSANDRA-8833:
-

At the risk of sounding like a broken record (in case my earlier statement was 
missed): bloom filter resizing would need to break this assumption also, and is 
sort of intrinsically linked to the discussion around summary resizing. Whether 
or not we want this is another matter I'll leave aside for the moment. 

Just to outline the remainder of my strategy for mitigating any and all of 
these risks:

* In CASSANDRA-8568, I will:
** introduce a stable set of sstables that never changes, and all 
non-hot-path accesses will use these so they don't risk confusion, and don't 
have to worry about first/last issues
** ensure compaction strategies are only informed of changes to this stable 
set of readers
** remove the shadowed state of an sstable
** make the modification of tracker state transactional and more declarative, 
so it is both easier to follow and much harder to get into a bad state
* CASSANDRA-8893, CASSANDRA-7066 and some related work will:
** eliminate the distinction between early open, temporary, and final files on 
disk, so eliminate at least one layer of the cleanup logic (i.e. make its 
requirements equivalent to summary/bf resizing)
** which also permits us to simplify the early open logic, by special casing it 
much less

In conjunction with the major overhaul of resource cleanup, AFAICT this 
mitigates most of the problems: 

* resource counting is now much easier to reason about, and soon will be even 
easier. it is also safer to get it wrong.
* only paths we know are safe to use overlapping sstables will do so (and in 
parallel we also enforce the non-overlapping rule)
* compaction doesn't have to even be aware it is happening
* what I think has been the biggest problem, the actual safe application of 
state changes (which were never atomic and could actually screw themselves up 
willfully through assertions) will be transactional and ensure exceptions do 
not interrupt their execution. it will also encapsulate its own safe rollback, 
so if we screw up somewhere, it will fix it for us.

I don't pretend it'll be 100% first time, but I think this new state will be 
safer by a significant margin than the pre-early-open state, which we are still 
seeing bug reports for in the 2.0 line, and has been the cause of many serious 
bugs (and at least one major public downtime of a well known deployment). I 
very much hope all of these changes will restore confidence in not only the 
early open feature, but resource management in general, and hopefully reduce 
the burden on all maintainers.
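
As a tiny illustration of the "transactional, declarative tracker update" idea (names made up; the real CASSANDRA-8568 work is of course far more involved):

{code}
// Illustrative only: staged changes are applied atomically under a lock and
// undone in reverse order if anything throws, so a failure cannot leave the
// tracked sstable set half-updated.
import java.util.ArrayDeque;
import java.util.Deque;

public class TrackerTransaction
{
    interface Change { void apply(); void undo(); }

    private final Deque<Change> applied = new ArrayDeque<>();

    public synchronized void commit(Iterable<Change> changes)
    {
        try
        {
            for (Change c : changes)
            {
                c.apply();
                applied.push(c);
            }
        }
        catch (RuntimeException e)
        {
            while (!applied.isEmpty())
                applied.pop().undo();   // roll back in reverse order
            throw e;
        }
        applied.clear();
    }
}
{code}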

 Stop opening compaction results early
 -

 Key: CASSANDRA-8833
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8833
 Project: Cassandra
  Issue Type: Improvement
Reporter: Marcus Eriksson
 Fix For: 2.1.4


 We should simplify the code base by not doing early opening of compaction 
 results. It makes it very hard to reason about sstable life cycles since they 
 can be in many different states, opened early, starts moved, shadowed, 
 final, instead of as before, basically just one (tmp files are not really 
 'live' yet so I don't count those). The ref counting of shared resources 
 between sstables in these different states is also hard to reason about. This 
 has caused quite a few issues since we released 2.1
 I think it all boils down to a performance vs code complexity issue, is 
 opening compaction results early really 'worth it' wrt the performance gain? 
 The results in CASSANDRA-6916 sure look like the benefits are big enough, but 
 the difference should not be as big for people on SSDs (which most people who 
 care about latencies are)
 WDYT [~benedict] [~jbellis] [~iamaleksey] [~JoshuaMcKenzie]?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8900) AssertionError when binding nested collection in a DELETE

2015-03-13 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14360164#comment-14360164
 ] 

Stefania commented on CASSANDRA-8900:
-

Thanks for spotting this, I was running the tests one by one in the debugger 
and I must have missed those problems. The first two like you said are trivial 
and I fixed the test cases.

The last one I can make pass by replacing {{Sets.Discarder}} with 
{{Maps.DiscarderByKey}} in {{ElementDeletion}}. I'm pretty sure the problem is 
there, the Sets discarder cannot distinguish the set subtraction from the 
deletion by key. 

I committed the code for now but I want to take a bit more time on Monday to 
review the discarder names for Sets and Maps:

https://github.com/apache/cassandra/commit/4371536a6bb1157373b175ef420c446f18dd907a

In the meantime if you have any more suggestions let me know.

 AssertionError when binding nested collection in a DELETE
 -

 Key: CASSANDRA-8900
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8900
 Project: Cassandra
  Issue Type: Bug
Reporter: Olivier Michallat
Assignee: Stefania
Priority: Minor
 Fix For: 2.1.4


 Running this with the Java driver:
 {code}
 session.execute("create table if not exists foo2(k int primary key, m map<frozen<list<int>>, int>);");
 PreparedStatement pst = session.prepare("delete m[?] from foo2 where k = 1");
 session.execute(pst.bind(ImmutableList.of(1)));
 {code}
 Produces a server error. Server-side stack trace:
 {code}
 ERROR [SharedPool-Worker-4] 2015-03-03 13:33:24,740 Message.java:538 - 
 Unexpected exception during request; channel = [id: 0xf9e92e61, 
 /127.0.0.1:58163 => /127.0.0.1:9042]
 java.lang.AssertionError: null
 at 
 org.apache.cassandra.cql3.Maps$DiscarderByKey.execute(Maps.java:381) 
 ~[main/:na]
 at 
 org.apache.cassandra.cql3.statements.DeleteStatement.addUpdateForKey(DeleteStatement.java:85)
  ~[main/:na]
 at 
 org.apache.cassandra.cql3.statements.ModificationStatement.getMutations(ModificationStatement.java:654)
  ~[main/:na]
 at 
 org.apache.cassandra.cql3.statements.ModificationStatement.executeWithoutCondition(ModificationStatement.java:487)
  ~[main/:na]
 at 
 org.apache.cassandra.cql3.statements.ModificationStatement.execute(ModificationStatement.java:473)
  ~[main/:na]
 at 
 org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:238)
  ~[main/:na]
 at 
 org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:493)
  ~[main/:na]
 at 
 org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:134)
  ~[main/:na]
 at 
 org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:439)
  [main/:na]
 at 
 org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:335)
  [main/:na]
 at 
 io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
  [netty-all-4.0.23.Final.jar:4.0.23.Final]
 at 
 io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
  [netty-all-4.0.23.Final.jar:4.0.23.Final]
 at 
 io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
  [netty-all-4.0.23.Final.jar:4.0.23.Final]
 at 
 io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324)
  [netty-all-4.0.23.Final.jar:4.0.23.Final]
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 [na:1.7.0_60]
 at 
 org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
  [main/:na]
 at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
 [main/:na]
 at java.lang.Thread.run(Thread.java:745) [na:1.7.0_60]
 {code}
 A simple statement (i.e. QUERY message with values) produces the same result:
 {code}
 session.execute("delete m[?] from foo2 where k = 1", ImmutableList.of(1));
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-8238) NPE in SizeTieredCompactionStrategy.filterColdSSTables

2015-03-13 Thread Fredrik LS (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14359370#comment-14359370
 ] 

Fredrik LS edited comment on CASSANDRA-8238 at 3/13/15 10:24 AM:
-

Reproduced this in 2.1.3.
I'm not very familiar with the Cassandra code base, but I took the liberty of 
doing some debugging.
This occurs when doing bulkloading from JMX. 
The problem is that the SSTableLoader has a static initializer setting 
{code}
static
{
Config.setClientMode(true);
}
{code} 
(I guess going through JMX shouldn't be considered client mode; that should 
only apply when running SSTableLoader standalone).
Every SSTableReader created after the clientMode flag is set will have the 
readMeter set to null according to the SSTableReader constructor. The 
SSTableReader for SSTables existing at startup will have the readMeter set to 
some value but when JMX bulkloading is used, there will be a mix of 
SSTableReader for the same CF both with readMeter with a value and readMeter 
with null. That in combination with hot and cold SSTables in 
{code}SizeTieredCompactionStrategy.filterColdSSTables(...){code} will trigger 
the NullPointerException when CompactionExecutor kicks in trying to compact the 
hot SSTables already existing from startup which have a readMeter set and the 
just streamed cold SSTables from JMX bulkloading which have readMeter set to 
null.

Regards
/Fredrik
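
(For illustration only, one defensive shape such code could take; the type and method names below are hypothetical, not the actual Cassandra fix, whose attached patch appears to assert instead:)

{code}
// Hypothetical sketch only -- not the actual Cassandra fix.  Readers streamed
// in while Config.setClientMode(true) was set can carry a null readMeter, so
// a hotness calculation has to treat them as completely cold rather than NPE.
final class ColdFilterSketch
{
    interface RateMeter { double twoHourRate(); }
    interface SSTableLike { RateMeter readMeter(); }

    static double hotness(SSTableLike sstable)
    {
        RateMeter meter = sstable.readMeter();
        return meter == null ? 0.0 : meter.twoHourRate();
    }
}
{code}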


was (Author: fredrikl74):
Reproduced this in 2.1.3.
I'm not very familiar with the Cassandra code base, but I took the liberty of 
doing some debugging.
This occurs when doing bulkloading from JMX. 
The problem is that the SSTableLoader has a static initializer setting 
{code}
static
{
Config.setClientMode(true);
}
{code} 
(I guess going through JMX shouldn't be considered client mode; that should 
only apply when running SSTableLoader standalone).
Every SSTableReader created after the clientMode flag is set will have the 
readMeter set to null according to the SSTableReader constructor. The 
SSTableReader for SSTables existing at startup will have the readMeter set to 
some value but when JMX bulkloading is used, there will be a mix of 
SSTableReader for the same CF both with readMeter with a value and readMeter 
with null. That in combination with hot and cold SSTables in 
{code}SizeTieredCompactionStrategy.filterColdSSTables(...){code} will trigger 
the NullPointerException when CompactionExecutor kicks in trying to compact the 
hot SSTables already existing from startup which have a readMeter set and the 
just streamed cold SSTables from JMX bulkloading which have readMeter set to 
null.

Hope my analysis is correct and that the code formatting isn't too bad.

Regards
/Fredrik

 NPE in SizeTieredCompactionStrategy.filterColdSSTables
 --

 Key: CASSANDRA-8238
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8238
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Tyler Hobbs
Assignee: Marcus Eriksson
 Fix For: 2.1.3

 Attachments: 0001-assert-that-readMeter-is-not-null.patch


 {noformat}
 ERROR [CompactionExecutor:15] 2014-10-31 15:28:32,318 
 CassandraDaemon.java:153 - Exception in thread 
 Thread[CompactionExecutor:15,1,main]
 java.lang.NullPointerException: null
 at 
 org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy.filterColdSSTables(SizeTieredCompactionStrategy.java:181)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy.getNextBackgroundSSTables(SizeTieredCompactionStrategy.java:83)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy.getNextBackgroundTask(SizeTieredCompactionStrategy.java:267)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:226)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 ~[na:1.7.0_72]
 at java.util.concurrent.FutureTask.run(FutureTask.java:262) ~[na:1.7.0_72]
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_72]
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_72]
 at java.lang.Thread.run(Thread.java:745) [na:1.7.0_72]
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8965) Cassandra retains a file handle to the directory its writing to for each writer instance

2015-03-13 Thread Benedict (JIRA)
Benedict created CASSANDRA-8965:
---

 Summary: Cassandra retains a file handle to the directory its 
writing to for each writer instance
 Key: CASSANDRA-8965
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8965
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Benedict
Priority: Trivial
 Fix For: 3.0


We could either share this amongst the CF object, or have a shared ref-counted 
cache that opens a reference and shares it amongst all writer instances, 
closing it once they all close.
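
A bare-bones sketch of the ref-counted sharing suggested above; illustrative only, the eventual implementation would live alongside the CF/writer plumbing:

{code}
// Illustrative only: one underlying handle shared by all writers of a
// directory; the last release closes it.
import java.io.Closeable;
import java.io.IOException;

public class SharedHandle
{
    private final Closeable handle;
    private int refs = 1;           // the creator holds the first reference

    public SharedHandle(Closeable handle) { this.handle = handle; }

    public synchronized SharedHandle ref()
    {
        if (refs == 0)
            throw new IllegalStateException("already closed");
        refs++;
        return this;
    }

    public synchronized void release()
    {
        if (--refs == 0)
        {
            try { handle.close(); }
            catch (IOException e) { throw new RuntimeException(e); }
        }
    }
}
{code}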



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8966) SequentialWriter should be Ref counted

2015-03-13 Thread Benedict (JIRA)
Benedict created CASSANDRA-8966:
---

 Summary: SequentialWriter should be Ref counted
 Key: CASSANDRA-8966
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8966
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Priority: Minor
 Fix For: 3.0


A LHF to introduce some more resource safety



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[4/6] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2015-03-13 Thread brandonwilliams
Merge branch 'cassandra-2.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1376b8ef
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1376b8ef
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1376b8ef

Branch: refs/heads/trunk
Commit: 1376b8efff9768ec941d5f41adc7b9b6cc4b9e72
Parents: cbd4de8 2199a87
Author: Brandon Williams brandonwilli...@apache.org
Authored: Fri Mar 13 08:02:46 2015 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Fri Mar 13 08:02:46 2015 -0500

--
 CHANGES.txt |  1 +
 .../org/apache/cassandra/gms/EndpointState.java | 12 ++
 src/java/org/apache/cassandra/gms/Gossiper.java | 25 +++-
 3 files changed, 32 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1376b8ef/CHANGES.txt
--
diff --cc CHANGES.txt
index cd4b551,8843908..cd29e9d
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,46 -1,5 +1,47 @@@
 +2.1.4
 + * Log warning when queries that will require ALLOW FILTERING in Cassandra 3.0
 +   are executed (CASSANDRA-8418)
 + * Fix cassandra-stress so it respects the CL passed in user mode 
(CASSANDRA-8948)
 + * Fix rare NPE in ColumnDefinition#hasIndexOption() (CASSANDRA-8786)
 + * cassandra-stress reports per-operation statistics, plus misc 
(CASSANDRA-8769)
 + * Add SimpleDate (cql date) and Time (cql time) types (CASSANDRA-7523)
 + * Use long for key count in cfstats (CASSANDRA-8913)
 + * Make SSTableRewriter.abort() more robust to failure (CASSANDRA-8832)
 + * Remove cold_reads_to_omit from STCS (CASSANDRA-8860)
 + * Make EstimatedHistogram#percentile() use ceil instead of floor 
(CASSANDRA-8883)
 + * Fix top partitions reporting wrong cardinality (CASSANDRA-8834)
 + * Fix rare NPE in KeyCacheSerializer (CASSANDRA-8067)
 + * Pick sstables for validation as late as possible inc repairs 
(CASSANDRA-8366)
 + * Fix commitlog getPendingTasks to not increment (CASSANDRA-8856)
 + * Fix parallelism adjustment in range and secondary index queries
 +   when the first fetch does not satisfy the limit (CASSANDRA-8856)
 + * Check if the filtered sstables is non-empty in STCS (CASSANDRA-8843)
 + * Upgrade java-driver used for cassandra-stress (CASSANDRA-8842)
 + * Fix CommitLog.forceRecycleAllSegments() memory access error 
(CASSANDRA-8812)
 + * Improve assertions in Memory (CASSANDRA-8792)
 + * Fix SSTableRewriter cleanup (CASSANDRA-8802)
 + * Introduce SafeMemory for CompressionMetadata.Writer (CASSANDRA-8758)
 + * 'nodetool info' prints exception against older node (CASSANDRA-8796)
 + * Ensure SSTableReader.last corresponds exactly with the file end 
(CASSANDRA-8750)
 + * Make SSTableWriter.openEarly more robust and obvious (CASSANDRA-8747)
 + * Enforce SSTableReader.first/last (CASSANDRA-8744)
 + * Cleanup SegmentedFile API (CASSANDRA-8749)
 + * Avoid overlap with early compaction replacement (CASSANDRA-8683)
 + * Safer Resource Management++ (CASSANDRA-8707)
 + * Write partition size estimates into a system table (CASSANDRA-7688)
 + * cqlsh: Fix keys() and full() collection indexes in DESCRIBE output
 +   (CASSANDRA-8154)
 + * Show progress of streaming in nodetool netstats (CASSANDRA-8886)
 + * IndexSummaryBuilder utilises offheap memory, and shares data between
 +   each IndexSummary opened from it (CASSANDRA-8757)
 + * markCompacting only succeeds if the exact SSTableReader instances being 
 +   marked are in the live set (CASSANDRA-8689)
 + * cassandra-stress support for varint (CASSANDRA-8882)
 + * Fix Adler32 digest for compressed sstables (CASSANDRA-8778)
 + * Add nodetool statushandoff/statusbackup (CASSANDRA-8912)
 +Merged from 2.0:
  2.0.14:
+  * Fix duplicate up/down messages sent to native clients (CASSANDRA-7816)
   * Expose commit log archive status via JMX (CASSANDRA-8734)
   * Provide better exceptions for invalid replication strategy parameters
 (CASSANDRA-8909)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1376b8ef/src/java/org/apache/cassandra/gms/EndpointState.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1376b8ef/src/java/org/apache/cassandra/gms/Gossiper.java
--
diff --cc src/java/org/apache/cassandra/gms/Gossiper.java
index a7c58fc,97dc506..ac98c53
--- a/src/java/org/apache/cassandra/gms/Gossiper.java
+++ b/src/java/org/apache/cassandra/gms/Gossiper.java
@@@ -47,10 -48,7 +48,8 @@@ import org.apache.cassandra.net.Message
  import org.apache.cassandra.net.MessagingService;
  import org.apache.cassandra.service.StorageService;
  import org.apache.cassandra.utils.FBUtilities;
 +import 

[2/6] cassandra git commit: Fix duplicate up/down messages sent to native clients

2015-03-13 Thread brandonwilliams
Fix duplicate up/down messages sent to native clients

Patch by Stefania, reviewed by brandonwilliams for CASSANDRA-7816


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2199a87a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2199a87a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2199a87a

Branch: refs/heads/cassandra-2.1
Commit: 2199a87aab8322c41f1b590c0fd8f08f448952ca
Parents: 77c66bf
Author: Brandon Williams brandonwilli...@apache.org
Authored: Fri Mar 13 08:02:12 2015 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Fri Mar 13 08:02:12 2015 -0500

--
 CHANGES.txt |  1 +
 .../org/apache/cassandra/gms/EndpointState.java | 12 ++
 src/java/org/apache/cassandra/gms/Gossiper.java | 25 +++-
 3 files changed, 32 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2199a87a/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 382b3dd..8843908 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.0.14:
+ * Fix duplicate up/down messages sent to native clients (CASSANDRA-7816)
  * Expose commit log archive status via JMX (CASSANDRA-8734)
  * Provide better exceptions for invalid replication strategy parameters
(CASSANDRA-8909)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2199a87a/src/java/org/apache/cassandra/gms/EndpointState.java
--
diff --git a/src/java/org/apache/cassandra/gms/EndpointState.java 
b/src/java/org/apache/cassandra/gms/EndpointState.java
index 3df9155..518e575 100644
--- a/src/java/org/apache/cassandra/gms/EndpointState.java
+++ b/src/java/org/apache/cassandra/gms/EndpointState.java
@@ -46,12 +46,14 @@ public class EndpointState
 /* fields below do not get serialized */
 private volatile long updateTimestamp;
 private volatile boolean isAlive;
+private volatile boolean hasPendingEcho;
 
 EndpointState(HeartBeatState initialHbState)
 {
 hbState = initialHbState;
 updateTimestamp = System.nanoTime();
 isAlive = true;
+hasPendingEcho = false;
 }
 
 HeartBeatState getHeartBeatState()
@@ -113,6 +115,16 @@ public class EndpointState
 isAlive = false;
 }
 
+public boolean hasPendingEcho()
+{
+return hasPendingEcho;
+}
+
+public void markPendingEcho(boolean val)
+{
+hasPendingEcho = val;
+}
+
 public String toString()
 {
 return "EndpointState: HeartBeatState = " + hbState + ", AppStateMap = " + applicationState;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2199a87a/src/java/org/apache/cassandra/gms/Gossiper.java
--
diff --git a/src/java/org/apache/cassandra/gms/Gossiper.java 
b/src/java/org/apache/cassandra/gms/Gossiper.java
index a478405..97dc506 100644
--- a/src/java/org/apache/cassandra/gms/Gossiper.java
+++ b/src/java/org/apache/cassandra/gms/Gossiper.java
@@ -29,6 +29,7 @@ import javax.management.MBeanServer;
 import javax.management.ObjectName;
 
 import com.google.common.annotations.VisibleForTesting;
+import com.google.common.collect.ImmutableList;
 import com.google.common.util.concurrent.Uninterruptibles;
 
 import org.apache.cassandra.utils.Pair;
@@ -48,8 +49,6 @@ import org.apache.cassandra.net.MessagingService;
 import org.apache.cassandra.service.StorageService;
 import org.apache.cassandra.utils.FBUtilities;
 
-import com.google.common.collect.ImmutableList;
-
 /**
  * This module is responsible for Gossiping information for the local 
endpoint. This abstraction
  * maintains the list of live and dead endpoints. Periodically i.e. every 1 
second this module
@@ -878,6 +877,12 @@ public class Gossiper implements 
IFailureDetectionEventListener, GossiperMBean
 return;
 }
 
+if (localState.hasPendingEcho())
+{
+logger.debug("{} has already a pending echo, skipping it", localState);
+return;
+}
+
 localState.markDead();
 
 MessageOut<EchoMessage> echoMessage = new MessageOut<EchoMessage>(MessagingService.Verb.ECHO, new EchoMessage(), EchoMessage.serializer);
@@ -891,9 +896,12 @@ public class Gossiper implements 
IFailureDetectionEventListener, GossiperMBean
 
 public void response(MessageIn msg)
 {
+localState.markPendingEcho(false);
 realMarkAlive(addr, localState);
 }
 };
+
+localState.markPendingEcho(true);
 MessagingService.instance().sendRR(echoMessage, addr, echoHandler);
 }
 
@@ -936,9 

[1/6] cassandra git commit: Fix duplicate up/down messages sent to native clients

2015-03-13 Thread brandonwilliams
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 77c66bf9f - 2199a87aa
  refs/heads/cassandra-2.1 cbd4de8f5 - 1376b8eff
  refs/heads/trunk c059a5689 - 65d5ef26c


Fix duplicate up/down messages sent to native clients

Patch by Stefania, reviewed by brandonwilliams for CASSANDRA-7816


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2199a87a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2199a87a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2199a87a

Branch: refs/heads/cassandra-2.0
Commit: 2199a87aab8322c41f1b590c0fd8f08f448952ca
Parents: 77c66bf
Author: Brandon Williams brandonwilli...@apache.org
Authored: Fri Mar 13 08:02:12 2015 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Fri Mar 13 08:02:12 2015 -0500

--
 CHANGES.txt |  1 +
 .../org/apache/cassandra/gms/EndpointState.java | 12 ++
 src/java/org/apache/cassandra/gms/Gossiper.java | 25 +++-
 3 files changed, 32 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2199a87a/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 382b3dd..8843908 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.0.14:
+ * Fix duplicate up/down messages sent to native clients (CASSANDRA-7816)
  * Expose commit log archive status via JMX (CASSANDRA-8734)
  * Provide better exceptions for invalid replication strategy parameters
(CASSANDRA-8909)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2199a87a/src/java/org/apache/cassandra/gms/EndpointState.java
--
diff --git a/src/java/org/apache/cassandra/gms/EndpointState.java 
b/src/java/org/apache/cassandra/gms/EndpointState.java
index 3df9155..518e575 100644
--- a/src/java/org/apache/cassandra/gms/EndpointState.java
+++ b/src/java/org/apache/cassandra/gms/EndpointState.java
@@ -46,12 +46,14 @@ public class EndpointState
 /* fields below do not get serialized */
 private volatile long updateTimestamp;
 private volatile boolean isAlive;
+private volatile boolean hasPendingEcho;
 
 EndpointState(HeartBeatState initialHbState)
 {
 hbState = initialHbState;
 updateTimestamp = System.nanoTime();
 isAlive = true;
+hasPendingEcho = false;
 }
 
 HeartBeatState getHeartBeatState()
@@ -113,6 +115,16 @@ public class EndpointState
 isAlive = false;
 }
 
+public boolean hasPendingEcho()
+{
+return hasPendingEcho;
+}
+
+public void markPendingEcho(boolean val)
+{
+hasPendingEcho = val;
+}
+
 public String toString()
 {
 return "EndpointState: HeartBeatState = " + hbState + ", AppStateMap = " + applicationState;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2199a87a/src/java/org/apache/cassandra/gms/Gossiper.java
--
diff --git a/src/java/org/apache/cassandra/gms/Gossiper.java 
b/src/java/org/apache/cassandra/gms/Gossiper.java
index a478405..97dc506 100644
--- a/src/java/org/apache/cassandra/gms/Gossiper.java
+++ b/src/java/org/apache/cassandra/gms/Gossiper.java
@@ -29,6 +29,7 @@ import javax.management.MBeanServer;
 import javax.management.ObjectName;
 
 import com.google.common.annotations.VisibleForTesting;
+import com.google.common.collect.ImmutableList;
 import com.google.common.util.concurrent.Uninterruptibles;
 
 import org.apache.cassandra.utils.Pair;
@@ -48,8 +49,6 @@ import org.apache.cassandra.net.MessagingService;
 import org.apache.cassandra.service.StorageService;
 import org.apache.cassandra.utils.FBUtilities;
 
-import com.google.common.collect.ImmutableList;
-
 /**
  * This module is responsible for Gossiping information for the local 
endpoint. This abstraction
  * maintains the list of live and dead endpoints. Periodically i.e. every 1 
second this module
@@ -878,6 +877,12 @@ public class Gossiper implements 
IFailureDetectionEventListener, GossiperMBean
 return;
 }
 
+if (localState.hasPendingEcho())
+{
+logger.debug("{} has already a pending echo, skipping it", localState);
+return;
+}
+
 localState.markDead();
 
 MessageOut<EchoMessage> echoMessage = new MessageOut<EchoMessage>(MessagingService.Verb.ECHO, new EchoMessage(), EchoMessage.serializer);
@@ -891,9 +896,12 @@ public class Gossiper implements 
IFailureDetectionEventListener, GossiperMBean
 
 public void response(MessageIn msg)
 {
+localState.markPendingEcho(false);
 realMarkAlive(addr, 

[5/6] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2015-03-13 Thread brandonwilliams
Merge branch 'cassandra-2.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1376b8ef
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1376b8ef
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1376b8ef

Branch: refs/heads/cassandra-2.1
Commit: 1376b8efff9768ec941d5f41adc7b9b6cc4b9e72
Parents: cbd4de8 2199a87
Author: Brandon Williams brandonwilli...@apache.org
Authored: Fri Mar 13 08:02:46 2015 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Fri Mar 13 08:02:46 2015 -0500

--
 CHANGES.txt |  1 +
 .../org/apache/cassandra/gms/EndpointState.java | 12 ++
 src/java/org/apache/cassandra/gms/Gossiper.java | 25 +++-
 3 files changed, 32 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1376b8ef/CHANGES.txt
--
diff --cc CHANGES.txt
index cd4b551,8843908..cd29e9d
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,46 -1,5 +1,47 @@@
 +2.1.4
 + * Log warning when queries that will require ALLOW FILTERING in Cassandra 3.0
 +   are executed (CASSANDRA-8418)
 + * Fix cassandra-stress so it respects the CL passed in user mode 
(CASSANDRA-8948)
 + * Fix rare NPE in ColumnDefinition#hasIndexOption() (CASSANDRA-8786)
 + * cassandra-stress reports per-operation statistics, plus misc 
(CASSANDRA-8769)
 + * Add SimpleDate (cql date) and Time (cql time) types (CASSANDRA-7523)
 + * Use long for key count in cfstats (CASSANDRA-8913)
 + * Make SSTableRewriter.abort() more robust to failure (CASSANDRA-8832)
 + * Remove cold_reads_to_omit from STCS (CASSANDRA-8860)
 + * Make EstimatedHistogram#percentile() use ceil instead of floor 
(CASSANDRA-8883)
 + * Fix top partitions reporting wrong cardinality (CASSANDRA-8834)
 + * Fix rare NPE in KeyCacheSerializer (CASSANDRA-8067)
 + * Pick sstables for validation as late as possible inc repairs 
(CASSANDRA-8366)
 + * Fix commitlog getPendingTasks to not increment (CASSANDRA-8856)
 + * Fix parallelism adjustment in range and secondary index queries
 +   when the first fetch does not satisfy the limit (CASSANDRA-8856)
 + * Check if the filtered sstables is non-empty in STCS (CASSANDRA-8843)
 + * Upgrade java-driver used for cassandra-stress (CASSANDRA-8842)
 + * Fix CommitLog.forceRecycleAllSegments() memory access error 
(CASSANDRA-8812)
 + * Improve assertions in Memory (CASSANDRA-8792)
 + * Fix SSTableRewriter cleanup (CASSANDRA-8802)
 + * Introduce SafeMemory for CompressionMetadata.Writer (CASSANDRA-8758)
 + * 'nodetool info' prints exception against older node (CASSANDRA-8796)
 + * Ensure SSTableReader.last corresponds exactly with the file end 
(CASSANDRA-8750)
 + * Make SSTableWriter.openEarly more robust and obvious (CASSANDRA-8747)
 + * Enforce SSTableReader.first/last (CASSANDRA-8744)
 + * Cleanup SegmentedFile API (CASSANDRA-8749)
 + * Avoid overlap with early compaction replacement (CASSANDRA-8683)
 + * Safer Resource Management++ (CASSANDRA-8707)
 + * Write partition size estimates into a system table (CASSANDRA-7688)
 + * cqlsh: Fix keys() and full() collection indexes in DESCRIBE output
 +   (CASSANDRA-8154)
 + * Show progress of streaming in nodetool netstats (CASSANDRA-8886)
 + * IndexSummaryBuilder utilises offheap memory, and shares data between
 +   each IndexSummary opened from it (CASSANDRA-8757)
 + * markCompacting only succeeds if the exact SSTableReader instances being 
 +   marked are in the live set (CASSANDRA-8689)
 + * cassandra-stress support for varint (CASSANDRA-8882)
 + * Fix Adler32 digest for compressed sstables (CASSANDRA-8778)
 + * Add nodetool statushandoff/statusbackup (CASSANDRA-8912)
 +Merged from 2.0:
  2.0.14:
+  * Fix duplicate up/down messages sent to native clients (CASSANDRA-7816)
   * Expose commit log archive status via JMX (CASSANDRA-8734)
   * Provide better exceptions for invalid replication strategy parameters
 (CASSANDRA-8909)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1376b8ef/src/java/org/apache/cassandra/gms/EndpointState.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1376b8ef/src/java/org/apache/cassandra/gms/Gossiper.java
--
diff --cc src/java/org/apache/cassandra/gms/Gossiper.java
index a7c58fc,97dc506..ac98c53
--- a/src/java/org/apache/cassandra/gms/Gossiper.java
+++ b/src/java/org/apache/cassandra/gms/Gossiper.java
@@@ -47,10 -48,7 +48,8 @@@ import org.apache.cassandra.net.Message
  import org.apache.cassandra.net.MessagingService;
  import org.apache.cassandra.service.StorageService;
  import org.apache.cassandra.utils.FBUtilities;
 

[jira] [Commented] (CASSANDRA-8964) SSTable count rises during compactions and max open files exceeded

2015-03-13 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14360313#comment-14360313
 ] 

Brandon Williams commented on CASSANDRA-8964:
-

You may just not have enough file handles; 6442 isn't exactly a lot for LCS. The Debian packaging, for instance, sets the limit to unlimited.
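
As an aside, a hedged standalone sketch of one way to check file-descriptor headroom from inside the JVM (uses the com.sun.management API, so Unix JVMs only; the class name is illustrative, not anything in Cassandra):

{code}
import java.lang.management.ManagementFactory;
import com.sun.management.UnixOperatingSystemMXBean;

public class FdHeadroom
{
    public static void main(String[] args)
    {
        // The cast works on Linux/Unix HotSpot JVMs; it is not available on Windows.
        UnixOperatingSystemMXBean os =
                (UnixOperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean();
        System.out.println("open fds: " + os.getOpenFileDescriptorCount()
                           + " / limit: " + os.getMaxFileDescriptorCount());
    }
}
{code}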

 SSTable count rises during compactions and max open files exceeded
 --

 Key: CASSANDRA-8964
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8964
 Project: Cassandra
  Issue Type: Bug
 Environment: Apache Cassandra 2.1.2
 Centos 6
 AWS EC2
 i2.2xlarge
Reporter: Anthony Fisk
Priority: Critical
 Attachments: lsof_with_tmp.txt, lsof_without_tmp.txt, 
 nodetool_cfstats.zip


 LCS compaction was not able to keep up with the prolonged insert load on one of our tables, called log, resulting in 2,185 SSTables for that table and 1,779 pending compactions altogether during a test we were running.
 We stopped our load, unthrottled compaction throughput, increased the concurrent compactors from 2 to 8, and let it compact the SSTables.
 All was going well until the SSTable count for our log table got down to around 97, then it began rising again until it had reached 758 SSTables 1.5 hours later... (we've been recording the cfstats output every half hour, [attached|^nodetool_cfstats.zip])
 Eventually we exceeded the open file limit:
 {code}
 ERROR [MemtableFlushWriter:286] 2015-03-12 13:44:36,748 
 CassandraDaemon.java:153 - Exception in thread 
 Thread[MemtableFlushWriter:286,5,main]
 java.lang.RuntimeException: java.io.FileNotFoundException: 
 /mnt/cassandra/data/system/compactions_in_progress-55080ab05d9c388690a4acb25fe1f77b/system-compactions_in_progress-tmp-ka-6618-Index.db
  (Too many open files)
 at 
 org.apache.cassandra.io.util.SequentialWriter.init(SequentialWriter.java:75)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.io.util.SequentialWriter.open(SequentialWriter.java:104) 
 ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.io.util.SequentialWriter.open(SequentialWriter.java:99) 
 ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.io.sstable.SSTableWriter$IndexWriter.init(SSTableWriter.java:552)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.io.sstable.SSTableWriter.init(SSTableWriter.java:134) 
 ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.db.Memtable$FlushRunnable.createFlushWriter(Memtable.java:390)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.db.Memtable$FlushRunnable.writeSortedContents(Memtable.java:329)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.db.Memtable$FlushRunnable.runWith(Memtable.java:313) 
 ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
 ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297)
  ~[guava-16.0.jar:na]
 at 
 org.apache.cassandra.db.ColumnFamilyStore$Flush.run(ColumnFamilyStore.java:1037)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_51]
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  ~[na:1.7.0_51]
 at java.lang.Thread.run(Thread.java:744) ~[na:1.7.0_51]
 Caused by: java.io.FileNotFoundException: 
 /mnt/cassandra/data/system/compactions_in_progress-55080ab05d9c388690a4acb25fe1f77b/system-compactions_in_progress-tmp-ka-6618-Index.db
  (Too many open files)
 at java.io.RandomAccessFile.open(Native Method) ~[na:1.7.0_51]
 at java.io.RandomAccessFile.init(RandomAccessFile.java:241) 
 ~[na:1.7.0_51]
 at 
 org.apache.cassandra.io.util.SequentialWriter.init(SequentialWriter.java:71)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 ... 14 common frames omitted
 ERROR [MemtableFlushWriter:286] 2015-03-12 13:44:36,750 
 JVMStabilityInspector.java:94 - JVM state determined to be unstable.  Exiting 
 forcefully due to:
 java.io.FileNotFoundException: 
 /mnt/cassandra/data/system/compactions_in_progress-55080ab05d9c388690a4acb25fe1f77b/system-compactions_in_progress-tmp-ka-6618-Index.db
  (Too many open files)
 at java.io.RandomAccessFile.open(Native Method) ~[na:1.7.0_51]
 at java.io.RandomAccessFile.init(RandomAccessFile.java:241) 
 ~[na:1.7.0_51]
 at 
 

[jira] [Updated] (CASSANDRA-8746) SSTableReader.cloneWithNewStart can drop too much page cache for compressed files

2015-03-13 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-8746:
--
Reviewer: Marcus Eriksson

 SSTableReader.cloneWithNewStart can drop too much page cache for compressed 
 files
 -

 Key: CASSANDRA-8746
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8746
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Benedict
Assignee: Benedict
Priority: Trivial
 Fix For: 2.1.4






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7168) Add repair aware consistency levels

2015-03-13 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14360422#comment-14360422
 ] 

T Jake Luciani commented on CASSANDRA-7168:
---

bq. Do we actually need to add a special ConsistencyLevel?

There is nothing requiring it, but it seems pragmatic before making it the default (or at least to provide a way to opt out). This might make a good option once/if we get to CASSANDRA-8119.

There might be cases where we want the old behavior that I haven't thought of yet...
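
For illustration only, a tiny standalone sketch of the two-phase read described in the issue summary below (hypothetical types, not the proposed patch): repaired data is read from a single replica, and only the unrepaired deltas need a quorum.

{code}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class RepairedQuorumSketch
{
    static class Replica
    {
        final String repaired;   // data covered by repaired sstables
        final String unrepaired; // data not yet repaired

        Replica(String repaired, String unrepaired)
        {
            this.repaired = repaired;
            this.unrepaired = unrepaired;
        }
    }

    // phase 1: repaired data from a single replica; phase 2: unrepaired deltas from a quorum
    static List<String> read(List<Replica> replicas)
    {
        int quorum = replicas.size() / 2 + 1;
        List<String> pieces = new ArrayList<String>();
        pieces.add(replicas.get(0).repaired);
        for (int i = 0; i < quorum; i++)
            pieces.add(replicas.get(i).unrepaired);
        return pieces; // stands in for the usual timestamp-based reconciliation
    }

    public static void main(String[] args)
    {
        List<Replica> replicas = Arrays.asList(new Replica("r", "u1"),
                                               new Replica("r", "u2"),
                                               new Replica("r", "u3"));
        System.out.println(read(replicas)); // [r, u1, u2]
    }
}
{code}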

 Add repair aware consistency levels
 ---

 Key: CASSANDRA-7168
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7168
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: T Jake Luciani
  Labels: performance
 Fix For: 3.0


 With CASSANDRA-5351 and CASSANDRA-2424 I think there is an opportunity to 
 avoid a lot of extra disk I/O when running queries with higher consistency 
 levels.  
 Since repaired data is by definition consistent and we know which sstables 
 are repaired, we can optimize the read path by having a REPAIRED_QUORUM which 
 breaks reads into two phases:
  
   1) Read from one replica the result from the repaired sstables. 
   2) Read from a quorum only the un-repaired data.
 For the node performing 1) we can pipeline the call so it's a single hop.
 In the long run (assuming data is repaired regularly) we will end up with 
 much closer to CL.ONE performance while maintaining consistency.
 Some things to figure out:
   - If repairs fail on some nodes we can have a situation where we don't have 
 a consistent repaired state across the replicas.  
   



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-8238) NPE in SizeTieredCompactionStrategy.filterColdSSTables

2015-03-13 Thread Fredrik LS (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14359370#comment-14359370
 ] 

Fredrik LS edited comment on CASSANDRA-8238 at 3/13/15 11:12 AM:
-

Reproduced this in 2.1.3.
I'm not very familiar with the Cassandra codebase, but I took the liberty of doing some debugging.
This occurs when bulk loading from JMX.
The problem is that SSTableLoader has a static initializer setting
{code}
static
{
    Config.setClientMode(true);
}
{code}
(I guess going through JMX shouldn't be considered client mode; only running SSTableLoader standalone should be.)
Every SSTableReader created after the clientMode flag is set has its readMeter set to null, per the SSTableReader constructor. The SSTableReaders for SSTables that existed at startup do have a readMeter, so after JMX bulk loading there is a mix of SSTableReaders for the same CF, some with a readMeter and some with null. Combined with hot and cold SSTables in {code}SizeTieredCompactionStrategy.filterColdSSTables(...){code}, that triggers the NullPointerException when the CompactionExecutor kicks in and tries to compact the hot SSTables that existed at startup (which have a readMeter) together with the just-streamed cold SSTables from the JMX bulk load (which have readMeter set to null).
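
To make the failure mode concrete, a hedged standalone sketch (illustrative names, not the Cassandra classes): an unguarded use of the meter throws on the streamed sstables, while a null-safe accessor simply treats meter-less (client-mode) sstables as cold.

{code}
import java.util.Arrays;
import java.util.List;

public class ColdFilterSketch
{
    static class Reader
    {
        final Double readRate; // stands in for the read meter; null for client-mode/streamed sstables

        Reader(Double readRate) { this.readRate = readRate; }

        double rate() { return readRate == null ? 0.0 : readRate; } // null-safe: no meter means "cold"
    }

    static int countHot(List<Reader> readers, double threshold)
    {
        int hot = 0;
        for (Reader r : readers)
            if (r.rate() > threshold) // using r.readRate directly here would NPE on the null-meter reader
                hot++;
        return hot;
    }

    public static void main(String[] args)
    {
        List<Reader> readers = Arrays.asList(new Reader(15.0), new Reader(null), new Reader(2.0));
        System.out.println("hot sstables: " + countHot(readers, 5.0)); // prints 1, no NPE
    }
}
{code}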

Regards
/Fredrik



 NPE in SizeTieredCompactionStrategy.filterColdSSTables
 --

 Key: CASSANDRA-8238
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8238
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Tyler Hobbs
Assignee: Marcus Eriksson
 Fix For: 2.1.3

 Attachments: 0001-assert-that-readMeter-is-not-null.patch


 {noformat}
 ERROR [CompactionExecutor:15] 2014-10-31 15:28:32,318 
 CassandraDaemon.java:153 - Exception in thread 
 Thread[CompactionExecutor:15,1,main]
 java.lang.NullPointerException: null
 at 
 org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy.filterColdSSTables(SizeTieredCompactionStrategy.java:181)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy.getNextBackgroundSSTables(SizeTieredCompactionStrategy.java:83)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy.getNextBackgroundTask(SizeTieredCompactionStrategy.java:267)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:226)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 ~[na:1.7.0_72]
 at java.util.concurrent.FutureTask.run(FutureTask.java:262) ~[na:1.7.0_72]
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_72]
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_72]
 at java.lang.Thread.run(Thread.java:745) [na:1.7.0_72]
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8746) SSTableReader.cloneWithNewStart can drop too much page cache for compressed files

2015-03-13 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14360221#comment-14360221
 ] 

Benedict commented on CASSANDRA-8746:
-

Patch available [here|github.com/belliottsmith/cassandra/tree/8746]

Basically I just move the dropPageCache() call inside SegmentedFile (which is better encapsulation anyway); the compressed versions override it to look up the start position of the relevant segment and only drop data prior to that point.
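
To illustrate the shape of that change, a hedged standalone sketch (hypothetical class and method names, not the linked patch): the base file drops everything before the requested offset, while the compressed variant snaps the offset down to the start of the segment that contains it, so a partially-read segment stays cached.

{code}
public class SegmentedFileSketch
{
    static class PlainFile
    {
        void dropPageCacheBefore(long offset)
        {
            System.out.println("drop page cache before byte " + offset);
        }
    }

    static class CompressedFile extends PlainFile
    {
        final long[] segmentStarts = { 0, 65536, 131072 }; // uncompressed segment boundaries

        @Override
        void dropPageCacheBefore(long offset)
        {
            long snapped = 0;
            for (long start : segmentStarts)
                if (start <= offset)
                    snapped = start; // only drop data strictly before the containing segment
            super.dropPageCacheBefore(snapped);
        }
    }

    public static void main(String[] args)
    {
        new PlainFile().dropPageCacheBefore(100000);      // drops up to 100000
        new CompressedFile().dropPageCacheBefore(100000); // drops only up to 65536
    }
}
{code}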

 SSTableReader.cloneWithNewStart can drop too much page cache for compressed 
 files
 -

 Key: CASSANDRA-8746
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8746
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Benedict
Assignee: Benedict
Priority: Trivial
 Fix For: 2.1.4






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (CASSANDRA-8238) NPE in SizeTieredCompactionStrategy.filterColdSSTables

2015-03-13 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson reopened CASSANDRA-8238:


[~fredrikl74] thanks for the report, this helps a lot

 NPE in SizeTieredCompactionStrategy.filterColdSSTables
 --

 Key: CASSANDRA-8238
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8238
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Tyler Hobbs
Assignee: Marcus Eriksson
 Fix For: 2.1.3

 Attachments: 0001-assert-that-readMeter-is-not-null.patch


 {noformat}
 ERROR [CompactionExecutor:15] 2014-10-31 15:28:32,318 
 CassandraDaemon.java:153 - Exception in thread 
 Thread[CompactionExecutor:15,1,main]
 java.lang.NullPointerException: null
 at 
 org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy.filterColdSSTables(SizeTieredCompactionStrategy.java:181)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy.getNextBackgroundSSTables(SizeTieredCompactionStrategy.java:83)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy.getNextBackgroundTask(SizeTieredCompactionStrategy.java:267)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:226)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 ~[na:1.7.0_72]
 at java.util.concurrent.FutureTask.run(FutureTask.java:262) ~[na:1.7.0_72]
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_72]
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_72]
 at java.lang.Thread.run(Thread.java:745) [na:1.7.0_72]
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[3/6] cassandra git commit: Fix duplicate up/down messages sent to native clients

2015-03-13 Thread brandonwilliams
Fix duplicate up/down messages sent to native clients

Patch by Stefania, reviewed by brandonwilliams for CASSANDRA-7816


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2199a87a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2199a87a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2199a87a

Branch: refs/heads/trunk
Commit: 2199a87aab8322c41f1b590c0fd8f08f448952ca
Parents: 77c66bf
Author: Brandon Williams brandonwilli...@apache.org
Authored: Fri Mar 13 08:02:12 2015 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Fri Mar 13 08:02:12 2015 -0500

--
 CHANGES.txt |  1 +
 .../org/apache/cassandra/gms/EndpointState.java | 12 ++
 src/java/org/apache/cassandra/gms/Gossiper.java | 25 +++-
 3 files changed, 32 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2199a87a/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 382b3dd..8843908 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.0.14:
+ * Fix duplicate up/down messages sent to native clients (CASSANDRA-7816)
  * Expose commit log archive status via JMX (CASSANDRA-8734)
  * Provide better exceptions for invalid replication strategy parameters
(CASSANDRA-8909)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2199a87a/src/java/org/apache/cassandra/gms/EndpointState.java
--
diff --git a/src/java/org/apache/cassandra/gms/EndpointState.java 
b/src/java/org/apache/cassandra/gms/EndpointState.java
index 3df9155..518e575 100644
--- a/src/java/org/apache/cassandra/gms/EndpointState.java
+++ b/src/java/org/apache/cassandra/gms/EndpointState.java
@@ -46,12 +46,14 @@ public class EndpointState
 /* fields below do not get serialized */
 private volatile long updateTimestamp;
 private volatile boolean isAlive;
+private volatile boolean hasPendingEcho;
 
 EndpointState(HeartBeatState initialHbState)
 {
 hbState = initialHbState;
 updateTimestamp = System.nanoTime();
 isAlive = true;
+hasPendingEcho = false;
 }
 
 HeartBeatState getHeartBeatState()
@@ -113,6 +115,16 @@ public class EndpointState
 isAlive = false;
 }
 
+public boolean hasPendingEcho()
+{
+return hasPendingEcho;
+}
+
+public void markPendingEcho(boolean val)
+{
+hasPendingEcho = val;
+}
+
 public String toString()
 {
 return "EndpointState: HeartBeatState = " + hbState + ", AppStateMap = " + applicationState;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2199a87a/src/java/org/apache/cassandra/gms/Gossiper.java
--
diff --git a/src/java/org/apache/cassandra/gms/Gossiper.java 
b/src/java/org/apache/cassandra/gms/Gossiper.java
index a478405..97dc506 100644
--- a/src/java/org/apache/cassandra/gms/Gossiper.java
+++ b/src/java/org/apache/cassandra/gms/Gossiper.java
@@ -29,6 +29,7 @@ import javax.management.MBeanServer;
 import javax.management.ObjectName;
 
 import com.google.common.annotations.VisibleForTesting;
+import com.google.common.collect.ImmutableList;
 import com.google.common.util.concurrent.Uninterruptibles;
 
 import org.apache.cassandra.utils.Pair;
@@ -48,8 +49,6 @@ import org.apache.cassandra.net.MessagingService;
 import org.apache.cassandra.service.StorageService;
 import org.apache.cassandra.utils.FBUtilities;
 
-import com.google.common.collect.ImmutableList;
-
 /**
  * This module is responsible for Gossiping information for the local 
endpoint. This abstraction
  * maintains the list of live and dead endpoints. Periodically i.e. every 1 
second this module
@@ -878,6 +877,12 @@ public class Gossiper implements 
IFailureDetectionEventListener, GossiperMBean
 return;
 }
 
+if (localState.hasPendingEcho())
+{
+logger.debug("{} has already a pending echo, skipping it", localState);
+return;
+}
+
 localState.markDead();
 
 MessageOut<EchoMessage> echoMessage = new MessageOut<EchoMessage>(MessagingService.Verb.ECHO, new EchoMessage(), EchoMessage.serializer);
@@ -891,9 +896,12 @@ public class Gossiper implements 
IFailureDetectionEventListener, GossiperMBean
 
 public void response(MessageIn msg)
 {
+localState.markPendingEcho(false);
 realMarkAlive(addr, localState);
 }
 };
+
+localState.markPendingEcho(true);
 MessagingService.instance().sendRR(echoMessage, addr, echoHandler);
 }
 
@@ -936,9 +944,10 @@ 

[6/6] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2015-03-13 Thread brandonwilliams
Merge branch 'cassandra-2.1' into trunk

Conflicts:
src/java/org/apache/cassandra/gms/Gossiper.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/65d5ef26
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/65d5ef26
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/65d5ef26

Branch: refs/heads/trunk
Commit: 65d5ef26c50c2e394637f6ba1afe0b80fd1d36a2
Parents: c059a56 1376b8e
Author: Brandon Williams brandonwilli...@apache.org
Authored: Fri Mar 13 08:06:15 2015 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Fri Mar 13 08:06:15 2015 -0500

--
 CHANGES.txt |  1 +
 .../org/apache/cassandra/gms/EndpointState.java | 12 ++
 src/java/org/apache/cassandra/gms/Gossiper.java | 25 +++-
 3 files changed, 32 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/65d5ef26/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/65d5ef26/src/java/org/apache/cassandra/gms/Gossiper.java
--
diff --cc src/java/org/apache/cassandra/gms/Gossiper.java
index 4584044,ac98c53..1820c06
--- a/src/java/org/apache/cassandra/gms/Gossiper.java
+++ b/src/java/org/apache/cassandra/gms/Gossiper.java
@@@ -958,12 -959,14 +967,14 @@@ public class Gossiper implements IFailu
  logger.info("Node {} is now part of the cluster", ep);
  }
  if (logger.isTraceEnabled())
 -logger.trace("Adding endpoint state for " + ep);
 +logger.trace("Adding endpoint state for {}", ep);
  endpointStateMap.put(ep, epState);
  
- // the node restarted: it is up to the subscriber to take whatever 
action is necessary
- for (IEndpointStateChangeSubscriber subscriber : subscribers)
- subscriber.onRestart(ep, epState);
+ if (localEpState != null)
+ {   // the node restarted: it is up to the subscriber to take 
whatever action is necessary
+ for (IEndpointStateChangeSubscriber subscriber : subscribers)
+ subscriber.onRestart(ep, localEpState);
+ }
  
  if (!isDeadState(epState))
  markAlive(ep, epState);
@@@ -1042,7 -1046,8 +1054,8 @@@
  applyNewStates(ep, localEpStatePtr, remoteState);
  }
  else if (logger.isTraceEnabled())
 -logger.trace("Ignoring remote version " + remoteMaxVersion + " <= " + localMaxVersion + " for " + ep);
 +logger.trace("Ignoring remote version {} <= {} for {}", remoteMaxVersion, localMaxVersion, ep);
+ 
  if (!localEpStatePtr.isAlive() && !isDeadState(localEpStatePtr)) // unless of course, it was dead
  markAlive(ep, localEpStatePtr);
  }



[jira] [Commented] (CASSANDRA-8238) NPE in SizeTieredCompactionStrategy.filterColdSSTables

2015-03-13 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14360217#comment-14360217
 ] 

Marcus Eriksson commented on CASSANDRA-8238:


Note that filterColdSSTables is removed in 2.1.4, so the NPE will not show up there; it is still a bug that we set client mode when loading via JMX, though.

 NPE in SizeTieredCompactionStrategy.filterColdSSTables
 --

 Key: CASSANDRA-8238
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8238
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Tyler Hobbs
Assignee: Marcus Eriksson
 Fix For: 2.1.3

 Attachments: 0001-assert-that-readMeter-is-not-null.patch


 {noformat}
 ERROR [CompactionExecutor:15] 2014-10-31 15:28:32,318 
 CassandraDaemon.java:153 - Exception in thread 
 Thread[CompactionExecutor:15,1,main]
 java.lang.NullPointerException: null
 at 
 org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy.filterColdSSTables(SizeTieredCompactionStrategy.java:181)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy.getNextBackgroundSSTables(SizeTieredCompactionStrategy.java:83)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy.getNextBackgroundTask(SizeTieredCompactionStrategy.java:267)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:226)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 ~[na:1.7.0_72]
 at java.util.concurrent.FutureTask.run(FutureTask.java:262) ~[na:1.7.0_72]
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_72]
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_72]
 at java.lang.Thread.run(Thread.java:745) [na:1.7.0_72]
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-6809) Compressed Commit Log

2015-03-13 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14360367#comment-14360367
 ] 

Benedict commented on CASSANDRA-6809:
-

bq. Good catch, the number of segments needed can depend on the sync period, 
thus this does need to be exposed. Done, as a non-published option in 
cassandra.yaml for now

What's stopping us from ensuring there are always enough (or from treating this as a hard limit rather than an exact count)? It seems like keeping the number of buffers in flight + 1 should be enough to reach a steady state rapidly without overcommitting memory...
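
To make the sizing argument concrete, a hedged standalone sketch (illustrative names, not the patch) of a pool capped at in-flight + 1 buffers: writers block on the pool instead of allocating past the cap, and recycle buffers when the sync thread is done with them.

{code}
import java.nio.ByteBuffer;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BoundedBufferPool
{
    private final BlockingQueue<ByteBuffer> pool;

    public BoundedBufferPool(int maxBuffers, int bufferSize)
    {
        pool = new ArrayBlockingQueue<ByteBuffer>(maxBuffers);
        for (int i = 0; i < maxBuffers; i++)
            pool.add(ByteBuffer.allocateDirect(bufferSize));
    }

    public ByteBuffer take() throws InterruptedException
    {
        return pool.take(); // blocks instead of allocating beyond the cap
    }

    public void recycle(ByteBuffer buffer)
    {
        buffer.clear();
        pool.offer(buffer);
    }

    public static void main(String[] args) throws InterruptedException
    {
        BoundedBufferPool bufferPool = new BoundedBufferPool(3 + 1, 1 << 16); // in-flight + 1
        ByteBuffer buffer = bufferPool.take();
        // ... fill, compress, hand off to the sync thread ...
        bufferPool.recycle(buffer);
    }
}
{code}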

 Compressed Commit Log
 -

 Key: CASSANDRA-6809
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6809
 Project: Cassandra
  Issue Type: Improvement
Reporter: Benedict
Assignee: Branimir Lambov
Priority: Minor
  Labels: docs-impacting, performance
 Fix For: 3.0

 Attachments: ComitLogStress.java, logtest.txt


 It seems an unnecessary oversight that we don't compress the commit log. 
 Doing so should improve throughput, but some care will need to be taken to 
 ensure we use as much of a segment as possible. I propose decoupling the 
 writing of the records from the segments. Basically write into a (queue of) 
 DirectByteBuffer, and have the sync thread compress, say, ~64K chunks every X 
 MB written to the CL (where X is ordinarily CLS size), and then pack as many 
 of the compressed chunks into a CLS as possible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8238) NPE in SizeTieredCompactionStrategy.filterColdSSTables

2015-03-13 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-8238:
---
Reviewer: Yuki Morishita

[~yukim] to review

 NPE in SizeTieredCompactionStrategy.filterColdSSTables
 --

 Key: CASSANDRA-8238
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8238
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Tyler Hobbs
Assignee: Marcus Eriksson
 Fix For: 2.1.4

 Attachments: 0001-assert-that-readMeter-is-not-null.patch, 
 0001-dont-always-set-client-mode-for-sstable-loader.patch


 {noformat}
 ERROR [CompactionExecutor:15] 2014-10-31 15:28:32,318 
 CassandraDaemon.java:153 - Exception in thread 
 Thread[CompactionExecutor:15,1,main]
 java.lang.NullPointerException: null
 at 
 org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy.filterColdSSTables(SizeTieredCompactionStrategy.java:181)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy.getNextBackgroundSSTables(SizeTieredCompactionStrategy.java:83)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy.getNextBackgroundTask(SizeTieredCompactionStrategy.java:267)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:226)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 ~[na:1.7.0_72]
 at java.util.concurrent.FutureTask.run(FutureTask.java:262) ~[na:1.7.0_72]
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_72]
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_72]
 at java.lang.Thread.run(Thread.java:745) [na:1.7.0_72]
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-6809) Compressed Commit Log

2015-03-13 Thread Branimir Lambov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14360347#comment-14360347
 ] 

Branimir Lambov commented on CASSANDRA-6809:


Rebased and updated 
[here|https://github.com/apache/cassandra/compare/trunk...blambov:6809-compressed-logs-rebase],
 now using ByteBuffers for compressing. Some fixes to the {{DeflateCompressor}} 
implementation are included.

{quote}
At 
https://github.com/apache/cassandra/compare/trunk...blambov:6809-compressed-logs#diff-d07279710c482983e537aed26df80400R340
If archiving fails it appears to delete the segment now. Is that the right 
thing to do?
{quote}
It deletes if archival was successful ({{deleteFile = archiveSuccess}}). The 
old code was doing the same thing, a bit more confusingly ({{deleteFile = 
!archiveSuccess ? false : true}}).

bq. CLSM's understanding of segment size is skewed because compressed segments are smaller than the expected segment size in reality. With real compression ratios it's going to be off by 30-50%. It would be nice if its tracking could be corrected once the size is known.

Unfortunately that's not trivial. Measurement must happen when the file is done writing, which is a point CLSM doesn't currently have access to; moreover, that could be triggered from either sync() or close() on the segment, and I don't want to add the risk of not getting all updates right to this patch. I changed the description of the parameter to reflect what it currently measures.

This can and should be fixed soon after, though, and I'll open an issue as soon 
as this is committed.

bq. For the buffer pooling: I would be tempted not to wait for the collector to get to the DBB. If the DBB is promoted due to compaction or some other allocation hog, it may not be reclaimed for some time. In CompressedSegment.close, maybe null the field and then invoke the cleaner on the buffer. There is a utility method for doing that so you don't have to access the interface directly (it generates a compiler warning).

Done.
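
For reference, a hedged standalone sketch of the eager-free pattern being discussed (reflection-based, JDK 7/8 era; DirectBufferFree is an illustrative name, not the utility method mentioned above):

{code}
import java.lang.reflect.Method;
import java.nio.ByteBuffer;

public final class DirectBufferFree
{
    // Best-effort eager release of a DirectByteBuffer's native memory.
    public static void clean(ByteBuffer buffer)
    {
        if (buffer == null || !buffer.isDirect())
            return;
        try
        {
            Method cleanerMethod = buffer.getClass().getMethod("cleaner");
            cleanerMethod.setAccessible(true);                      // DirectByteBuffer is not a public class
            Object cleaner = cleanerMethod.invoke(buffer);          // sun.misc.Cleaner on JDK 7/8
            cleaner.getClass().getMethod("clean").invoke(cleaner);  // frees the off-heap memory now
        }
        catch (Exception e)
        {
            // fall back to letting GC reclaim the buffer eventually
        }
    }

    public static void main(String[] args)
    {
        ByteBuffer buffer = ByteBuffer.allocateDirect(1 << 20);
        clean(buffer);  // the buffer must not be touched after this point
        buffer = null;  // drop the reference so nothing can reuse the freed buffer
    }
}
{code}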

bq. Also make MAX_BUFFERPOOL_SIZE configurable via a property. I have been prefixing internal C* properties with "cassandra.". I suspect that at several hundred megabytes a second we will have more than three 32 MB buffers in flight. I have a personal fear of shipping constants that aren't quite right, and putting them all in properties can save waiting for code changes.

Good catch, the number of segments needed can depend on the sync period, thus 
this does need to be exposed. Done, as a non-published option in cassandra.yaml 
for now.
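
For illustration, a hedged sketch of the kind of "cassandra."-prefixed override the review suggests (hypothetical property name; as noted above, the patch reportedly exposes this as a non-published cassandra.yaml option instead):

{code}
public final class BufferPoolConfigSketch
{
    // Could be overridden with -Dcassandra.commitlog_max_bufferpool_size_in_mb=128 on the command line.
    public static final int MAX_BUFFERPOOL_SIZE_MB =
            Integer.getInteger("cassandra.commitlog_max_bufferpool_size_in_mb", 96);

    public static void main(String[] args)
    {
        System.out.println("commit log buffer pool cap: " + MAX_BUFFERPOOL_SIZE_MB + " MB");
    }
}
{code}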

bq. I tested on Linux. If I drop the page cache on the new code it doesn't 
generate reads. I tested the old code and it generated a few hundred megabytes 
of reads.

Thank you.


 Compressed Commit Log
 -

 Key: CASSANDRA-6809
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6809
 Project: Cassandra
  Issue Type: Improvement
Reporter: Benedict
Assignee: Branimir Lambov
Priority: Minor
  Labels: docs-impacting, performance
 Fix For: 3.0

 Attachments: ComitLogStress.java, logtest.txt


 It seems an unnecessary oversight that we don't compress the commit log. 
 Doing so should improve throughput, but some care will need to be taken to 
 ensure we use as much of a segment as possible. I propose decoupling the 
 writing of the records from the segments. Basically write into a (queue of) 
 DirectByteBuffer, and have the sync thread compress, say, ~64K chunks every X 
 MB written to the CL (where X is ordinarily CLS size), and then pack as many 
 of the compressed chunks into a CLS as possible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8238) NPE in SizeTieredCompactionStrategy.filterColdSSTables

2015-03-13 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-8238:
---
Attachment: 0001-dont-always-set-client-mode-for-sstable-loader.patch

Attaching a patch that sets client mode outside of SSTableLoader.

Note that this might break things for people who have extended SSTableLoader and written their own tools; I added an error message that tells them what to do, but I don't know if that is enough?

 NPE in SizeTieredCompactionStrategy.filterColdSSTables
 --

 Key: CASSANDRA-8238
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8238
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Tyler Hobbs
Assignee: Marcus Eriksson
 Fix For: 2.1.4

 Attachments: 0001-assert-that-readMeter-is-not-null.patch, 
 0001-dont-always-set-client-mode-for-sstable-loader.patch


 {noformat}
 ERROR [CompactionExecutor:15] 2014-10-31 15:28:32,318 
 CassandraDaemon.java:153 - Exception in thread 
 Thread[CompactionExecutor:15,1,main]
 java.lang.NullPointerException: null
 at 
 org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy.filterColdSSTables(SizeTieredCompactionStrategy.java:181)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy.getNextBackgroundSSTables(SizeTieredCompactionStrategy.java:83)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy.getNextBackgroundTask(SizeTieredCompactionStrategy.java:267)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:226)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 ~[na:1.7.0_72]
 at java.util.concurrent.FutureTask.run(FutureTask.java:262) ~[na:1.7.0_72]
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_72]
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_72]
 at java.lang.Thread.run(Thread.java:745) [na:1.7.0_72]
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8337) mmap underflow during validation compaction

2015-03-13 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14360537#comment-14360537
 ] 

Joshua McKenzie commented on CASSANDRA-8337:


[~sterligovak] Any update on your side regarding this ticket? I don't believe 
we've seen any other instances of this from the 2.1 branch thus far.

 mmap underflow during validation compaction
 ---

 Key: CASSANDRA-8337
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8337
 Project: Cassandra
  Issue Type: Bug
Reporter: Alexander Sterligov
Assignee: Joshua McKenzie
 Fix For: 2.1.4

 Attachments: 8337_v1.txt, thread_dump


 During full parallel repair I often get errors like the following
 {quote}
 [2014-11-19 01:02:39,355] Repair session 116beaf0-6f66-11e4-afbb-c1c082008cbe 
 for range (3074457345618263602,-9223372036854775808] failed with error 
 org.apache.cassandra.exceptions.RepairException: [repair 
 #116beaf0-6f66-11e4-afbb-c1c082008cbe on iss/target_state_history, 
 (3074457345618263602,-9223372036854775808]] Validation failed in 
 /95.108.242.19
 {quote}
 In the node's log there are always the same exceptions:
 {quote}
 ERROR [ValidationExecutor:2] 2014-11-19 01:02:10,847 
 JVMStabilityInspector.java:94 - JVM state determined to be unstable.  Exiting 
 forcefully due to:
 org.apache.cassandra.io.sstable.CorruptSSTableException: java.io.IOException: 
 mmap segment underflow; remaining is 15 but 47 requested
 at 
 org.apache.cassandra.io.sstable.SSTableReader.getPosition(SSTableReader.java:1518)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.io.sstable.SSTableReader.getPosition(SSTableReader.java:1385)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.io.sstable.SSTableReader.getPositionsForRanges(SSTableReader.java:1315)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.io.sstable.SSTableReader.getScanner(SSTableReader.java:1706)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.io.sstable.SSTableReader.getScanner(SSTableReader.java:1694)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.db.compaction.AbstractCompactionStrategy.getScanners(AbstractCompactionStrategy.java:276)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.db.compaction.WrappingCompactionStrategy.getScanners(WrappingCompactionStrategy.java:320)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.db.compaction.CompactionManager.doValidationCompaction(CompactionManager.java:917)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.db.compaction.CompactionManager.access$600(CompactionManager.java:97)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.db.compaction.CompactionManager$9.call(CompactionManager.java:557)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
 ~[na:1.7.0_51]
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_51]
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_51]
 at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
 Caused by: java.io.IOException: mmap segment underflow; remaining is 15 but 
 47 requested
 at 
 org.apache.cassandra.io.util.MappedFileDataInput.readBytes(MappedFileDataInput.java:135)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:348) 
 ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(ByteBufferUtil.java:327)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 at 
 org.apache.cassandra.io.sstable.SSTableReader.getPosition(SSTableReader.java:1460)
  ~[apache-cassandra-2.1.2.jar:2.1.2]
 ... 13 common frames omitted
 {quote}
 Now I'm using the die disk_failure_policy to detect such conditions faster, but I get them even with the stop policy.
 Streams related to the host with such an exception hang. A thread dump is attached. Only a restart helps.
 After a retry I get errors from other nodes.
 scrub doesn't help and reports that the sstables are ok.
 Sequential repairs don't cause such exceptions.
 Load is about 1000 write rps and 50 read rps per node.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[3/3] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2015-03-13 Thread brandonwilliams
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/994d8f50
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/994d8f50
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/994d8f50

Branch: refs/heads/trunk
Commit: 994d8f503191d255afccc8d127ded791350e5a2c
Parents: 65d5ef2 9caf045
Author: Brandon Williams brandonwilli...@apache.org
Authored: Fri Mar 13 10:58:16 2015 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Fri Mar 13 10:58:16 2015 -0500

--

--




[2/3] cassandra git commit: Allow invalidating permissions and cache time

2015-03-13 Thread brandonwilliams
Allow invalidating permissions and cache time

Patch by brandonwilliams reviewed by aleksey for CASSANDRA-8722


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9caf0457
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9caf0457
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9caf0457

Branch: refs/heads/trunk
Commit: 9caf0457ad8920788506f902ce4d9c130c881031
Parents: 1376b8e
Author: Brandon Williams brandonwilli...@apache.org
Authored: Fri Mar 13 10:57:43 2015 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Fri Mar 13 10:57:43 2015 -0500

--
 CHANGES.txt |  2 +-
 src/java/org/apache/cassandra/auth/Auth.java|  5 +-
 .../apache/cassandra/auth/PermissionsCache.java | 69 
 .../cassandra/auth/PermissionsCacheMBean.java   | 31 +
 .../org/apache/cassandra/config/Config.java |  4 +-
 .../cassandra/config/DatabaseDescriptor.java| 10 +++
 6 files changed, 102 insertions(+), 19 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/9caf0457/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index cd29e9d..04861f0 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.4
+ * Allow invalidating permissions and cache time (CASSANDRA-8722)
  * Log warning when queries that will require ALLOW FILTERING in Cassandra 3.0
are executed (CASSANDRA-8418)
  * Fix cassandra-stress so it respects the CL passed in user mode 
(CASSANDRA-8948)
@@ -40,7 +41,6 @@
  * Fix Adler32 digest for compressed sstables (CASSANDRA-8778)
  * Add nodetool statushandoff/statusbackup (CASSANDRA-8912)
 Merged from 2.0:
-2.0.14:
  * Fix duplicate up/down messages sent to native clients (CASSANDRA-7816)
  * Expose commit log archive status via JMX (CASSANDRA-8734)
  * Provide better exceptions for invalid replication strategy parameters

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9caf0457/src/java/org/apache/cassandra/auth/Auth.java
--
diff --git a/src/java/org/apache/cassandra/auth/Auth.java 
b/src/java/org/apache/cassandra/auth/Auth.java
index 05e5061..dac2af8 100644
--- a/src/java/org/apache/cassandra/auth/Auth.java
+++ b/src/java/org/apache/cassandra/auth/Auth.java
@@ -57,10 +57,7 @@ public class Auth
 public static final String USERS_CF = "users";
 
 // User-level permissions cache.
-private static final PermissionsCache permissionsCache = new 
PermissionsCache(DatabaseDescriptor.getPermissionsValidity(),
-   
   DatabaseDescriptor.getPermissionsUpdateInterval(),
-   
   DatabaseDescriptor.getPermissionsCacheMaxEntries(),
-   
   DatabaseDescriptor.getAuthorizer());
+private static final PermissionsCache permissionsCache = new 
PermissionsCache(DatabaseDescriptor.getAuthorizer());
 
 private static final String USERS_CF_SCHEMA = String.format("CREATE TABLE %s.%s ("
 + "name text,"

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9caf0457/src/java/org/apache/cassandra/auth/PermissionsCache.java
--
diff --git a/src/java/org/apache/cassandra/auth/PermissionsCache.java 
b/src/java/org/apache/cassandra/auth/PermissionsCache.java
index 9e0dfa9..bc96d82 100644
--- a/src/java/org/apache/cassandra/auth/PermissionsCache.java
+++ b/src/java/org/apache/cassandra/auth/PermissionsCache.java
@@ -17,9 +17,11 @@
  */
 package org.apache.cassandra.auth;
 
+import java.lang.management.ManagementFactory;
 import java.util.Set;
 import java.util.concurrent.*;
 
+import org.apache.cassandra.config.DatabaseDescriptor;
 import com.google.common.cache.CacheBuilder;
 import com.google.common.cache.CacheLoader;
 import com.google.common.cache.LoadingCache;
@@ -31,19 +33,33 @@ import org.slf4j.LoggerFactory;
 import org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor;
 import org.apache.cassandra.utils.Pair;
 
-public class PermissionsCache
+import javax.management.MBeanServer;
+import javax.management.ObjectName;
+
+public class PermissionsCache implements PermissionsCacheMBean
 {
 private static final Logger logger = 
LoggerFactory.getLogger(PermissionsCache.class);
 
+private final String MBEAN_NAME = "org.apache.cassandra.auth:type=PermissionsCache";
+
+private final ThreadPoolExecutor cacheRefreshExecutor = new DebuggableThreadPoolExecutor("PermissionsCacheRefresh",
  

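The diff above is cut off by the digest, but the shape of the change is visible: PermissionsCache now registers itself as a JMX MBean so its entries can be invalidated at runtime instead of waiting for the validity period to expire. The following is a minimal, self-contained sketch of that pattern, assuming Guava's LoadingCache; the type names, ObjectName, and loader here are illustrative, not the actual Cassandra classes.

{code}
// Minimal sketch (not the Cassandra code) of a Guava cache exposed for
// invalidation over JMX; each public type would live in its own .java file.
import java.lang.management.ManagementFactory;
import java.util.concurrent.TimeUnit;
import javax.management.MBeanServer;
import javax.management.ObjectName;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

public interface SimpleCacheMBean
{
    void invalidate();
}

public class SimpleCache implements SimpleCacheMBean
{
    private final LoadingCache<String, String> cache =
        CacheBuilder.newBuilder()
                    .expireAfterWrite(2, TimeUnit.SECONDS)
                    .build(new CacheLoader<String, String>()
                    {
                        public String load(String key)
                        {
                            return key.toUpperCase(); // stand-in for the real lookup
                        }
                    });

    public SimpleCache() throws Exception
    {
        // Register with the platform MBeanServer so JMX clients can call
        // invalidate() at runtime instead of waiting for entries to expire.
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        mbs.registerMBean(this, new ObjectName("org.example:type=SimpleCache"));
    }

    public String get(String key)
    {
        return cache.getUnchecked(key);
    }

    public void invalidate()
    {
        cache.invalidateAll();
    }
}
{code}
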
[jira] [Commented] (CASSANDRA-8238) NPE in SizeTieredCompactionStrategy.filterColdSSTables

2015-03-13 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14360572#comment-14360572
 ] 

Yuki Morishita commented on CASSANDRA-8238:
---

+1

 NPE in SizeTieredCompactionStrategy.filterColdSSTables
 --

 Key: CASSANDRA-8238
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8238
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Tyler Hobbs
Assignee: Marcus Eriksson
 Fix For: 2.1.4

 Attachments: 0001-assert-that-readMeter-is-not-null.patch, 
 0001-dont-always-set-client-mode-for-sstable-loader.patch


 {noformat}
 ERROR [CompactionExecutor:15] 2014-10-31 15:28:32,318 
 CassandraDaemon.java:153 - Exception in thread 
 Thread[CompactionExecutor:15,1,main]
 java.lang.NullPointerException: null
 at 
 org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy.filterColdSSTables(SizeTieredCompactionStrategy.java:181)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy.getNextBackgroundSSTables(SizeTieredCompactionStrategy.java:83)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy.getNextBackgroundTask(SizeTieredCompactionStrategy.java:267)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:226)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 ~[na:1.7.0_72]
 at java.util.concurrent.FutureTask.run(FutureTask.java:262) ~[na:1.7.0_72]
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_72]
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_72]
 at java.lang.Thread.run(Thread.java:745) [na:1.7.0_72]
 {noformat}
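
Per the attachment names, the fix combines an assertion that readMeter is non-null with no longer forcing client mode in sstableloader; the crash itself comes from dereferencing a null read meter while scoring sstable hotness. A rough sketch of the defensive form of that check, using hypothetical stand-in types rather than the real SSTableReader, is below.

{code}
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for the part of SSTableReader the hotness check
// consults; not the real class.
interface MeteredSSTable
{
    Double readRate(); // null when no read meter is attached (e.g. offline/client mode)
}

public class ColdSSTableFilter
{
    // Keep sstables that are either unmetered (hotness unknown) or still being read.
    // The null check is the point: dereferencing a null meter is what produced the NPE.
    public static List<MeteredSSTable> filterCold(List<MeteredSSTable> sstables)
    {
        List<MeteredSSTable> kept = new ArrayList<>();
        for (MeteredSSTable s : sstables)
        {
            Double rate = s.readRate();
            if (rate == null || rate > 0.0)
                kept.add(s);
        }
        return kept;
    }
}
{code}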



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8967) Allow RolesCache to be invalidated

2015-03-13 Thread Brandon Williams (JIRA)
Brandon Williams created CASSANDRA-8967:
---

 Summary: Allow RolesCache to be invalidated
 Key: CASSANDRA-8967
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8967
 Project: Cassandra
  Issue Type: New Feature
Reporter: Brandon Williams
Assignee: Brandon Williams
 Fix For: 3.0


Much like CASSANDRA-8722, we should add this to RolesCache as well.
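
However the RolesCache MBean ends up being shaped, operators would drive it the same way as the PermissionsCache one from CASSANDRA-8722: over JMX. A hedged example of invoking a no-arg invalidate operation from a standalone JMX client follows; the operation name, ObjectName, and port are assumptions for illustration, not a documented interface.

{code}
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class InvalidateAuthCache
{
    public static void main(String[] args) throws Exception
    {
        // Connect to a node's JMX endpoint (7199 is the usual default port) and
        // invoke a no-arg "invalidate" operation on the cache MBean.
        JMXServiceURL url = new JMXServiceURL(
            "service:jmx:rmi:///jndi/rmi://127.0.0.1:7199/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url))
        {
            MBeanServerConnection conn = connector.getMBeanServerConnection();
            conn.invoke(new ObjectName("org.apache.cassandra.auth:type=PermissionsCache"),
                        "invalidate", null, null);
        }
    }
}
{code}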



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[1/3] cassandra git commit: Allow invalidating permissions and cache time

2015-03-13 Thread brandonwilliams
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 1376b8eff -> 9caf0457a
  refs/heads/trunk 65d5ef26c -> 994d8f503


Allow invalidating permissions and cache time

Patch by brandonwilliams; reviewed by aleksey for CASSANDRA-8722


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9caf0457
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9caf0457
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9caf0457

Branch: refs/heads/cassandra-2.1
Commit: 9caf0457ad8920788506f902ce4d9c130c881031
Parents: 1376b8e
Author: Brandon Williams brandonwilli...@apache.org
Authored: Fri Mar 13 10:57:43 2015 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Fri Mar 13 10:57:43 2015 -0500

--
 CHANGES.txt |  2 +-
 src/java/org/apache/cassandra/auth/Auth.java|  5 +-
 .../apache/cassandra/auth/PermissionsCache.java | 69 
 .../cassandra/auth/PermissionsCacheMBean.java   | 31 +
 .../org/apache/cassandra/config/Config.java |  4 +-
 .../cassandra/config/DatabaseDescriptor.java| 10 +++
 6 files changed, 102 insertions(+), 19 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/9caf0457/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index cd29e9d..04861f0 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.4
+ * Allow invalidating permissions and cache time (CASSANDRA-8722)
  * Log warning when queries that will require ALLOW FILTERING in Cassandra 3.0
are executed (CASSANDRA-8418)
  * Fix cassandra-stress so it respects the CL passed in user mode 
(CASSANDRA-8948)
@@ -40,7 +41,6 @@
  * Fix Adler32 digest for compressed sstables (CASSANDRA-8778)
  * Add nodetool statushandoff/statusbackup (CASSANDRA-8912)
 Merged from 2.0:
-2.0.14:
  * Fix duplicate up/down messages sent to native clients (CASSANDRA-7816)
  * Expose commit log archive status via JMX (CASSANDRA-8734)
  * Provide better exceptions for invalid replication strategy parameters

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9caf0457/src/java/org/apache/cassandra/auth/Auth.java
--
diff --git a/src/java/org/apache/cassandra/auth/Auth.java 
b/src/java/org/apache/cassandra/auth/Auth.java
index 05e5061..dac2af8 100644
--- a/src/java/org/apache/cassandra/auth/Auth.java
+++ b/src/java/org/apache/cassandra/auth/Auth.java
@@ -57,10 +57,7 @@ public class Auth
 public static final String USERS_CF = "users";
 
 // User-level permissions cache.
-private static final PermissionsCache permissionsCache = new PermissionsCache(DatabaseDescriptor.getPermissionsValidity(),
-                                                                              DatabaseDescriptor.getPermissionsUpdateInterval(),
-                                                                              DatabaseDescriptor.getPermissionsCacheMaxEntries(),
-                                                                              DatabaseDescriptor.getAuthorizer());
+private static final PermissionsCache permissionsCache = new PermissionsCache(DatabaseDescriptor.getAuthorizer());
 
 private static final String USERS_CF_SCHEMA = String.format("CREATE TABLE %s.%s ("
 + "name text,"

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9caf0457/src/java/org/apache/cassandra/auth/PermissionsCache.java
--
diff --git a/src/java/org/apache/cassandra/auth/PermissionsCache.java 
b/src/java/org/apache/cassandra/auth/PermissionsCache.java
index 9e0dfa9..bc96d82 100644
--- a/src/java/org/apache/cassandra/auth/PermissionsCache.java
+++ b/src/java/org/apache/cassandra/auth/PermissionsCache.java
@@ -17,9 +17,11 @@
  */
 package org.apache.cassandra.auth;
 
+import java.lang.management.ManagementFactory;
 import java.util.Set;
 import java.util.concurrent.*;
 
+import org.apache.cassandra.config.DatabaseDescriptor;
 import com.google.common.cache.CacheBuilder;
 import com.google.common.cache.CacheLoader;
 import com.google.common.cache.LoadingCache;
@@ -31,19 +33,33 @@ import org.slf4j.LoggerFactory;
 import org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor;
 import org.apache.cassandra.utils.Pair;
 
-public class PermissionsCache
+import javax.management.MBeanServer;
+import javax.management.ObjectName;
+
+public class PermissionsCache implements PermissionsCacheMBean
 {
 private static final Logger logger = 
LoggerFactory.getLogger(PermissionsCache.class);
 
+private final String MBEAN_NAME = 

[jira] [Commented] (CASSANDRA-8535) java.lang.RuntimeException: Failed to rename XXX to YYY

2015-03-13 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14360653#comment-14360653
 ] 

Marcus Eriksson commented on CASSANDRA-8535:


+1 on v3

 java.lang.RuntimeException: Failed to rename XXX to YYY
 ---

 Key: CASSANDRA-8535
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8535
 Project: Cassandra
  Issue Type: Bug
 Environment: Windows 2008 X64
Reporter: Leonid Shalupov
Assignee: Joshua McKenzie
  Labels: Windows
 Fix For: 2.1.4

 Attachments: 8535_v1.txt, 8535_v2.txt, 8535_v3.txt


 {code}
 java.lang.RuntimeException: Failed to rename 
 build\test\cassandra\data;0\system\schema_keyspaces-b0f2235744583cdb9631c43e59ce3676\system-schema_keyspaces-tmp-ka-5-Index.db
  to 
 build\test\cassandra\data;0\system\schema_keyspaces-b0f2235744583cdb9631c43e59ce3676\system-schema_keyspaces-ka-5-Index.db
   at 
 org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:170) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:154) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.rename(SSTableWriter.java:569) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.rename(SSTableWriter.java:561) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.close(SSTableWriter.java:535) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.finish(SSTableWriter.java:470) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableRewriter.finishAndMaybeThrow(SSTableRewriter.java:349)
  ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableRewriter.finish(SSTableRewriter.java:324)
  ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableRewriter.finish(SSTableRewriter.java:304)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:200)
  ~[main/:na]
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
 ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:75)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:226)
  ~[main/:na]
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 ~[na:1.7.0_45]
   at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
 ~[na:1.7.0_45]
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_45]
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_45]
   at java.lang.Thread.run(Thread.java:744) [na:1.7.0_45]
 Caused by: java.nio.file.FileSystemException: 
 build\test\cassandra\data;0\system\schema_keyspaces-b0f2235744583cdb9631c43e59ce3676\system-schema_keyspaces-tmp-ka-5-Index.db
  -> 
 build\test\cassandra\data;0\system\schema_keyspaces-b0f2235744583cdb9631c43e59ce3676\system-schema_keyspaces-ka-5-Index.db:
  The process cannot access the file because it is being used by another 
 process.
   at 
 sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86) 
 ~[na:1.7.0_45]
   at 
 sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97) 
 ~[na:1.7.0_45]
   at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:301) 
 ~[na:1.7.0_45]
   at 
 sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287) 
 ~[na:1.7.0_45]
   at java.nio.file.Files.move(Files.java:1345) ~[na:1.7.0_45]
   at 
 org.apache.cassandra.io.util.FileUtils.atomicMoveWithFallback(FileUtils.java:184)
  ~[main/:na]
   at 
 org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:166) 
 ~[main/:na]
   ... 18 common frames omitted
 {code}
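
For readers tracing the failure: the FileUtils.atomicMoveWithFallback frame in the trace attempts an atomic rename and falls back to a plain move when the filesystem cannot do it atomically. On Windows, even the fallback throws the "being used by another process" FileSystemException if some other handle still has the source file open. A simplified sketch of that move-with-fallback pattern (not the actual Cassandra FileUtils code) is shown below.

{code}
import java.io.IOException;
import java.nio.file.AtomicMoveNotSupportedException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class MoveWithFallback
{
    // Try an atomic rename first; fall back to a non-atomic move if the
    // filesystem cannot honor ATOMIC_MOVE. Either call can still fail on
    // Windows when another process holds the source file open.
    public static void atomicMoveWithFallback(Path from, Path to) throws IOException
    {
        try
        {
            Files.move(from, to, StandardCopyOption.REPLACE_EXISTING,
                       StandardCopyOption.ATOMIC_MOVE);
        }
        catch (AtomicMoveNotSupportedException e)
        {
            Files.move(from, to, StandardCopyOption.REPLACE_EXISTING);
        }
    }
}
{code}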



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)