[jira] [Created] (CASSANDRA-12012) CQLSSTableWriter and composite clustering keys trigger NPE

2016-06-15 Thread Pierre N. (JIRA)
Pierre N. created CASSANDRA-12012:
-

 Summary: CQLSSTableWriter and composite clustering keys trigger NPE
 Key: CASSANDRA-12012
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12012
 Project: Cassandra
  Issue Type: Bug
  Components: Streaming and Messaging
Reporter: Pierre N.


The NPE triggers when using multiple clustering keys in the primary key.

{code}
package tests;

import java.io.File;
import org.apache.cassandra.io.sstable.CQLSSTableWriter;
import org.apache.cassandra.config.Config;

public class DefaultWriter {

    public static void main(String[] args) throws Exception {
        Config.setClientMode(true);

        String createTableQuery = "CREATE TABLE ks_test.table_test ("
                + "pk1 int,"
                + "ck1 int,"
                + "ck2 int,"
                + "PRIMARY KEY ((pk1), ck1, ck2)"
                + ");";
        String insertQuery = "INSERT INTO ks_test.table_test(pk1, ck1, ck2) VALUES(?,?,?)";

        CQLSSTableWriter writer = CQLSSTableWriter.builder()
                .inDirectory(File.createTempFile("sstdir", "-tmp"))
                .forTable(createTableQuery)
                .using(insertQuery)
                .build();
        writer.close();
    }
}
{code}

Exception:

{code}
Exception in thread "main" java.lang.ExceptionInInitializerError
    at org.apache.cassandra.db.Keyspace.initCf(Keyspace.java:368)
    at org.apache.cassandra.db.Keyspace.<init>(Keyspace.java:305)
    at org.apache.cassandra.db.Keyspace.open(Keyspace.java:129)
    at org.apache.cassandra.db.Keyspace.open(Keyspace.java:106)
    at org.apache.cassandra.db.Keyspace.openAndGetStore(Keyspace.java:159)
    at org.apache.cassandra.cql3.restrictions.PrimaryKeyRestrictionSet.hasSupportingIndex(PrimaryKeyRestrictionSet.java:156)
    at org.apache.cassandra.cql3.restrictions.PrimaryKeyRestrictionSet.<init>(PrimaryKeyRestrictionSet.java:118)
    at org.apache.cassandra.cql3.restrictions.PrimaryKeyRestrictionSet.mergeWith(PrimaryKeyRestrictionSet.java:213)
    at org.apache.cassandra.cql3.restrictions.StatementRestrictions.addSingleColumnRestriction(StatementRestrictions.java:266)
    at org.apache.cassandra.cql3.restrictions.StatementRestrictions.addRestriction(StatementRestrictions.java:250)
    at org.apache.cassandra.cql3.restrictions.StatementRestrictions.<init>(StatementRestrictions.java:159)
    at org.apache.cassandra.cql3.statements.UpdateStatement$ParsedInsert.prepareInternal(UpdateStatement.java:183)
    at org.apache.cassandra.cql3.statements.ModificationStatement$Parsed.prepare(ModificationStatement.java:782)
    at org.apache.cassandra.cql3.statements.ModificationStatement$Parsed.prepare(ModificationStatement.java:768)
    at org.apache.cassandra.cql3.QueryProcessor.getStatement(QueryProcessor.java:505)
    at org.apache.cassandra.io.sstable.CQLSSTableWriter$Builder.getStatement(CQLSSTableWriter.java:508)
    at org.apache.cassandra.io.sstable.CQLSSTableWriter$Builder.using(CQLSSTableWriter.java:439)
    at tests.DefaultWriter.main(DefaultWriter.java:29)
Caused by: java.lang.NullPointerException
    at org.apache.cassandra.config.DatabaseDescriptor.getFlushWriters(DatabaseDescriptor.java:1188)
    at org.apache.cassandra.db.ColumnFamilyStore.<init>(ColumnFamilyStore.java:127)
    ... 18 more
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12012) CQLSSTableWriter and composite clustering keys trigger NPE

2016-06-15 Thread Pierre N. (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre N. updated CASSANDRA-12012:
--
Description: 
The NPE triggers when using multiple clustering keys in the primary key.

{code}
package tests;

import java.io.File;
import org.apache.cassandra.io.sstable.CQLSSTableWriter;
import org.apache.cassandra.config.Config;

public class DefaultWriter {

    public static void main(String[] args) throws Exception {
        Config.setClientMode(true);

        String createTableQuery = "CREATE TABLE ks_test.table_test ("
                + "pk1 int,"
                + "ck1 int,"
                + "ck2 int,"
                + "PRIMARY KEY ((pk1), ck1, ck2)"
                + ");";
        String insertQuery = "INSERT INTO ks_test.table_test(pk1, ck1, ck2) VALUES(?,?,?)";

        CQLSSTableWriter writer = CQLSSTableWriter.builder()
                .inDirectory(File.createTempFile("sstdir", "-tmp"))
                .forTable(createTableQuery)
                .using(insertQuery)
                .build();
        writer.close();
    }
}
{code}

Exception:

{code}
Exception in thread "main" java.lang.ExceptionInInitializerError
    at org.apache.cassandra.db.Keyspace.initCf(Keyspace.java:368)
    at org.apache.cassandra.db.Keyspace.<init>(Keyspace.java:305)
    at org.apache.cassandra.db.Keyspace.open(Keyspace.java:129)
    at org.apache.cassandra.db.Keyspace.open(Keyspace.java:106)
    at org.apache.cassandra.db.Keyspace.openAndGetStore(Keyspace.java:159)
    at org.apache.cassandra.cql3.restrictions.PrimaryKeyRestrictionSet.hasSupportingIndex(PrimaryKeyRestrictionSet.java:156)
    at org.apache.cassandra.cql3.restrictions.PrimaryKeyRestrictionSet.<init>(PrimaryKeyRestrictionSet.java:118)
    at org.apache.cassandra.cql3.restrictions.PrimaryKeyRestrictionSet.mergeWith(PrimaryKeyRestrictionSet.java:213)
    at org.apache.cassandra.cql3.restrictions.StatementRestrictions.addSingleColumnRestriction(StatementRestrictions.java:266)
    at org.apache.cassandra.cql3.restrictions.StatementRestrictions.addRestriction(StatementRestrictions.java:250)
    at org.apache.cassandra.cql3.restrictions.StatementRestrictions.<init>(StatementRestrictions.java:159)
    at org.apache.cassandra.cql3.statements.UpdateStatement$ParsedInsert.prepareInternal(UpdateStatement.java:183)
    at org.apache.cassandra.cql3.statements.ModificationStatement$Parsed.prepare(ModificationStatement.java:782)
    at org.apache.cassandra.cql3.statements.ModificationStatement$Parsed.prepare(ModificationStatement.java:768)
    at org.apache.cassandra.cql3.QueryProcessor.getStatement(QueryProcessor.java:505)
    at org.apache.cassandra.io.sstable.CQLSSTableWriter$Builder.getStatement(CQLSSTableWriter.java:508)
    at org.apache.cassandra.io.sstable.CQLSSTableWriter$Builder.using(CQLSSTableWriter.java:439)
    at tests.DefaultWriter.main(DefaultWriter.java:29)
Caused by: java.lang.NullPointerException
    at org.apache.cassandra.config.DatabaseDescriptor.getFlushWriters(DatabaseDescriptor.java:1188)
    at org.apache.cassandra.db.ColumnFamilyStore.<init>(ColumnFamilyStore.java:127)
    ... 18 more
{code}

  was:
The NPE triggers when using multiple clustering keys in the primary key.

{code}
package tests;

import java.io.File;
import org.apache.cassandra.io.sstable.CQLSSTableWriter;
import org.apache.cassandra.config.Config;

public class DefaultWriter {

    public static void main(String[] args) throws Exception {
        Config.setClientMode(true);

        String createTableQuery = "CREATE TABLE ks_test.table_test ("
                + "pk1 int,"
                + "ck1 int,"
                + "ck2 int,"
                + "PRIMARY KEY ((pk1), ck1, ck2)"
                + ");";
        String insertQuery = "INSERT INTO ks_test.table_test(pk1, ck1, ck2) VALUES(?,?,?)";

        CQLSSTableWriter writer = CQLSSTableWriter.builder()
                .inDirectory(File.createTempFile("sstdir", "-tmp"))
                .forTable(createTableQuery)
                .using(insertQuery)
                .build();
        writer.close();
    }
}
{code}

Exception:

{code}
Exception in thread "main" java.lang.ExceptionInInitializerError
at org.apache.cassandra.db.Keyspace.initCf(Keyspace.java:368)
at org.apache.cassandra.db.Keyspace.<init>(Keyspace.java:305)
at org.apache.cassandra.db.Keyspace.open(Keyspace.java:129)
at org.apache.cassandra.db.Keyspace.open(Keyspace.java:106)
at org.apache.cassandra.db.Keyspace.openAndGetStore(Keyspace.java:159)
at 
org.apache.cassandra.cql3.restrictions.PrimaryKeyRestrictionSet.hasSupportingIndex(PrimaryKeyRestrictionSet.java:156)
at 
org.apache.cassandra.cql3.restrictions.PrimaryKeyRestrictionSet.<init>(PrimaryKeyRestrictionSet.java:118)
at 
org.apache.cassandra.cql3.restrictions.PrimaryKeyRestrictionSet.mergeWith(PrimaryKeyRestrictionSet.java:

[jira] [Updated] (CASSANDRA-12012) CQLSSTableWriter and composite clustering keys trigger NPE

2016-06-16 Thread Pierre N. (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre N. updated CASSANDRA-12012:
--
Description: 
The NPE triggers when using multiple clustering keys in the primary key.

{code}
package tests;

import java.nio.file.Files;
import org.apache.cassandra.io.sstable.CQLSSTableWriter;
import org.apache.cassandra.config.Config;

public class DefaultWriter {

    public static void main(String[] args) throws Exception {
        Config.setClientMode(true);

        String createTableQuery = "CREATE TABLE ks_test.table_test ("
                + "pk1 int,"
                + "ck1 int,"
                + "ck2 int,"
                + "PRIMARY KEY ((pk1), ck1, ck2)"
                + ");";
        String insertQuery = "INSERT INTO ks_test.table_test(pk1, ck1, ck2) VALUES(?,?,?)";

        CQLSSTableWriter writer = CQLSSTableWriter.builder()
                .inDirectory(Files.createTempDirectory("sst").toFile())
                .forTable(createTableQuery)
                .using(insertQuery)
                .build();
        writer.close();
    }
}
{code}

Exception:

{code}
Exception in thread "main" java.lang.ExceptionInInitializerError
    at org.apache.cassandra.db.Keyspace.initCf(Keyspace.java:368)
    at org.apache.cassandra.db.Keyspace.<init>(Keyspace.java:305)
    at org.apache.cassandra.db.Keyspace.open(Keyspace.java:129)
    at org.apache.cassandra.db.Keyspace.open(Keyspace.java:106)
    at org.apache.cassandra.db.Keyspace.openAndGetStore(Keyspace.java:159)
    at org.apache.cassandra.cql3.restrictions.PrimaryKeyRestrictionSet.hasSupportingIndex(PrimaryKeyRestrictionSet.java:156)
    at org.apache.cassandra.cql3.restrictions.PrimaryKeyRestrictionSet.<init>(PrimaryKeyRestrictionSet.java:118)
    at org.apache.cassandra.cql3.restrictions.PrimaryKeyRestrictionSet.mergeWith(PrimaryKeyRestrictionSet.java:213)
    at org.apache.cassandra.cql3.restrictions.StatementRestrictions.addSingleColumnRestriction(StatementRestrictions.java:266)
    at org.apache.cassandra.cql3.restrictions.StatementRestrictions.addRestriction(StatementRestrictions.java:250)
    at org.apache.cassandra.cql3.restrictions.StatementRestrictions.<init>(StatementRestrictions.java:159)
    at org.apache.cassandra.cql3.statements.UpdateStatement$ParsedInsert.prepareInternal(UpdateStatement.java:183)
    at org.apache.cassandra.cql3.statements.ModificationStatement$Parsed.prepare(ModificationStatement.java:782)
    at org.apache.cassandra.cql3.statements.ModificationStatement$Parsed.prepare(ModificationStatement.java:768)
    at org.apache.cassandra.cql3.QueryProcessor.getStatement(QueryProcessor.java:505)
    at org.apache.cassandra.io.sstable.CQLSSTableWriter$Builder.getStatement(CQLSSTableWriter.java:508)
    at org.apache.cassandra.io.sstable.CQLSSTableWriter$Builder.using(CQLSSTableWriter.java:439)
    at tests.DefaultWriter.main(DefaultWriter.java:29)
Caused by: java.lang.NullPointerException
    at org.apache.cassandra.config.DatabaseDescriptor.getFlushWriters(DatabaseDescriptor.java:1188)
    at org.apache.cassandra.db.ColumnFamilyStore.<init>(ColumnFamilyStore.java:127)
    ... 18 more
{code}

  was:
The NPE triggers when using multiple clustering keys in the primary key.

{code}
package tests;

import java.io.File;
import org.apache.cassandra.io.sstable.CQLSSTableWriter;
import org.apache.cassandra.config.Config;

public class DefaultWriter {

    public static void main(String[] args) throws Exception {
        Config.setClientMode(true);

        String createTableQuery = "CREATE TABLE ks_test.table_test ("
                + "pk1 int,"
                + "ck1 int,"
                + "ck2 int,"
                + "PRIMARY KEY ((pk1), ck1, ck2)"
                + ");";
        String insertQuery = "INSERT INTO ks_test.table_test(pk1, ck1, ck2) VALUES(?,?,?)";

        CQLSSTableWriter writer = CQLSSTableWriter.builder()
                .inDirectory(File.createTempFile("sstdir", "-tmp"))
                .forTable(createTableQuery)
                .using(insertQuery)
                .build();
        writer.close();
    }
}
{code}

Exception:

{code}
Exception in thread "main" java.lang.ExceptionInInitializerError
at org.apache.cassandra.db.Keyspace.initCf(Keyspace.java:368)
at org.apache.cassandra.db.Keyspace.<init>(Keyspace.java:305)
at org.apache.cassandra.db.Keyspace.open(Keyspace.java:129)
at org.apache.cassandra.db.Keyspace.open(Keyspace.java:106)
at org.apache.cassandra.db.Keyspace.openAndGetStore(Keyspace.java:159)
at 
org.apache.cassandra.cql3.restrictions.PrimaryKeyRestrictionSet.hasSupportingIndex(PrimaryKeyRestrictionSet.java:156)
at 
org.apache.cassandra.cql3.restrictions.PrimaryKeyRestrictionSet.<init>(PrimaryKeyRestrictionSet.java:118)
at 
org.apache.cassandra.cql3.restrictions.PrimaryKeyRestrictionSet.mergeWith(PrimaryKeyRestrictionSet.ja

[jira] [Commented] (CASSANDRA-12012) CQLSSTableWriter and composite clustering keys trigger NPE

2016-06-16 Thread Pierre N. (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1565#comment-1565
 ] 

Pierre N. commented on CASSANDRA-12012:
---

hasSupportingIndex() in org.apache.cassandra.cql3.restrictions.PrimaryKeyRestrictionSet calls Keyspace.openAndGetStore(cfm), which triggers the error because the keyspace is not initialized in client mode.

I hotfixed it by adding this check:
{code}
+import org.apache.cassandra.config.Config;
 import org.apache.cassandra.cql3.QueryOptions;
 import org.apache.cassandra.cql3.functions.Function;
 import org.apache.cassandra.cql3.statements.Bound;
@@ -115,7 +116,7 @@ final class PrimaryKeyRestrictionSet extends AbstractPrimaryKeyRestrictions
         this.isPartitionKey = primaryKeyRestrictions.isPartitionKey;
         this.cfm = primaryKeyRestrictions.cfm;
 
-        if (!primaryKeyRestrictions.isEmpty() && !hasSupportingIndex(restriction))
+        if (!Config.isClientMode() && !primaryKeyRestrictions.isEmpty() && !hasSupportingIndex(restriction))
         {
             ColumnDefinition lastRestrictionStart = primaryKeyRestrictions.restrictions.lastRestriction().getFirstColumn();
             ColumnDefinition newRestrictionStart = restriction.getFirstColumn();
{code}

It works and generates a valid sstable; however, I'm not sure this is the best way to fix it.
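The guard pattern in the patch above can be shown in isolation. The classes and method names below are hypothetical stand-ins, not Cassandra's real APIs: the point is only that a schema-dependent lookup must be short-circuited before it runs when the process is in client mode, where node-side state is never initialized.

```java
// Hypothetical sketch of the client-mode guard; not Cassandra's actual code.
public class ClientModeGuardSketch {
    static boolean clientMode = false;          // stands in for Config.isClientMode()
    static boolean keyspaceInitialized = false; // node-side state, absent in client mode

    // Stands in for the Keyspace.openAndGetStore() path that fails in client mode.
    static boolean hasSupportingIndex() {
        if (!keyspaceInitialized)
            throw new IllegalStateException("keyspace not initialized");
        return false;
    }

    // Stands in for the patched merge path: the client-mode check comes first,
    // so && short-circuits and the index lookup is never attempted.
    static String merge() {
        if (!clientMode && !hasSupportingIndex())
            return "merged with index check";
        return "merged without index check";
    }

    public static void main(String[] args) {
        clientMode = true; // as a CQLSSTableWriter client does via Config.setClientMode(true)
        System.out.println(merge()); // prints "merged without index check", no exception
    }
}
```

Ordering the cheap flag check before the expensive lookup is what makes the hotfix safe: Java's `&&` guarantees the right-hand side is never evaluated in client mode.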

> CQLSSTableWriter and composite clustering keys trigger NPE
> --
>
> Key: CASSANDRA-12012
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12012
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: Pierre N.
>Assignee: Mahdi Mohammadi
>

[jira] [Commented] (CASSANDRA-8099) Refactor and modernize the storage engine

2015-08-06 Thread Pierre N. (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14660581#comment-14660581
 ] 

Pierre N. commented on CASSANDRA-8099:
--

Yes, it would be great to have a full specification now that Cassandra has a new sstable format.

> Refactor and modernize the storage engine
> -
>
> Key: CASSANDRA-8099
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8099
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sylvain Lebresne
>Assignee: Sylvain Lebresne
> Fix For: 3.0 beta 1
>
> Attachments: 8099-nit
>
>
> The current storage engine (which for this ticket I'll loosely define as "the 
> code implementing the read/write path") is suffering from old age. One of the 
> main problems is that the only structure it deals with is the cell, which 
> completely ignores the higher-level CQL structure that groups cells into 
> (CQL) rows.
> This leads to many inefficiencies, like the fact that during a read we have 
> to group cells multiple times (to count on the replica, then to count on the 
> coordinator, then to produce the CQL result set) because we forget about the 
> grouping right away each time (so lots of useless cell-name comparisons in 
> particular). But beyond inefficiencies, having to manually recreate the CQL 
> structure every time we need it for something is hindering new features and 
> makes the code more complex than it should be.
> Said storage engine also has tons of technical debt. To pick an example, the 
> fact that during range queries we update {{SliceQueryFilter.count}} is pretty 
> hacky and error prone. So are the overly complex lengths {{AbstractQueryPager}} 
> has to go to simply to "remove the last query result".
> So I want to bite the bullet and modernize this storage engine. I propose to 
> do 2 main things:
> # Make the storage engine more aware of the CQL structure. In practice, 
> instead of having partitions be a simple iterable map of cells, they should 
> be an iterable list of rows (each being itself composed of per-column cells, 
> though obviously not exactly the same kind of cell we have today).
> # Make the engine more iterative. What I mean here is that in the read path, 
> we end up reading all cells into memory (we put them in a ColumnFamily 
> object), but there is really no reason to. If instead we were working with 
> iterators all the way through, we could get to a point where we're basically 
> transferring data from disk to the network, and we should be able to reduce 
> GC substantially.
> Please note that such a refactor should provide some performance improvements 
> right off the bat, but that's not its primary goal. Its primary goal is to 
> simplify the storage engine and add abstractions that are better suited to 
> further optimizations.
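Point 2 above can be sketched in a few lines. The types below are hypothetical, not Cassandra's actual storage-engine API; the sketch only illustrates the idea of streaming rows through an iterator so nothing is retained after serialization, instead of materializing the whole partition in a ColumnFamily-like object.

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

// Hypothetical sketch of an iterator-driven read path: each row is consumed
// once and immediately dropped, so memory use stays constant in partition size.
public class IterativeReadSketch {
    static class Row {
        final int clustering;
        final String value;
        Row(int clustering, String value) { this.clustering = clustering; this.value = value; }
    }

    // Stands in for "disk -> network": serialize each row as it arrives, retain nothing.
    static int streamPartition(Iterator<Row> rows) {
        int sent = 0;
        while (rows.hasNext()) {
            rows.next(); // the row would be written to the wire here, then forgotten
            sent++;
        }
        return sent;
    }

    public static void main(String[] args) {
        List<Row> partition = Arrays.asList(new Row(1, "a"), new Row(2, "b"), new Row(3, "c"));
        System.out.println(streamPartition(partition.iterator())); // prints 3
    }
}
```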





[jira] [Created] (CASSANDRA-9727) AuthSuccess NPE

2015-07-04 Thread Pierre N. (JIRA)
Pierre N. created CASSANDRA-9727:


 Summary: AuthSuccess NPE
 Key: CASSANDRA-9727
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9727
 Project: Cassandra
  Issue Type: Bug
Reporter: Pierre N.
Priority: Minor


Triggered while playing with org.apache.cassandra.transport.Client with PasswordAuthenticator:
 
{code}
>> startup
11:48:42.522 [main] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.maxCapacity.default: 262144
11:48:42.530 [nioEventLoopGroup-2-1] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.numHeapArenas: 8
11:48:42.531 [nioEventLoopGroup-2-1] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.numDirectArenas: 8
11:48:42.531 [nioEventLoopGroup-2-1] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.pageSize: 8192
11:48:42.531 [nioEventLoopGroup-2-1] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxOrder: 11
11:48:42.531 [nioEventLoopGroup-2-1] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.chunkSize: 16777216
11:48:42.531 [nioEventLoopGroup-2-1] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.tinyCacheSize: 512
11:48:42.531 [nioEventLoopGroup-2-1] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.smallCacheSize: 256
11:48:42.531 [nioEventLoopGroup-2-1] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.normalCacheSize: 64
11:48:42.531 [nioEventLoopGroup-2-1] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxCachedBufferCapacity: 32768
11:48:42.532 [nioEventLoopGroup-2-1] DEBUG i.n.buffer.PooledByteBufAllocator - -Dio.netty.allocator.cacheTrimInterval: 8192
11:48:42.552 [nioEventLoopGroup-2-1] DEBUG io.netty.util.ResourceLeakDetector - -Dio.netty.leakDetectionLevel: simple
-> AUTHENTICATE org.apache.cassandra.auth.PasswordAuthenticator
>> authenticate username=cassandra password=WRONGPASSWORD
ERROR: org.apache.cassandra.exceptions.AuthenticationException: Username and/or password are incorrect
>> authenticate username=cassandra password=cassandra
11:50:00.095 [nioEventLoopGroup-2-1] DEBUG io.netty.util.internal.Cleaner0 - java.nio.ByteBuffer.cleaner(): available
11:50:00.113 [nioEventLoopGroup-2-1] ERROR o.a.cassandra.transport.SimpleClient - Exception in response
io.netty.handler.codec.DecoderException: org.apache.cassandra.transport.messages.ErrorMessage$WrappedException: java.lang.NullPointerException
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:99) [netty-all-4.0.23.Final.jar:4.0.23.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) [netty-all-4.0.23.Final.jar:4.0.23.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319) [netty-all-4.0.23.Final.jar:4.0.23.Final]
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-all-4.0.23.Final.jar:4.0.23.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) [netty-all-4.0.23.Final.jar:4.0.23.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319) [netty-all-4.0.23.Final.jar:4.0.23.Final]
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:163) [netty-all-4.0.23.Final.jar:4.0.23.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) [netty-all-4.0.23.Final.jar:4.0.23.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319) [netty-all-4.0.23.Final.jar:4.0.23.Final]
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:787) [netty-all-4.0.23.Final.jar:4.0.23.Final]
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:130) [netty-all-4.0.23.Final.jar:4.0.23.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511) [netty-all-4.0.23.Final.jar:4.0.23.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468) [netty-all-4.0.23.Final.jar:4.0.23.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382) [netty-all-4.0.23.Final.jar:4.0.23.Final]
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354) [netty-all-4.0.23.Final.jar:4.0.23.Final]
    at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116) [netty-all-4.0.23.Final.jar:4.0.23.Final]
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.j

[jira] [Updated] (CASSANDRA-9727) AuthSuccess NPE

2015-07-04 Thread Pierre N. (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre N. updated CASSANDRA-9727:
-
Attachment: trunk-9727.patch

> AuthSuccess NPE
> ---
>
> Key: CASSANDRA-9727
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9727
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Pierre N.
>Priority: Minor
> Fix For: 3.x, 2.2.0 rc2
>
> Attachments: trunk-9727.patch
>

[jira] [Updated] (CASSANDRA-9727) AuthSuccess NPE

2015-07-04 Thread Pierre N. (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre N. updated CASSANDRA-9727:
-
Priority: Major  (was: Minor)

> AuthSuccess NPE
> ---
>
> Key: CASSANDRA-9727
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9727
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Pierre N.
> Fix For: 3.x, 2.2.0 rc2
>
> Attachments: trunk-9727.patch
>
> [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382) 
> [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at i

[jira] [Created] (CASSANDRA-9758) nodetool compactionhistory NPE

2015-07-08 Thread Pierre N. (JIRA)
Pierre N. created CASSANDRA-9758:


 Summary: nodetool compactionhistory NPE
 Key: CASSANDRA-9758
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9758
 Project: Cassandra
  Issue Type: Bug
Reporter: Pierre N.
Priority: Minor
 Attachments: 0001-fix-compaction-history-NPE.patch

nodetool compactionhistory may trigger an NPE:

admin@localhost:~$ nodetool compactionhistory
Compaction History: 
error: null
-- StackTrace --
java.lang.NullPointerException
at com.google.common.base.Joiner$MapJoiner.join(Joiner.java:330)
at org.apache.cassandra.utils.FBUtilities.toString(FBUtilities.java:515)
at 
org.apache.cassandra.db.compaction.CompactionHistoryTabularData.from(CompactionHistoryTabularData.java:78)
at 
org.apache.cassandra.db.SystemKeyspace.getCompactionHistory(SystemKeyspace.java:422)
at 
org.apache.cassandra.db.compaction.CompactionManager.getCompactionHistory(CompactionManager.java:1490)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)
at 
com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
at 
com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
at 
com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
at 
com.sun.jmx.mbeanserver.PerInterface.getAttribute(PerInterface.java:83)
at 
com.sun.jmx.mbeanserver.MBeanSupport.getAttribute(MBeanSupport.java:206)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:647)
at 
com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678)
at 
javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1464)
at 
javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
at 
javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
at 
javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1420)
at 
javax.management.remote.rmi.RMIConnectionImpl.getAttribute(RMIConnectionImpl.java:657)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
at sun.rmi.transport.Transport$2.run(Transport.java:202)
at sun.rmi.transport.Transport$2.run(Transport.java:199)
at java.security.AccessController.doPrivileged(Native Method)
at sun.rmi.transport.Transport.serviceCall(Transport.java:198)
at 
sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:567)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:828)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.access$400(TCPTransport.java:619)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler$1.run(TCPTransport.java:684)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler$1.run(TCPTransport.java:681)
at java.security.AccessController.doPrivileged(Native Method)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:681)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)


admin@localhost:~$ cqlsh -e "select * from system.compaction_history" | grep -F null
 ede434c0-2306-11e5-8a1a-85b300e09146 |  120 | 0 | peers | 2015-07-05 13:13:57+0200 |system | null
 ae33fb90-23a0-11e5-9245-85b300e09146 |  120 | 0 | peers | 2015-07-06 07:34:32+0200 |system | null
 085cb1f0-2542-11e5-9291-dfb803ff9672 |  120 | 0 | peers | 2015-07-08 09:22:04+0200 |system | null
 0dbd4240-2349-11e5-a72b-85b30
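The cqlsh output above shows null rows_merged values in system.compaction_history, and the stack trace bottoms out in Guava's MapJoiner, which rejects null input. A minimal null-guarding sketch of the shape of fix needed (hypothetical helper, not the actual FBUtilities.toString code or the attached patch):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class NullSafeJoin {
    // Hypothetical stand-in for the map-to-string step where the trace dies:
    // Guava's Joiner.withKeyValueSeparator(...).join(map) throws an NPE when
    // handed a null map, which is what a null rows_merged column produces.
    // Guarding for null first yields an empty cell instead of a crash.
    static String toString(Map<String, Long> map) {
        if (map == null)
            return "";
        return map.entrySet().stream()
                  .map(e -> e.getKey() + ": " + e.getValue())
                  .collect(Collectors.joining(", "));
    }

    public static void main(String[] args) {
        Map<String, Long> merged = new LinkedHashMap<>();
        merged.put("1", 120L);
        System.out.println(toString(merged)); // prints "1: 120"
        System.out.println(toString(null));   // prints an empty line, no NPE
    }
}
```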

[jira] [Updated] (CASSANDRA-9758) nodetool compactionhistory NPE

2015-07-08 Thread Pierre N. (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre N. updated CASSANDRA-9758:
-
Description: 
nodetool compactionhistory may trigger an NPE:

{code}
admin@localhost:~$ nodetool compactionhistory
Compaction History: 
error: null
-- StackTrace --
java.lang.NullPointerException
at com.google.common.base.Joiner$MapJoiner.join(Joiner.java:330)
at org.apache.cassandra.utils.FBUtilities.toString(FBUtilities.java:515)
at 
org.apache.cassandra.db.compaction.CompactionHistoryTabularData.from(CompactionHistoryTabularData.java:78)
at 
org.apache.cassandra.db.SystemKeyspace.getCompactionHistory(SystemKeyspace.java:422)
at 
org.apache.cassandra.db.compaction.CompactionManager.getCompactionHistory(CompactionManager.java:1490)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)
at 
com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
at 
com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
at 
com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
at 
com.sun.jmx.mbeanserver.PerInterface.getAttribute(PerInterface.java:83)
at 
com.sun.jmx.mbeanserver.MBeanSupport.getAttribute(MBeanSupport.java:206)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:647)
at 
com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678)
at 
javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1464)
at 
javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
at 
javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
at 
javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1420)
at 
javax.management.remote.rmi.RMIConnectionImpl.getAttribute(RMIConnectionImpl.java:657)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
at sun.rmi.transport.Transport$2.run(Transport.java:202)
at sun.rmi.transport.Transport$2.run(Transport.java:199)
at java.security.AccessController.doPrivileged(Native Method)
at sun.rmi.transport.Transport.serviceCall(Transport.java:198)
at 
sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:567)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:828)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.access$400(TCPTransport.java:619)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler$1.run(TCPTransport.java:684)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler$1.run(TCPTransport.java:681)
at java.security.AccessController.doPrivileged(Native Method)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:681)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
{code}

{code}
admin@localhost:~$ cqlsh -e "select * from system.compaction_history" | grep -F null
 ede434c0-2306-11e5-8a1a-85b300e09146 |  120 | 0 | peers | 2015-07-05 13:13:57+0200 |system | null
 ae33fb90-23a0-11e5-9245-85b300e09146 |  120 | 0 | peers | 2015-07-06 07:34:32+0200 |system | null
 085cb1f0-2542-11e5-9291-dfb803ff9672 |  120 | 0 | peers | 2015-07-08 09:22:04+0200 |system | null
 0dbd4240-2349-11e5-a72b-85b300e09146 |  120 | 0 | peers | 2015-07-05 21:07:17+0200 |system | null
 51e56b70-2261-11e5-8df2-85b300e09146 |  120 | 0 | peers | 2015-07-04 17:28:28+0200 |system | null
{code}

[jira] [Updated] (CASSANDRA-9758) nodetool compactionhistory NPE

2015-07-08 Thread Pierre N. (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre N. updated CASSANDRA-9758:
-
Attachment: (was: 0001-fix-compaction-history-NPE.patch)

> nodetool compactionhistory NPE
> --
>
> Key: CASSANDRA-9758
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9758
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Pierre N.
>Priority: Minor
> Fix For: 3.x
>
> Attachments: 0001-fix-npe-inline.patch
>
>

[jira] [Updated] (CASSANDRA-9758) nodetool compactionhistory NPE

2015-07-08 Thread Pierre N. (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre N. updated CASSANDRA-9758:
-
Attachment: 0001-fix-npe-inline.patch

> nodetool compactionhistory NPE
> --
>
> Key: CASSANDRA-9758
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9758
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Pierre N.
>Priority: Minor
> Fix For: 3.x
>
> Attachments: 0001-fix-npe-inline.patch
>
>

[jira] [Commented] (CASSANDRA-9758) nodetool compactionhistory NPE

2015-07-08 Thread Pierre N. (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14618820#comment-14618820
 ] 

Pierre N. commented on CASSANDRA-9758:
--

In 2.2 branch.

> nodetool compactionhistory NPE
> --
>
> Key: CASSANDRA-9758
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9758
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Pierre N.
>Priority: Minor
> Fix For: 3.x
>
> Attachments: 0001-fix-npe-inline.patch, 9758.txt
>
>

[jira] [Commented] (CASSANDRA-9758) nodetool compactionhistory NPE

2015-07-08 Thread Pierre N. (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14618821#comment-14618821
 ] 

Pierre N. commented on CASSANDRA-9758:
--

And yes, I prefer your fix too.

> nodetool compactionhistory NPE
> --
>
> Key: CASSANDRA-9758
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9758
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Pierre N.
>Priority: Minor
> Fix For: 3.x
>
> Attachments: 0001-fix-npe-inline.patch, 9758.txt
>
>

[jira] [Created] (CASSANDRA-9285) LEAK DETECTED in sstwriter

2015-05-01 Thread Pierre N. (JIRA)
Pierre N. created CASSANDRA-9285:


 Summary: LEAK DETECTED in sstwriter
 Key: CASSANDRA-9285
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9285
 Project: Cassandra
  Issue Type: Bug
Reporter: Pierre N.


I think the new IndexWriter iWriter in SSTableWriter is not correctly closed by 
SSTableWriter.close() (at least, iWriter.summary is not closed).

To reproduce the bug:

{code}
import java.io.File;

import org.apache.cassandra.io.sstable.CQLSSTableWriter;

// wrapper class added so the snippet compiles standalone
public class LeakRepro {
    public static void main(String[] args) throws Exception {
        System.setProperty("cassandra.debugrefcount", "true");

        String ks = "ks1";
        String table = "t1";

        String schema = "CREATE TABLE " + ks + "." + table + " (a1 INT, PRIMARY KEY (a1));";
        String insert = "INSERT INTO " + ks + "." + table + " (a1) VALUES (?);";

        File dir = new File("/var/tmp/" + ks + "/" + table);
        dir.mkdirs();

        CQLSSTableWriter writer = CQLSSTableWriter.builder()
                .forTable(schema)
                .using(insert)
                .inDirectory(dir)
                .build();

        writer.addRow(1);
        writer.close();
        writer = null;

        // Drop the reference and force GC so the Reference-Reaper logs any leak
        Thread.sleep(1000); System.gc();
        Thread.sleep(1000); System.gc();
    }
}
{code}
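For reference, the failure mode being reported is a composite writer whose close() does not release every resource it allocated. A toy sketch of the close-everything contract (all class names hypothetical; these are not Cassandra's actual types):

```java
public class CloseAllDemo {
    // Toy stand-in for a ref-counted resource such as an off-heap buffer.
    static class TrackedResource implements AutoCloseable {
        final String name;
        boolean released = false;
        TrackedResource(String name) { this.name = name; }
        @Override public void close() { released = true; }
    }

    // Toy stand-in for a writer that owns several resources (including a
    // summary): its close() must release all of them, and forgetting one
    // is exactly the kind of leak the Reference-Reaper reports.
    static class CompositeWriter implements AutoCloseable {
        final TrackedResource index = new TrackedResource("index");
        final TrackedResource summary = new TrackedResource("summary");
        @Override public void close() {
            index.close();
            summary.close(); // the step the report says is being skipped
        }
    }

    public static void main(String[] args) throws Exception {
        // try-with-resources guarantees close() runs even on an exception
        CompositeWriter w = new CompositeWriter();
        try (w) {
            // ... write rows ...
        }
        System.out.println(w.index.released && w.summary.released); // prints "true"
    }
}
```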

{quote}
[2015-05-01 16:09:59,139] [Reference-Reaper:1] ERROR org.apache.cassandra.utils.concurrent.Ref - LEAK DETECTED: a reference (org.apache.cassandra.utils.concurrent.Ref$State@79fa9da9) to class org.apache.cassandra.io.util.SafeMemory$MemoryTidy@2053866990:Memory@[7f87f8043b20..7f87f8043b48) was not released before the reference was garbage collected
[2015-05-01 16:09:59,143] [Reference-Reaper:1] ERROR org.apache.cassandra.utils.concurrent.Ref - Allocate trace org.apache.cassandra.utils.concurrent.Ref$State@79fa9da9:
Thread[Thread-2,5,main]
at java.lang.Thread.getStackTrace(Thread.java:1552)
at org.apache.cassandra.utils.concurrent.Ref$Debug.<init>(Ref.java:200)
at org.apache.cassandra.utils.concurrent.Ref$State.<init>(Ref.java:133)
at org.apache.cassandra.utils.concurrent.Ref.<init>(Ref.java:60)
at org.apache.cassandra.io.util.SafeMemory.<init>(SafeMemory.java:32)
at org.apache.cassandra.io.util.SafeMemoryWriter.<init>(SafeMemoryWriter.java:33)
at org.apache.cassandra.io.sstable.IndexSummaryBuilder.<init>(IndexSummaryBuilder.java:111)
at org.apache.cassandra.io.sstable.SSTableWriter$IndexWriter.<init>(SSTableWriter.java:576)
at org.apache.cassandra.io.sstable.SSTableWriter.<init>(SSTableWriter.java:140)
at org.apache.cassandra.io.sstable.AbstractSSTableSimpleWriter.getWriter(AbstractSSTableSimpleWriter.java:58)
at org.apache.cassandra.io.sstable.SSTableSimpleUnsortedWriter$DiskWriter.run(SSTableSimpleUnsortedWriter.java:227)

[2015-05-01 16:09:59,144] [Reference-Reaper:1] ERROR org.apache.cassandra.utils.concurrent.Ref - LEAK DETECTED: a reference (org.apache.cassandra.utils.concurrent.Ref$State@664382e3) to class org.apache.cassandra.io.util.SafeMemory$MemoryTidy@899100784:Memory@[7f87f8043990..7f87f8043994) was not released before the reference was garbage collected
[2015-05-01 16:09:59,144] [Reference-Reaper:1] ERROR org.apache.cassandra.utils.concurrent.Ref - Allocate trace org.apache.cassandra.utils.concurrent.Ref$State@664382e3:
Thread[Thread-2,5,main]
at java.lang.Thread.getStackTrace(Thread.java:1552)
at org.apache.cassandra.utils.concurrent.Ref$Debug.<init>(Ref.java:200)
at org.apache.cassandra.utils.concurrent.Ref$State.<init>(Ref.java:133)
at org.apache.cassandra.utils.concurrent.Ref.<init>(Ref.java:60)
at org.apache.cassandra.io.util.SafeMemory.<init>(SafeMemory.java:32)
at org.apache.cassandra.io.util.SafeMemoryWriter.<init>(SafeMemoryWriter.java:33)
at org.apache.cassandra.io.sstable.IndexSummaryBuilder.<init>(IndexSummaryBuilder.java:110)
at org.apache.cassandra.io.sstable.SSTableWriter$IndexWriter.<init>(SSTableWriter.java:576)
at org.apache.cassandra.io.sstable.SSTableWriter.<init>(SSTableWriter.java:140)
at org.apache.cassandra.io.sstable.AbstractSSTableSimpleWriter.getWriter(AbstractSSTableSimpleWriter.java:58)
at org.apache.cassandra.io.sstable.SSTableSimpleUnsortedWriter$DiskWriter.run(SSTableSimpleUnsortedWriter.java:227)

[2015-05-01 16:09:59,144] [Reference-Reaper:1] ERROR org.apache.cassandra.utils.concurrent.Ref - LEAK DETECTED: a reference (org.apache.cassandra.utils.concurrent.Ref$State@3cca0ac2) to class org.apache.cassandra.io.util.SafeMemory$MemoryTidy@499043670:Memory@[7f87f8039940..7f87f8039c60) was not released before the reference was garbage collected
[2015-05-01 16:09:59,144] [Reference-Reaper:1] ERROR org.apache.cassandra.utils.concurrent.Ref - Allocate trace org.apache.cassandra.utils.concurrent.Ref$State@3cca0ac2:
Thread[Thread-2,5,main]
at java.lang.Thread.getStackTrace(Thread.java:1552)
at org.apache.cassandra.utils.concurrent.Ref$Debug.<init>(Ref.java:200)
at org.apache.cassandra.utils.concu

[jira] [Updated] (CASSANDRA-9285) LEAK DETECTED in sstwriter

2015-05-01 Thread Pierre N. (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre N. updated CASSANDRA-9285:
-
Description: 
I think iWriter of SSTableWriter is not correctly closed in 
SSTableWriter.close() (at least, iWriter.summary is not closed)

To reproduce the bug:

{code}
public static void main(String[] args) throws Exception {
    System.setProperty("cassandra.debugrefcount", "true");

    String ks = "ks1";
    String table = "t1";

    String schema = "CREATE TABLE " + ks + "." + table + " (a1 INT, PRIMARY KEY (a1));";
    String insert = "INSERT INTO " + ks + "." + table + " (a1) VALUES (?);";

    File dir = new File("/var/tmp/" + ks + "/" + table);
    dir.mkdirs();

    CQLSSTableWriter writer = CQLSSTableWriter.builder()
            .forTable(schema)
            .using(insert)
            .inDirectory(dir)
            .build();

    writer.addRow(1);
    writer.close();
    writer = null;

    Thread.sleep(1000); System.gc();
    Thread.sleep(1000); System.gc();
}
{code}

{quote}
[2015-05-01 16:09:59,139] [Reference-Reaper:1] ERROR 
org.apache.cassandra.utils.concurrent.Ref - LEAK DETECTED: a reference 
(org.apache.cassandra.utils.concurrent.Ref$State@79fa9da9) to class 
org.apache.cassandra.io.util.SafeMemory$MemoryTidy@2053866990:Memory@[7f87f8043b20..7f87f8043b48)
 was not released before the reference was garbage collected
[2015-05-01 16:09:59,143] [Reference-Reaper:1] ERROR 
org.apache.cassandra.utils.concurrent.Ref - Allocate trace 
org.apache.cassandra.utils.concurrent.Ref$State@79fa9da9:
Thread[Thread-2,5,main]
at java.lang.Thread.getStackTrace(Thread.java:1552)
at org.apache.cassandra.utils.concurrent.Ref$Debug.(Ref.java:200)
at org.apache.cassandra.utils.concurrent.Ref$State.(Ref.java:133)
at org.apache.cassandra.utils.concurrent.Ref.(Ref.java:60)
at org.apache.cassandra.io.util.SafeMemory.(SafeMemory.java:32)
at 
org.apache.cassandra.io.util.SafeMemoryWriter.(SafeMemoryWriter.java:33)
at 
org.apache.cassandra.io.sstable.IndexSummaryBuilder.(IndexSummaryBuilder.java:111)
at 
org.apache.cassandra.io.sstable.SSTableWriter$IndexWriter.(SSTableWriter.java:576)
at 
org.apache.cassandra.io.sstable.SSTableWriter.(SSTableWriter.java:140)
at 
org.apache.cassandra.io.sstable.AbstractSSTableSimpleWriter.getWriter(AbstractSSTableSimpleWriter.java:58)
at 
org.apache.cassandra.io.sstable.SSTableSimpleUnsortedWriter$DiskWriter.run(SSTableSimpleUnsortedWriter.java:227)

[2015-05-01 16:09:59,144] [Reference-Reaper:1] ERROR 
org.apache.cassandra.utils.concurrent.Ref - LEAK DETECTED: a reference 
(org.apache.cassandra.utils.concurrent.Ref$State@664382e3) to class 
org.apache.cassandra.io.util.SafeMemory$MemoryTidy@899100784:Memory@[7f87f8043990..7f87f8043994)
 was not released before the reference was garbage collected
[2015-05-01 16:09:59,144] [Reference-Reaper:1] ERROR 
org.apache.cassandra.utils.concurrent.Ref - Allocate trace 
org.apache.cassandra.utils.concurrent.Ref$State@664382e3:
Thread[Thread-2,5,main]
at java.lang.Thread.getStackTrace(Thread.java:1552)
at org.apache.cassandra.utils.concurrent.Ref$Debug.(Ref.java:200)
at org.apache.cassandra.utils.concurrent.Ref$State.(Ref.java:133)
at org.apache.cassandra.utils.concurrent.Ref.(Ref.java:60)
at org.apache.cassandra.io.util.SafeMemory.(SafeMemory.java:32)
at 
org.apache.cassandra.io.util.SafeMemoryWriter.(SafeMemoryWriter.java:33)
at 
org.apache.cassandra.io.sstable.IndexSummaryBuilder.(IndexSummaryBuilder.java:110)
at 
org.apache.cassandra.io.sstable.SSTableWriter$IndexWriter.(SSTableWriter.java:576)
at 
org.apache.cassandra.io.sstable.SSTableWriter.(SSTableWriter.java:140)
at 
org.apache.cassandra.io.sstable.AbstractSSTableSimpleWriter.getWriter(AbstractSSTableSimpleWriter.java:58)
at 
org.apache.cassandra.io.sstable.SSTableSimpleUnsortedWriter$DiskWriter.run(SSTableSimpleUnsortedWriter.java:227)

[2015-05-01 16:09:59,144] [Reference-Reaper:1] ERROR 
org.apache.cassandra.utils.concurrent.Ref - LEAK DETECTED: a reference 
(org.apache.cassandra.utils.concurrent.Ref$State@3cca0ac2) to class 
org.apache.cassandra.io.util.SafeMemory$MemoryTidy@499043670:Memory@[7f87f8039940..7f87f8039c60)
 was not released before the reference was garbage collected
[2015-05-01 16:09:59,144] [Reference-Reaper:1] ERROR 
org.apache.cassandra.utils.concurrent.Ref - Allocate trace 
org.apache.cassandra.utils.concurrent.Ref$State@3cca0ac2:
Thread[Thread-2,5,main]
at java.lang.Thread.getStackTrace(Thread.java:1552)
at org.apache.cassandra.utils.concurrent.Ref$Debug.(Ref.java:200)
at org.apache.cassandra.utils.concurrent.Ref$State.(Ref.java:133)
at org.apache.cassandra.utils.concurrent.Ref.(Ref.java:60)
at org.apache.cas

[jira] [Updated] (CASSANDRA-9285) LEAK DETECTED in sstwriter

2015-05-01 Thread Pierre N. (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre N. updated CASSANDRA-9285:
-
Description: 
To reproduce the bug:

{code}
public static void main(String[] args) throws Exception {
    System.setProperty("cassandra.debugrefcount", "true");

    String ks = "ks1";
    String table = "t1";

    String schema = "CREATE TABLE " + ks + "." + table + " (a1 INT, PRIMARY KEY (a1));";
    String insert = "INSERT INTO " + ks + "." + table + " (a1) VALUES (?);";

    File dir = new File("/var/tmp/" + ks + "/" + table);
    dir.mkdirs();

    CQLSSTableWriter writer = CQLSSTableWriter.builder()
            .forTable(schema)
            .using(insert)
            .inDirectory(dir)
            .build();

    writer.addRow(1);
    writer.close();
    writer = null;

    Thread.sleep(1000); System.gc();
    Thread.sleep(1000); System.gc();
}
{code}

{quote}
[2015-05-01 16:09:59,139] [Reference-Reaper:1] ERROR 
org.apache.cassandra.utils.concurrent.Ref - LEAK DETECTED: a reference 
(org.apache.cassandra.utils.concurrent.Ref$State@79fa9da9) to class 
org.apache.cassandra.io.util.SafeMemory$MemoryTidy@2053866990:Memory@[7f87f8043b20..7f87f8043b48)
 was not released before the reference was garbage collected
[2015-05-01 16:09:59,143] [Reference-Reaper:1] ERROR 
org.apache.cassandra.utils.concurrent.Ref - Allocate trace 
org.apache.cassandra.utils.concurrent.Ref$State@79fa9da9:
Thread[Thread-2,5,main]
at java.lang.Thread.getStackTrace(Thread.java:1552)
at org.apache.cassandra.utils.concurrent.Ref$Debug.(Ref.java:200)
at org.apache.cassandra.utils.concurrent.Ref$State.(Ref.java:133)
at org.apache.cassandra.utils.concurrent.Ref.(Ref.java:60)
at org.apache.cassandra.io.util.SafeMemory.(SafeMemory.java:32)
at 
org.apache.cassandra.io.util.SafeMemoryWriter.(SafeMemoryWriter.java:33)
at 
org.apache.cassandra.io.sstable.IndexSummaryBuilder.(IndexSummaryBuilder.java:111)
at 
org.apache.cassandra.io.sstable.SSTableWriter$IndexWriter.(SSTableWriter.java:576)
at 
org.apache.cassandra.io.sstable.SSTableWriter.(SSTableWriter.java:140)
at 
org.apache.cassandra.io.sstable.AbstractSSTableSimpleWriter.getWriter(AbstractSSTableSimpleWriter.java:58)
at 
org.apache.cassandra.io.sstable.SSTableSimpleUnsortedWriter$DiskWriter.run(SSTableSimpleUnsortedWriter.java:227)

[2015-05-01 16:09:59,144] [Reference-Reaper:1] ERROR 
org.apache.cassandra.utils.concurrent.Ref - LEAK DETECTED: a reference 
(org.apache.cassandra.utils.concurrent.Ref$State@664382e3) to class 
org.apache.cassandra.io.util.SafeMemory$MemoryTidy@899100784:Memory@[7f87f8043990..7f87f8043994)
 was not released before the reference was garbage collected
[2015-05-01 16:09:59,144] [Reference-Reaper:1] ERROR 
org.apache.cassandra.utils.concurrent.Ref - Allocate trace 
org.apache.cassandra.utils.concurrent.Ref$State@664382e3:
Thread[Thread-2,5,main]
at java.lang.Thread.getStackTrace(Thread.java:1552)
at org.apache.cassandra.utils.concurrent.Ref$Debug.(Ref.java:200)
at org.apache.cassandra.utils.concurrent.Ref$State.(Ref.java:133)
at org.apache.cassandra.utils.concurrent.Ref.(Ref.java:60)
at org.apache.cassandra.io.util.SafeMemory.(SafeMemory.java:32)
at 
org.apache.cassandra.io.util.SafeMemoryWriter.(SafeMemoryWriter.java:33)
at 
org.apache.cassandra.io.sstable.IndexSummaryBuilder.(IndexSummaryBuilder.java:110)
at 
org.apache.cassandra.io.sstable.SSTableWriter$IndexWriter.(SSTableWriter.java:576)
at 
org.apache.cassandra.io.sstable.SSTableWriter.(SSTableWriter.java:140)
at 
org.apache.cassandra.io.sstable.AbstractSSTableSimpleWriter.getWriter(AbstractSSTableSimpleWriter.java:58)
at 
org.apache.cassandra.io.sstable.SSTableSimpleUnsortedWriter$DiskWriter.run(SSTableSimpleUnsortedWriter.java:227)

[2015-05-01 16:09:59,144] [Reference-Reaper:1] ERROR 
org.apache.cassandra.utils.concurrent.Ref - LEAK DETECTED: a reference 
(org.apache.cassandra.utils.concurrent.Ref$State@3cca0ac2) to class 
org.apache.cassandra.io.util.SafeMemory$MemoryTidy@499043670:Memory@[7f87f8039940..7f87f8039c60)
 was not released before the reference was garbage collected
[2015-05-01 16:09:59,144] [Reference-Reaper:1] ERROR 
org.apache.cassandra.utils.concurrent.Ref - Allocate trace 
org.apache.cassandra.utils.concurrent.Ref$State@3cca0ac2:
Thread[Thread-2,5,main]
at java.lang.Thread.getStackTrace(Thread.java:1552)
at org.apache.cassandra.utils.concurrent.Ref$Debug.(Ref.java:200)
at org.apache.cassandra.utils.concurrent.Ref$State.(Ref.java:133)
at org.apache.cassandra.utils.concurrent.Ref.(Ref.java:60)
at org.apache.cassandra.io.util.SafeMemory.(SafeMemory.java:32)
at 
org.apache.cassandra.io.compress.CompressionMetadata$Writer.(Compre

[jira] [Commented] (CASSANDRA-9285) LEAK DETECTED in sstwriter

2015-05-01 Thread Pierre N. (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14523289#comment-14523289
 ] 

Pierre N. commented on CASSANDRA-9285:
--

I haven't installed 2.1.5 on a server yet.

The following may fix the issue:

{code:none}
diff --git a/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java b/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java
index 9ac2f89..1cd7956 100644
--- a/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java
+++ b/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java
@@ -409,6 +409,7 @@ public class CompressionMetadata
             writeHeader(out, dataLength, chunks);
             for (int i = 0 ; i < count ; i++)
                 out.writeLong(offsets.getLong(i * 8L));
+            offsets.close();
         }
         finally
         {
diff --git a/src/java/org/apache/cassandra/io/sstable/SSTableWriter.java b/src/java/org/apache/cassandra/io/sstable/SSTableWriter.java
index a39c134..9648636 100644
--- a/src/java/org/apache/cassandra/io/sstable/SSTableWriter.java
+++ b/src/java/org/apache/cassandra/io/sstable/SSTableWriter.java
@@ -486,6 +486,7 @@ public class SSTableWriter extends SSTable
             case EARLY: case CLOSE: case NORMAL:
                 iwriter.close();
                 dataFile.close();
+                iwriter.summary.close();
                 if (type == FinishType.CLOSE)
                     iwriter.bf.close();
         }
{code}
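The patch works by closing each off-heap buffer on every code path that finishes with it. Outside Cassandra's internals, that discipline can be sketched with plain try-with-resources on a stand-in resource; all names below (OffHeapBuffer, writeIndexSummary) are illustrative, not real Cassandra API:

```java
// Minimal sketch of the close-on-every-path discipline the patch enforces.
// OffHeapBuffer stands in for Cassandra's SafeMemory; it is NOT a real API.
public class CloseDiscipline {
    static class OffHeapBuffer implements AutoCloseable {
        boolean closed = false;
        @Override public void close() { closed = true; }
    }

    static OffHeapBuffer lastBuffer;

    // try-with-resources guarantees close() runs even if the body throws --
    // exactly the guarantee the leaked summary/offsets buffers lacked.
    static void writeIndexSummary(boolean failMidWrite) {
        try (OffHeapBuffer summary = new OffHeapBuffer()) {
            lastBuffer = summary;
            if (failMidWrite)
                throw new RuntimeException("simulated I/O failure");
        }
    }

    public static void main(String[] args) {
        writeIndexSummary(false);
        System.out.println(lastBuffer.closed);   // true
        try { writeIndexSummary(true); } catch (RuntimeException expected) {}
        System.out.println(lastBuffer.closed);   // true: closed despite the failure
    }
}
```

A Reference-Reaper leak report like the one quoted above means some path skipped this close before the last reference was garbage collected.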

> LEAK DETECTED in sstwriter
> --
>
> Key: CASSANDRA-9285
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9285
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Pierre N.
> Fix For: 2.1.x
>
>
> To reproduce the bug:
> {code}
> public static void main(String[] args) throws Exception {
> System.setProperty("cassandra.debugrefcount","true");
> 
> String ks = "ks1";
> String table = "t1";
> 
> String schema = "CREATE TABLE " + ks + "." + table + "(a1 INT, 
> PRIMARY KEY (a1));";
> String insert = "INSERT INTO "+ ks + "." + table + "(a1) VALUES(?);";
> 
> File dir = new File("/var/tmp/" + ks + "/" + table);
> dir.mkdirs();
> 
> CQLSSTableWriter writer = 
> CQLSSTableWriter.builder().forTable(schema).using(insert).inDirectory(dir).build();
> 
> writer.addRow(1);
> writer.close();
> writer = null;
> 
> Thread.sleep(1000);System.gc();
> Thread.sleep(1000);System.gc();
> }
> {code}
> {quote}
> [2015-05-01 16:09:59,139] [Reference-Reaper:1] ERROR 
> org.apache.cassandra.utils.concurrent.Ref - LEAK DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@79fa9da9) to class 
> org.apache.cassandra.io.util.SafeMemory$MemoryTidy@2053866990:Memory@[7f87f8043b20..7f87f8043b48)
>  was not released before the reference was garbage collected
> [2015-05-01 16:09:59,143] [Reference-Reaper:1] ERROR 
> org.apache.cassandra.utils.concurrent.Ref - Allocate trace 
> org.apache.cassandra.utils.concurrent.Ref$State@79fa9da9:
> Thread[Thread-2,5,main]
>   at java.lang.Thread.getStackTrace(Thread.java:1552)
>   at org.apache.cassandra.utils.concurrent.Ref$Debug.(Ref.java:200)
>   at org.apache.cassandra.utils.concurrent.Ref$State.(Ref.java:133)
>   at org.apache.cassandra.utils.concurrent.Ref.(Ref.java:60)
>   at org.apache.cassandra.io.util.SafeMemory.(SafeMemory.java:32)
>   at 
> org.apache.cassandra.io.util.SafeMemoryWriter.(SafeMemoryWriter.java:33)
>   at 
> org.apache.cassandra.io.sstable.IndexSummaryBuilder.(IndexSummaryBuilder.java:111)
>   at 
> org.apache.cassandra.io.sstable.SSTableWriter$IndexWriter.(SSTableWriter.java:576)
>   at 
> org.apache.cassandra.io.sstable.SSTableWriter.(SSTableWriter.java:140)
>   at 
> org.apache.cassandra.io.sstable.AbstractSSTableSimpleWriter.getWriter(AbstractSSTableSimpleWriter.java:58)
>   at 
> org.apache.cassandra.io.sstable.SSTableSimpleUnsortedWriter$DiskWriter.run(SSTableSimpleUnsortedWriter.java:227)
> [2015-05-01 16:09:59,144] [Reference-Reaper:1] ERROR 
> org.apache.cassandra.utils.concurrent.Ref - LEAK DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@664382e3) to class 
> org.apache.cassandra.io.util.SafeMemory$MemoryTidy@899100784:Memory@[7f87f8043990..7f87f8043994)
>  was not released before the reference was garbage collected
> [2015-05-01 16:09:59,144] [Reference-Reaper:1] ERROR 
> org.apache.cassandra.utils.concurrent.Ref - Allocate trace 
> org.apache.cassandra.utils.concurrent.Ref$State@664382e3:
> Thread[Thread-2,5,main]
>   at java.lang.Thread.getStackTrace(Thread.java:1552)
>   at org.apache.cassandra.utils.concurrent.Ref$Debug.(Ref.java:200)
>   at org.apache.cassa

[jira] [Updated] (CASSANDRA-9323) Bulk upload is slow

2015-05-07 Thread Pierre N. (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre N. updated CASSANDRA-9323:
-
Description: 
When I bulk upload an sstable created with CQLSSTableWriter, it's very slow. I 
tested on a fresh Cassandra node (no keyspaces or tables) with good hardware 
(8 cores at 2.8 GHz, 32 GB RAM), but with a classic hard disk (I don't think an 
SSD would improve performance in this case).

When I upload an sstable from a different server I get an average of 3 MB/sec; 
in the attached example I managed to get 5 MB/sec, which is still slow.

During the streaming process I noticed that one core of the server is at full 
CPU, so I think the operation is CPU-bound on the server side. I quickly 
attached a sampling profiler to the Cassandra instance and got the following 
output:

https://i.imgur.com/IfLc2Ip.png

So I think (though I may be wrong, since the sampling is inaccurate) that 
during streaming the table is deserialized and reserialized into another 
sstable, and it's this deserialize/serialize step that consumes most of the 
CPU, slowing down the insert speed.

Can someone confirm that bulk loading is slow? I also tested on my computer and 
barely reached 1 MB/sec.

I don't see the point of fully deserializing a table I just built and sorted 
with CQLSSTableWriter; couldn't it just copy the table from offset X to offset 
Y (using the index information, for example) without deserializing and 
reserializing it?
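The "copy from offset X to offset Y" idea can be sketched with NIO's FileChannel.transferTo, which asks the kernel to move a byte range between channels without the JVM ever deserializing it. This only illustrates the concept (the class and method names are made up for the sketch); it is not how Cassandra's streaming path actually works:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ZeroCopySketch {
    // Copy bytes [offset, offset + length) of src into dst without
    // interpreting them: the data never needs to be deserialized in the JVM.
    static void transferRange(Path src, Path dst, long offset, long length) throws IOException {
        try (FileChannel in = FileChannel.open(src, StandardOpenOption.READ);
             FileChannel out = FileChannel.open(dst, StandardOpenOption.CREATE,
                                                     StandardOpenOption.WRITE)) {
            long pos = offset, remaining = length;
            while (remaining > 0) {
                // transferTo may move fewer bytes than requested, so loop.
                long sent = in.transferTo(pos, remaining, out);
                if (sent <= 0) break;           // end of file reached
                pos += sent;
                remaining -= sent;
            }
        }
    }

    public static void main(String[] args) throws IOException {
        Path src = Files.createTempFile("sstable", ".db");
        Files.write(src, "0123456789".getBytes());
        Path dst = Files.createTempFile("range", ".db");
        transferRange(src, dst, 2, 5);
        System.out.println(new String(Files.readAllBytes(dst))); // prints 23456
    }
}
```

In practice the server still has to rewrite metadata and fit the data into its own sstables, which is one reason streaming cannot simply splice byte ranges.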


  was:
Hi, 

When I bulk upload sstable created with CQLSSTableWriter, it's very slow. I 
tested on a fresh cassandra node (nothing in keyspace, nor tables) with good 
hardware (8x2.8ghz, 32G ram), but with classic hard disk (performance won't be 
improved with SSD in this case I think). 

When I upload from a different server an sstable I get an average of 3 MB/sec, 
in the attached example I managed to get 5 MB/sec, which is still slow.

During the streaming process  I noticed that one core of the server is full 
CPU, so I think the operation is CPU bound server side. I quickly attached a 
sample profiler to the cassandra instance and got the following output : 

https://i.imgur.com/IfLc2Ip.png

So, I think, but I may be wrong because it's inaccurate sampling, during 
streaming the table is unserialized and reserialized to another sstable, and 
that's this unserailize/serialize process which is taking a big amount of CPU, 
slowing down the insert speed.

Can someone confirm the bulk load is slow ? I tested also on my computer and 
barely reach 1MB/sec 

I don't understand the point of totally unserializing the table I just did 
build using the CQLSStableWriter (because it's already a long process to build 
and sort the table), couldn't it just copy the table from offset X to offset Y 
(using index information by example) without unserializing/reserializing it ?



> Bulk upload is slow
> ---
>
> Key: CASSANDRA-9323
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9323
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Pierre N.
> Attachments: App.java
>
>
> When I bulk upload sstable created with CQLSSTableWriter, it's very slow. I 
> tested on a fresh cassandra node (nothing in keyspace, nor tables) with good 
> hardware (8x2.8ghz, 32G ram), but with classic hard disk (performance won't 
> be improved with SSD in this case I think). 
> When I upload from a different server an sstable I get an average of 3 
> MB/sec, in the attached example I managed to get 5 MB/sec, which is still 
> slow.
> During the streaming process  I noticed that one core of the server is full 
> CPU, so I think the operation is CPU bound server side. I quickly attached a 
> sample profiler to the cassandra instance and got the following output : 
> https://i.imgur.com/IfLc2Ip.png
> So, I think, but I may be wrong because it's inaccurate sampling, during 
> streaming the table is unserialized and reserialized to another sstable, and 
> that's this unserailize/serialize process which is taking a big amount of 
> CPU, slowing down the insert speed.
> Can someone confirm the bulk load is slow ? I tested also on my computer and 
> barely reach 1MB/sec 
> I don't understand the point of totally unserializing the table I just did 
> build using the CQLSStableWriter (because it's already a long process to 
> build and sort the table), couldn't it just copy the table from offset X to 
> offset Y (using index information by example) without 
> unserializing/reserializing it ?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-9323) Bulk upload is slow

2015-05-07 Thread Pierre N. (JIRA)
Pierre N. created CASSANDRA-9323:


 Summary: Bulk upload is slow
 Key: CASSANDRA-9323
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9323
 Project: Cassandra
  Issue Type: Bug
Reporter: Pierre N.
 Attachments: App.java

Hi, 

When I bulk upload sstable created with CQLSSTableWriter, it's very slow. I 
tested on a fresh cassandra node (nothing in keyspace, nor tables) with good 
hardware (8x2.8ghz, 32G ram), but with classic hard disk (performance won't be 
improved with SSD in this case I think). 

When I upload from a different server an sstable I get an average of 3 MB/sec, 
in the attached example I managed to get 5 MB/sec, which is still slow.

During the streaming process  I noticed that one core of the server is full 
CPU, so I think the operation is CPU bound server side. I quickly attached a 
sample profiler to the cassandra instance and got the following output : 

https://i.imgur.com/IfLc2Ip.png

So, I think, but I may be wrong because it's inaccurate sampling, during 
streaming the table is unserialized and reserialized to another sstable, and 
that's this unserailize/serialize process which is taking a big amount of CPU, 
slowing down the insert speed.

Can someone confirm the bulk load is slow ? I tested also on my computer and 
barely reach 1MB/sec 

I don't understand the point of totally unserializing the table I just did 
build using the CQLSStableWriter (because it's already a long process to build 
and sort the table), couldn't it just copy the table from offset X to offset Y 
(using index information by example) without unserializing/reserializing it ?




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9323) Bulk loading is slow

2015-05-07 Thread Pierre N. (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre N. updated CASSANDRA-9323:
-
Description: 
When I bulk upload sstable created with CQLSSTableWriter, it's very slow. I 
tested on a fresh cassandra node (nothing in keyspace, nor tables) with good 
hardware (8x2.8ghz, 32G ram), but with classic hard disk (performance won't be 
improved with SSD in this case I think). 

When I upload from a different server an sstable using sstableloader I get an 
average of 3 MB/sec, in the attached example I managed to get 5 MB/sec, which 
is still slow.

During the streaming process  I noticed that one core of the server is full 
CPU, so I think the operation is CPU bound server side. I quickly attached a 
sample profiler to the cassandra instance and got the following output : 

https://i.imgur.com/IfLc2Ip.png

So, I think, but I may be wrong because it's inaccurate sampling, during 
streaming the table is unserialized and reserialized to another sstable, and 
that's this unserailize/serialize process which is taking a big amount of CPU, 
slowing down the insert speed.

Can someone confirm the bulk load is slow ? I tested also on my computer and 
barely reach 1MB/sec 

I don't understand the point of totally unserializing the table I just did 
build using the CQLSStableWriter (because it's already a long process to build 
and sort the table), couldn't it just copy the table from offset X to offset Y 
(using index information by example) without unserializing/reserializing it ?


  was:
When I bulk upload sstable created with CQLSSTableWriter, it's very slow. I 
tested on a fresh cassandra node (nothing in keyspace, nor tables) with good 
hardware (8x2.8ghz, 32G ram), but with classic hard disk (performance won't be 
improved with SSD in this case I think). 

When I upload from a different server an sstable I get an average of 3 MB/sec, 
in the attached example I managed to get 5 MB/sec, which is still slow.

During the streaming process  I noticed that one core of the server is full 
CPU, so I think the operation is CPU bound server side. I quickly attached a 
sample profiler to the cassandra instance and got the following output : 

https://i.imgur.com/IfLc2Ip.png

So, I think, but I may be wrong because it's inaccurate sampling, during 
streaming the table is unserialized and reserialized to another sstable, and 
that's this unserailize/serialize process which is taking a big amount of CPU, 
slowing down the insert speed.

Can someone confirm the bulk load is slow ? I tested also on my computer and 
barely reach 1MB/sec 

I don't understand the point of totally unserializing the table I just did 
build using the CQLSStableWriter (because it's already a long process to build 
and sort the table), couldn't it just copy the table from offset X to offset Y 
(using index information by example) without unserializing/reserializing it ?



> Bulk loading is slow
> 
>
> Key: CASSANDRA-9323
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9323
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Pierre N.
> Attachments: App.java
>
>
> When I bulk upload sstable created with CQLSSTableWriter, it's very slow. I 
> tested on a fresh cassandra node (nothing in keyspace, nor tables) with good 
> hardware (8x2.8ghz, 32G ram), but with classic hard disk (performance won't 
> be improved with SSD in this case I think). 
> When I upload from a different server an sstable using sstableloader I get an 
> average of 3 MB/sec, in the attached example I managed to get 5 MB/sec, which 
> is still slow.
> During the streaming process  I noticed that one core of the server is full 
> CPU, so I think the operation is CPU bound server side. I quickly attached a 
> sample profiler to the cassandra instance and got the following output : 
> https://i.imgur.com/IfLc2Ip.png
> So, I think, but I may be wrong because it's inaccurate sampling, during 
> streaming the table is unserialized and reserialized to another sstable, and 
> that's this unserailize/serialize process which is taking a big amount of 
> CPU, slowing down the insert speed.
> Can someone confirm the bulk load is slow ? I tested also on my computer and 
> barely reach 1MB/sec 
> I don't understand the point of totally unserializing the table I just did 
> build using the CQLSStableWriter (because it's already a long process to 
> build and sort the table), couldn't it just copy the table from offset X to 
> offset Y (using index information by example) without 
> unserializing/reserializing it ?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9323) Bulk loading is slow

2015-05-07 Thread Pierre N. (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre N. updated CASSANDRA-9323:
-
Summary: Bulk loading is slow  (was: Bulk upload is slow)

> Bulk loading is slow
> 
>
> Key: CASSANDRA-9323
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9323
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Pierre N.
> Attachments: App.java
>
>
> When I bulk upload sstable created with CQLSSTableWriter, it's very slow. I 
> tested on a fresh cassandra node (nothing in keyspace, nor tables) with good 
> hardware (8x2.8ghz, 32G ram), but with classic hard disk (performance won't 
> be improved with SSD in this case I think). 
> When I upload from a different server an sstable I get an average of 3 
> MB/sec, in the attached example I managed to get 5 MB/sec, which is still 
> slow.
> During the streaming process  I noticed that one core of the server is full 
> CPU, so I think the operation is CPU bound server side. I quickly attached a 
> sample profiler to the cassandra instance and got the following output : 
> https://i.imgur.com/IfLc2Ip.png
> So, I think, but I may be wrong because it's inaccurate sampling, during 
> streaming the table is unserialized and reserialized to another sstable, and 
> that's this unserailize/serialize process which is taking a big amount of 
> CPU, slowing down the insert speed.
> Can someone confirm the bulk load is slow ? I tested also on my computer and 
> barely reach 1MB/sec 
> I don't understand the point of totally unserializing the table I just did 
> build using the CQLSStableWriter (because it's already a long process to 
> build and sort the table), couldn't it just copy the table from offset X to 
> offset Y (using index information by example) without 
> unserializing/reserializing it ?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9323) Bulk loading is slow

2015-05-07 Thread Pierre N. (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre N. updated CASSANDRA-9323:
-
Description: 
When I bulk upload sstable created with CQLSSTableWriter, it's very slow. I 
tested on a fresh cassandra node (nothing in keyspace, nor tables) with good 
hardware (8x2.8ghz, 32G ram), but with classic hard disk (performance won't be 
improved with SSD in this case I think). 

When I upload from a different server an sstable using sstableloader I get an 
average of 3 MB/sec, in the attached example I managed to get 5 MB/sec, which 
is still slow.

During the streaming process  I noticed that one core of the server is full 
CPU, so I think the operation is CPU bound server side. I quickly attached a 
sample profiler to the cassandra instance and got the following output : 

https://i.imgur.com/IfLc2Ip.png

So, I think, but I may be wrong because it's inaccurate sampling, during 
streaming the table is unserialized and reserialized to another sstable, and 
that's this unserialize/serialize process which is taking a big amount of CPU, 
slowing down the insert speed.

Can someone confirm the bulk load is slow ? I tested also on my computer and 
barely reach 1MB/sec 

I don't understand the point of totally unserializing the table I just did 
build using the CQLSStableWriter (because it's already a long process to build 
and sort the table), couldn't it just copy the table from offset X to offset Y 
(using index information by example) without unserializing/reserializing it ?





> Bulk loading is slow
> 
>
> Key: CASSANDRA-9323
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9323
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Pierre N.
> Attachments: App.java
>
>





[jira] [Commented] (CASSANDRA-8845) sorted CQLSSTableWriter accept unsorted clustering keys

2015-03-18 Thread Pierre N. (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14368609#comment-14368609
 ] 

Pierre N. commented on CASSANDRA-8845:
--

Yes, that's what I'm saying: if this is the desired behavior, then only the 
javadoc needs to be updated and there is no bug.

> sorted CQLSSTableWriter accept unsorted clustering keys
> ---
>
> Key: CASSANDRA-8845
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8845
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Pierre N.
> Fix For: 2.1.4
>
> Attachments: TestSorted.java
>
>
> The javadoc says : 
> {quote}
> The SSTable sorted order means that rows are added such that their partition 
> key respect the partitioner order and for a given partition, that *the rows 
> respect the clustering columns order*.
> public Builder sorted()
> {quote}
> It throws an exception when partition keys are in incorrect order; however, it 
> doesn't throw when rows are inserted with clustering keys out of order. It 
> buffers them and sorts them into the correct order.
> {code}
> writer.addRow(1, 3);
> writer.addRow(1, 1);
> writer.addRow(1, 2);
> {code}
> {code}
> $ sstable2json sorted/ks/t1/ks-t1-ka-1-Data.db 
> [
> {"key": "1",
>  "cells": [["\u0000\u0000\u0000\u0001:","",1424524149557000],
>["\u0000\u0000\u0000\u0002:","",1424524149557000],
>["\u0000\u0000\u0000\u0003:","",142452414955]]}
> ]
> {code}
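
The buffering behavior described above can be sketched with a plain-JDK 
illustration (this is a hypothetical model of the observed behavior, not 
Cassandra source code — the class and method names are invented for the 
example):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical model of what the sorted() writer appears to do: rows added
// out of clustering-key order within a partition are buffered, then sorted
// before being flushed, which is why the out-of-order addRow calls succeed.
public class SortedBufferSketch {

    // Buffer clustering keys for one partition, then emit them sorted.
    static List<Integer> flushSorted(List<Integer> bufferedClusteringKeys) {
        List<Integer> sorted = new ArrayList<>(bufferedClusteringKeys);
        Collections.sort(sorted);
        return sorted;
    }

    public static void main(String[] args) {
        // Mirrors the addRow(1, 3) / addRow(1, 1) / addRow(1, 2) calls above.
        List<Integer> buffered = List.of(3, 1, 2);
        System.out.println(flushSorted(buffered)); // prints [1, 2, 3]
    }
}
```

This matches the sstable2json output in the report, where the cells come back 
in clustering order 1, 2, 3 despite being inserted as 3, 1, 2.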


