[jira] [Comment Edited] (CASSANDRA-15128) Cassandra does not support openjdk version "1.8.0_202"

2019-05-15 Thread Panneerselvam (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16841023#comment-16841023
 ] 

Panneerselvam edited comment on CASSANDRA-15128 at 5/16/19 6:35 AM:


*+Please find the details below from before and after the JDK change.+*

 

*+Before changing to Oracle JDK:+*   caused the issue

 

C:\Users\panneer>java -version

openjdk version "1.8.0_202"

OpenJDK Runtime Environment (build 1.8.0_202-b08)

Eclipse OpenJ9 VM (build openj9-0.12.1, JRE 1.8.0 Windows 8.1 amd64-64-Bit 
Compressed References 20190205_265 (JIT enabled, AOT enabled)

OpenJ9   - 90dd8cb40

OMR  - d2f4534b

JCL  - d002501a90 based on jdk8u202-b08)

 

*+After changing to Oracle JDK:+*    working version

 

C:\Users\panneer>java -version

java version "1.8.0_141"

Java(TM) SE Runtime Environment (build 1.8.0_141-b15)

Java HotSpot(TM) 64-Bit Server VM (build 25.141-b15, mixed mode)
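A quick way to double-check which VM a given {{java}} binary really is (an illustrative snippet, not part of Cassandra) is to print the VM system properties, since {{java.version}} alone looks similar on HotSpot and OpenJ9:

{code:java}
public class WhichVm
{
    public static void main(String[] args)
    {
        // java.vm.name / java.vm.version distinguish HotSpot from OpenJ9 even when
        // java.version reports 1.8.0_xxx on both.
        System.out.println("java.vm.name    = " + System.getProperty("java.vm.name"));
        System.out.println("java.vm.version = " + System.getProperty("java.vm.version"));
        System.out.println("java.version    = " + System.getProperty("java.version"));
    }
}
{code}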


was (Author: panneerboss):
*+Please find the details below before and after the jdk change.+*

 

*+Before changing to oracle jdk :+  * Caused Issue++

 

C:\Users\panneer>java -version

openjdk version "1.8.0_202"

OpenJDK Runtime Environment (build 1.8.0_202-b08)

Eclipse OpenJ9 VM (build openj9-0.12.1, JRE 1.8.0 Windows 8.1 amd64-64-Bit 
Compressed References 20190205_265 (JIT enabled, AOT enabled)

OpenJ9   - 90dd8cb40

OMR  - d2f4534b

JCL  - d002501a90 based on jdk8u202-b08)

 

*+After changing to oracle jdk:+*   ++  Working version

 

C:\Users\panneer>D:\Softwares\Java\jdk1.8.0_141\bin\java -version

java version "1.8.0_141"

Java(TM) SE Runtime Environment (build 1.8.0_141-b15)

Java HotSpot(TM) 64-Bit Server VM (build 25.141-b15, mixed mode)

> Cassandra does not support openjdk version "1.8.0_202"
> --
>
> Key: CASSANDRA-15128
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15128
> Project: Cassandra
>  Issue Type: Bug
>  Components: Build
>Reporter: Panneerselvam
>Priority: Normal
>
> I am trying to set up Apache Cassandra 3.11.4 on my Windows 8 system and am 
> getting the error below when starting the cassandra.bat file.
>  Software installed:
>  * Cassandra 3.11.4 
>  * Java 1.8 
>  * Python 2.7
> It started working after installing the HotSpot JDK 1.8.
> Are we not supporting OpenJDK 1.8, or is the issue only with this particular 
> version (1.8.0_202)?
>  
>  
> {code:java}
> Exception (java.lang.ExceptionInInitializerError) encountered during startup: 
> null
> java.lang.ExceptionInInitializerError
>     at java.lang.J9VMInternals.ensureError(J9VMInternals.java:146)
>     at 
> java.lang.J9VMInternals.recordInitializationFailure(J9VMInternals.java:135)
>     at 
> org.apache.cassandra.utils.ObjectSizes.sizeOfReferenceArray(ObjectSizes.java:79)
>     at 
> org.apache.cassandra.utils.ObjectSizes.sizeOfArray(ObjectSizes.java:89)
>     at 
> org.apache.cassandra.utils.ObjectSizes.sizeOnHeapExcludingData(ObjectSizes.java:112)
>     at 
> org.apache.cassandra.db.AbstractBufferClusteringPrefix.unsharedHeapSizeExcludingData(AbstractBufferClusteringPrefix.java:70)
>     at 
> org.apache.cassandra.db.rows.BTreeRow.unsharedHeapSizeExcludingData(BTreeRow.java:450)
>     at 
> org.apache.cassandra.db.partitions.AtomicBTreePartition$RowUpdater.apply(AtomicBTreePartition.java:336)
>     at 
> org.apache.cassandra.db.partitions.AtomicBTreePartition$RowUpdater.apply(AtomicBTreePartition.java:295)
>     at 
> org.apache.cassandra.utils.btree.BTree.buildInternal(BTree.java:139)
>     at org.apache.cassandra.utils.btree.BTree.build(BTree.java:121)
>     at org.apache.cassandra.utils.btree.BTree.update(BTree.java:178)
>     at 
> org.apache.cassandra.db.partitions.AtomicBTreePartition.addAllWithSizeDelta(AtomicBTreePartition.java:156)
>     at org.apache.cassandra.db.Memtable.put(Memtable.java:282)
>     at 
> org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:1352)
>     at org.apache.cassandra.db.Keyspace.applyInternal(Keyspace.java:626)
>     at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:470)
>     at org.apache.cassandra.db.Mutation.apply(Mutation.java:227)
>     at org.apache.cassandra.db.Mutation.apply(Mutation.java:232)
>     at org.apache.cassandra.db.Mutation.apply(Mutation.java:241)
>     at 
> org.apache.cassandra.cql3.statements.ModificationStatement.executeInternalWithoutCondition(ModificationStatement.java:587)
>     at 
> org.apache.cassandra.cql3.statements.ModificationStatement.executeInternal(ModificationStatement.java:581)
>     at 
> org.apache.cassandra.cql3.QueryProcessor.executeOnceInternal(QueryProcessor.java:365)
>     at 
> org.apache.cassandra.db.SystemKeyspace.persistLocalMetadata(SystemKeyspace.java:520)

[jira] [Commented] (CASSANDRA-15128) Cassandra does not support openjdk version "1.8.0_202"

2019-05-15 Thread Panneerselvam (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16841023#comment-16841023
 ] 

Panneerselvam commented on CASSANDRA-15128:
---

*+Please find the details below before and after the jdk change.+*

 

*+Before changing to oracle jdk :+  * Caused Issue++

 

C:\Users\panneer>java -version

openjdk version "1.8.0_202"

OpenJDK Runtime Environment (build 1.8.0_202-b08)

Eclipse OpenJ9 VM (build openj9-0.12.1, JRE 1.8.0 Windows 8.1 amd64-64-Bit 
Compressed References 20190205_265 (JIT enabled, AOT enabled)

OpenJ9   - 90dd8cb40

OMR  - d2f4534b

JCL  - d002501a90 based on jdk8u202-b08)

 

*+After changing to oracle jdk:+*   ++  Working version

 

C:\Users\panneer>D:\Softwares\Java\jdk1.8.0_141\bin\java -version

java version "1.8.0_141"

Java(TM) SE Runtime Environment (build 1.8.0_141-b15)

Java HotSpot(TM) 64-Bit Server VM (build 25.141-b15, mixed mode)

> Cassandra does not support openjdk version "1.8.0_202"
> --
>
> Key: CASSANDRA-15128
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15128
> Project: Cassandra
>  Issue Type: Bug
>  Components: Build
>Reporter: Panneerselvam
>Priority: Normal
>
> I am trying to set up Apache Cassandra 3.11.4 on my Windows 8 system and am 
> getting the error below when starting the cassandra.bat file.
>  Software installed:
>  * Cassandra 3.11.4 
>  * Java 1.8 
>  * Python 2.7
> It started working after installing the HotSpot JDK 1.8.
> Are we not supporting OpenJDK 1.8, or is the issue only with this particular 
> version (1.8.0_202)?
>  
>  
> {code:java}
> Exception (java.lang.ExceptionInInitializerError) encountered during startup: 
> null
> java.lang.ExceptionInInitializerError
>     at java.lang.J9VMInternals.ensureError(J9VMInternals.java:146)
>     at 
> java.lang.J9VMInternals.recordInitializationFailure(J9VMInternals.java:135)
>     at 
> org.apache.cassandra.utils.ObjectSizes.sizeOfReferenceArray(ObjectSizes.java:79)
>     at 
> org.apache.cassandra.utils.ObjectSizes.sizeOfArray(ObjectSizes.java:89)
>     at 
> org.apache.cassandra.utils.ObjectSizes.sizeOnHeapExcludingData(ObjectSizes.java:112)
>     at 
> org.apache.cassandra.db.AbstractBufferClusteringPrefix.unsharedHeapSizeExcludingData(AbstractBufferClusteringPrefix.java:70)
>     at 
> org.apache.cassandra.db.rows.BTreeRow.unsharedHeapSizeExcludingData(BTreeRow.java:450)
>     at 
> org.apache.cassandra.db.partitions.AtomicBTreePartition$RowUpdater.apply(AtomicBTreePartition.java:336)
>     at 
> org.apache.cassandra.db.partitions.AtomicBTreePartition$RowUpdater.apply(AtomicBTreePartition.java:295)
>     at 
> org.apache.cassandra.utils.btree.BTree.buildInternal(BTree.java:139)
>     at org.apache.cassandra.utils.btree.BTree.build(BTree.java:121)
>     at org.apache.cassandra.utils.btree.BTree.update(BTree.java:178)
>     at 
> org.apache.cassandra.db.partitions.AtomicBTreePartition.addAllWithSizeDelta(AtomicBTreePartition.java:156)
>     at org.apache.cassandra.db.Memtable.put(Memtable.java:282)
>     at 
> org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:1352)
>     at org.apache.cassandra.db.Keyspace.applyInternal(Keyspace.java:626)
>     at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:470)
>     at org.apache.cassandra.db.Mutation.apply(Mutation.java:227)
>     at org.apache.cassandra.db.Mutation.apply(Mutation.java:232)
>     at org.apache.cassandra.db.Mutation.apply(Mutation.java:241)
>     at 
> org.apache.cassandra.cql3.statements.ModificationStatement.executeInternalWithoutCondition(ModificationStatement.java:587)
>     at 
> org.apache.cassandra.cql3.statements.ModificationStatement.executeInternal(ModificationStatement.java:581)
>     at 
> org.apache.cassandra.cql3.QueryProcessor.executeOnceInternal(QueryProcessor.java:365)
>     at 
> org.apache.cassandra.db.SystemKeyspace.persistLocalMetadata(SystemKeyspace.java:520)
>     at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:221)
>     at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:620)
>     at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:732)
> Caused by: java.lang.NumberFormatException: For input string: "openj9-0"
>     at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
>     at java.lang.Integer.parseInt(Integer.java:580)
>     at java.lang.Integer.parseInt(Integer.java:615)
>     at 
> org.github.jamm.MemoryLayoutSpecification.getEffectiveMemoryLayoutSpecification(MemoryLayoutSpecification.java:190)
>     at 
> org.github.jamm.MemoryLayoutSpecification.(MemoryLayoutSpecification.

[jira] [Updated] (CASSANDRA-15129) Cassandra unit test with compression occurs BUILD FAILED

2019-05-15 Thread maxwellguo (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

maxwellguo updated CASSANDRA-15129:
---
Attachment: CASSANDRA-15129.txt

> Cassandra unit test with compression occurs BUILD FAILED 
> -
>
> Key: CASSANDRA-15129
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15129
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/unit
>Reporter: maxwellguo
>Priority: Normal
> Attachments: CASSANDRA-15129.txt
>
>
> Under the Cassandra source code directory, when I run the command ant 
> test-compression, an exception occurs.
> {panel:title= log}
> [junit-timeout] Testsuite: 
> .Users.maxwell.Documents.software.cassandra_project.cassandra.test.distributed.org.apache.cassandra.distributed.DistributedReadWritePathTest-compression
> [junit-timeout] Testsuite: 
> .Users.maxwell.Documents.software.cassandra_project.cassandra.test.distributed.org.apache.cassandra.distributed.DistributedReadWritePathTest-compression
>  Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0 sec
> [junit-timeout]
> [junit-timeout] Null Test:Caused an ERROR
> [junit-timeout] 
> /Users/maxwell/Documents/software/cassandra_project/cassandra/test/distributed/org/apache/cassandra/distributed/DistributedReadWritePathTest
> [junit-timeout] java.lang.ClassNotFoundException: 
> /Users/maxwell/Documents/software/cassandra_project/cassandra/test/distributed/org/apache/cassandra/distributed/DistributedReadWritePathTest
> [junit-timeout]   at java.lang.Class.forName0(Native Method)
> [junit-timeout]   at java.lang.Class.forName(Class.java:264)
> [junit-timeout]
> [junit-timeout]
> [junit-timeout] Test 
> .Users.maxwell.Documents.software.cassandra_project.cassandra.test.distributed.org.apache.cassandra.distributed.DistributedReadWritePathTest
>  FAILED
> {panel}
> When we run ant test-compression, both the unit directory tests and the 
> distributed directory tests are run with the compression configuration. But in 
> build.xml, the testlist-compression macrodef only takes the unit test directory 
> as input, while the test-compression target uses two fileset dirs, 
> "test.unit.src" and "test.distributed.src". So when it comes to the distributed 
> directory's tests with compression, a ClassNotFoundException occurs.






[jira] [Commented] (CASSANDRA-15129) Cassandra unit test with compression occurs BUILD FAILED

2019-05-15 Thread maxwellguo (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16841019#comment-16841019
 ] 

maxwellguo commented on CASSANDRA-15129:


I tried to fix this problem with a small change to build.xml but failed, as I am 
not familiar with Ant. I have two suggestions: 1. run the compression tests for 
the unit and distributed directories separately, rather than under one 
test-compression target; 2. delete the line, since we have dtest and the 
compression tests for the distributed directory can be moved there. [~ifesdjeen] [~aweisberg] 

> Cassandra unit test with compression occurs BUILD FAILED 
> -
>
> Key: CASSANDRA-15129
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15129
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/unit
>Reporter: maxwellguo
>Priority: Normal
>
> Under the Cassandra source code directory, when I run the command ant 
> test-compression, an exception occurs.
> {panel:title= log}
> [junit-timeout] Testsuite: 
> .Users.maxwell.Documents.software.cassandra_project.cassandra.test.distributed.org.apache.cassandra.distributed.DistributedReadWritePathTest-compression
> [junit-timeout] Testsuite: 
> .Users.maxwell.Documents.software.cassandra_project.cassandra.test.distributed.org.apache.cassandra.distributed.DistributedReadWritePathTest-compression
>  Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0 sec
> [junit-timeout]
> [junit-timeout] Null Test:Caused an ERROR
> [junit-timeout] 
> /Users/maxwell/Documents/software/cassandra_project/cassandra/test/distributed/org/apache/cassandra/distributed/DistributedReadWritePathTest
> [junit-timeout] java.lang.ClassNotFoundException: 
> /Users/maxwell/Documents/software/cassandra_project/cassandra/test/distributed/org/apache/cassandra/distributed/DistributedReadWritePathTest
> [junit-timeout]   at java.lang.Class.forName0(Native Method)
> [junit-timeout]   at java.lang.Class.forName(Class.java:264)
> [junit-timeout]
> [junit-timeout]
> [junit-timeout] Test 
> .Users.maxwell.Documents.software.cassandra_project.cassandra.test.distributed.org.apache.cassandra.distributed.DistributedReadWritePathTest
>  FAILED
> {panel}
> When we run ant test-compression, both the unit directory tests and the 
> distributed directory tests are run with the compression configuration. But in 
> build.xml, the testlist-compression macrodef only takes the unit test directory 
> as input, while the test-compression target uses two fileset dirs, 
> "test.unit.src" and "test.distributed.src". So when it comes to the distributed 
> directory's tests with compression, a ClassNotFoundException occurs.






[jira] [Updated] (CASSANDRA-15129) Cassandra unit test with compression occurs BUILD FAILED

2019-05-15 Thread maxwellguo (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

maxwellguo updated CASSANDRA-15129:
---
Description: 
Under the Cassandra source code directory, when I run the command ant 
test-compression, an exception occurs.

{panel:title= log}
[junit-timeout] Testsuite: 
.Users.maxwell.Documents.software.cassandra_project.cassandra.test.distributed.org.apache.cassandra.distributed.DistributedReadWritePathTest-compression
[junit-timeout] Testsuite: 
.Users.maxwell.Documents.software.cassandra_project.cassandra.test.distributed.org.apache.cassandra.distributed.DistributedReadWritePathTest-compression
 Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0 sec
[junit-timeout]
[junit-timeout] Null Test:  Caused an ERROR
[junit-timeout] 
/Users/maxwell/Documents/software/cassandra_project/cassandra/test/distributed/org/apache/cassandra/distributed/DistributedReadWritePathTest
[junit-timeout] java.lang.ClassNotFoundException: 
/Users/maxwell/Documents/software/cassandra_project/cassandra/test/distributed/org/apache/cassandra/distributed/DistributedReadWritePathTest
[junit-timeout] at java.lang.Class.forName0(Native Method)
[junit-timeout] at java.lang.Class.forName(Class.java:264)
[junit-timeout]
[junit-timeout]
[junit-timeout] Test 
.Users.maxwell.Documents.software.cassandra_project.cassandra.test.distributed.org.apache.cassandra.distributed.DistributedReadWritePathTest
 FAILED
{panel}

When we run ant test-compression, both the unit directory tests and the 
distributed directory tests are run with the compression configuration. But in 
build.xml, the testlist-compression macrodef only takes the unit test directory 
as input, while the test-compression target uses two fileset dirs, 
"test.unit.src" and "test.distributed.src". So when it comes to the distributed 
directory's tests with compression, a ClassNotFoundException occurs.






  was:
under cassandra source code dir ,when I run the command : ant test-compression 
will occurs npe exception . 

{panel:title=My title}
[junit-timeout] Testsuite: 
.Users.maxwell.Documents.software.cassandra_project.cassandra.test.distributed.org.apache.cassandra.distributed.DistributedReadWritePathTest-compression
[junit-timeout] Testsuite: 
.Users.maxwell.Documents.software.cassandra_project.cassandra.test.distributed.org.apache.cassandra.distributed.DistributedReadWritePathTest-compression
 Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0 sec
[junit-timeout]
[junit-timeout] Null Test:  Caused an ERROR
[junit-timeout] 
/Users/maxwell/Documents/software/cassandra_project/cassandra/test/distributed/org/apache/cassandra/distributed/DistributedReadWritePathTest
[junit-timeout] java.lang.ClassNotFoundException: 
/Users/maxwell/Documents/software/cassandra_project/cassandra/test/distributed/org/apache/cassandra/distributed/DistributedReadWritePathTest
[junit-timeout] at java.lang.Class.forName0(Native Method)
[junit-timeout] at java.lang.Class.forName(Class.java:264)
[junit-timeout]
[junit-timeout]
[junit-timeout] Test 
.Users.maxwell.Documents.software.cassandra_project.cassandra.test.distributed.org.apache.cassandra.distributed.DistributedReadWritePathTest
 FAILED
{panel}

for we use ant test-compression ,then the unit dir's test and the dristributed 
dir test will be run with compression configure.  but in the build.xml 
configure for testlist-compression macrodef, only unit test dir was as the 
input .and the target test-compression use two fileset dir "test.unit.src" and 
"test.distributed.src" , so when comes to distributed dir's test with 
compression ,there occurs an CLASSANOT FOUND exception . 







> Cassandra unit test with compression occurs BUILD FAILED 
> -
>
> Key: CASSANDRA-15129
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15129
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/unit
>Reporter: maxwellguo
>Priority: Normal
>
> Under the Cassandra source code directory, when I run the command ant 
> test-compression, an exception occurs.
> {panel:title= log}
> [junit-timeout] Testsuite: 
> .Users.maxwell.Documents.software.cassandra_project.cassandra.test.distributed.org.apache.cassandra.distributed.DistributedReadWritePathTest-compression
> [junit-timeout] Testsuite: 
> .Users.maxwell.Documents.software.cassandra_project.cassandra.test.distributed.org.apache.cassandra.distributed.DistributedReadWritePathTest-compression
>  Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0 sec
> [junit-timeout]
> [junit-timeout] Null Test:Caused an ERROR
> [junit-timeout] 
> /Users/maxwell/Documents/software/cassandra_project/cassandra/test/distributed/org/apache/cassandra/distributed/DistributedReadWritePathTest
> [junit-timeout] java.lang.Cla

[jira] [Created] (CASSANDRA-15129) Cassandra unit test with compression occurs BUILD FAILED

2019-05-15 Thread maxwellguo (JIRA)
maxwellguo created CASSANDRA-15129:
--

 Summary: Cassandra unit test with compression occurs BUILD FAILED 
 Key: CASSANDRA-15129
 URL: https://issues.apache.org/jira/browse/CASSANDRA-15129
 Project: Cassandra
  Issue Type: Bug
  Components: Test/unit
Reporter: maxwellguo


Under the Cassandra source code directory, when I run the command ant 
test-compression, an exception occurs.

{panel:title=My title}
[junit-timeout] Testsuite: 
.Users.maxwell.Documents.software.cassandra_project.cassandra.test.distributed.org.apache.cassandra.distributed.DistributedReadWritePathTest-compression
[junit-timeout] Testsuite: 
.Users.maxwell.Documents.software.cassandra_project.cassandra.test.distributed.org.apache.cassandra.distributed.DistributedReadWritePathTest-compression
 Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0 sec
[junit-timeout]
[junit-timeout] Null Test:  Caused an ERROR
[junit-timeout] 
/Users/maxwell/Documents/software/cassandra_project/cassandra/test/distributed/org/apache/cassandra/distributed/DistributedReadWritePathTest
[junit-timeout] java.lang.ClassNotFoundException: 
/Users/maxwell/Documents/software/cassandra_project/cassandra/test/distributed/org/apache/cassandra/distributed/DistributedReadWritePathTest
[junit-timeout] at java.lang.Class.forName0(Native Method)
[junit-timeout] at java.lang.Class.forName(Class.java:264)
[junit-timeout]
[junit-timeout]
[junit-timeout] Test 
.Users.maxwell.Documents.software.cassandra_project.cassandra.test.distributed.org.apache.cassandra.distributed.DistributedReadWritePathTest
 FAILED
{panel}

When we run ant test-compression, both the unit directory tests and the 
distributed directory tests are run with the compression configuration. But in 
build.xml, the testlist-compression macrodef only takes the unit test directory 
as input, while the test-compression target uses two fileset dirs, 
"test.unit.src" and "test.distributed.src". So when it comes to the distributed 
directory's tests with compression, a ClassNotFoundException occurs.











[jira] [Commented] (CASSANDRA-15128) Cassandra does not support openjdk version "1.8.0_202"

2019-05-15 Thread Chris Lohfink (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16840920#comment-16840920
 ] 

Chris Lohfink commented on CASSANDRA-15128:
---

Are you sure you are not running that with JDK 9? (Going by the 
{{java.lang.J9VMInternals}} in the stack trace.)

> Cassandra does not support openjdk version "1.8.0_202"
> --
>
> Key: CASSANDRA-15128
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15128
> Project: Cassandra
>  Issue Type: Bug
>  Components: Build
>Reporter: Panneerselvam
>Priority: Normal
>
> I am trying to set up Apache Cassandra 3.11.4 on my Windows 8 system and am 
> getting the error below when starting the cassandra.bat file.
>  Software installed:
>  * Cassandra 3.11.4 
>  * Java 1.8 
>  * Python 2.7
> It started working after installing the HotSpot JDK 1.8.
> Are we not supporting OpenJDK 1.8, or is the issue only with this particular 
> version (1.8.0_202)?
>  
>  
> {code:java}
> Exception (java.lang.ExceptionInInitializerError) encountered during startup: 
> null
> java.lang.ExceptionInInitializerError
>     at java.lang.J9VMInternals.ensureError(J9VMInternals.java:146)
>     at 
> java.lang.J9VMInternals.recordInitializationFailure(J9VMInternals.java:135)
>     at 
> org.apache.cassandra.utils.ObjectSizes.sizeOfReferenceArray(ObjectSizes.java:79)
>     at 
> org.apache.cassandra.utils.ObjectSizes.sizeOfArray(ObjectSizes.java:89)
>     at 
> org.apache.cassandra.utils.ObjectSizes.sizeOnHeapExcludingData(ObjectSizes.java:112)
>     at 
> org.apache.cassandra.db.AbstractBufferClusteringPrefix.unsharedHeapSizeExcludingData(AbstractBufferClusteringPrefix.java:70)
>     at 
> org.apache.cassandra.db.rows.BTreeRow.unsharedHeapSizeExcludingData(BTreeRow.java:450)
>     at 
> org.apache.cassandra.db.partitions.AtomicBTreePartition$RowUpdater.apply(AtomicBTreePartition.java:336)
>     at 
> org.apache.cassandra.db.partitions.AtomicBTreePartition$RowUpdater.apply(AtomicBTreePartition.java:295)
>     at 
> org.apache.cassandra.utils.btree.BTree.buildInternal(BTree.java:139)
>     at org.apache.cassandra.utils.btree.BTree.build(BTree.java:121)
>     at org.apache.cassandra.utils.btree.BTree.update(BTree.java:178)
>     at 
> org.apache.cassandra.db.partitions.AtomicBTreePartition.addAllWithSizeDelta(AtomicBTreePartition.java:156)
>     at org.apache.cassandra.db.Memtable.put(Memtable.java:282)
>     at 
> org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:1352)
>     at org.apache.cassandra.db.Keyspace.applyInternal(Keyspace.java:626)
>     at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:470)
>     at org.apache.cassandra.db.Mutation.apply(Mutation.java:227)
>     at org.apache.cassandra.db.Mutation.apply(Mutation.java:232)
>     at org.apache.cassandra.db.Mutation.apply(Mutation.java:241)
>     at 
> org.apache.cassandra.cql3.statements.ModificationStatement.executeInternalWithoutCondition(ModificationStatement.java:587)
>     at 
> org.apache.cassandra.cql3.statements.ModificationStatement.executeInternal(ModificationStatement.java:581)
>     at 
> org.apache.cassandra.cql3.QueryProcessor.executeOnceInternal(QueryProcessor.java:365)
>     at 
> org.apache.cassandra.db.SystemKeyspace.persistLocalMetadata(SystemKeyspace.java:520)
>     at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:221)
>     at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:620)
>     at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:732)
> Caused by: java.lang.NumberFormatException: For input string: "openj9-0"
>     at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
>     at java.lang.Integer.parseInt(Integer.java:580)
>     at java.lang.Integer.parseInt(Integer.java:615)
>     at 
> org.github.jamm.MemoryLayoutSpecification.getEffectiveMemoryLayoutSpecification(MemoryLayoutSpecification.java:190)
>     at 
> org.github.jamm.MemoryLayoutSpecification.(MemoryLayoutSpecification.java:31)
> {code}
>  
>  






[jira] [Commented] (CASSANDRA-14459) DynamicEndpointSnitch should never prefer latent nodes

2019-05-15 Thread Blake Eggleston (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16840757#comment-16840757
 ] 

Blake Eggleston commented on CASSANDRA-14459:
-

I finished my first round of review of the implementation. I still need to 
spend some time looking at the tests and docs.

Some general notes:
 * I don't think we need a dynamicsnitch package, the 2 implementations could 
just live in the locator package.
 * I can't find anywhere that we validate the various dynamic snitch values? I 
didn't look super hard, but if we're not, it would be good to check people are 
using sane values on startup and jmx update.
 * It's probably a good time to rebase onto current trunk.

ScheduledExecutors:
 * {{getOrCreateSharedExecutor}} could be private
 * shutdown times accumulate in the wait section. ie: if we have 2 executors 
with a shutdown time of 2 seconds, we'll wait up to 4 seconds for them to stop.
 * we should validate shutdown hasn't been called in 
{{getOrCreateSharedExecutor}}
 * I don't see any benefit to making this class lock free. Making the 
{{getOrCreateSharedExecutor}} and {{shutdownAndWait}} methods synchronized and 
just using a normal hash map for {{executors}} would make this class easier to 
reason about (a rough sketch of what I mean follows this list). As it stands, you 
could argue there's some raciness with creating and shutting down at the same 
time, although it's unlikely to be a problem in practice. I do think I might be 
borderline nitpicking here, though.
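To illustrate the synchronized alternative from the last point, here is a rough sketch (hypothetical names, not the actual patch) that also waits against one shared deadline so per-executor shutdown times do not accumulate:

{code:java}
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class SharedExecutorsSketch
{
    private final Map<String, ScheduledExecutorService> executors = new HashMap<>();
    private boolean isShutdown = false;

    private synchronized ScheduledExecutorService getOrCreateSharedExecutor(String name)
    {
        // reject new executors once shutdown has started
        if (isShutdown)
            throw new IllegalStateException("already shut down");
        return executors.computeIfAbsent(name, n -> Executors.newSingleThreadScheduledExecutor());
    }

    public synchronized void shutdownAndWait(long timeout, TimeUnit unit) throws InterruptedException
    {
        isShutdown = true;
        long deadline = System.nanoTime() + unit.toNanos(timeout);
        for (ScheduledExecutorService executor : executors.values())
        {
            executor.shutdown();
            // single shared deadline: total wait is bounded by the requested timeout
            long remaining = deadline - System.nanoTime();
            if (remaining > 0)
                executor.awaitTermination(remaining, TimeUnit.NANOSECONDS);
        }
    }
}
{code}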

StorageService
 * {{doLocalReadTest}} method is unused

MessagingService
 * Fix class declaration indentation
 * remove unused import

DynamicEndpointSnitch
 * I don't think we need to check {{logger.isTraceEnabled}} before calling 
{{logger.trace()}}? At least I don't see any toString computation that would 
happen in the calls.
 * The class hierarchy could be improved a bit. There's code in 
{{DynamicEndpointSnitchHistogram}} that's also used in the legacy snitch, and 
code in {{DynamicEndpointSnitch}} that's only used in 
{{DynamicEndpointSnitchHistogram}}. The boundary between DynamicEndpointSnitch 
and DynamicEndpointSnitchHistogram in particular feels kind of arbitrary.

DynamicEndpointLegacySnitch
 * If we're going to keep the old behavior around as a failsafe (and we 
should), I think we should avoid improving it by changing the reset behavior. 
Only resetting under some situations intuitively feels like the right thing to 
do, but it would suck if there were unforeseen problems with it that made it a 
regression from the 3.x behavior.

> DynamicEndpointSnitch should never prefer latent nodes
> --
>
> Key: CASSANDRA-14459
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14459
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Legacy/Coordination
>Reporter: Joseph Lynch
>Assignee: Joseph Lynch
>Priority: Low
>  Labels: 4.0-feature-freeze-review-requested, 
> pull-request-available
> Fix For: 4.x
>
>  Time Spent: 25.5h
>  Remaining Estimate: 0h
>
> The DynamicEndpointSnitch has two unfortunate behaviors that allow it to 
> provide latent hosts as replicas:
>  # Loses all latency information when Cassandra restarts
>  # Clears latency information entirely every ten minutes (by default), 
> allowing global queries to be routed to _other datacenters_ (and local 
> queries cross racks/azs)
> This means that the first few queries after restart/reset could be quite slow 
> compared to average latencies. I propose we solve this by resetting to the 
> minimum observed latency instead of completely clearing the samples and 
> extending the {{isLatencyForSnitch}} idea to a three state variable instead 
> of two, in particular {{YES}}, {{NO}}, {{MAYBE}}. This extension allows 
> {{EchoMessages}} and {{PingMessages}} to send {{MAYBE}} indicating that the 
> DS should use those measurements if it only has one or fewer samples for a 
> host. This fixes both problems because on process restart we send out 
> {{PingMessages}} / {{EchoMessages}} as part of startup, and we would reset to 
> effectively the RTT of the hosts (also at that point normal gossip 
> {{EchoMessages}} have an opportunity to add an additional latency 
> measurement).
> This strategy also nicely deals with the "a host got slow but now it's fine" 
> problem that the DS resets were (afaik) designed to stop because the 
> {{EchoMessage}} ping latency will count only after the reset for that host. 
> Ping latency is a more reasonable lower bound on host latency (as opposed to 
> status quo of zero).
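A rough illustration of the three-state idea (hypothetical names, not the actual patch): MAYBE measurements such as ping/echo RTTs are only recorded for hosts with at most one existing sample, so they act as a latency floor rather than ongoing signal.

{code:java}
import java.util.HashMap;
import java.util.Map;

public class LatencyHintsSketch
{
    enum LatencyUsage { YES, NO, MAYBE }

    private final Map<String, Integer> sampleCounts = new HashMap<>();

    boolean shouldRecord(String host, LatencyUsage usage)
    {
        int samples = sampleCounts.getOrDefault(host, 0);
        switch (usage)
        {
            case YES:   return true;          // normal request latency, always recorded
            case NO:    return false;         // never fed into the snitch
            case MAYBE: return samples <= 1;  // ping/echo RTT: only as an initial floor
            default:    return false;
        }
    }

    void record(String host, LatencyUsage usage, long latencyNanos)
    {
        if (shouldRecord(host, usage))
            sampleCounts.merge(host, 1, Integer::sum);
        // latencyNanos would be fed into the per-host histogram here
    }
}
{code}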




[jira] [Created] (CASSANDRA-15128) Cassandra does not support openjdk version "1.8.0_202"

2019-05-15 Thread Panneerselvam (JIRA)
Panneerselvam created CASSANDRA-15128:
-

 Summary: Cassandra does not support openjdk version "1.8.0_202"
 Key: CASSANDRA-15128
 URL: https://issues.apache.org/jira/browse/CASSANDRA-15128
 Project: Cassandra
  Issue Type: Bug
  Components: Build
Reporter: Panneerselvam


I am trying to set up Apache Cassandra 3.11.4 on my Windows 8 system and am 
getting the error below when starting the cassandra.bat file.

 Software installed:
 * Cassandra 3.11.4 
 * Java 1.8 
 * Python 2.7

It started working after installing the HotSpot JDK 1.8.

Are we not supporting OpenJDK 1.8, or is the issue only with this particular 
version (1.8.0_202)?

 

 
{code:java}
Exception (java.lang.ExceptionInInitializerError) encountered during startup: 
null
java.lang.ExceptionInInitializerError
    at java.lang.J9VMInternals.ensureError(J9VMInternals.java:146)
    at 
java.lang.J9VMInternals.recordInitializationFailure(J9VMInternals.java:135)
    at 
org.apache.cassandra.utils.ObjectSizes.sizeOfReferenceArray(ObjectSizes.java:79)
    at 
org.apache.cassandra.utils.ObjectSizes.sizeOfArray(ObjectSizes.java:89)
    at 
org.apache.cassandra.utils.ObjectSizes.sizeOnHeapExcludingData(ObjectSizes.java:112)
    at 
org.apache.cassandra.db.AbstractBufferClusteringPrefix.unsharedHeapSizeExcludingData(AbstractBufferClusteringPrefix.java:70)
    at 
org.apache.cassandra.db.rows.BTreeRow.unsharedHeapSizeExcludingData(BTreeRow.java:450)
    at 
org.apache.cassandra.db.partitions.AtomicBTreePartition$RowUpdater.apply(AtomicBTreePartition.java:336)
    at 
org.apache.cassandra.db.partitions.AtomicBTreePartition$RowUpdater.apply(AtomicBTreePartition.java:295)
    at org.apache.cassandra.utils.btree.BTree.buildInternal(BTree.java:139)
    at org.apache.cassandra.utils.btree.BTree.build(BTree.java:121)
    at org.apache.cassandra.utils.btree.BTree.update(BTree.java:178)
    at 
org.apache.cassandra.db.partitions.AtomicBTreePartition.addAllWithSizeDelta(AtomicBTreePartition.java:156)
    at org.apache.cassandra.db.Memtable.put(Memtable.java:282)
    at 
org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:1352)
    at org.apache.cassandra.db.Keyspace.applyInternal(Keyspace.java:626)
    at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:470)
    at org.apache.cassandra.db.Mutation.apply(Mutation.java:227)
    at org.apache.cassandra.db.Mutation.apply(Mutation.java:232)
    at org.apache.cassandra.db.Mutation.apply(Mutation.java:241)
    at 
org.apache.cassandra.cql3.statements.ModificationStatement.executeInternalWithoutCondition(ModificationStatement.java:587)
    at 
org.apache.cassandra.cql3.statements.ModificationStatement.executeInternal(ModificationStatement.java:581)
    at 
org.apache.cassandra.cql3.QueryProcessor.executeOnceInternal(QueryProcessor.java:365)
    at 
org.apache.cassandra.db.SystemKeyspace.persistLocalMetadata(SystemKeyspace.java:520)
    at 
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:221)
    at 
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:620)
    at 
org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:732)
Caused by: java.lang.NumberFormatException: For input string: "openj9-0"
    at 
java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
    at java.lang.Integer.parseInt(Integer.java:580)
    at java.lang.Integer.parseInt(Integer.java:615)
    at 
org.github.jamm.MemoryLayoutSpecification.getEffectiveMemoryLayoutSpecification(MemoryLayoutSpecification.java:190)
    at 
org.github.jamm.MemoryLayoutSpecification.(MemoryLayoutSpecification.java:31)
{code}
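For reference, the {{NumberFormatException: For input string: "openj9-0"}} appears to come from jamm taking {{java.vm.version}} up to the first dot and parsing it as a number, which works for HotSpot-style version strings but not OpenJ9's. A standalone sketch of that parsing pattern (not jamm's actual code):

{code:java}
public class VmVersionParseSketch
{
    public static void main(String[] args)
    {
        // HotSpot reports e.g. "25.141-b15"; OpenJ9 reports e.g. "openj9-0.12.1".
        for (String vmVersion : new String[] { "25.141-b15", "openj9-0.12.1" })
        {
            String prefix = vmVersion.substring(0, vmVersion.indexOf('.'));
            try
            {
                // "25" parses fine; "openj9-0" throws NumberFormatException.
                System.out.println(vmVersion + " -> major " + Integer.parseInt(prefix));
            }
            catch (NumberFormatException e)
            {
                System.out.println(vmVersion + " -> " + e);
            }
        }
    }
}
{code}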
 

 






[jira] [Updated] (CASSANDRA-15126) Resource leak when queries apply a RowFilter

2019-05-15 Thread Benedict (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-15126:
-
Source Control Link: 
[c07f3c88a4ba164bf01b0450b2463746b40c0d48|https://github.com/apache/cassandra/commit/c07f3c88a4ba164bf01b0450b2463746b40c0d48]
  (was: 
https://github.com/apache/cassandra/commit/c07f3c88a4ba164bf01b0450b2463746b40c0d48)

> Resource leak when queries apply a RowFilter
> 
>
> Key: CASSANDRA-15126
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15126
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL/Interpreter
>Reporter: Benedict
>Assignee: Benedict
>Priority: Normal
> Fix For: 3.0.19
>
>
> RowFilter.CQLFilter optionally removes those partitions that have no matching 
> results, but fails to close the iterator representing that partition’s 
> unfiltered results, leaking resources when this happens.
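In other words (a minimal sketch with hypothetical types, not the actual {{RowFilter.CQLFilter}} code), the fix is to close the per-partition iterator before discarding it when it has no matching results:

{code:java}
import java.io.Closeable;
import java.io.IOException;

final class FilterSketch
{
    /** Returns the partition iterator if it has matches, otherwise closes and drops it. */
    static <T extends Closeable> T keepOrClose(T partitionIterator, boolean hasMatches) throws IOException
    {
        if (hasMatches)
            return partitionIterator;   // caller consumes and eventually closes it
        partitionIterator.close();      // previously skipped: this is what leaked
        return null;
    }
}
{code}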






[jira] [Updated] (CASSANDRA-15126) Resource leak when queries apply a RowFilter

2019-05-15 Thread Benedict (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-15126:
-
Source Control Link: 
https://github.com/apache/cassandra/commit/c07f3c88a4ba164bf01b0450b2463746b40c0d48
  (was: 
[c07f3c88a4ba164bf01b0450b2463746b40c0d48|https://github.com/apache/cassandra/commit/c07f3c88a4ba164bf01b0450b2463746b40c0d48])

> Resource leak when queries apply a RowFilter
> 
>
> Key: CASSANDRA-15126
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15126
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL/Interpreter
>Reporter: Benedict
>Assignee: Benedict
>Priority: Normal
> Fix For: 3.0.19
>
>
> RowFilter.CQLFilter optionally removes those partitions that have no matching 
> results, but fails to close the iterator representing that partition’s 
> unfiltered results, leaking resources when this happens.






[jira] [Updated] (CASSANDRA-15126) Resource leak when queries apply a RowFilter

2019-05-15 Thread Benedict (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-15126:
-
Source Control Link: 
https://github.com/apache/cassandra/commit/c07f3c88a4ba164bf01b0450b2463746b40c0d48
  (was: 
[c07f3c88a4ba164bf01b0450b2463746b40c0d48|https://github.com/apache/cassandra/commit/c07f3c88a4ba164bf01b0450b2463746b40c0d48])

> Resource leak when queries apply a RowFilter
> 
>
> Key: CASSANDRA-15126
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15126
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL/Interpreter
>Reporter: Benedict
>Assignee: Benedict
>Priority: Normal
> Fix For: 3.0.19
>
>
> RowFilter.CQLFilter optionally removes those partitions that have no matching 
> results, but fails to close the iterator representing that partition’s 
> unfiltered results, leaking resources when this happens.






[jira] [Updated] (CASSANDRA-15126) Resource leak when queries apply a RowFilter

2019-05-15 Thread Benedict (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-15126:
-
Source Control Link: 
[c07f3c88a4ba164bf01b0450b2463746b40c0d48|https://github.com/apache/cassandra/commit/c07f3c88a4ba164bf01b0450b2463746b40c0d48]
  (was: 
https://github.com/apache/cassandra/commit/c07f3c88a4ba164bf01b0450b2463746b40c0d48)

> Resource leak when queries apply a RowFilter
> 
>
> Key: CASSANDRA-15126
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15126
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL/Interpreter
>Reporter: Benedict
>Assignee: Benedict
>Priority: Normal
> Fix For: 3.0.19
>
>
> RowFilter.CQLFilter optionally removes those partitions that have no matching 
> results, but fails to close the iterator representing that partition’s 
> unfiltered results, leaking resources when this happens.






[jira] [Updated] (CASSANDRA-15126) Resource leak when queries apply a RowFilter

2019-05-15 Thread Benedict (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-15126:
-
Source Control Link: 
[c07f3c88a4ba164bf01b0450b2463746b40c0d48|https://github.com/apache/cassandra/commit/c07f3c88a4ba164bf01b0450b2463746b40c0d48]

> Resource leak when queries apply a RowFilter
> 
>
> Key: CASSANDRA-15126
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15126
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL/Interpreter
>Reporter: Benedict
>Assignee: Benedict
>Priority: Normal
> Fix For: 3.0.19
>
>
> RowFilter.CQLFilter optionally removes those partitions that have no matching 
> results, but fails to close the iterator representing that partition’s 
> unfiltered results, leaking resources when this happens.






[jira] [Updated] (CASSANDRA-15126) Resource leak when queries apply a RowFilter

2019-05-15 Thread Benedict (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-15126:
-
Source Control Link: 
[c07f3c88a4ba164bf01b0450b2463746b40c0d48|https://github.com/apache/cassandra/commit/c07f3c88a4ba164bf01b0450b2463746b40c0d48]

> Resource leak when queries apply a RowFilter
> 
>
> Key: CASSANDRA-15126
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15126
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL/Interpreter
>Reporter: Benedict
>Assignee: Benedict
>Priority: Normal
> Fix For: 3.0.19
>
>
> RowFilter.CQLFilter optionally removes those partitions that have no matching 
> results, but fails to close the iterator representing that partition’s 
> unfiltered results, leaking resources when this happens.






[jira] [Updated] (CASSANDRA-14654) Reduce heap pressure during compactions

2019-05-15 Thread Chris Lohfink (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Lohfink updated CASSANDRA-14654:
--
Status: Resolved  (was: Ready to Commit)
Resolution: Fixed

committed 
[7df67eff2d66dba4bed2b4f6aeabf05144d9b057|https://github.com/apache/cassandra/commit/7df67eff2d66dba4bed2b4f6aeabf05144d9b057]

> Reduce heap pressure during compactions
> ---
>
> Key: CASSANDRA-14654
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14654
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Compaction
>Reporter: Chris Lohfink
>Assignee: Chris Lohfink
>Priority: Normal
>  Labels: Performance, pull-request-available
> Fix For: 4.x
>
> Attachments: screenshot-1.png, screenshot-2.png, screenshot-3.png, 
> screenshot-4.png
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Small partition compactions are painfully slow with a lot of overhead per 
> partition. There also tends to be an excess of objects created (ie 
> 200-700mb/s) per compaction thread.
> The EncoderStats walks through all the partitions and with mergeWith it will 
> create a new one per partition as it walks the potentially millions of 
> partitions. In a test scenario of about 600byte partitions and a couple 100mb 
> of data this consumed ~16% of the heap pressure. Changing this to instead 
> mutably track the min values and create one in a EncodingStats.Collector 
> brought this down considerably (but not 100% since the 
> UnfilteredRowIterator.stats() still creates 1 per partition).
> The KeyCacheKey makes a full copy of the underlying byte array in 
> ByteBufferUtil.getArray in its constructor. This is the dominating heap 
> pressure as there are more sstables. By changing this to just keeping the 
> original it completely eliminates the current dominator of the compactions 
> and also improves read performance.
> Minor tweak included for this as well for operators when compactions are 
> behind on low read clusters is to make the preemptive opening setting a 
> hotprop.
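A rough sketch of the allocation pattern being described (hypothetical names, not the actual {{EncodingStats}} code): track the minima mutably in a collector and materialise one stats object at the end, instead of building a new immutable object per partition via {{mergeWith}}.

{code:java}
public class StatsCollectorSketch
{
    static final class Stats
    {
        final long minTimestamp;
        Stats(long minTimestamp) { this.minTimestamp = minTimestamp; }
    }

    static final class Collector
    {
        private long minTimestamp = Long.MAX_VALUE;

        void update(long timestamp)
        {
            // mutate in place: no per-partition garbage
            minTimestamp = Math.min(minTimestamp, timestamp);
        }

        Stats get() { return new Stats(minTimestamp); }
    }

    public static void main(String[] args)
    {
        Collector collector = new Collector();
        for (long ts : new long[] { 30L, 10L, 20L })
            collector.update(ts);
        System.out.println(collector.get().minTimestamp); // 10
    }
}
{code}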






[cassandra] branch trunk updated: Reduce heap pressure during compactions Patch by Chris Lohfink; Reviewed by Dinesh Joshi and Benedict for CASSANDRA-14654

2019-05-15 Thread clohfink
This is an automated email from the ASF dual-hosted git repository.

clohfink pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/cassandra.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 7df67ef  Reduce heap pressure during compactions Patch by Chris 
Lohfink; Reviewed by Dinesh Joshi and Benedict for CASSANDRA-14654
7df67ef is described below

commit 7df67eff2d66dba4bed2b4f6aeabf05144d9b057
Author: Chris Lohfink 
AuthorDate: Wed May 15 08:55:31 2019 -0500

Reduce heap pressure during compactions
Patch by Chris Lohfink; Reviewed by Dinesh Joshi and Benedict for 
CASSANDRA-14654
---
 CHANGES.txt|   1 +
 src/java/org/apache/cassandra/config/Config.java   |   4 +-
 .../cassandra/config/DatabaseDescriptor.java   |  14 +-
 .../apache/cassandra/db/rows/EncodingStats.java|  39 -
 .../cassandra/db/rows/UnfilteredRowIterators.java  |  10 +-
 .../io/sstable/SSTableIdentityIterator.java|   2 +-
 .../cassandra/io/sstable/SSTableRewriter.java  |  20 +--
 .../cassandra/io/sstable/format/SSTableReader.java |  24 +--
 .../io/sstable/format/big/BigTableReader.java  |   6 +-
 .../io/sstable/metadata/StatsMetadata.java |   4 +
 .../apache/cassandra/service/StorageService.java   |  21 ++-
 .../cassandra/service/StorageServiceMBean.java |   6 +
 .../unit/org/apache/cassandra/db/KeyCacheTest.java |   2 +-
 .../apache/cassandra/db/lifecycle/TrackerTest.java |  13 +-
 .../cassandra/db/rows/EncodingStatsTest.java   | 170 +
 .../db/streaming/CassandraStreamManagerTest.java   |   6 +-
 .../CompressedSequentialWriterReopenTest.java  |   3 +-
 .../org/apache/cassandra/schema/MockSchema.java|  39 -
 18 files changed, 320 insertions(+), 64 deletions(-)

diff --git a/CHANGES.txt b/CHANGES.txt
index 960ed64..3a98fa5 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 4.0
+ * Reduce heap pressure during compactions (CASSANDRA-14654)
  * Support building Cassandra with JDK 11 (CASSANDRA-15108)
  * Use quilt to patch cassandra.in.sh in Debian packaging (CASSANDRA-14710)
  * Take sstable references before calculating approximate key count 
(CASSANDRA-14647)
diff --git a/src/java/org/apache/cassandra/config/Config.java 
b/src/java/org/apache/cassandra/config/Config.java
index 04ac608..a6050be 100644
--- a/src/java/org/apache/cassandra/config/Config.java
+++ b/src/java/org/apache/cassandra/config/Config.java
@@ -253,12 +253,14 @@ public class Config
 public int hints_flush_period_in_ms = 1;
 public int max_hints_file_size_in_mb = 128;
 public ParameterizedClass hints_compression;
-public int sstable_preemptive_open_interval_in_mb = 50;
 
 public volatile boolean incremental_backups = false;
 public boolean trickle_fsync = false;
 public int trickle_fsync_interval_in_kb = 10240;
 
+public volatile int sstable_preemptive_open_interval_in_mb = 50;
+
+public volatile boolean key_cache_migrate_during_compaction = true;
 public Long key_cache_size_in_mb = null;
 public volatile int key_cache_save_period = 14400;
 public volatile int key_cache_keys_to_save = Integer.MAX_VALUE;
diff --git a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java 
b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
index e2c2ace..b3ab054 100644
--- a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
+++ b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
@@ -2241,11 +2241,21 @@ public class DatabaseDescriptor
 return conf.commitlog_total_space_in_mb;
 }
 
-public static int getSSTablePreempiveOpenIntervalInMB()
+public static boolean shouldMigrateKeycacheOnCompaction()
+{
+return conf.key_cache_migrate_during_compaction;
+}
+
+public static void setMigrateKeycacheOnCompaction(boolean 
migrateCacheEntry)
+{
+conf.key_cache_migrate_during_compaction = migrateCacheEntry;
+}
+
+public static int getSSTablePreemptiveOpenIntervalInMB()
 {
 return FBUtilities.isWindows ? -1 : 
conf.sstable_preemptive_open_interval_in_mb;
 }
-public static void setSSTablePreempiveOpenIntervalInMB(int mb)
+public static void setSSTablePreemptiveOpenIntervalInMB(int mb)
 {
 conf.sstable_preemptive_open_interval_in_mb = mb;
 }
diff --git a/src/java/org/apache/cassandra/db/rows/EncodingStats.java 
b/src/java/org/apache/cassandra/db/rows/EncodingStats.java
index 955ffc7..4a7bb19 100644
--- a/src/java/org/apache/cassandra/db/rows/EncodingStats.java
+++ b/src/java/org/apache/cassandra/db/rows/EncodingStats.java
@@ -19,6 +19,9 @@ package org.apache.cassandra.db.rows;
 
 import java.io.IOException;
 import java.util.*;
+import java.util.function.Function;
+
+import com.google.common.collect.Iterables;
 
 import org.apache.cassandra.db.*;
 import org.apache.cassandra.db.partitions.PartitionStatisticsCollector;
@@ -41,7 +44

[jira] [Updated] (CASSANDRA-15127) Add compression performance metrics

2019-05-15 Thread Michael (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael updated CASSANDRA-15127:

Component/s: Observability/Metrics

> Add compression performance metrics
> ---
>
> Key: CASSANDRA-15127
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15127
> Project: Cassandra
>  Issue Type: Task
>  Components: Observability/Metrics
>Reporter: Michael
>Priority: Normal
>
> When doing some bulk load into my cluster I notice almost 100% CPU usage.
> As I am using DeflateCompressor, I assume that the data 
> compression/decompression contributes a lot to overall CPU load. 
> Unfortunately cassandra doesn't seem to have any metrics explaining how much 
> CPU time has been required for that.
> So I guess it would be useful to introduce cumulative times for compression 
> and decompression, broken down by each supported compression algorithm.
> Then, by comparing how much each value increases per minute with the number of 
> processed requests and their cumulative times, it would be easy to conclude how 
> costly the compression is.






[jira] [Created] (CASSANDRA-15127) Add compression performance metrics

2019-05-15 Thread Michael (JIRA)
Michael created CASSANDRA-15127:
---

 Summary: Add compression performance metrics
 Key: CASSANDRA-15127
 URL: https://issues.apache.org/jira/browse/CASSANDRA-15127
 Project: Cassandra
  Issue Type: Task
Reporter: Michael


When doing some bulk load into my cluster I notice almost 100% CPU usage.

As I am using DeflateCompressor, I assume that the data 
compression/decompression contributes a lot to overall CPU load. Unfortunately 
cassandra doesn't seem to have any metrics explaining how much CPU time has 
been required for that.

So I guess it would be useful to introduce cumulative times for compression and 
decompression, broken down by each supported compression algorithm.

Then, by comparing how much each value increases per minute with the number of 
processed requests and their cumulative times, it would be easy to conclude how 
costly the compression is.
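A minimal sketch of the kind of metric being proposed (hypothetical names, not an existing Cassandra metric): cumulative nanoseconds spent compressing, per algorithm, kept in simple counters that can be diffed per minute.

{code:java}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.Supplier;

public class CompressionTimersSketch
{
    private static final ConcurrentHashMap<String, AtomicLong> COMPRESS_NANOS = new ConcurrentHashMap<>();

    /** Runs the compression work and charges its wall-clock time to the algorithm's counter. */
    static <T> T timeCompress(String algorithm, Supplier<T> work)
    {
        long start = System.nanoTime();
        try
        {
            return work.get();
        }
        finally
        {
            COMPRESS_NANOS.computeIfAbsent(algorithm, a -> new AtomicLong())
                          .addAndGet(System.nanoTime() - start);
        }
    }

    static long compressNanos(String algorithm)
    {
        AtomicLong total = COMPRESS_NANOS.get(algorithm);
        return total == null ? 0L : total.get();
    }
}
{code}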






[jira] [Commented] (CASSANDRA-15120) Nodes that join the ring while another node is MOVING build an invalid view of the token ring

2019-05-15 Thread Benedict (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16840344#comment-16840344
 ] 

Benedict commented on CASSANDRA-15120:
--

Ok, I've pushed a version with feature flags introduced to the dtests, so that 
gossip is only enabled for those tests that need it.  Once you confirm it looks 
OK to you, I'll port it to the other versions (though I may defer 
_implementation_ of this capability on trunk until after CASSANDRA-15066, which 
already implements a lot of this)

> Nodes that join the ring while another node is MOVING build an invalid view 
> of the token ring
> -
>
> Key: CASSANDRA-15120
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15120
> Project: Cassandra
>  Issue Type: Bug
>  Components: Cluster/Gossip, Cluster/Membership
>Reporter: Benedict
>Assignee: Benedict
>Priority: Normal
>
> Gossip only updates the token metadata for nodes in the NORMAL, SHUTDOWN or 
> LEAVING* statuses.  MOVING and REMOVING_TOKEN nodes do not have their ring 
> information updated (nor do others, but these other states _should_ only be 
> taken by nodes that are not members of the ring).  
> If a node missed the most recent token-modifying events because they were not 
> a member of the ring when they happened (or because Gossip was delayed to 
> them), they will retain an invalid view of the ring until the node enters the 
> one of the NORMAL, SHUTDOWN or LEAVING states.
> *LEAVING is populated differently, however, and in a probably unsafe manner 
> that this work will also address.
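A rough sketch of the behaviour described above (not the actual gossip code): only a subset of statuses cause token metadata to be refreshed, so a node that first hears about a peer while it is MOVING keeps a stale view of the ring until that peer reaches one of the handled states.

{code:java}
public class TokenUpdateSketch
{
    enum Status { NORMAL, SHUTDOWN, LEAVING, MOVING, REMOVING_TOKEN }

    static boolean updatesTokenMetadata(Status status)
    {
        switch (status)
        {
            case NORMAL:
            case SHUTDOWN:
            case LEAVING:
                return true;   // ring information refreshed from gossip
            default:
                return false;  // MOVING / REMOVING_TOKEN: token metadata left untouched
        }
    }
}
{code}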






[jira] [Comment Edited] (CASSANDRA-15013) Message Flusher queue can grow unbounded, potentially running JVM out of memory

2019-05-15 Thread Benedict (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16840301#comment-16840301
 ] 

Benedict edited comment on CASSANDRA-15013 at 5/15/19 11:28 AM:


I've pushed some minor suggestions 
[here|https://github.com/belliottsmith/cassandra/tree/15013-suggestions] around 
naming:

# Tried to make the native_transport config parameters have more consistent 
naming with prior parameters - feel free to modify them further, if you think 
you can improve them still
# {{forceAllocate}} -> {{allocate}}, which is usually the alternative to 
{{tryAllocate}}
# Shortened THROW_ON_OVERLOAD parameter

There are three remaining bugs, and I've paused review until they can be 
addressed:

# {{this::releaseItem}} is unsafe to provide to the {{Flusher}} constructor, 
since these are unique to a channel, and the {{Flusher}} is per-eventLoop.  If 
we choose to hash all connections on an endpoint to a single eventLoop this 
would be easy to accommodate, or otherwise {{FlushItem}} needs to be the 
implementor of {{release()}}
# I don't think we can use {{Ref}} for management of the 
{{EndpointPayloadTracker}}.  The {{Tidy}} implementation requires a reference 
to the object itself, and logically deleting itself after release anyway 
defeats the point of Ref (which is leak detection).  It's impossible for it to 
detect a leak and clean up if the strong reference is cleaned up by this 
process (since there will always be a strong reference until it invokes, and it 
requires there to be no strong references, it will never invoke).  Probably we 
should use a simple AtomicInteger to manage reference counts (a rough sketch of 
that shape follows this list).  I think it would be cleanest to encapsulate the 
map management inside a static method in {{EndpointPayloadTracker}} as well.
# I think we currently have a race condition around the release of a channel 
(and its {{EndpointPayloadTracker}}) and the attempt to release capacity from 
the {{EndpointPayloadTracker}} we have requests in flight for.  Channels can be 
invalidated before we complete requests issued by them, so we must be sure to 
release from the tracker we allocated from, so that we do not wrap into 
negative on release.
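
To make the suggestions in items 1 and 2 concrete, here is a minimal sketch 
under stated assumptions - every class shape, field and method name below is 
illustrative, not the actual API in the patch: {{FlushItem}} owns its own 
{{release()}} so the per-eventLoop {{Flusher}} needs no per-channel callback, 
and {{EndpointPayloadTracker}} is reference-counted with a plain 
{{AtomicInteger}}, with the map management behind a static acquire method.

{code:java}
import java.net.InetAddress;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch only; names and structure are assumptions for discussion.
final class EndpointPayloadTracker
{
    private static final ConcurrentHashMap<InetAddress, EndpointPayloadTracker> trackers = new ConcurrentHashMap<>();

    private final InetAddress endpoint;
    private final AtomicInteger refCount = new AtomicInteger(0);  // -1 marks a dead tracker
    private final AtomicLong inFlightBytes = new AtomicLong();

    private EndpointPayloadTracker(InetAddress endpoint) { this.endpoint = endpoint; }

    // Map management encapsulated in a static method, per item 2.
    static EndpointPayloadTracker acquire(InetAddress endpoint)
    {
        while (true)
        {
            EndpointPayloadTracker tracker = trackers.computeIfAbsent(endpoint, EndpointPayloadTracker::new);
            int refs = tracker.refCount.get();
            if (refs >= 0 && tracker.refCount.compareAndSet(refs, refs + 1))
                return tracker;
            // refs < 0: this tracker already hit zero and is being removed; retry
        }
    }

    void release()
    {
        if (refCount.decrementAndGet() == 0 && refCount.compareAndSet(0, -1))
            trackers.remove(endpoint, this);
    }

    void allocate(long bytes)        { inFlightBytes.addAndGet(bytes); }
    void releaseCapacity(long bytes) { inFlightBytes.addAndGet(-bytes); }
}

// A FlushItem that remembers the tracker it allocated from, so the per-eventLoop
// Flusher can simply call item.release() (item 1), and capacity is returned to
// the correct tracker even if the channel has since been invalidated (item 3).
final class FlushItem
{
    final Object response;
    final EndpointPayloadTracker tracker;
    final long payloadSize;

    FlushItem(Object response, EndpointPayloadTracker tracker, long payloadSize)
    {
        this.response = response;
        this.tracker = tracker;
        this.payloadSize = payloadSize;
        tracker.allocate(payloadSize);
    }

    void release()
    {
        tracker.releaseCapacity(payloadSize);
    }
}
{code}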



was (Author: benedict):
I've pushed some minor suggestions 
[here|https://github.com/belliottsmith/cassandra/tree/15013-suggestions] around 
naming:

# Tried to make the native_transport config parameters have more consistent 
naming with prior parameters - feel free to modify them further, if you think 
you can improve them still
# {{forceAllocate}} -> {{allocate}}, which is usually the alternative to 
{{tryAllocate}}
# Shortened THROW_ON_OVERLOAD parameter

There are three remaining bugs, and I've paused review until they can be 
addressed:

# {{this::releaseItem}} is unsafe to provide to the {{Flusher}} constructor, 
since these are unique to an address, and the {{Flusher}} is per-eventLoop.  If 
we choose to hash all connections on an endpoint to a single eventLoop this 
would be easy to accommodate, or otherwise {{FlushItem}} needs to be the 
implementor of {{release()}}
# I don't think we can use {{Ref}} to manage the {{EndpointPayloadTracker}}.  
The {{Tidy}} implementation requires a reference to the object itself, and in 
any case having the object logically delete itself on release defeats the point 
of {{Ref}}, which is leak detection: it cannot detect a leak and clean up if the 
strong reference is cleared by this same process, since there will always be a 
strong reference until the tidy runs, and the tidy only runs once no strong 
references remain, so it will never run.  We should probably use a simple 
{{AtomicInteger}} to manage reference counts, and I think it would be cleanest 
to encapsulate the map management inside a static method in 
{{EndpointPayloadTracker}} as well
# I think we currently have a race condition between the release of a channel 
(and its {{EndpointPayloadTracker}}) and the attempt to release capacity from 
the {{EndpointPayloadTracker}} that we have requests in flight for.  Channels 
can be invalidated before the requests they issued complete, so we must be sure 
to release capacity from the same tracker we allocated it from, so that the 
count does not go negative on release.


> Message Flusher queue can grow unbounded, potentially running JVM out of 
> memory
> ---
>
> Key: CASSANDRA-15013
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15013
> Project: Cassandra
>  Issue Type: Bug
>  Components: Messaging/Client
>Reporter: Sumanth Pasupuleti
>Assignee: Sumanth Pasupuleti
>Priority: Normal
>  Labels: pull-request-available
> Fix For: 4.0, 3.0.x, 3.11.x
>
> Attachments: BlockedEpollEventLoopFromHeapDump.png, 
> BlockedEpollEventLoopFromThreadDump.png, RequestExecutorQueueFull.png, heap 
> dump showing each ImmediateFlusher taking upto 600MB.png

[jira] [Commented] (CASSANDRA-15013) Message Flusher queue can grow unbounded, potentially running JVM out of memory

2019-05-15 Thread Benedict (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16840301#comment-16840301
 ] 

Benedict commented on CASSANDRA-15013:
--

I've pushed some minor suggestions 
[here|https://github.com/belliottsmith/cassandra/tree/15013-suggestions] around 
naming:

# Tried to make the native_transport config parameters have more consistent 
naming with prior parameters - feel free to modify them further, if you think 
you can improve them still
# {{forceAllocate}} -> {{allocate}}, which is usually the alternative to 
{{tryAllocate}}
# Shortened THROW_ON_OVERLOAD parameter

There are three remaining bugs, and I've paused review until they can be 
addressed:

# {{this::releaseItem}} is unsafe to provide to the {{Flusher}} constructor, 
since these are unique to an address, and the {{Flusher}} is per-eventLoop.  If 
we choose to hash all connections on an endpoint to a single eventLoop this 
would be easy to accommodate, or otherwise {{FlushItem}} needs to be the 
implementor of {{release()}}
# I don't think we can use {{Ref}} to manage the {{EndpointPayloadTracker}}.  
The {{Tidy}} implementation requires a reference to the object itself, and in 
any case having the object logically delete itself on release defeats the point 
of {{Ref}}, which is leak detection: it cannot detect a leak and clean up if the 
strong reference is cleared by this same process, since there will always be a 
strong reference until the tidy runs, and the tidy only runs once no strong 
references remain, so it will never run.  We should probably use a simple 
{{AtomicInteger}} to manage reference counts, and I think it would be cleanest 
to encapsulate the map management inside a static method in 
{{EndpointPayloadTracker}} as well
# I think we currently have a race condition between the release of a channel 
(and its {{EndpointPayloadTracker}}) and the attempt to release capacity from 
the {{EndpointPayloadTracker}} that we have requests in flight for.  Channels 
can be invalidated before the requests they issued complete, so we must be sure 
to release capacity from the same tracker we allocated it from, so that the 
count does not go negative on release.


> Message Flusher queue can grow unbounded, potentially running JVM out of 
> memory
> ---
>
> Key: CASSANDRA-15013
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15013
> Project: Cassandra
>  Issue Type: Bug
>  Components: Messaging/Client
>Reporter: Sumanth Pasupuleti
>Assignee: Sumanth Pasupuleti
>Priority: Normal
>  Labels: pull-request-available
> Fix For: 4.0, 3.0.x, 3.11.x
>
> Attachments: BlockedEpollEventLoopFromHeapDump.png, 
> BlockedEpollEventLoopFromThreadDump.png, RequestExecutorQueueFull.png, heap 
> dump showing each ImmediateFlusher taking upto 600MB.png
>
>
> This is a follow-up ticket out of CASSANDRA-14855, to make the Flusher queue 
> bounded, since, in the current state, items get added to the queue without 
> any check on the queue size and without consulting the netty outbound 
> buffer's isWritable state.
> We are seeing this issue hit our production 3.0 clusters quite often.
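
For illustration only, here is a minimal sketch of the kind of guard being 
asked for; the class, field and method names are assumptions for this sketch, 
and only Netty's {{Channel.isWritable()}} is real API.

{code:java}
import java.util.Queue;
import java.util.concurrent.ArrayBlockingQueue;

import io.netty.channel.Channel;

// Illustrative sketch: bound the flush queue and consult the channel's
// outbound-buffer state instead of enqueueing unconditionally.
final class BoundedFlusherSketch
{
    private final Queue<Object> queued;

    BoundedFlusherSketch(int maxQueuedItems)
    {
        this.queued = new ArrayBlockingQueue<>(maxQueuedItems);
    }

    /**
     * @return true if the response was accepted for flushing; false if the queue
     * is full or the channel's outbound buffer is saturated, in which case the
     * caller must apply backpressure or fail the request instead of queueing.
     */
    boolean tryEnqueue(Channel channel, Object response)
    {
        // isWritable() turns false once the outbound buffer exceeds its
        // high-water mark, i.e. the client is not draining fast enough.
        if (!channel.isWritable())
            return false;
        return queued.offer(response);
    }
}
{code}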



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-15013) Message Flusher queue can grow unbounded, potentially running JVM out of memory

2019-05-15 Thread Benedict (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16840266#comment-16840266
 ] 

Benedict commented on CASSANDRA-15013:
--

So, thinking on it a bit more, I don't think this would actually be a very 
large change, but it also wouldn't simplify things as much as I might like - it 
might only save us the concurrency handling around endpoint resource 
allocation.  So I'll review the patch as it is, and we can consider after that 
whether we want to make any further changes.

It looks like I also made an error in my first skim of the patch, or I was 
looking at a different version - there's no need to set the queue limit to -1; 
Integer.MAX_VALUE is fine - if we hit that limit we have bigger problems :) 
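
In other words, a tiny sketch (the option handling here is purely hypothetical, 
not the patch's actual config code) of why no {{-1}} sentinel is needed:

{code:java}
// Hypothetical illustration only: with Integer.MAX_VALUE as the default,
// "no limit configured" needs no -1 sentinel and no special-case branch.
final class QueueLimitSketch
{
    static int resolveLimit(Integer configuredLimit)
    {
        return configuredLimit == null ? Integer.MAX_VALUE : configuredLimit;
    }

    static boolean withinLimit(int queuedItems, int limit)
    {
        return queuedItems < limit; // identical code path whether or not a limit was set
    }
}
{code}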

> Message Flusher queue can grow unbounded, potentially running JVM out of 
> memory
> ---
>
> Key: CASSANDRA-15013
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15013
> Project: Cassandra
>  Issue Type: Bug
>  Components: Messaging/Client
>Reporter: Sumanth Pasupuleti
>Assignee: Sumanth Pasupuleti
>Priority: Normal
>  Labels: pull-request-available
> Fix For: 4.0, 3.0.x, 3.11.x
>
> Attachments: BlockedEpollEventLoopFromHeapDump.png, 
> BlockedEpollEventLoopFromThreadDump.png, RequestExecutorQueueFull.png, heap 
> dump showing each ImmediateFlusher taking upto 600MB.png
>
>
> This is a follow-up ticket out of CASSANDRA-14855, to make the Flusher queue 
> bounded, since, in the current state, items get added to the queue without 
> any check on the queue size and without consulting the netty outbound 
> buffer's isWritable state.
> We are seeing this issue hit our production 3.0 clusters quite often.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13292) Replace MessagingService usage of MD5 with something more modern

2019-05-15 Thread Benedict (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-13292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16840218#comment-16840218
 ] 

Benedict commented on CASSANDRA-13292:
--

https://github.com/rurban/smhasher
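
(smhasher benchmarks candidate non-cryptographic hash functions.)  For context, 
the shape of the swap being discussed is roughly the following - a sketch only, 
using Guava's murmur3_128 purely as a stand-in for "something more modern", not 
a statement of which hash the ticket will actually adopt.

{code:java}
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

import com.google.common.hash.Hashing;

// Sketch: digests exchanged by MessagingService are only compared for equality,
// so a fast non-cryptographic hash can stand in for cryptographic MD5.
final class DigestSketch
{
    // Current style: MD5, which shows up hot in the profile below.
    static byte[] md5Digest(byte[] data) throws NoSuchAlgorithmException
    {
        return MessageDigest.getInstance("MD5").digest(data);
    }

    // One possible replacement style (illustrative): murmur3_128 via Guava.
    static byte[] murmur3Digest(byte[] data)
    {
        return Hashing.murmur3_128().hashBytes(data).asBytes();
    }
}
{code}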

> Replace MessagingService usage of MD5 with something more modern
> 
>
> Key: CASSANDRA-13292
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13292
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Legacy/Core
>Reporter: Michael Kjellman
>Assignee: Michael Kjellman
>Priority: Normal
> Attachments: quorum-concurrency-reads-quorum.svg
>
>
> While profiling C* via multiple profilers, I've consistently seen a 
> significant amount of time being spent calculating MD5 digests.
> {code}
> Stack Trace  Sample Count  Percentage(%)
> sun.security.provider.MD5.implCompress(byte[], int)  264  1.566
>  sun.security.provider.DigestBase.implCompressMultiBlock(byte[], int, int)  200  1.187
>   sun.security.provider.DigestBase.engineUpdate(byte[], int, int)  200  1.187
>    java.security.MessageDigestSpi.engineUpdate(ByteBuffer)  200  1.187
>     java.security.MessageDigest$Delegate.engineUpdate(ByteBuffer)  200  1.187
>      java.security.MessageDigest.update(ByteBuffer)  200  1.187
>       org.apache.cassandra.db.Column.updateDigest(MessageDigest)  193  1.145
>        org.apache.cassandra.db.ColumnFamily.updateDigest(MessageDigest)  193  1.145
>         org.apache.cassandra.db.ColumnFamily.digest(ColumnFamily)  193  1.145
>          org.apache.cassandra.service.RowDigestResolver.resolve()  106  0.629
>           org.apache.cassandra.service.RowDigestResolver.resolve()  106  0.629
>            org.apache.cassandra.service.ReadCallback.get()  88  0.522
>             org.apache.cassandra.service.AbstractReadExecutor.get()  88  0.522
>              org.apache.cassandra.service.StorageProxy.fetchRows(List, ConsistencyLevel)  88  0.522
>               org.apache.cassandra.service.StorageProxy.read(List, ConsistencyLevel)  88  0.522
>                org.apache.cassandra.service.pager.SliceQueryPager.queryNextPage(int, ConsistencyLevel, boolean)  88  0.522
>                 org.apache.cassandra.service.pager.AbstractQueryPager.fetchPage(int)  88  0.522
>                  org.apache.cassandra.service.pager.SliceQueryPager.fetchPage(int)  88  0.522
>                   org.apache.cassandra.cql3.statements.SelectStatement.execute(QueryState, QueryOptions)  88  0.522
>                    org.apache.cassandra.cql3.statements.SelectStatement.execute(QueryState, QueryOptions)  88  0.522
>                     org.apache.cassandra.cql3.QueryProcessor.processStatement(CQLStatement, QueryState, QueryOptions)  88  0.522
>                      org.apache.cassandra.cql3.QueryProcessor.process(String, QueryState, QueryOptions)  88  0.522
>                       org.apache.cassandra.transport.messages.QueryMessage.execute(QueryState)  88  0.522
>                        org.apache.cassandra.transport.Message$Dispatcher.messageReceived(ChannelHandlerContext, MessageEvent)  88  0.522
>                         org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(ChannelHandlerContext, ChannelEvent)  88  0.522
>                          org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline$DefaultChannelHandlerContext, ChannelEvent)  88  0.522
>                           org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(ChannelEvent)  88  0.522
>                            org.jboss.netty.handler.execution.ChannelUpstreamEventRunnable.doRun()  88  0.522
>                             org.jboss.netty.handler.execution.ChannelEv