[jira] [Commented] (CASSANDRA-12886) Streaming failed due to SSL Socket connection reset

2016-11-09 Thread Bing Wu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15653288#comment-15653288
 ] 

Bing Wu commented on CASSANDRA-12886:
-

[~pjrmoreira] I lost track of the debug.log and some of the system.logs, and 
the admins just added the tcp_keepalive settings, so I will try to reproduce 
this issue from a clean slate. 

> Streaming failed due to SSL Socket connection reset
> ---
>
> Key: CASSANDRA-12886
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12886
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Bing Wu
>
> While running "nodetool repair", I see many instances of 
> "javax.net.ssl.SSLException: java.net.SocketException: Connection reset" in 
> system.logs on some nodes in the cluster. Timestamps correspond to streaming 
> source/initiator's error messages of "sync failed between ..."
> Setup: 
> - Cassandra 3.7.01 
> - CentOS 6.7 in AWS (multi-region)
> - JDK version: {noformat}
> java version "1.8.0_102"
> Java(TM) SE Runtime Environment (build 1.8.0_102-b14)
> Java HotSpot(TM) 64-Bit Server VM (build 25.102-b14, mixed mode)
> {noformat}
> - cassandra.yaml:
> {noformat}
> server_encryption_options:
>     internode_encryption: all
>     keystore: [path]
>     keystore_password: [password]
>     truststore: [path]
>     truststore_password: [password]
>     # More advanced defaults below:
>     # protocol: TLS
>     # algorithm: SunX509
>     # store_type: JKS
>     # cipher_suites: 
> [TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA]
>     require_client_auth: false
> {noformat}
> Error messages in system.log on the target host:
> {noformat}
> ERROR [STREAM-OUT-/54.247.111.232:7001] 2016-11-07 07:30:56,475 
> StreamSession.java:529 - [Stream #e14abcb0-a4bb-11e6-9758-55b9ac38b78e] 
> Streaming error occurred on session with peer 54.247.111.232
> javax.net.ssl.SSLException: Connection has been shutdown: 
> javax.net.ssl.SSLException: java.net.SocketException: Connection reset
> at sun.security.ssl.SSLSocketImpl.checkEOF(SSLSocketImpl.java:1541) 
> ~[na:1.8.0_102]
> at sun.security.ssl.SSLSocketImpl.checkWrite(SSLSocketImpl.java:1553) 
> ~[na:1.8.0_102]
> at sun.security.ssl.AppOutputStream.write(AppOutputStream.java:71) 
> ~[na:1.8.0_102]
> at 
> java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) 
> ~[na:1.8.0_102]
> at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) 
> ~[na:1.8.0_102]
> at 
> org.apache.cassandra.io.util.WrappedDataOutputStreamPlus.flush(WrappedDataOutputStreamPlus.java:66)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.sendMessage(ConnectionHandler.java:371)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:342)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_102]
> Caused by: javax.net.ssl.SSLException: java.net.SocketException: Connection 
> reset
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12273) Cassandra stress graph: option to create directory for graph if it doesn't exist

2016-11-09 Thread Murukesh Mohanan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15652591#comment-15652591
 ] 

Murukesh Mohanan commented on CASSANDRA-12273:
--

[~jkni], I'd be happy to look into the {{hdrfile}} thing. Just to confirm, the 
file to be attached would look like:

{code}
From b98e9b4de506fd0b344d9aaf4cb380651a0a68a0 Mon Sep 17 00:00:00 2001
From: Murukesh Mohanan 
Date: Thu, 10 Nov 2016 09:58:34 +0900
Subject: [PATCH] Create log directories as needed, handling symbolic links
 (CASSANDRA-12273)

patch by Murukesh Mohanan; reviewed by Joel Knighton
---
 CHANGES.txt |  1 +
 .../stress/src/org/apache/cassandra/stress/StressGraph.java | 13 +
 2 files changed, 14 insertions(+)

diff --git a/CHANGES.txt b/CHANGES.txt
index 69a05c2..ac2e9c3 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 4.0
+ * Create log directories as needed, handling symbolic links (CASSANDRA-12273)
  * Add column definition kind to dropped columns in schema (CASSANDRA-12705)
  * Add (automate) Nodetool Documentation (CASSANDRA-12672)
  * Update bundled cqlsh python driver to 3.7.0 (CASSANDRA-12736)
diff --git a/tools/stress/src/org/apache/cassandra/stress/StressGraph.java 
b/tools/stress/src/org/apache/cassandra/stress/StressGraph.java
index 17b718d..a4d9744 100644
...
{code}
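
The Java hunk is elided above ("..."), but the gist per the patch subject 
(create the graph file's directory when it is missing, resolving symbolic 
links first) can be sketched as follows. This is an illustrative Python sketch 
of the idea only, not the actual {{StressGraph.java}} change:

{code}
import os

def ensure_graph_dir(graph_file_path):
    """Create the parent directory of the graph file if it does not exist."""
    parent = os.path.dirname(os.path.abspath(graph_file_path)) or "."
    # Resolve symbolic links so the real target directory is created.
    real_parent = os.path.realpath(parent)
    os.makedirs(real_parent, exist_ok=True)
    return real_parent
{code}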

> Cassandra stress graph: option to create directory for graph if it doesn't 
> exist
> ---
>
> Key: CASSANDRA-12273
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12273
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Christopher Batey
>Assignee: Murukesh Mohanan
>Priority: Minor
>  Labels: lhf
> Attachments: 12273.patch
>
>
> I am running it in CI with ephemeral workspace / build dirs. It would be 
> nice if CS would create the directory so my build tool doesn't have to.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-8751) C* should always listen to both ssl/non-ssl ports

2016-11-09 Thread Jason Brown (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Brown resolved CASSANDRA-8751.

Resolution: Duplicate

Closing as a (sort of) duplicate of CASSANDRA-10404. That ticket will cover 
this one as well as transitional SSL.

> C* should always listen to both ssl/non-ssl ports
> -
>
> Key: CASSANDRA-8751
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8751
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Minh Do
>Assignee: Minh Do
> Fix For: 3.x
>
>
> Since there is always one thread dedicated to the server socket listener and 
> it does not use many resources, we should always have both listeners up no 
> matter what users set for internode_encryption.
> The reason behind this is that we need to switch back and forth between 
> different internode_encryption modes, and we need C* servers to keep running 
> in a transient state during mode switching. Currently this is not possible.
> For example, we have an internode_encryption=dc cluster in a multi-region AWS 
> environment and want to move to internode_encryption=all via a rolling 
> restart of the C* nodes. However, a node with internode_encryption=all does 
> not open the non-SSL port to listen on. As a result, we end up with a 
> split-brain cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12888) Incremental repairs broken for MVs and CDC

2016-11-09 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15652510#comment-15652510
 ] 

Paulo Motta commented on CASSANDRA-12888:
-

bq. I'm not sure how effective that would be in practice. In an active cluster, 
I'd expect the race between in-flight mutations and flushes to usually result 
in at least a little bit of streaming.

Good point! How about keeping the streamed sstables and having a special 
{{mutation.apply}} path that only writes to the commit log/CDC and applies MVs, 
while skipping the mutations to the base table itself? That seems simpler than 
keeping repaired state in the memtable, unless there are caveats I'm missing.

> Incremental repairs broken for MVs and CDC
> --
>
> Key: CASSANDRA-12888
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12888
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: Stefan Podkowinski
>Priority: Critical
>
> SSTables streamed during the repair process will first be written locally and 
> afterwards either simply added to the pool of existing sstables or, in the 
> case of existing MVs or active CDC, replayed on a per-mutation basis, as 
> described in {{StreamReceiveTask.OnCompletionRunnable}}:
> {quote}
> We have a special path for views and for CDC.
> For views, since the view requires cleaning up any pre-existing state, we 
> must put all partitions through the same write path as normal mutations. This 
> also ensures any 2is are also updated.
> For CDC-enabled tables, we want to ensure that the mutations are run through 
> the CommitLog so they can be archived by the CDC process on discard.
> {quote}
> Using the regular write path turns out to be an issue for incremental 
> repairs, as we lose the {{repaired_at}} state in the process. Eventually the 
> streamed rows will end up in the unrepaired set, in contrast to the rows on 
> the sender side, which are moved to the repaired set. The next repair run 
> will stream the same data back again, causing rows to bounce back and forth 
> between nodes on each repair.
> See the linked dtest for steps to reproduce. An example for reproducing this 
> manually using ccm can be found 
> [here|https://gist.github.com/spodkowinski/2d8e0408516609c7ae701f2bf1e515e8]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12888) Incremental repairs broken for MVs and CDC

2016-11-09 Thread Blake Eggleston (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15652229#comment-15652229
 ] 

Blake Eggleston commented on CASSANDRA-12888:
-

bq.  skip anti-compaction altogether when there is a mismatch for MV tables

I'm not sure how effective that would be in practice. In an active cluster, I'd 
expect the race between in-flight mutations and flushes to usually result in at 
least a little bit of streaming.

> Incremental repairs broken for MVs and CDC
> --
>
> Key: CASSANDRA-12888
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12888
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: Stefan Podkowinski
>Priority: Critical
>
> SSTables streamed during the repair process will first be written locally and 
> afterwards either simply added to the pool of existing sstables or, in the 
> case of existing MVs or active CDC, replayed on a per-mutation basis, as 
> described in {{StreamReceiveTask.OnCompletionRunnable}}:
> {quote}
> We have a special path for views and for CDC.
> For views, since the view requires cleaning up any pre-existing state, we 
> must put all partitions through the same write path as normal mutations. This 
> also ensures any 2is are also updated.
> For CDC-enabled tables, we want to ensure that the mutations are run through 
> the CommitLog so they can be archived by the CDC process on discard.
> {quote}
> Using the regular write path turns out to be an issue for incremental 
> repairs, as we lose the {{repaired_at}} state in the process. Eventually the 
> streamed rows will end up in the unrepaired set, in contrast to the rows on 
> the sender side, which are moved to the repaired set. The next repair run 
> will stream the same data back again, causing rows to bounce back and forth 
> between nodes on each repair.
> See the linked dtest for steps to reproduce. An example for reproducing this 
> manually using ccm can be found 
> [here|https://gist.github.com/spodkowinski/2d8e0408516609c7ae701f2bf1e515e8]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11039) SegFault in Cassandra

2016-11-09 Thread Jeremiah Jordan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremiah Jordan updated CASSANDRA-11039:

Reproduced In: 3.0.9
Since Version: 3.0.0
Fix Version/s: (was: 3.2)
   3.0.x
  Component/s: Configuration

This was fixed in 3.X with CASSANDRA-9472, but it is still a problem in 3.0. We 
need to disallow offheap_buffers in 3.0.
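
For context, the setting involved is {{memtable_allocation_type}} in 
cassandra.yaml. Illustrative snippet (option names as in the stock 
configuration file):

{noformat}
# cassandra.yaml
# heap_buffers is the on-heap default; offheap_buffers / offheap_objects rely
# on the off-heap memtable support that CASSANDRA-9472 reintroduces in 3.4.
memtable_allocation_type: heap_buffers
{noformat}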

> SegFault in Cassandra
> -
>
> Key: CASSANDRA-11039
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11039
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
> Environment: Kernel: Linux cass6 3.13.0-44-generic 
> #73~precise1-Ubuntu SMP Wed Dec 17 00:39:15 UTC 2014 x86_64 x86_64 x86_64 
> GNU/Linux
> OS: Ubuntu 12.04.5 LTS (GNU/Linux 3.13.0-44-generic x86_64)
> JVM: 
>   java version "1.8.0_66"
>   Java(TM) SE Runtime Environment (build 1.8.0_66-b17)
>   Java HotSpot(TM) 64-Bit Server VM (build 25.66-b17, mixed mode)
>Reporter: Nimi Wariboko Jr.
> Fix For: 3.0.x
>
>
> This occurred under quite heavy load.
> Attached is the dump that was spit out by Cassandra, and my cassandra.yaml
> hs_err_1453233896.log:
> https://s3-us-west-1.amazonaws.com/channelmeter-misc/hs_err_1453233896.log
> cassandra.yaml
> https://s3-us-west-1.amazonaws.com/channelmeter-misc/cassandra.yaml
> Process Options:
> {code}
> java -ea -Xms16G -Xmx16G -Xss256k -XX:+UseG1GC 
> -XX:G1RSetUpdatingPauseTimePercent=5 -XX:MaxGCPauseMillis=500 
> -XX:InitiatingHeapOccupancyPercent=70 -XX:+AlwaysPreTouch 
> -XX:-UseBiasedLocking -XX:StringTableSize=103 -XX:+UseTLAB 
> -XX:+ResizeTLAB -XX:+PerfDisableSharedMem 
> -XX:CompileCommandFile=/etc/cassandra/hotspot_compiler 
> -javaagent:/usr/share/cassandra/lib/jamm-0.3.0.jar -XX:+UseThreadPriorities 
> -XX:ThreadPriorityPolicy=42 -XX:+HeapDumpOnOutOfMemoryError 
> -Djava.net.preferIPv4Stack=true -Dcassandra.jmx.local.port=7199 
> -XX:+DisableExplicitGC -Djava.library.path=/usr/share/cassandra/lib/sigar-bin 
> -Dcassandra.metricsReporterConfigFile=/etc/cassandra-metrics-graphite.yaml 
> -Dcassandra.libjemalloc=- -Dlogback.configurationFile=logback.xml 
> -Dcassandra.logdir=/var/log/cassandra 
> -Dcassandra.storagedir=/var/lib/cassandra 
> -Dcassandra-pidfile=/var/run/cassandra/cassandra.pid -cp 
> /etc/cassandra:/usr/share/cassandra/lib/ST4-4.0.8.jar:/usr/share/cassandra/lib/airline-0.6.jar:/usr/share/cassandra/lib/antlr-runtime-3.5.2.jar:/usr/share/cassandra/lib/asm-5.0.4.jar:/usr/share/cassandra/lib/cassandra-driver-core-3.0.0-beta1-bb1bce4-SNAPSHOT-shaded.jar:/usr/share/cassandra/lib/commons-cli-1.1.jar:/usr/share/cassandra/lib/commons-codec-1.2.jar:/usr/share/cassandra/lib/commons-lang3-3.1.jar:/usr/share/cassandra/lib/commons-math3-3.2.jar:/usr/share/cassandra/lib/compress-lzf-0.8.4.jar:/usr/share/cassandra/lib/concurrentlinkedhashmap-lru-1.4.jar:/usr/share/cassandra/lib/disruptor-3.0.1.jar:/usr/share/cassandra/lib/ecj-4.4.2.jar:/usr/share/cassandra/lib/guava-18.0.jar:/usr/share/cassandra/lib/high-scale-lib-1.0.6.jar:/usr/share/cassandra/lib/jackson-core-asl-1.9.2.jar:/usr/share/cassandra/lib/jackson-mapper-asl-1.9.2.jar:/usr/share/cassandra/lib/jamm-0.3.0.jar:/usr/share/cassandra/lib/javax.inject.jar:/usr/share/cassandra/lib/jbcrypt-0.3m.jar:/usr/share/cassandra/lib/jcl-over-slf4j-1.7.7.jar:/usr/share/cassandra/lib/jna-4.0.0.jar:/usr/share/cassandra/lib/joda-time-2.4.jar:/usr/share/cassandra/lib/json-simple-1.1.jar:/usr/share/cassandra/lib/libthrift-0.9.2.jar:/usr/share/cassandra/lib/log4j-over-slf4j-1.7.7.jar:/usr/share/cassandra/lib/logback-classic-1.1.3.jar:/usr/share/cassandra/lib/logback-core-1.1.3.jar:/usr/share/cassandra/lib/lz4-1.3.0.jar:/usr/share/cassandra/lib/metrics-core-3.1.0.jar:/usr/share/cassandra/lib/metrics-core-3.1.2.jar:/usr/share/cassandra/lib/metrics-graphite-2.2.0.jar:/usr/share/cassandra/lib/metrics-graphite-3.1.2.jar:/usr/share/cassandra/lib/metrics-logback-3.1.0.jar:/usr/share/cassandra/lib/netty-all-4.0.23.Final.jar:/usr/share/cassandra/lib/ohc-core-0.4.2.jar:/usr/share/cassandra/lib/ohc-core-j8-0.4.2.jar:/usr/share/cassandra/lib/reporter-config-base-3.0.0.jar:/usr/share/cassandra/lib/reporter-config3-3.0.0.jar:/usr/share/cassandra/lib/sigar-1.6.4.jar:/usr/share/cassandra/lib/slf4j-api-1.7.7.jar:/usr/share/cassandra/lib/snakeyaml-1.11.jar:/usr/share/cassandra/lib/snappy-java-1.1.1.7.jar:/usr/share/cassandra/lib/stream-2.5.2.jar:/usr/share/cassandra/lib/thrift-server-0.3.7.jar:/usr/share/cassandra/apache-cassandra-3.2.jar:/usr/share/cassandra/apache-cassandra-thrift-3.2.jar:/usr/share/cassandra/apache-cassandra.jar:/usr/share/cassandra/stress.jar:
>  -XX:HeapDumpPath=/var/lib/cassandra/java_1453248542.hprof 
> -XX:ErrorFile=/var/lib/cassandra/hs_err_1453248542.log 
> org.apache.cassandra.service.CassandraDaemon
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

[jira] [Reopened] (CASSANDRA-11039) SegFault in Cassandra

2016-11-09 Thread Jeremiah Jordan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremiah Jordan reopened CASSANDRA-11039:
-
  Assignee: T Jake Luciani

> SegFault in Cassandra
> -
>
> Key: CASSANDRA-11039
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11039
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
> Environment: Kernel: Linux cass6 3.13.0-44-generic 
> #73~precise1-Ubuntu SMP Wed Dec 17 00:39:15 UTC 2014 x86_64 x86_64 x86_64 
> GNU/Linux
> OS: Ubuntu 12.04.5 LTS (GNU/Linux 3.13.0-44-generic x86_64)
> JVM: 
>   java version "1.8.0_66"
>   Java(TM) SE Runtime Environment (build 1.8.0_66-b17)
>   Java HotSpot(TM) 64-Bit Server VM (build 25.66-b17, mixed mode)
>Reporter: Nimi Wariboko Jr.
>Assignee: T Jake Luciani
> Fix For: 3.0.x
>
>
> This occurred under quite heavy load.
> Attached is the dump that was spit out by Cassandra, and my cassandra.yaml
> hs_err_1453233896.log:
> https://s3-us-west-1.amazonaws.com/channelmeter-misc/hs_err_1453233896.log
> cassandra.yaml
> https://s3-us-west-1.amazonaws.com/channelmeter-misc/cassandra.yaml
> Process Options:
> {code}
> java -ea -Xms16G -Xmx16G -Xss256k -XX:+UseG1GC 
> -XX:G1RSetUpdatingPauseTimePercent=5 -XX:MaxGCPauseMillis=500 
> -XX:InitiatingHeapOccupancyPercent=70 -XX:+AlwaysPreTouch 
> -XX:-UseBiasedLocking -XX:StringTableSize=103 -XX:+UseTLAB 
> -XX:+ResizeTLAB -XX:+PerfDisableSharedMem 
> -XX:CompileCommandFile=/etc/cassandra/hotspot_compiler 
> -javaagent:/usr/share/cassandra/lib/jamm-0.3.0.jar -XX:+UseThreadPriorities 
> -XX:ThreadPriorityPolicy=42 -XX:+HeapDumpOnOutOfMemoryError 
> -Djava.net.preferIPv4Stack=true -Dcassandra.jmx.local.port=7199 
> -XX:+DisableExplicitGC -Djava.library.path=/usr/share/cassandra/lib/sigar-bin 
> -Dcassandra.metricsReporterConfigFile=/etc/cassandra-metrics-graphite.yaml 
> -Dcassandra.libjemalloc=- -Dlogback.configurationFile=logback.xml 
> -Dcassandra.logdir=/var/log/cassandra 
> -Dcassandra.storagedir=/var/lib/cassandra 
> -Dcassandra-pidfile=/var/run/cassandra/cassandra.pid -cp 
> /etc/cassandra:/usr/share/cassandra/lib/ST4-4.0.8.jar:/usr/share/cassandra/lib/airline-0.6.jar:/usr/share/cassandra/lib/antlr-runtime-3.5.2.jar:/usr/share/cassandra/lib/asm-5.0.4.jar:/usr/share/cassandra/lib/cassandra-driver-core-3.0.0-beta1-bb1bce4-SNAPSHOT-shaded.jar:/usr/share/cassandra/lib/commons-cli-1.1.jar:/usr/share/cassandra/lib/commons-codec-1.2.jar:/usr/share/cassandra/lib/commons-lang3-3.1.jar:/usr/share/cassandra/lib/commons-math3-3.2.jar:/usr/share/cassandra/lib/compress-lzf-0.8.4.jar:/usr/share/cassandra/lib/concurrentlinkedhashmap-lru-1.4.jar:/usr/share/cassandra/lib/disruptor-3.0.1.jar:/usr/share/cassandra/lib/ecj-4.4.2.jar:/usr/share/cassandra/lib/guava-18.0.jar:/usr/share/cassandra/lib/high-scale-lib-1.0.6.jar:/usr/share/cassandra/lib/jackson-core-asl-1.9.2.jar:/usr/share/cassandra/lib/jackson-mapper-asl-1.9.2.jar:/usr/share/cassandra/lib/jamm-0.3.0.jar:/usr/share/cassandra/lib/javax.inject.jar:/usr/share/cassandra/lib/jbcrypt-0.3m.jar:/usr/share/cassandra/lib/jcl-over-slf4j-1.7.7.jar:/usr/share/cassandra/lib/jna-4.0.0.jar:/usr/share/cassandra/lib/joda-time-2.4.jar:/usr/share/cassandra/lib/json-simple-1.1.jar:/usr/share/cassandra/lib/libthrift-0.9.2.jar:/usr/share/cassandra/lib/log4j-over-slf4j-1.7.7.jar:/usr/share/cassandra/lib/logback-classic-1.1.3.jar:/usr/share/cassandra/lib/logback-core-1.1.3.jar:/usr/share/cassandra/lib/lz4-1.3.0.jar:/usr/share/cassandra/lib/metrics-core-3.1.0.jar:/usr/share/cassandra/lib/metrics-core-3.1.2.jar:/usr/share/cassandra/lib/metrics-graphite-2.2.0.jar:/usr/share/cassandra/lib/metrics-graphite-3.1.2.jar:/usr/share/cassandra/lib/metrics-logback-3.1.0.jar:/usr/share/cassandra/lib/netty-all-4.0.23.Final.jar:/usr/share/cassandra/lib/ohc-core-0.4.2.jar:/usr/share/cassandra/lib/ohc-core-j8-0.4.2.jar:/usr/share/cassandra/lib/reporter-config-base-3.0.0.jar:/usr/share/cassandra/lib/reporter-config3-3.0.0.jar:/usr/share/cassandra/lib/sigar-1.6.4.jar:/usr/share/cassandra/lib/slf4j-api-1.7.7.jar:/usr/share/cassandra/lib/snakeyaml-1.11.jar:/usr/share/cassandra/lib/snappy-java-1.1.1.7.jar:/usr/share/cassandra/lib/stream-2.5.2.jar:/usr/share/cassandra/lib/thrift-server-0.3.7.jar:/usr/share/cassandra/apache-cassandra-3.2.jar:/usr/share/cassandra/apache-cassandra-thrift-3.2.jar:/usr/share/cassandra/apache-cassandra.jar:/usr/share/cassandra/stress.jar:
>  -XX:HeapDumpPath=/var/lib/cassandra/java_1453248542.hprof 
> -XX:ErrorFile=/var/lib/cassandra/hs_err_1453248542.log 
> org.apache.cassandra.service.CassandraDaemon
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12858) testall failure in org.apache.cassandra.dht.Murmur3PartitionerTest.testSplitWrapping-compression

2016-11-09 Thread Dikang Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15652211#comment-15652211
 ] 

Dikang Gu commented on CASSANDRA-12858:
---

[unit test | 
https://cassci.datastax.com/view/Dev/view/DikangGu/job/DikangGu-CASSANDRA-12858-trunk-ci-testall/],
 the failure there is unrelated to this patch.

[~blambov], [~Stefania], would you mind taking a look at the fix?

Thanks.
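
For readers following along: the quoted failure below is consistent with the 
split point rounding down onto the range's left bound, and Cassandra token 
ranges are start-exclusive, so a new token equal to {{left}} is not contained. 
A rough illustration of the arithmetic in plain Python (not the actual 
{{Murmur3Partitioner}} code):

{code}
# Tokens taken from the quoted assertion failure.
left, right = 8833996864316961974, 8833996864316961979

def naive_split(left, right, ratio):
    # Truncating the scaled width rounds the split point back onto 'left'
    # when the range is only a few tokens wide.
    return left + int((right - left) * ratio)

split = naive_split(left, right, 0.10)   # == left
# A start-exclusive range (left, right] does not contain 'left' itself,
# which matches "range did not contain new token" in the stacktrace below.
assert not (left < split <= right)
{code}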

> testall failure in 
> org.apache.cassandra.dht.Murmur3PartitionerTest.testSplitWrapping-compression
> 
>
> Key: CASSANDRA-12858
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12858
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Dikang Gu
>  Labels: test-failure, testall
> Fix For: 3.x
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.X_testall/49/testReport/org.apache.cassandra.dht/Murmur3PartitionerTest/testSplitWrapping_compression/
> {code}
> Error Message
> For 8833996864316961974,8833996864316961979: range did not contain new 
> token:8833996864316961974
> {code}{code}
> Stacktrace
> junit.framework.AssertionFailedError: For 
> 8833996864316961974,8833996864316961979: range did not contain new 
> token:8833996864316961974
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:138)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:150)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:148)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:150)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:148)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:150)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:150)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:150)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:150)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:148)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:148)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:150)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:150)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:148)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:148)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:148)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:148)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:129)
>   at 
> org.apache.cassandra.dht.Murmur3PartitionerTest.testSplitWrapping(Murmur3PartitionerTest.java:50)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12858) testall failure in org.apache.cassandra.dht.Murmur3PartitionerTest.testSplitWrapping-compression

2016-11-09 Thread Dikang Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dikang Gu updated CASSANDRA-12858:
--
 Reviewer: Branimir Lambov
Fix Version/s: 3.x
   Status: Patch Available  (was: Open)

> testall failure in 
> org.apache.cassandra.dht.Murmur3PartitionerTest.testSplitWrapping-compression
> 
>
> Key: CASSANDRA-12858
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12858
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Dikang Gu
>  Labels: test-failure, testall
> Fix For: 3.x
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.X_testall/49/testReport/org.apache.cassandra.dht/Murmur3PartitionerTest/testSplitWrapping_compression/
> {code}
> Error Message
> For 8833996864316961974,8833996864316961979: range did not contain new 
> token:8833996864316961974
> {code}{code}
> Stacktrace
> junit.framework.AssertionFailedError: For 
> 8833996864316961974,8833996864316961979: range did not contain new 
> token:8833996864316961974
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:138)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:150)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:148)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:150)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:148)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:150)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:150)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:150)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:150)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:148)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:148)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:150)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:150)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:148)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:148)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:148)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:148)
>   at 
> org.apache.cassandra.dht.PartitionerTestCase.assertSplit(PartitionerTestCase.java:129)
>   at 
> org.apache.cassandra.dht.Murmur3PartitionerTest.testSplitWrapping(Murmur3PartitionerTest.java:50)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12889) Pass root cause to CorruptBlockException when uncompression failed

2016-11-09 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-12889:

Status: Ready to Commit  (was: Patch Available)

> Pass root cause to CorruptBlockException when uncompression failed
> --
>
> Key: CASSANDRA-12889
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12889
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths
>Reporter: Yuki Morishita
>Assignee: Yuki Morishita
>Priority: Trivial
> Fix For: 3.0.x, 3.x
>
>
> When reading a compressed SSTable fails, CorruptBlockException is thrown 
> without a root cause. Having the root cause would help when investigating 
> uncompression errors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12889) Pass root cause to CorruptBlockException when uncompression failed

2016-11-09 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15652163#comment-15652163
 ] 

Paulo Motta commented on CASSANDRA-12889:
-

+1 (unrelated test failures, marked as ready to commit). Thanks!

> Pass root cause to CorruptBlockException when uncompression failed
> --
>
> Key: CASSANDRA-12889
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12889
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths
>Reporter: Yuki Morishita
>Assignee: Yuki Morishita
>Priority: Trivial
> Fix For: 3.0.x, 3.x
>
>
> When reading a compressed SSTable fails, CorruptBlockException is thrown 
> without a root cause. Having the root cause would help when investigating 
> uncompression errors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12888) Incremental repairs broken for MVs and CDC

2016-11-09 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15652122#comment-15652122
 ] 

Paulo Motta commented on CASSANDRA-12888:
-

The quickest (but a bit dirty) fix here would be to skip anti-compaction 
altogether when there is a mismatch for MV tables, forcing data to be 
re-compared at the next repair - and if they match, do the anti-compaction then.

The proper solution is to segregate repaired from unrepaired data in the 
memtable and flush them to separate repaired/unrepaired sstables, but that 
would probably be a bit more involved.

> Incremental repairs broken for MVs and CDC
> --
>
> Key: CASSANDRA-12888
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12888
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: Stefan Podkowinski
>Priority: Critical
>
> SSTables streamed during the repair process will first be written locally and 
> afterwards either simply added to the pool of existing sstables or, in the 
> case of existing MVs or active CDC, replayed on a per-mutation basis, as 
> described in {{StreamReceiveTask.OnCompletionRunnable}}:
> {quote}
> We have a special path for views and for CDC.
> For views, since the view requires cleaning up any pre-existing state, we 
> must put all partitions through the same write path as normal mutations. This 
> also ensures any 2is are also updated.
> For CDC-enabled tables, we want to ensure that the mutations are run through 
> the CommitLog so they can be archived by the CDC process on discard.
> {quote}
> Using the regular write path turns out to be an issue for incremental 
> repairs, as we lose the {{repaired_at}} state in the process. Eventually the 
> streamed rows will end up in the unrepaired set, in contrast to the rows on 
> the sender side, which are moved to the repaired set. The next repair run 
> will stream the same data back again, causing rows to bounce back and forth 
> between nodes on each repair.
> See the linked dtest for steps to reproduce. An example for reproducing this 
> manually using ccm can be found 
> [here|https://gist.github.com/spodkowinski/2d8e0408516609c7ae701f2bf1e515e8]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12886) Streaming failed due to SSL Socket connection reset

2016-11-09 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15652089#comment-15652089
 ] 

Paulo Motta commented on CASSANDRA-12886:
-

* Can you check if the source/destination {{STREAM-(IN/OUT)-IP}} of failed 
streams is the same throughout the cluster?
* What are your tcp_keepalive settings? (see the tuning guide 
[here|http://docs.datastax.com/en/cassandra/2.0/cassandra/troubleshooting/trblshootIdleFirewall.html]; rough reference values below)
* Also, can you paste a full debug.log sample from a node with this error?
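
For reference, the linked guide deals with firewalls/NAT (common in 
multi-region AWS) silently dropping idle internode connections and suggests 
shortening the kernel keepalive timers roughly along these lines; please 
verify the exact values against the guide and your environment:

{noformat}
# /etc/sysctl.conf
net.ipv4.tcp_keepalive_time = 60
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 10
{noformat}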

> Streaming failed due to SSL Socket connection reset
> ---
>
> Key: CASSANDRA-12886
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12886
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Bing Wu
>
> While running "nodetool repair", I see many instances of 
> "javax.net.ssl.SSLException: java.net.SocketException: Connection reset" in 
> system.logs on some nodes in the cluster. Timestamps correspond to streaming 
> source/initiator's error messages of "sync failed between ..."
> Setup: 
> - Cassandra 3.7.01 
> - CentOS 6.7 in AWS (multi-region)
> - JDK version: {noformat}
> java version "1.8.0_102"
> Java(TM) SE Runtime Environment (build 1.8.0_102-b14)
> Java HotSpot(TM) 64-Bit Server VM (build 25.102-b14, mixed mode)
> {noformat}
> - cassandra.yaml:
> {noformat}
> server_encryption_options:
>     internode_encryption: all
>     keystore: [path]
>     keystore_password: [password]
>     truststore: [path]
>     truststore_password: [password]
>     # More advanced defaults below:
>     # protocol: TLS
>     # algorithm: SunX509
>     # store_type: JKS
>     # cipher_suites: 
> [TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA]
>     require_client_auth: false
> {noformat}
> Error messages in system.log on the target host:
> {noformat}
> ERROR [STREAM-OUT-/54.247.111.232:7001] 2016-11-07 07:30:56,475 
> StreamSession.java:529 - [Stream #e14abcb0-a4bb-11e6-9758-55b9ac38b78e] 
> Streaming error occurred on session with peer 54.247.111.232
> javax.net.ssl.SSLException: Connection has been shutdown: 
> javax.net.ssl.SSLException: java.net.SocketException: Connection reset
> at sun.security.ssl.SSLSocketImpl.checkEOF(SSLSocketImpl.java:1541) 
> ~[na:1.8.0_102]
> at sun.security.ssl.SSLSocketImpl.checkWrite(SSLSocketImpl.java:1553) 
> ~[na:1.8.0_102]
> at sun.security.ssl.AppOutputStream.write(AppOutputStream.java:71) 
> ~[na:1.8.0_102]
> at 
> java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) 
> ~[na:1.8.0_102]
> at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) 
> ~[na:1.8.0_102]
> at 
> org.apache.cassandra.io.util.WrappedDataOutputStreamPlus.flush(WrappedDataOutputStreamPlus.java:66)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.sendMessage(ConnectionHandler.java:371)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:342)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_102]
> Caused by: javax.net.ssl.SSLException: java.net.SocketException: Connection 
> reset
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12895) dtest failure in disk_balance_test.TestDiskBalance.disk_balance_stress_test

2016-11-09 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15652020#comment-15652020
 ] 

Philip Thompson commented on CASSANDRA-12895:
-

Multiplexing here: 
http://cassci.datastax.com/view/Parameterized/job/parameterized_dtest_multiplexer/359/
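
A side note on the error text: the {{'float' object has no attribute '2f'}} in 
the quoted traceback comes from the assertion's format string rather than from 
Cassandra. In str.format, "{.2f}" is parsed as attribute access on the 
argument, whereas "{:.2f}" is a format spec, so the disk-balance assertion did 
fail but its message could not be rendered. A minimal Python reproduction:

{code}
error = 0.1

# Buggy: '{.2f}' means "look up attribute '2f' on the first argument",
# which raises AttributeError: 'float' object has no attribute '2f'.
try:
    "values not within {.2f}% of the max".format(error * 100)
except AttributeError as e:
    print(e)

# Intended: '{:.2f}' is a format specification (two decimal places).
print("values not within {:.2f}% of the max".format(error * 100))
{code}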

> dtest failure in disk_balance_test.TestDiskBalance.disk_balance_stress_test
> ---
>
> Key: CASSANDRA-12895
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12895
> Project: Cassandra
>  Issue Type: Test
>Reporter: Michael Shuler
>Assignee: DS Test Eng
>  Labels: dtest, test-failure
>
> example failure:
> http://cassci.datastax.com/job/trunk_dtest/1418/testReport/disk_balance_test/TestDiskBalance/disk_balance_stress_test
> {noformat}
> Error Message
> 'float' object has no attribute '2f'
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /tmp/dtest-lxr8Vr
> dtest: DEBUG: Done setting configuration options:
> {   'initial_token': None,
> 'num_tokens': '32',
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> - >> end captured logging << -
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/disk_balance_test.py", line 31, in 
> disk_balance_stress_test
> self.assert_balanced(node)
>   File "/home/automaton/cassandra-dtest/disk_balance_test.py", line 120, in 
> assert_balanced
> assert_almost_equal(*sums, error=0.1, error_message=node.name)
>   File "/home/automaton/cassandra-dtest/tools/assertions.py", line 187, in 
> assert_almost_equal
> assert vmin > vmax * (1.0 - error) or vmin == vmax, "values not within 
> {.2f}% of the max: {} ({})".format(error * 100, args, error_message)
> "'float' object has no attribute '2f'\n >> begin captured 
> logging << \ndtest: DEBUG: cluster ccm directory: 
> /tmp/dtest-lxr8Vr\ndtest: DEBUG: Done setting configuration options:\n{   
> 'initial_token': None,\n'num_tokens': '32',\n'phi_convict_threshold': 
> 5,\n'range_request_timeout_in_ms': 1,\n
> 'read_request_timeout_in_ms': 1,\n'request_timeout_in_ms': 1,\n   
>  'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
> 1}\n- >> end captured logging << 
> -"
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12651) Failure in SecondaryIndexTest.testAllowFilteringOnPartitionKeyWithSecondaryIndex

2016-11-09 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15652018#comment-15652018
 ] 

Alex Petrov commented on CASSANDRA-12651:
-

I've added some retrying and additional logging. I'll keep re-running as well.

> Failure in 
> SecondaryIndexTest.testAllowFilteringOnPartitionKeyWithSecondaryIndex
> 
>
> Key: CASSANDRA-12651
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12651
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Joel Knighton
>Assignee: Alex Petrov
>  Labels: test-failure
>
> This has failed with/without compression.
> Stacktrace:
> {code}
> junit.framework.AssertionFailedError: Got less rows than expected. Expected 2 
> but got 0
>   at org.apache.cassandra.cql3.CQLTester.assertRows(CQLTester.java:909)
>   at 
> org.apache.cassandra.cql3.validation.entities.SecondaryIndexTest.lambda$testAllowFilteringOnPartitionKeyWithSecondaryIndex$78(SecondaryIndexTest.java:1228)
>   at 
> org.apache.cassandra.cql3.validation.entities.SecondaryIndexTest$$Lambda$293/218688965.apply(Unknown
>  Source)
>   at 
> org.apache.cassandra.cql3.CQLTester.beforeAndAfterFlush(CQLTester.java:1215)
>   at 
> org.apache.cassandra.cql3.validation.entities.SecondaryIndexTest.testAllowFilteringOnPartitionKeyWithSecondaryIndex(SecondaryIndexTest.java:1218)
> {code}
> Examples:
> http://cassci.datastax.com/job/trunk_testall/1176/testReport/org.apache.cassandra.cql3.validation.entities/SecondaryIndexTest/testAllowFilteringOnPartitionKeyWithSecondaryIndex/
> http://cassci.datastax.com/job/trunk_testall/1176/testReport/org.apache.cassandra.cql3.validation.entities/SecondaryIndexTest/testAllowFilteringOnPartitionKeyWithSecondaryIndex_compression/
> http://cassci.datastax.com/job/trunk_testall/1219/testReport/org.apache.cassandra.cql3.validation.entities/SecondaryIndexTest/testAllowFilteringOnPartitionKeyWithSecondaryIndex/
> http://cassci.datastax.com/job/trunk_testall/1216/testReport/org.apache.cassandra.cql3.validation.entities/SecondaryIndexTest/testAllowFilteringOnPartitionKeyWithSecondaryIndex/
> http://cassci.datastax.com/job/trunk_testall/1208/testReport/org.apache.cassandra.cql3.validation.entities/SecondaryIndexTest/testAllowFilteringOnPartitionKeyWithSecondaryIndex/
> http://cassci.datastax.com/job/trunk_testall/1176/testReport/org.apache.cassandra.cql3.validation.entities/SecondaryIndexTest/testAllowFilteringOnPartitionKeyWithSecondaryIndex/
> http://cassci.datastax.com/job/trunk_testall/1175/testReport/org.apache.cassandra.cql3.validation.entities/SecondaryIndexTest/testAllowFilteringOnPartitionKeyWithSecondaryIndex/
> May or may not be related, but there's a test failure (index duplicate):
> http://cassci.datastax.com/view/Dev/view/carlyeks/job/carlyeks-ticket-11803-3.X-testall/lastCompletedBuild/testReport/org.apache.cassandra.index.internal/CassandraIndexTest/indexOnFirstClusteringColumn_compression/
> http://cassci.datastax.com/job/ifesdjeen-11803-test-fix-trunk-testall/1/testReport/junit/org.apache.cassandra.index.internal/CassandraIndexTest/indexOnFirstClusteringColumn_compression/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-12652) Failure in SASIIndexTest.testStaticIndex-compression

2016-11-09 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15652015#comment-15652015
 ] 

Alex Petrov edited comment on CASSANDRA-12652 at 11/9/16 8:56 PM:
--

I've checked the logs, and it seems that in this case we only have 2 sstables, 
so it looks like there's more to it than just leftovers and asynchronous 
flushing. That said, so far I could not reproduce it in a setting that doesn't 
have interference. 


was (Author: ifesdjeen):
I've checked the logs, and it seems that in this case we only have 2 sstables, 
so it looks like there's more to it than just leftovers and asynchronous 
flushing. That said, so far I could not reproduce it in a setting that doesn't 
have interference. 

I'll cancel the patch for now and investigate further.

> Failure in SASIIndexTest.testStaticIndex-compression
> 
>
> Key: CASSANDRA-12652
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12652
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Joel Knighton
>Assignee: Alex Petrov
> Fix For: 3.x, 4.x
>
>
> Stacktrace:
> {code}
> junit.framework.AssertionFailedError: expected:<1> but was:<0>
>   at 
> org.apache.cassandra.index.sasi.SASIIndexTest.testStaticIndex(SASIIndexTest.java:1839)
>   at 
> org.apache.cassandra.index.sasi.SASIIndexTest.testStaticIndex(SASIIndexTest.java:1786)
> {code}
> Example failure:
> http://cassci.datastax.com/job/trunk_testall/1176/testReport/org.apache.cassandra.index.sasi/SASIIndexTest/testStaticIndex_compression/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12652) Failure in SASIIndexTest.testStaticIndex-compression

2016-11-09 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15652015#comment-15652015
 ] 

Alex Petrov commented on CASSANDRA-12652:
-

I've checked the logs, and it seems that in this case we only have 2 sstables, 
so it looks like there's more to it than just leftovers and asynchronous 
flushing. That said, so far I could not reproduce it in a setting that doesn't 
have interference. 

I'll cancel the patch for now and investigate further.

> Failure in SASIIndexTest.testStaticIndex-compression
> 
>
> Key: CASSANDRA-12652
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12652
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Joel Knighton
>Assignee: Alex Petrov
> Fix For: 3.x, 4.x
>
>
> Stacktrace:
> {code}
> junit.framework.AssertionFailedError: expected:<1> but was:<0>
>   at 
> org.apache.cassandra.index.sasi.SASIIndexTest.testStaticIndex(SASIIndexTest.java:1839)
>   at 
> org.apache.cassandra.index.sasi.SASIIndexTest.testStaticIndex(SASIIndexTest.java:1786)
> {code}
> Example failure:
> http://cassci.datastax.com/job/trunk_testall/1176/testReport/org.apache.cassandra.index.sasi/SASIIndexTest/testStaticIndex_compression/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-9472) Reintroduce off heap memtables

2016-11-09 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani resolved CASSANDRA-9472.
---
Resolution: Fixed
  Assignee: Benedict  (was: T Jake Luciani)

> Reintroduce off heap memtables
> --
>
> Key: CASSANDRA-9472
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9472
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Benedict
>Assignee: Benedict
> Fix For: 3.4
>
>
> CASSANDRA-8099 removes off heap memtables. We should reintroduce them ASAP.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (CASSANDRA-9472) Reintroduce off heap memtables

2016-11-09 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani reopened CASSANDRA-9472:
---
  Assignee: T Jake Luciani  (was: Benedict)

> Reintroduce off heap memtables
> --
>
> Key: CASSANDRA-9472
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9472
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Benedict
>Assignee: T Jake Luciani
> Fix For: 3.4
>
>
> CASSANDRA-8099 removes off heap memtables. We should reintroduce them ASAP.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12856) dtest failure in replication_test.SnitchConfigurationUpdateTest.test_cannot_restart_with_different_rack

2016-11-09 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15651955#comment-15651955
 ] 

Michael Shuler commented on CASSANDRA-12856:


This JQL search has a few interesting hits:

https://issues.apache.org/jira/issues/?jql=project%20%3D%20CASSANDRA%20AND%20text%20~%20%22No%20underlying%20server%20socket%22

> dtest failure in 
> replication_test.SnitchConfigurationUpdateTest.test_cannot_restart_with_different_rack
> ---
>
> Key: CASSANDRA-12856
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12856
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: DS Test Eng
>  Labels: dtest, test-failure
> Attachments: node1.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.1_novnode_dtest/280/testReport/replication_test/SnitchConfigurationUpdateTest/test_cannot_restart_with_different_rack
> {code}
> Error Message
> Problem stopping node node1
> {code}{code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/replication_test.py", line 630, in 
> test_cannot_restart_with_different_rack
> node1.stop(wait_other_notice=True)
>   File "/usr/local/lib/python2.7/dist-packages/ccmlib/node.py", line 727, in 
> stop
> raise NodeError("Problem stopping node %s" % self.name)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12856) dtest failure in replication_test.SnitchConfigurationUpdateTest.test_cannot_restart_with_different_rack

2016-11-09 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15651952#comment-15651952
 ] 

Michael Shuler commented on CASSANDRA-12856:


It's not fresh in my memory, but it is a possibility.

> dtest failure in 
> replication_test.SnitchConfigurationUpdateTest.test_cannot_restart_with_different_rack
> ---
>
> Key: CASSANDRA-12856
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12856
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: DS Test Eng
>  Labels: dtest, test-failure
> Attachments: node1.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.1_novnode_dtest/280/testReport/replication_test/SnitchConfigurationUpdateTest/test_cannot_restart_with_different_rack
> {code}
> Error Message
> Problem stopping node node1
> {code}{code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/replication_test.py", line 630, in 
> test_cannot_restart_with_different_rack
> node1.stop(wait_other_notice=True)
>   File "/usr/local/lib/python2.7/dist-packages/ccmlib/node.py", line 727, in 
> stop
> raise NodeError("Problem stopping node %s" % self.name)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12617) dtest failure in offline_tools_test.TestOfflineTools.sstableofflinerelevel_test

2016-11-09 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15651913#comment-15651913
 ] 

Philip Thompson commented on CASSANDRA-12617:
-

Moving to the bug queue to see if this is a problem with the leveling.

> dtest failure in 
> offline_tools_test.TestOfflineTools.sstableofflinerelevel_test
> ---
>
> Key: CASSANDRA-12617
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12617
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_offheap_dtest/391/testReport/offline_tools_test/TestOfflineTools/sstableofflinerelevel_test/
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/offline_tools_test.py", line 212, in 
> sstableofflinerelevel_test
> self.assertGreater(max(final_levels), 1)
>   File "/usr/lib/python2.7/unittest/case.py", line 942, in assertGreater
> self.fail(self._formatMessage(msg, standardMsg))
>   File "/usr/lib/python2.7/unittest/case.py", line 410, in fail
> raise self.failureException(msg)
> "1 not greater than 1
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12809) dtest failure in upgrade_tests.cql_tests.TestCQLNodes2RF1_Upgrade_current_2_2_x_To_indev_3_0_x.boolean_test

2016-11-09 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15651911#comment-15651911
 ] 

Philip Thompson commented on CASSANDRA-12809:
-

I've tried multiplexing this thousands of times. It reproduces almost never, 
and when it does, it's an issue with an invalid jvm.options file.

> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes2RF1_Upgrade_current_2_2_x_To_indev_3_0_x.boolean_test
> ---
>
> Key: CASSANDRA-12809
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12809
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: DS Test Eng
>  Labels: dtest, test-failure
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.0_dtest_upgrade/64/testReport/upgrade_tests.cql_tests/TestCQLNodes2RF1_Upgrade_current_2_2_x_To_indev_3_0_x/boolean_test
> {code}
> Error Message
> Problem starting node node1 due to [Errno 2] No such file or directory: 
> '/tmp/dtest-QXmxBV/test/node1/cassandra.pid'
> {code}
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line 
> 2206, in boolean_test
> for is_upgraded, cursor in self.do_upgrade(cursor):
>   File "/home/automaton/cassandra-dtest/upgrade_tests/upgrade_base.py", line 
> 153, in do_upgrade
> node1.start(wait_for_binary_proto=True, wait_other_notice=True)
>   File "/usr/local/lib/python2.7/dist-packages/ccmlib/node.py", line 648, in 
> start
> self._update_pid(process)
>   File "/usr/local/lib/python2.7/dist-packages/ccmlib/node.py", line 1780, in 
> _update_pid
> raise NodeError('Problem starting node %s due to %s' % (self.name, e), 
> process)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12617) dtest failure in offline_tools_test.TestOfflineTools.sstableofflinerelevel_test

2016-11-09 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-12617:

Assignee: (was: DS Test Eng)

> dtest failure in 
> offline_tools_test.TestOfflineTools.sstableofflinerelevel_test
> ---
>
> Key: CASSANDRA-12617
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12617
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_offheap_dtest/391/testReport/offline_tools_test/TestOfflineTools/sstableofflinerelevel_test/
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/offline_tools_test.py", line 212, in 
> sstableofflinerelevel_test
> self.assertGreater(max(final_levels), 1)
>   File "/usr/lib/python2.7/unittest/case.py", line 942, in assertGreater
> self.fail(self._formatMessage(msg, standardMsg))
>   File "/usr/lib/python2.7/unittest/case.py", line 410, in fail
> raise self.failureException(msg)
> "1 not greater than 1
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12617) dtest failure in offline_tools_test.TestOfflineTools.sstableofflinerelevel_test

2016-11-09 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-12617:

Issue Type: Bug  (was: Test)

> dtest failure in 
> offline_tools_test.TestOfflineTools.sstableofflinerelevel_test
> ---
>
> Key: CASSANDRA-12617
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12617
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_offheap_dtest/391/testReport/offline_tools_test/TestOfflineTools/sstableofflinerelevel_test/
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/offline_tools_test.py", line 212, in 
> sstableofflinerelevel_test
> self.assertGreater(max(final_levels), 1)
>   File "/usr/lib/python2.7/unittest/case.py", line 942, in assertGreater
> self.fail(self._formatMessage(msg, standardMsg))
>   File "/usr/lib/python2.7/unittest/case.py", line 410, in fail
> raise self.failureException(msg)
> "1 not greater than 1
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12652) Failure in SASIIndexTest.testStaticIndex-compression

2016-11-09 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15651910#comment-15651910
 ] 

Michael Shuler commented on CASSANDRA-12652:


We also just caught this same error on a non-compression test run, if that adds 
any info.

> Failure in SASIIndexTest.testStaticIndex-compression
> 
>
> Key: CASSANDRA-12652
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12652
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Joel Knighton
>Assignee: Alex Petrov
> Fix For: 3.x, 4.x
>
>
> Stacktrace:
> {code}
> junit.framework.AssertionFailedError: expected:<1> but was:<0>
>   at 
> org.apache.cassandra.index.sasi.SASIIndexTest.testStaticIndex(SASIIndexTest.java:1839)
>   at 
> org.apache.cassandra.index.sasi.SASIIndexTest.testStaticIndex(SASIIndexTest.java:1786)
> {code}
> Example failure:
> http://cassci.datastax.com/job/trunk_testall/1176/testReport/org.apache.cassandra.index.sasi/SASIIndexTest/testStaticIndex_compression/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12833) dtest failure in auth_test.TestAuthRoles.udf_permissions_in_update_test

2016-11-09 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15651909#comment-15651909
 ] 

Philip Thompson commented on CASSANDRA-12833:
-

I don't see any stdout, but we clearly thought the node was up, since we returned 
from the start() call. This is probably a weird flake, not a bug.

> dtest failure in auth_test.TestAuthRoles.udf_permissions_in_update_test
> ---
>
> Key: CASSANDRA-12833
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12833
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: DS Test Eng
>  Labels: dtest, test-failure
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.0_dtest/838/testReport/auth_test/TestAuthRoles/udf_permissions_in_update_test
> {code}
> Error Message
> [Errno 2] No such file or directory: 
> '/tmp/dtest-ZILXmx/test/node1/logs/system.log'
> {code}{code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/auth_test.py", line 2358, in 
> udf_permissions_in_update_test
> self.verify_udf_permissions("UPDATE ks.t1 SET v = ks.plus_one(2) WHERE k 
> = ks.plus_one(0)")
>   File "/home/automaton/cassandra-dtest/auth_test.py", line 2375, in 
> verify_udf_permissions
> self.prepare()
>   File "/home/automaton/cassandra-dtest/auth_test.py", line 2622, in prepare
> self.wait_for_any_log(self.cluster.nodelist(), 'Created default 
> superuser', 25)
>   File "/home/automaton/cassandra-dtest/dtest.py", line 642, in 
> wait_for_any_log
> found = node.grep_log(pattern, filename=filename)
>   File "/usr/local/lib/python2.7/dist-packages/ccmlib/node.py", line 347, in 
> grep_log
> with open(os.path.join(self.get_path(), 'logs', filename)) as f:
> {code}
> There were no logs saved for this test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12833) dtest failure in auth_test.TestAuthRoles.udf_permissions_in_update_test

2016-11-09 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15651904#comment-15651904
 ] 

Philip Thompson commented on CASSANDRA-12833:
-

Twenty-five seconds between starting a node and a system.log file appearing? 
That is quite the gap.

> dtest failure in auth_test.TestAuthRoles.udf_permissions_in_update_test
> ---
>
> Key: CASSANDRA-12833
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12833
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: DS Test Eng
>  Labels: dtest, test-failure
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.0_dtest/838/testReport/auth_test/TestAuthRoles/udf_permissions_in_update_test
> {code}
> Error Message
> [Errno 2] No such file or directory: 
> '/tmp/dtest-ZILXmx/test/node1/logs/system.log'
> {code}{code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/auth_test.py", line 2358, in 
> udf_permissions_in_update_test
> self.verify_udf_permissions("UPDATE ks.t1 SET v = ks.plus_one(2) WHERE k 
> = ks.plus_one(0)")
>   File "/home/automaton/cassandra-dtest/auth_test.py", line 2375, in 
> verify_udf_permissions
> self.prepare()
>   File "/home/automaton/cassandra-dtest/auth_test.py", line 2622, in prepare
> self.wait_for_any_log(self.cluster.nodelist(), 'Created default 
> superuser', 25)
>   File "/home/automaton/cassandra-dtest/dtest.py", line 642, in 
> wait_for_any_log
> found = node.grep_log(pattern, filename=filename)
>   File "/usr/local/lib/python2.7/dist-packages/ccmlib/node.py", line 347, in 
> grep_log
> with open(os.path.join(self.get_path(), 'logs', filename)) as f:
> {code}
> There were no logs saved for this test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12856) dtest failure in replication_test.SnitchConfigurationUpdateTest.test_cannot_restart_with_different_rack

2016-11-09 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15651901#comment-15651901
 ] 

Philip Thompson commented on CASSANDRA-12856:
-

If we check the logs, it's just 
{code}
WARN  [Thread-2] 2016-10-27 19:28:01,236 CustomTThreadPoolServer.java:122 - 
Transport error occurred during acceptance of message.
org.apache.thrift.transport.TTransportException: No underlying server socket.
at 
org.apache.cassandra.thrift.TCustomServerSocket.acceptImpl(TCustomServerSocket.java:96)
 ~[main/:na]
at 
org.apache.cassandra.thrift.TCustomServerSocket.acceptImpl(TCustomServerSocket.java:36)
 ~[main/:na]
at 
org.apache.thrift.transport.TServerTransport.accept(TServerTransport.java:60) 
~[libthrift-0.9.2.jar:0.9.2]
at 
org.apache.cassandra.thrift.CustomTThreadPoolServer.serve(CustomTThreadPoolServer.java:110)
 ~[main/:na]
at 
org.apache.cassandra.thrift.ThriftServer$ThriftServerThread.run(ThriftServer.java:137)
 [main/:na]
{code}

[~mshuler], haven't we seen this before? Did we say it was a hardware problem?

> dtest failure in 
> replication_test.SnitchConfigurationUpdateTest.test_cannot_restart_with_different_rack
> ---
>
> Key: CASSANDRA-12856
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12856
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: DS Test Eng
>  Labels: dtest, test-failure
> Attachments: node1.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.1_novnode_dtest/280/testReport/replication_test/SnitchConfigurationUpdateTest/test_cannot_restart_with_different_rack
> {code}
> Error Message
> Problem stopping node node1
> {code}{code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/replication_test.py", line 630, in 
> test_cannot_restart_with_different_rack
> node1.stop(wait_other_notice=True)
>   File "/usr/local/lib/python2.7/dist-packages/ccmlib/node.py", line 727, in 
> stop
> raise NodeError("Problem stopping node %s" % self.name)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12865) dtest failure in materialized_views_test.TestMaterializedViews.view_tombstone_test

2016-11-09 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-12865:

Issue Type: Bug  (was: Test)

> dtest failure in 
> materialized_views_test.TestMaterializedViews.view_tombstone_test
> --
>
> Key: CASSANDRA-12865
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12865
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>  Labels: dtest, test-failure
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.0_dtest/844/testReport/materialized_views_test/TestMaterializedViews/view_tombstone_test
> {code}
> Error Message
> Encountered digest mismatch when we shouldn't
> {code}{code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/materialized_views_test.py", line 
> 971, in view_tombstone_test
> self.check_trace_events(result.get_query_trace(), False)
>   File "/home/automaton/cassandra-dtest/materialized_views_test.py", line 
> 995, in check_trace_events
> self.fail("Encountered digest mismatch when we shouldn't")
>   File "/usr/lib/python2.7/unittest/case.py", line 410, in fail
> raise self.failureException(msg)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12865) dtest failure in materialized_views_test.TestMaterializedViews.view_tombstone_test

2016-11-09 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-12865:

Assignee: (was: DS Test Eng)

> dtest failure in 
> materialized_views_test.TestMaterializedViews.view_tombstone_test
> --
>
> Key: CASSANDRA-12865
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12865
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>  Labels: dtest, test-failure
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.0_dtest/844/testReport/materialized_views_test/TestMaterializedViews/view_tombstone_test
> {code}
> Error Message
> Encountered digest mismatch when we shouldn't
> {code}{code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/materialized_views_test.py", line 
> 971, in view_tombstone_test
> self.check_trace_events(result.get_query_trace(), False)
>   File "/home/automaton/cassandra-dtest/materialized_views_test.py", line 
> 995, in check_trace_events
> self.fail("Encountered digest mismatch when we shouldn't")
>   File "/usr/lib/python2.7/unittest/case.py", line 410, in fail
> raise self.failureException(msg)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12865) dtest failure in materialized_views_test.TestMaterializedViews.view_tombstone_test

2016-11-09 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15651893#comment-15651893
 ] 

Philip Thompson commented on CASSANDRA-12865:
-

I don't see any problems with the test, so moving to the bug queue.

> dtest failure in 
> materialized_views_test.TestMaterializedViews.view_tombstone_test
> --
>
> Key: CASSANDRA-12865
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12865
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>  Labels: dtest, test-failure
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.0_dtest/844/testReport/materialized_views_test/TestMaterializedViews/view_tombstone_test
> {code}
> Error Message
> Encountered digest mismatch when we shouldn't
> {code}{code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/materialized_views_test.py", line 
> 971, in view_tombstone_test
> self.check_trace_events(result.get_query_trace(), False)
>   File "/home/automaton/cassandra-dtest/materialized_views_test.py", line 
> 995, in check_trace_events
> self.fail("Encountered digest mismatch when we shouldn't")
>   File "/usr/lib/python2.7/unittest/case.py", line 410, in fail
> raise self.failureException(msg)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12273) Casandra stess graph: option to create directory for graph if it doesn't exist

2016-11-09 Thread Joel Knighton (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Knighton updated CASSANDRA-12273:
--
Status: Open  (was: Patch Available)

> Casandra stess graph: option to create directory for graph if it doesn't exist
> --
>
> Key: CASSANDRA-12273
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12273
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Christopher Batey
>Assignee: Murukesh Mohanan
>Priority: Minor
>  Labels: lhf
> Attachments: 12273.patch
>
>
> I am running it in CI with ephemeral workspace  / build dirs. It would be 
> nice if CS would create the directory so my build tool doesn't have to



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12273) Casandra stress graph: option to create directory for graph if it doesn't exist

2016-11-09 Thread Joel Knighton (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Knighton updated CASSANDRA-12273:
--
Summary: Casandra stress graph: option to create directory for graph if it 
doesn't exist  (was: Casandra stess graph: option to create directory for graph 
if it doesn't exist)

> Casandra stress graph: option to create directory for graph if it doesn't 
> exist
> ---
>
> Key: CASSANDRA-12273
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12273
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Christopher Batey
>Assignee: Murukesh Mohanan
>Priority: Minor
>  Labels: lhf
> Attachments: 12273.patch
>
>
> I am running it in CI with ephemeral workspace  / build dirs. It would be 
> nice if CS would create the directory so my build tool doesn't have to



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12273) Casandra stess graph: option to create directory for graph if it doesn't exist

2016-11-09 Thread Joel Knighton (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Knighton updated CASSANDRA-12273:
--
Status: Awaiting Feedback  (was: Open)

> Casandra stess graph: option to create directory for graph if it doesn't exist
> --
>
> Key: CASSANDRA-12273
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12273
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Christopher Batey
>Assignee: Murukesh Mohanan
>Priority: Minor
>  Labels: lhf
> Attachments: 12273.patch
>
>
> I am running it in CI with ephemeral workspace  / build dirs. It would be 
> nice if CS would create the directory so my build tool doesn't have to



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12273) Casandra stess graph: option to create directory for graph if it doesn't exist

2016-11-09 Thread Joel Knighton (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15651814#comment-15651814
 ] 

Joel Knighton commented on CASSANDRA-12273:
---

Thanks for the patch [~muru]! Your approach looks sound.

A very similar issue exists on trunk with hdrfile logging if an {{hdrfile}} is 
specified in {{SettingsLog.java}}. If you're interested, I think it makes a lot 
of sense to also fix that problem as part of this ticket, as people affected by 
this issue will likely also be affected by the fact that hdrfile paths do not 
have their directory created. I also think it makes sense to canonicalize the 
path before {{Files.createDirectories}}, since this would avoid needing to 
special-case symlinks. This could be done by using {{getCanonicalPath}} instead 
of {{toURI}}.
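
For illustration, a minimal sketch of that approach (the class and method names 
here are placeholders, not the submitted patch): canonicalize the target path 
first so symlinked parents need no special-casing, then create any missing 
parent directories before the file is written.

{code}
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

public final class EnsureOutputDir
{
    // Resolve symlinks and relative segments up front, then make sure the
    // directory that will hold the output file exists.
    static File ensureParentDirs(String path) throws IOException
    {
        File target = new File(path).getCanonicalFile();
        File parent = target.getParentFile();
        if (parent != null)
            Files.createDirectories(parent.toPath());
        return target;
    }
}
{code}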

For future patches, it is easier to accept contributions if they include a 
CHANGES.txt entry and an appropriately formatted commit message in a patch 
created with {{git format-patch}}. The details on this are available in the 
[docs|http://cassandra.apache.org/doc/latest/development/patches.html].

If you aren't interested in updating the patch with these changes, I still 
think it is worth merging; in that case I will update this issue with an 
appropriately formatted commit and approve it after CI.

> Casandra stess graph: option to create directory for graph if it doesn't exist
> --
>
> Key: CASSANDRA-12273
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12273
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Christopher Batey
>Assignee: Murukesh Mohanan
>Priority: Minor
>  Labels: lhf
> Attachments: 12273.patch
>
>
> I am running it in CI with ephemeral workspace  / build dirs. It would be 
> nice if CS would create the directory so my build tool doesn't have to



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12874) dtest failure in materialized_views_test.TestMaterializedViews.populate_mv_after_insert_wide_rows_test

2016-11-09 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-12874:

Resolution: Fixed
  Reviewer: Sean McCarthy
Status: Resolved  (was: Patch Available)

> dtest failure in 
> materialized_views_test.TestMaterializedViews.populate_mv_after_insert_wide_rows_test
> --
>
> Key: CASSANDRA-12874
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12874
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: Philip Thompson
>  Labels: dtest, test-failure
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.0_novnode_dtest/316/testReport/materialized_views_test/TestMaterializedViews/populate_mv_after_insert_wide_rows_test
> {code}
> Error Message
> Expected [[0, 0]] from SELECT * FROM t_by_v WHERE id = 0 AND v = 0, but got []
> {code}{code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/materialized_views_test.py", line 
> 211, in populate_mv_after_insert_wide_rows_test
> assert_one(session, "SELECT * FROM t_by_v WHERE id = {} AND v = 
> {}".format(i, j), [j, i])
>   File "/home/automaton/cassandra-dtest/tools/assertions.py", line 130, in 
> assert_one
> assert list_res == [expected], "Expected {} from {}, but got 
> {}".format([expected], query, list_res)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12895) dtest failure in disk_balance_test.TestDiskBalance.disk_balance_stress_test

2016-11-09 Thread Michael Shuler (JIRA)
Michael Shuler created CASSANDRA-12895:
--

 Summary: dtest failure in 
disk_balance_test.TestDiskBalance.disk_balance_stress_test
 Key: CASSANDRA-12895
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12895
 Project: Cassandra
  Issue Type: Test
Reporter: Michael Shuler
Assignee: DS Test Eng


example failure:

http://cassci.datastax.com/job/trunk_dtest/1418/testReport/disk_balance_test/TestDiskBalance/disk_balance_stress_test

{noformat}
Error Message

'float' object has no attribute '2f'
 >> begin captured logging << 
dtest: DEBUG: cluster ccm directory: /tmp/dtest-lxr8Vr
dtest: DEBUG: Done setting configuration options:
{   'initial_token': None,
'num_tokens': '32',
'phi_convict_threshold': 5,
'range_request_timeout_in_ms': 1,
'read_request_timeout_in_ms': 1,
'request_timeout_in_ms': 1,
'truncate_request_timeout_in_ms': 1,
'write_request_timeout_in_ms': 1}
- >> end captured logging << -
Stacktrace

  File "/usr/lib/python2.7/unittest/case.py", line 329, in run
testMethod()
  File "/home/automaton/cassandra-dtest/disk_balance_test.py", line 31, in 
disk_balance_stress_test
self.assert_balanced(node)
  File "/home/automaton/cassandra-dtest/disk_balance_test.py", line 120, in 
assert_balanced
assert_almost_equal(*sums, error=0.1, error_message=node.name)
  File "/home/automaton/cassandra-dtest/tools/assertions.py", line 187, in 
assert_almost_equal
assert vmin > vmax * (1.0 - error) or vmin == vmax, "values not within 
{.2f}% of the max: {} ({})".format(error * 100, args, error_message)
"'float' object has no attribute '2f'\n >> begin captured 
logging << \ndtest: DEBUG: cluster ccm directory: 
/tmp/dtest-lxr8Vr\ndtest: DEBUG: Done setting configuration options:\n{   
'initial_token': None,\n'num_tokens': '32',\n'phi_convict_threshold': 
5,\n'range_request_timeout_in_ms': 1,\n
'read_request_timeout_in_ms': 1,\n'request_timeout_in_ms': 1,\n
'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
1}\n- >> end captured logging << -"
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12874) dtest failure in materialized_views_test.TestMaterializedViews.populate_mv_after_insert_wide_rows_test

2016-11-09 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-12874:

Status: Patch Available  (was: In Progress)

https://github.com/riptano/cassandra-dtest/pull/1379

> dtest failure in 
> materialized_views_test.TestMaterializedViews.populate_mv_after_insert_wide_rows_test
> --
>
> Key: CASSANDRA-12874
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12874
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: Philip Thompson
>  Labels: dtest, test-failure
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.0_novnode_dtest/316/testReport/materialized_views_test/TestMaterializedViews/populate_mv_after_insert_wide_rows_test
> {code}
> Error Message
> Expected [[0, 0]] from SELECT * FROM t_by_v WHERE id = 0 AND v = 0, but got []
> {code}{code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/materialized_views_test.py", line 
> 211, in populate_mv_after_insert_wide_rows_test
> assert_one(session, "SELECT * FROM t_by_v WHERE id = {} AND v = 
> {}".format(i, j), [j, i])
>   File "/home/automaton/cassandra-dtest/tools/assertions.py", line 130, in 
> assert_one
> assert list_res == [expected], "Expected {} from {}, but got 
> {}".format([expected], query, list_res)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-12874) dtest failure in materialized_views_test.TestMaterializedViews.populate_mv_after_insert_wide_rows_test

2016-11-09 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson reassigned CASSANDRA-12874:
---

Assignee: Philip Thompson  (was: DS Test Eng)

> dtest failure in 
> materialized_views_test.TestMaterializedViews.populate_mv_after_insert_wide_rows_test
> --
>
> Key: CASSANDRA-12874
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12874
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: Philip Thompson
>  Labels: dtest, test-failure
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.0_novnode_dtest/316/testReport/materialized_views_test/TestMaterializedViews/populate_mv_after_insert_wide_rows_test
> {code}
> Error Message
> Expected [[0, 0]] from SELECT * FROM t_by_v WHERE id = 0 AND v = 0, but got []
> {code}{code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/materialized_views_test.py", line 
> 211, in populate_mv_after_insert_wide_rows_test
> assert_one(session, "SELECT * FROM t_by_v WHERE id = {} AND v = 
> {}".format(i, j), [j, i])
>   File "/home/automaton/cassandra-dtest/tools/assertions.py", line 130, in 
> assert_one
> assert list_res == [expected], "Expected {} from {}, but got 
> {}".format([expected], query, list_res)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12894) testall failure in org.apache.cassandra.hints.HintsBufferPoolTest.testBackpressure-compression

2016-11-09 Thread Michael Shuler (JIRA)
Michael Shuler created CASSANDRA-12894:
--

 Summary: testall failure in 
org.apache.cassandra.hints.HintsBufferPoolTest.testBackpressure-compression
 Key: CASSANDRA-12894
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12894
 Project: Cassandra
  Issue Type: Test
Reporter: Michael Shuler


example failure:

http://cassci.datastax.com/job/cassandra-3.0_testall/720/testReport/org.apache.cassandra.hints/HintsBufferPoolTest/testBackpressure_compression

{noformat}
Error Message

null
Stacktrace

junit.framework.AssertionFailedError: null
at 
org.apache.cassandra.hints.HintsBufferPoolTest.testBackpressure(HintsBufferPoolTest.java:73)
at 
org.jboss.byteman.contrib.bmunit.BMUnitRunner$10.evaluate(BMUnitRunner.java:371)
at 
org.jboss.byteman.contrib.bmunit.BMUnitRunner$6.evaluate(BMUnitRunner.java:241)
at 
org.jboss.byteman.contrib.bmunit.BMUnitRunner$1.evaluate(BMUnitRunner.java:75)
Standard Output

ERROR 16:26:27 SLF4J: stderr
INFO  16:26:27 Configuration location: 
file:/home/automaton/cassandra/test/conf/cassandra.yaml
INFO  16:26:27 Node configuration:[allocate_tokens_for_keyspace=null; 
authenticator=null; authorizer=null; auto_bootstrap=true; auto_snapshot=true; 
batch_size_fail_threshold_in_kb=50; batch_size_warn_threshold_in_kb=5; 
batchlog_replay_throttle_in_kb=1024; broadcast_address=null; 
broadcast_rpc_address=null; buffer_pool_use_heap_if_exhausted=true; 
cas_contention_timeout_in_ms
...[truncated 8324 chars]...
NA OS native malloc/free
INFO  16:26:30 Initializing counter cache with capacity of 6 MBs
INFO  16:26:30 Scheduling counter cache save to every 7200 seconds (going to 
save all keys).
INFO  16:26:31 Global buffer pool is enabled, when pool is exahusted (max is 
512 mb) it will allocate on heap
INFO  16:26:31 Initializing hints_buffer_test.table
INFO  16:26:31 byteman jar is 
/home/automaton/cassandra/build/lib/jars/byteman-3.0.3.jar
INFO  16:26:31 Setting org.jboss.byteman.allow.config.update=true
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12893) testall failure in org.apache.cassandra.db.commitlog.CommitLogSegmentManagerTest.testCompressedCommitLogBackpressure

2016-11-09 Thread Michael Shuler (JIRA)
Michael Shuler created CASSANDRA-12893:
--

 Summary: testall failure in 
org.apache.cassandra.db.commitlog.CommitLogSegmentManagerTest.testCompressedCommitLogBackpressure
 Key: CASSANDRA-12893
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12893
 Project: Cassandra
  Issue Type: Test
Reporter: Michael Shuler


example failure:

http://cassci.datastax.com/job/cassandra-3.0_testall/720/testReport/org.apache.cassandra.db.commitlog/CommitLogSegmentManagerTest/testCompressedCommitLogBackpressure

{noformat}
Error Message

Timeout occurred. Please note the time in the report does not reflect the time 
until the timeout.
Stacktrace

junit.framework.AssertionFailedError: Timeout occurred. Please note the time in 
the report does not reflect the time until the timeout.
at java.lang.Thread.run(Thread.java:745)
Standard Output

ERROR 16:23:51 SLF4J: stderr
INFO  16:23:52 Configuration location: 
file:/home/automaton/cassandra/test/conf/cassandra.yaml
INFO  16:23:52 Node configuration:[allocate_tokens_for_keyspace=null; 
authenticator=null; authorizer=null; auto_bootstrap=true; auto_snapshot=true; 
batch_size_fail_threshold_in_kb=50; batch_size_warn_threshold_in_kb=5; 
batchlog_replay_throttle_in_kb=1024; broadcast_address=null; 
broadcast_rpc_address=null; buffer_pool_use_heap_if_exhausted=true; 
cas_contention_timeout_in_ms
...[truncated 9661 chars]...
duling counter cache save to every 7200 seconds (going to save all keys).
INFO  16:23:55 Global buffer pool is enabled, when pool is exahusted (max is 
512 mb) it will allocate on heap
INFO  16:23:55 Initializing CommitLogTest.Standard1
INFO  16:23:55 Initializing CommitLogTest.Standard2
INFO  16:23:55 byteman jar is 
/home/automaton/cassandra/build/lib/jars/byteman-3.0.3.jar
INFO  16:23:56 Setting org.jboss.byteman.allow.config.update=true
INFO  16:23:56 No commitlog files found; skipping replay
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12892) testall failure in org.apache.cassandra.io.sstable.IndexSummaryManagerTest.testCancelIndex

2016-11-09 Thread Michael Shuler (JIRA)
Michael Shuler created CASSANDRA-12892:
--

 Summary: testall failure in 
org.apache.cassandra.io.sstable.IndexSummaryManagerTest.testCancelIndex
 Key: CASSANDRA-12892
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12892
 Project: Cassandra
  Issue Type: Test
Reporter: Michael Shuler


example failure:

http://cassci.datastax.com/job/cassandra-2.2_testall/602/testReport/org.apache.cassandra.io.sstable/IndexSummaryManagerTest/testCancelIndex

{noformat}
Error Message

Expected compaction interrupted exception
Stacktrace

junit.framework.AssertionFailedError: Expected compaction interrupted exception
at 
org.apache.cassandra.io.sstable.IndexSummaryManagerTest.testCancelIndex(IndexSummaryManagerTest.java:641)
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12890) testall failure in org.apache.cassandra.db.ColumnFamilyStoreTest.testSliceByNamesCommandOldMetadata

2016-11-09 Thread Michael Shuler (JIRA)
Michael Shuler created CASSANDRA-12890:
--

 Summary: testall failure in 
org.apache.cassandra.db.ColumnFamilyStoreTest.testSliceByNamesCommandOldMetadata
 Key: CASSANDRA-12890
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12890
 Project: Cassandra
  Issue Type: Test
Reporter: Michael Shuler
Assignee: DS Test Eng


This failed in both 'test' and 'test-compression' targets.

example failure:

http://cassci.datastax.com/job/cassandra-2.2_testall/602/testReport/org.apache.cassandra.db/ColumnFamilyStoreTest/testSliceByNamesCommandOldMetadata
http://cassci.datastax.com/job/cassandra-2.2_testall/602/testReport/org.apache.cassandra.db/ColumnFamilyStoreTest/testSliceByNamesCommandOldMetadata_compression/


{noformat}
Stacktrace

junit.framework.AssertionFailedError
at 
org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:171)
at 
org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:166)
at 
org.apache.cassandra.io.sstable.format.SSTableWriter.rename(SSTableWriter.java:266)
at 
org.apache.cassandra.db.ColumnFamilyStore.loadNewSSTables(ColumnFamilyStore.java:791)
at 
org.apache.cassandra.db.ColumnFamilyStoreTest.testSliceByNamesCommandOldMetadata(ColumnFamilyStoreTest.java:1158)
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12891) testall failure in org.apache.cassandra.db.compaction.NeverPurgeTest.minorNeverPurgeTombstonesTest-compression

2016-11-09 Thread Michael Shuler (JIRA)
Michael Shuler created CASSANDRA-12891:
--

 Summary: testall failure in 
org.apache.cassandra.db.compaction.NeverPurgeTest.minorNeverPurgeTombstonesTest-compression
 Key: CASSANDRA-12891
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12891
 Project: Cassandra
  Issue Type: Test
Reporter: Michael Shuler


example failure:

http://cassci.datastax.com/job/cassandra-2.2_testall/602/testReport/org.apache.cassandra.db.compaction/NeverPurgeTest/minorNeverPurgeTombstonesTest_compression

{noformat}
Error Message

Memory was freed by Thread[NonPeriodicTasks:1,5,main]
Stacktrace

junit.framework.AssertionFailedError: Memory was freed by 
Thread[NonPeriodicTasks:1,5,main]
at 
org.apache.cassandra.io.util.SafeMemory.checkBounds(SafeMemory.java:103)
at org.apache.cassandra.io.util.Memory.getLong(Memory.java:260)
at 
org.apache.cassandra.io.compress.CompressionMetadata.chunkFor(CompressionMetadata.java:223)
at 
org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBufferMmap(CompressedRandomAccessReader.java:168)
at 
org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBuffer(CompressedRandomAccessReader.java:226)
at 
org.apache.cassandra.io.util.RandomAccessReader.read(RandomAccessReader.java:303)
at 
org.apache.cassandra.io.util.AbstractDataInput.readInt(AbstractDataInput.java:202)
at 
org.apache.cassandra.io.util.AbstractDataInput.readLong(AbstractDataInput.java:264)
at 
org.apache.cassandra.db.ColumnSerializer.deserializeColumnBody(ColumnSerializer.java:131)
at 
org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:92)
at 
org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:52)
at 
org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:46)
at 
com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
at 
com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
at 
org.apache.cassandra.io.sstable.SSTableIdentityIterator.hasNext(SSTableIdentityIterator.java:169)
at 
org.apache.cassandra.db.compaction.NeverPurgeTest.verifyContainsTombstones(NeverPurgeTest.java:114)
at 
org.apache.cassandra.db.compaction.NeverPurgeTest.minorNeverPurgeTombstonesTest(NeverPurgeTest.java:85)
Standard Output

WARN  20:06:47 You are running with -Dcassandra.never_purge_tombstones=true, 
this is dangerous!
WARN  20:06:47 You are running with -Dcassandra.never_purge_tombstones=true, 
this is dangerous!
WARN  20:06:49 You are running with -Dcassandra.never_purge_tombstones=true, 
this is dangerous!
WARN  20:06:49 You are running with -Dcassandra.never_purge_tombstones=true, 
this is dangerous!
WARN  20:06:49 You are running with -Dcassandra.never_purge_tombstones=true, 
this is dangerous!
WARN  20:06:49 You a
...[truncated 2456 chars]...
 this is dangerous!
WARN  20:06:56 You are running with -Dcassandra.never_purge_tombstones=true, 
this is dangerous!
WARN  20:06:56 You are running with -Dcassandra.never_purge_tombstones=true, 
this is dangerous!
WARN  20:06:56 You are running with -Dcassandra.never_purge_tombstones=true, 
this is dangerous!
WARN  20:06:56 You are running with -Dcassandra.never_purge_tombstones=true, 
this is dangerous!
WARN  20:06:56 You are running with -Dcassandra.never_purge_tombstones=true, 
this is dangerous!
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12890) testall failure in org.apache.cassandra.db.ColumnFamilyStoreTest.testSliceByNamesCommandOldMetadata

2016-11-09 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-12890:

Assignee: (was: DS Test Eng)

> testall failure in 
> org.apache.cassandra.db.ColumnFamilyStoreTest.testSliceByNamesCommandOldMetadata
> ---
>
> Key: CASSANDRA-12890
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12890
> Project: Cassandra
>  Issue Type: Test
>Reporter: Michael Shuler
>  Labels: test-failure
>
> This failed in both 'test' and 'test-compression' targets.
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_testall/602/testReport/org.apache.cassandra.db/ColumnFamilyStoreTest/testSliceByNamesCommandOldMetadata
> http://cassci.datastax.com/job/cassandra-2.2_testall/602/testReport/org.apache.cassandra.db/ColumnFamilyStoreTest/testSliceByNamesCommandOldMetadata_compression/
> {noformat}
> Stacktrace
> junit.framework.AssertionFailedError
>   at 
> org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:171)
>   at 
> org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:166)
>   at 
> org.apache.cassandra.io.sstable.format.SSTableWriter.rename(SSTableWriter.java:266)
>   at 
> org.apache.cassandra.db.ColumnFamilyStore.loadNewSSTables(ColumnFamilyStore.java:791)
>   at 
> org.apache.cassandra.db.ColumnFamilyStoreTest.testSliceByNamesCommandOldMetadata(ColumnFamilyStoreTest.java:1158)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12296) Better error message when rebuilding

2016-11-09 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-12296:

Summary: Better error message when rebuilding  (was: system_auth can't be 
rebuilt by default)

> Better error message when rebuilding
> 
>
> Key: CASSANDRA-12296
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12296
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jim Witschey
>Assignee: Kurt Greaves
>Priority: Minor
>  Labels: lhf
> Fix For: 3.0.10, 3.10
>
> Attachments: 12296-3.0.patch, 12296.patch
>
>
> This came up in discussion of CASSANDRA-11687. {{nodetool rebuild}} was 
> failing in a dtest. [~pauloricardomg] explained:
> bq. before [CASSANDRA-11848] the local node could be considered a source, 
> while now sources are restricted only to dc2, so since {{system_auth}} uses 
> {{SimpleStrategy}} depending on the token arrangement there could or not be 
> sources from dc2. Fix is to either use 
> {{-Dcassandra.consistent.rangemovement=false}} or update {{system_auth}} to 
> use {{NetworkTopologyStrategy}} with 2 dcs..
> This is, at the very least, a UX bug. When {{rebuild}} fails, it fails with
> {code}
> nodetool: Unable to find sufficient sources for streaming range 
> (-3287869951390391138,-1624006824486474209] in keyspace system_auth with 
> RF=1.If you want to ignore this, consider using system property 
> -Dcassandra.consistent.rangemovement=false.
> {code}
> which suggests that a user should give up consistency guarantees when it's 
> not necessary.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12296) Better error message when streaming with insufficient sources in DC

2016-11-09 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-12296:

Summary: Better error message when streaming with insufficient sources in 
DC  (was: Better error message when rebuilding)

> Better error message when streaming with insufficient sources in DC
> ---
>
> Key: CASSANDRA-12296
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12296
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jim Witschey
>Assignee: Kurt Greaves
>Priority: Minor
>  Labels: lhf
> Fix For: 3.0.10, 3.10
>
> Attachments: 12296-3.0.patch, 12296.patch
>
>
> This came up in discussion of CASSANDRA-11687. {{nodetool rebuild}} was 
> failing in a dtest. [~pauloricardomg] explained:
> bq. before [CASSANDRA-11848] the local node could be considered a source, 
> while now sources are restricted only to dc2, so since {{system_auth}} uses 
> {{SimpleStrategy}} depending on the token arrangement there could or not be 
> sources from dc2. Fix is to either use 
> {{-Dcassandra.consistent.rangemovement=false}} or update {{system_auth}} to 
> use {{NetworkTopologyStrategy}} with 2 dcs..
> This is, at the very least, a UX bug. When {{rebuild}} fails, it fails with
> {code}
> nodetool: Unable to find sufficient sources for streaming range 
> (-3287869951390391138,-1624006824486474209] in keyspace system_auth with 
> RF=1.If you want to ignore this, consider using system property 
> -Dcassandra.consistent.rangemovement=false.
> {code}
> which suggests that a user should give up consistency guarantees when it's 
> not necessary.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12889) Pass root cause to CorruptBlockException when uncompression failed

2016-11-09 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-12889:

Reviewer: Paulo Motta

> Pass root cause to CorruptBlockException when uncompression failed
> --
>
> Key: CASSANDRA-12889
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12889
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths
>Reporter: Yuki Morishita
>Assignee: Yuki Morishita
>Priority: Trivial
> Fix For: 3.0.x, 3.x
>
>
> When reading a compressed SSTable fails, CorruptBlockException is thrown 
> without the root cause. It would be nice to have when investigating 
> uncompression errors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12889) Pass root cause to CorruptBlockException when uncompression failed

2016-11-09 Thread Yuki Morishita (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Morishita updated CASSANDRA-12889:
---
Status: Patch Available  (was: Open)

> Pass root cause to CorruptBlockException when uncompression failed
> --
>
> Key: CASSANDRA-12889
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12889
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths
>Reporter: Yuki Morishita
>Assignee: Yuki Morishita
>Priority: Trivial
> Fix For: 3.0.x, 3.x
>
>
> When reading a compressed SSTable fails, CorruptBlockException is thrown 
> without the root cause. It would be nice to have when investigating 
> uncompression errors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12889) Pass root cause to CorruptBlockException when uncompression failed

2016-11-09 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15651573#comment-15651573
 ] 

Yuki Morishita commented on CASSANDRA-12889:


Trivial patch attached. Tests are running.

||branch||testall||dtest||
|[corrupt-block-exception-3.0|https://github.com/yukim/cassandra/tree/corrupt-block-exception-3.0]|[testall|http://cassci.datastax.com/view/Dev/view/yukim/job/yukim-corrupt-block-exception-3.0-testall/lastCompletedBuild/testReport/]|[dtest|http://cassci.datastax.com/view/Dev/view/yukim/job/yukim-corrupt-block-exception-3.0-dtest/lastCompletedBuild/testReport/]|
|[corrupt-block-exception-3.X|https://github.com/yukim/cassandra/tree/corrupt-block-exception-3.X]|[testall|http://cassci.datastax.com/view/Dev/view/yukim/job/yukim-corrupt-block-exception-3.X-testall/lastCompletedBuild/testReport/]|[dtest|http://cassci.datastax.com/view/Dev/view/yukim/job/yukim-corrupt-block-exception-3.X-dtest/lastCompletedBuild/testReport/]|


> Pass root cause to CorruptBlockException when uncompression failed
> --
>
> Key: CASSANDRA-12889
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12889
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths
>Reporter: Yuki Morishita
>Assignee: Yuki Morishita
>Priority: Trivial
> Fix For: 3.0.x, 3.x
>
>
> When reading a compressed SSTable fails, CorruptBlockException is thrown 
> without the root cause. It would be nice to have when investigating 
> uncompression errors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12889) Pass root cause to CorruptBlockException when uncompression failed

2016-11-09 Thread Yuki Morishita (JIRA)
Yuki Morishita created CASSANDRA-12889:
--

 Summary: Pass root cause to CorruptBlockException when 
uncompression failed
 Key: CASSANDRA-12889
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12889
 Project: Cassandra
  Issue Type: Improvement
  Components: Local Write-Read Paths
Reporter: Yuki Morishita
Assignee: Yuki Morishita
Priority: Trivial
 Fix For: 3.0.x, 3.x


When reading a compressed SSTable fails, CorruptBlockException is thrown without 
the root cause. It would be nice to have when investigating uncompression errors.
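
For context, a self-contained illustration of the idea (all names below are 
placeholders, not the attached patch): rethrowing with the original exception 
as the cause keeps the reason for the uncompression failure visible in the 
stack trace.

{code}
import java.io.IOException;

final class CorruptBlockExample
{
    static class CorruptBlockException extends IOException
    {
        CorruptBlockException(String path, Throwable cause)
        {
            super("Corrupted block in " + path, cause); // root cause preserved
        }
    }

    static void uncompressBlock(String path, byte[] block) throws CorruptBlockException
    {
        try
        {
            decode(block);
        }
        catch (IOException e)
        {
            // Previously the equivalent of a constructor without a cause was
            // used here, dropping e and hiding why decoding failed.
            throw new CorruptBlockException(path, e);
        }
    }

    // Stand-in for the real decompression call.
    private static void decode(byte[] block) throws IOException
    {
        if (block.length == 0)
            throw new IOException("empty block");
    }
}
{code}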



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12651) Failure in SecondaryIndexTest.testAllowFilteringOnPartitionKeyWithSecondaryIndex

2016-11-09 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15651477#comment-15651477
 ] 

Sam Tunnicliffe commented on CASSANDRA-12651:
-

I'm not convinced that the issue is a name clash. Where we've previously seen 
name clashes (CASSANDRA-12834 for instance, which is the same failure as 
reported in the 2 supplementary CI runs mentioned above), the problem is reuse 
of an identifier by different tests within a fixture. In that case, the async 
cleanup in the teardown races with a CREATE statement. Here, though, the index 
name is not reused anywhere; and even if it were, the CREATE INDEX has no IF 
NOT EXISTS, so we'd expect that request to be rejected rather than returning 
spurious query results. 

What seems odd to me is that in all of the SecondaryIndexTest failures above, 
the invalid results are returned only after flush. That is, the test does: 
{code}
run query X;
flush;
run query X;
{code}
So on each occasion, the exact same query executes and returns the expected 
results almost immediately prior to the failure. I accept that 7 test runs is a 
small sample, but to me it points towards something flush-related. 
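
To make that concrete, a hypothetical sketch of the failing pattern 
(illustrative only; the schema, values and class name below are made up, and 
the real check lives in SecondaryIndexTest using CQLTester's helpers): the same 
assertion runs once against the memtable and once after a flush, and only the 
post-flush run returns the wrong rows.

{code}
import org.junit.Test;

import org.apache.cassandra.cql3.CQLTester;

public class AllowFilteringReproSketch extends CQLTester
{
    @Test
    public void sameQueryBeforeAndAfterFlush() throws Throwable
    {
        createTable("CREATE TABLE %s (pk1 int, pk2 int, c int, v int, PRIMARY KEY ((pk1, pk2), c))");
        createIndex("CREATE INDEX ON %s (v)");
        execute("INSERT INTO %s (pk1, pk2, c, v) VALUES (1, 1, 1, 3)");
        execute("INSERT INTO %s (pk1, pk2, c, v) VALUES (1, 1, 2, 3)");

        // Runs the check against the memtable, flushes, then runs it again
        // against the resulting sstable; the reported failures happen only
        // on the second run.
        beforeAndAfterFlush(() ->
            assertRows(execute("SELECT * FROM %s WHERE pk1 = 1 AND v = 3 ALLOW FILTERING"),
                       row(1, 1, 1, 3),
                       row(1, 1, 2, 3)));
    }
}
{code}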

I've added some logging to try to pin down the cause a bit better, but as 
[~ifesdjeen] noted, this is pretty damn hard to repro on demand, so I've been 
running the test in pretty much a constant loop, but so far I've had no luck. 
I'll update if/when I see a failure.

> Failure in 
> SecondaryIndexTest.testAllowFilteringOnPartitionKeyWithSecondaryIndex
> 
>
> Key: CASSANDRA-12651
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12651
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Joel Knighton
>Assignee: Alex Petrov
>  Labels: test-failure
>
> This has failed with/without compression.
> Stacktrace:
> {code}
> junit.framework.AssertionFailedError: Got less rows than expected. Expected 2 
> but got 0
>   at org.apache.cassandra.cql3.CQLTester.assertRows(CQLTester.java:909)
>   at 
> org.apache.cassandra.cql3.validation.entities.SecondaryIndexTest.lambda$testAllowFilteringOnPartitionKeyWithSecondaryIndex$78(SecondaryIndexTest.java:1228)
>   at 
> org.apache.cassandra.cql3.validation.entities.SecondaryIndexTest$$Lambda$293/218688965.apply(Unknown
>  Source)
>   at 
> org.apache.cassandra.cql3.CQLTester.beforeAndAfterFlush(CQLTester.java:1215)
>   at 
> org.apache.cassandra.cql3.validation.entities.SecondaryIndexTest.testAllowFilteringOnPartitionKeyWithSecondaryIndex(SecondaryIndexTest.java:1218)
> {code}
> Examples:
> http://cassci.datastax.com/job/trunk_testall/1176/testReport/org.apache.cassandra.cql3.validation.entities/SecondaryIndexTest/testAllowFilteringOnPartitionKeyWithSecondaryIndex/
> http://cassci.datastax.com/job/trunk_testall/1176/testReport/org.apache.cassandra.cql3.validation.entities/SecondaryIndexTest/testAllowFilteringOnPartitionKeyWithSecondaryIndex_compression/
> http://cassci.datastax.com/job/trunk_testall/1219/testReport/org.apache.cassandra.cql3.validation.entities/SecondaryIndexTest/testAllowFilteringOnPartitionKeyWithSecondaryIndex/
> http://cassci.datastax.com/job/trunk_testall/1216/testReport/org.apache.cassandra.cql3.validation.entities/SecondaryIndexTest/testAllowFilteringOnPartitionKeyWithSecondaryIndex/
> http://cassci.datastax.com/job/trunk_testall/1208/testReport/org.apache.cassandra.cql3.validation.entities/SecondaryIndexTest/testAllowFilteringOnPartitionKeyWithSecondaryIndex/
> http://cassci.datastax.com/job/trunk_testall/1176/testReport/org.apache.cassandra.cql3.validation.entities/SecondaryIndexTest/testAllowFilteringOnPartitionKeyWithSecondaryIndex/
> http://cassci.datastax.com/job/trunk_testall/1175/testReport/org.apache.cassandra.cql3.validation.entities/SecondaryIndexTest/testAllowFilteringOnPartitionKeyWithSecondaryIndex/
> May or may not be related, but there's a test failure (index duplicate):
> http://cassci.datastax.com/view/Dev/view/carlyeks/job/carlyeks-ticket-11803-3.X-testall/lastCompletedBuild/testReport/org.apache.cassandra.index.internal/CassandraIndexTest/indexOnFirstClusteringColumn_compression/
> http://cassci.datastax.com/job/ifesdjeen-11803-test-fix-trunk-testall/1/testReport/junit/org.apache.cassandra.index.internal/CassandraIndexTest/indexOnFirstClusteringColumn_compression/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12876) Negative mean write latency

2016-11-09 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15651382#comment-15651382
 ] 

Kévin LOVATO commented on CASSANDRA-12876:
--

I looked at the code as well and can't see an obvious way to get negative 
values in decayingBuckets, even after a rescale(). Also, I noticed that the 
metric stays negative for about 3 minutes and dips below zero multiple times 
during that period (cf. [^negative_mean_details.PNG]), whereas I would expect 
only a single dip below zero if it were caused by the rescale. Hope these 
details point you in the right direction.

> Negative mean write latency
> ---
>
> Key: CASSANDRA-12876
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12876
> Project: Cassandra
>  Issue Type: Bug
>  Components: Observability
>Reporter: Kévin LOVATO
>Assignee: Per Otterström
> Attachments: negative_mean.png, negative_mean_details.PNG, 
> negative_mean_periodicity.PNG
>
>
> The mean write latency returned by JMX turns negative every 30 minutes. As 
> the attached screenshots show, the value turns negative every 30 minutes 
> after the startup of the node.
> We did not experience this behavior in 2.1.16.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12876) Negative mean write latency

2016-11-09 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kévin LOVATO updated CASSANDRA-12876:
-
Attachment: negative_mean_details.PNG

A zoomed-in view of the 30-minute disturbance.

> Negative mean write latency
> ---
>
> Key: CASSANDRA-12876
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12876
> Project: Cassandra
>  Issue Type: Bug
>  Components: Observability
>Reporter: Kévin LOVATO
>Assignee: Per Otterström
> Attachments: negative_mean.png, negative_mean_details.PNG, 
> negative_mean_periodicity.PNG
>
>
> The mean write latency returned by JMX turns negative every 30 minutes. As 
> the attached screenshots show, the value turns negative every 30 minutes 
> after the startup of the node.
> We did not experience this behavior in 2.1.16.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12808) testall failure inorg.apache.cassandra.io.sstable.IndexSummaryManagerTest.testCancelIndex

2016-11-09 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-12808:

Reviewer: Marcus Eriksson
  Status: Patch Available  (was: Open)

This is a duplicate of CASSANDRA-12218, so I suggest we just backport the fix 
for that. 
[~krummas], as you reviewed that one, would you mind checking [this 
branch|https://github.com/beobal/cassandra/tree/12808-2.2]? It's essentially 
the same, but the modification to the test class is slightly cleaner (IMO) than 
the original.

> testall failure 
> inorg.apache.cassandra.io.sstable.IndexSummaryManagerTest.testCancelIndex
> -
>
> Key: CASSANDRA-12808
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12808
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Sam Tunnicliffe
>  Labels: test-failure
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_testall/594/testReport/org.apache.cassandra.io.sstable/IndexSummaryManagerTest/testCancelIndex/
> {code}
> Error Message
> Expected compaction interrupted exception
> {code}
> {code}
> Stacktrace
> junit.framework.AssertionFailedError: Expected compaction interrupted 
> exception
>   at 
> org.apache.cassandra.io.sstable.IndexSummaryManagerTest.testCancelIndex(IndexSummaryManagerTest.java:641)
> {code}
> Related failure:
> http://cassci.datastax.com/job/cassandra-2.2_testall/600/testReport/org.apache.cassandra.io.sstable/IndexSummaryManagerTest/testCancelIndex_compression/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12510) Disallow decommission when number of replicas will drop below configured RF

2016-11-09 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-12510:

Status: Open  (was: Awaiting Feedback)

> Disallow decommission when number of replicas will drop below configured RF
> ---
>
> Key: CASSANDRA-12510
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12510
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Streaming and Messaging
> Environment: C* version 3.3
>Reporter: Atin Sood
>Assignee: Kurt Greaves
>Priority: Minor
>  Labels: lhf
> Attachments: 12501-3.x.patch
>
>
> Steps to replicate:
> - Create a 3-node cluster in DC1 and create a keyspace test_keyspace with a 
> table test_table using replication strategy NetworkTopologyStrategy, DC1=3. 
> Populate some data into this table.
> - Add 5 more nodes to this cluster, but in DC2. Do not alter the keyspace to 
> add the new DC2 to the replication settings (this is intentional and the 
> reason why the bug shows up), so desc keyspace should still list 
> NetworkTopologyStrategy with DC1=3 as the RF.
> - As expected, this is now an 8-node cluster with 3 nodes in DC1 and 5 in 
> DC2.
> - Now start decommissioning the nodes in DC1. The decommission runs fine on 
> all 3 nodes, but since the new nodes are in DC2 and the RF for the keyspace 
> is restricted to DC1, the 5 new nodes won't get any data.
> - You end up with a 5-node cluster that has none of the data from the 
> decommissioned 3 nodes, i.e. data loss.
> I understand that this problem could have been avoided by performing an ALTER 
> statement to add DC2 replication before adding the 5 nodes. But the fact that 
> decommission ran fine on the 3 nodes in DC1 without complaining that there 
> were no nodes to stream their data to is a little discomforting. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12510) Disallow decommission when number of replicas will drop below configured RF

2016-11-09 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-12510:

  Reviewer: Paulo Motta
Issue Type: Improvement  (was: Bug)
   Summary: Disallow decommission when number of replicas will drop below 
configured RF  (was: Decommission process should raise an error flag when nodes 
in a DC don't have any nodes to stream data to)

LGTM, can you just add a simple dtest showing this works as expected? Since 
there are two code paths, it would probably be good to test both a multi-DC 
setting and SimpleStrategy.

> Disallow decommission when number of replicas will drop below configured RF
> ---
>
> Key: CASSANDRA-12510
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12510
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Streaming and Messaging
> Environment: C* version 3.3
>Reporter: Atin Sood
>Assignee: Kurt Greaves
>Priority: Minor
>  Labels: lhf
> Attachments: 12501-3.x.patch
>
>
> Steps to replicate:
> - Create a 3-node cluster in DC1 and create a keyspace test_keyspace with a 
> table test_table using replication strategy NetworkTopologyStrategy, DC1=3. 
> Populate some data into this table.
> - Add 5 more nodes to this cluster, but in DC2. Do not alter the keyspace to 
> add the new DC2 to the replication settings (this is intentional and the 
> reason why the bug shows up), so desc keyspace should still list 
> NetworkTopologyStrategy with DC1=3 as the RF.
> - As expected, this is now an 8-node cluster with 3 nodes in DC1 and 5 in 
> DC2.
> - Now start decommissioning the nodes in DC1. The decommission runs fine on 
> all 3 nodes, but since the new nodes are in DC2 and the RF for the keyspace 
> is restricted to DC1, the 5 new nodes won't get any data.
> - You end up with a 5-node cluster that has none of the data from the 
> decommissioned 3 nodes, i.e. data loss.
> I understand that this problem could have been avoided by performing an ALTER 
> statement to add DC2 replication before adding the 5 nodes. But the fact that 
> decommission ran fine on the 3 nodes in DC1 without complaining that there 
> were no nodes to stream their data to is a little discomforting. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12888) Incremental repairs broken for MVs and CDC

2016-11-09 Thread Stefan Podkowinski (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Podkowinski updated CASSANDRA-12888:
---
Description: 
SSTables streamed during the repair process will first be written locally and 
afterwards either simply added to the pool of existing sstables or, in case of 
existing MVs or active CDC, replayed on a per-mutation basis:

As described in {{StreamReceiveTask.OnCompletionRunnable}}:

{quote}
We have a special path for views and for CDC.

For views, since the view requires cleaning up any pre-existing state, we must 
put all partitions through the same write path as normal mutations. This also 
ensures any 2is are also updated.

For CDC-enabled tables, we want to ensure that the mutations are run through 
the CommitLog so they can be archived by the CDC process on discard.
{quote}

Using the regular write path turns out to be an issue for incremental repairs, 
as we lose the {{repaired_at}} state in the process. Eventually the streamed 
rows will end up in the unrepaired set, in contrast to the rows on the sender 
side, which are moved to the repaired set. The next repair run will stream the 
same data back again, causing rows to bounce back and forth between nodes on 
each repair.

See the linked dtest for steps to reproduce. An example for reproducing this 
manually using ccm can be found 
[here|https://gist.github.com/spodkowinski/2d8e0408516609c7ae701f2bf1e515e8]

  was:
SSTables streamed during the repair process will first be written locally and 
afterwards either simply added to the pool of existing sstables or, in case of 
existing MVs or active CDC, replayed on a per-mutation basis:

{quote}
We have a special path for views and for CDC.

For views, since the view requires cleaning up any pre-existing state, we must 
put all partitions through the same write path as normal mutations. This also 
ensures any 2is are also updated.

For CDC-enabled tables, we want to ensure that the mutations are run through 
the CommitLog so they can be archived by the CDC process on discard.
{quote}

Using the regular write path turns out to be an issue for incremental repairs, 
as we lose the {{repaired_at}} state in the process. Eventually the streamed 
rows will end up in the unrepaired set, in contrast to the rows on the sender 
side, which are moved to the repaired set. The next repair run will stream the 
same data back again, causing rows to bounce back and forth between nodes on 
each repair.

See the linked dtest for steps to reproduce. An example for reproducing this 
manually using ccm can be found 
[here|https://gist.github.com/spodkowinski/2d8e0408516609c7ae701f2bf1e515e8]


> Incremental repairs broken for MVs and CDC
> --
>
> Key: CASSANDRA-12888
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12888
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: Stefan Podkowinski
>Priority: Critical
>
> SSTables streamed during the repair process will first be written locally and 
> afterwards either simply added to the pool of existing sstables or, in case 
> of existing MVs or active CDC, replayed on a per-mutation basis:
> As described in {{StreamReceiveTask.OnCompletionRunnable}}:
> {quote}
> We have a special path for views and for CDC.
> For views, since the view requires cleaning up any pre-existing state, we 
> must put all partitions through the same write path as normal mutations. This 
> also ensures any 2is are also updated.
> For CDC-enabled tables, we want to ensure that the mutations are run through 
> the CommitLog so they can be archived by the CDC process on discard.
> {quote}
> Using the regular write path turns out to be an issue for incremental 
> repairs, as we lose the {{repaired_at}} state in the process. Eventually the 
> streamed rows will end up in the unrepaired set, in contrast to the rows on 
> the sender side, which are moved to the repaired set. The next repair run 
> will stream the same data back again, causing rows to bounce back and forth 
> between nodes on each repair.
> See the linked dtest for steps to reproduce. An example for reproducing this 
> manually using ccm can be found 
> [here|https://gist.github.com/spodkowinski/2d8e0408516609c7ae701f2bf1e515e8]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11752) histograms/metrics in 2.2 do not appear recency biased

2016-11-09 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15651153#comment-15651153
 ] 

Per Otterström commented on CASSANDRA-11752:


Just noticed that this patch set never found its way into the 2.2 branch. It's 
in 3.0 though.


> histograms/metrics in 2.2 do not appear recency biased
> --
>
> Key: CASSANDRA-11752
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11752
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Chris Burroughs
>Assignee: Per Otterström
>  Labels: metrics
> Fix For: 2.2.8, 3.0.9, 3.8
>
> Attachments: 11752-2.2-v2.txt, 11752-2.2-v2b.txt, 11752-2.2.txt, 
> boost-metrics.png, c-jconsole-comparison.png, c-metrics.png, 
> default-histogram.png, server-patch-v2.png
>
>
> In addition to upgrading to metrics3, CASSANDRA-5657 switched to using a 
> custom histogram implementation. After upgrading to Cassandra 2.2, 
> histogram/timer metrics are now suspiciously flat. To be useful for graphing 
> and alerting, metrics need to be biased towards recent events.
> I have attached images that I think illustrate this.
>  * The first two are a comparison between latency observed by a C* 2.2 (us) 
> cluster showing very flat lines and a client (using metrics 2.2.0, ms) 
> showing server performance problems. We can't rule out with total certainty 
> that something else is the cause (that's why we measure from both the 
> client & server), but they very rarely disagree.
>  * The 3rd image compares jconsole views of the metrics on a 2.2 and a 2.1 
> cluster over several minutes. Not a single digit changed on the 2.2 cluster.
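
To make "biased towards recent events" concrete, here is a toy illustration of 
the usual approach (exponentially decaying sample weights); this is not the 
Cassandra implementation, just a sketch of the idea:
{code}
final class DecayToy
{
    // Weight of a sample that is ageMillis old, with the given half-life:
    // recent samples count fully, old ones fade towards zero.
    static double decayedWeight(long ageMillis, long halfLifeMillis)
    {
        return Math.pow(0.5, (double) ageMillis / halfLifeMillis);
    }

    public static void main(String[] args)
    {
        // A 10-minute-old latency sample with a 5-minute half-life only
        // contributes a quarter of the weight of a fresh one.
        System.out.println(decayedWeight(10 * 60_000L, 5 * 60_000L)); // 0.25
    }
}
{code}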



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12888) Incremental repairs broken for MVs and CDC

2016-11-09 Thread Stefan Podkowinski (JIRA)
Stefan Podkowinski created CASSANDRA-12888:
--

 Summary: Incremental repairs broken for MVs and CDC
 Key: CASSANDRA-12888
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12888
 Project: Cassandra
  Issue Type: Bug
  Components: Streaming and Messaging
Reporter: Stefan Podkowinski
Priority: Critical


SSTables streamed during the repair process will first be written locally and 
afterwards either simply added to the pool of existing sstables or, in case of 
existing MVs or active CDC, replayed on a per-mutation basis:

{quote}
We have a special path for views and for CDC.

For views, since the view requires cleaning up any pre-existing state, we must 
put all partitions through the same write path as normal mutations. This also 
ensures any 2is are also updated.

For CDC-enabled tables, we want to ensure that the mutations are run through 
the CommitLog so they can be archived by the CDC process on discard.
{quote}

Using the regular write path turns out to be an issue for incremental repairs, 
as we lose the {{repaired_at}} state in the process. Eventually the streamed 
rows will end up in the unrepaired set, in contrast to the rows on the sender 
side, which are moved to the repaired set. The next repair run will stream the 
same data back again, causing rows to bounce back and forth between nodes on 
each repair.

See the linked dtest for steps to reproduce. An example for reproducing this 
manually using ccm can be found 
[here|https://gist.github.com/spodkowinski/2d8e0408516609c7ae701f2bf1e515e8]
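
As a purely illustrative sketch (hypothetical names, not the actual 
{{StreamReceiveTask}} code) of why the two receive paths end up with different 
{{repaired_at}} outcomes:
{code}
final class RepairedAtToy
{
    // Minimal stand-in for a streamed sstable: all we care about here is the
    // repairedAt metadata carried with it.
    static final class StreamedSSTable
    {
        final long repairedAt;
        StreamedSSTable(long repairedAt) { this.repairedAt = repairedAt; }
    }

    // Plain "add the sstable" path: the metadata survives.
    static long addDirectly(StreamedSSTable s) { return s.repairedAt; }

    // MV/CDC path: rows are replayed as normal mutations and flushed into
    // brand-new sstables, which start out unrepaired (repairedAt = 0).
    static long replayThroughWritePath(StreamedSSTable s) { return 0L; }

    public static void main(String[] args)
    {
        StreamedSSTable s = new StreamedSSTable(1478649600000L);
        System.out.println("direct add:        " + addDirectly(s));
        System.out.println("write-path replay: " + replayThroughWritePath(s));
    }
}
{code}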



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12875) testall failure in org.apache.cassandra.net.MessagingServiceTest.testDCLatency-compression

2016-11-09 Thread Chris Lohfink (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Lohfink updated CASSANDRA-12875:
--
Status: Patch Available  (was: In Progress)

> testall failure in 
> org.apache.cassandra.net.MessagingServiceTest.testDCLatency-compression
> --
>
> Key: CASSANDRA-12875
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12875
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Chris Lohfink
>  Labels: test-failure, testall
> Attachments: 12875.patch
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.X_testall/54/testReport/org.apache.cassandra.net/MessagingServiceTest/testDCLatency_compression/
> {code}
> Error Message
> expected:<107964792> but was:<129557750>
> {code}{code}
> Stacktrace
> junit.framework.AssertionFailedError: expected:<107964792> but was:<129557750>
>   at 
> org.apache.cassandra.net.MessagingServiceTest.testDCLatency(MessagingServiceTest.java:115)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12875) testall failure in org.apache.cassandra.net.MessagingServiceTest.testDCLatency-compression

2016-11-09 Thread Chris Lohfink (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15651130#comment-15651130
 ] 

Chris Lohfink commented on CASSANDRA-12875:
---

It was using System.currentTimeMillis(), and the method compared that to the 
approximate current millis, which had some windows where it would lead to 
issues. Just made the method take "now" as an arg so we can test it without 
the error window. 
https://github.com/clohfink/cassandra/pull/5/files
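
The general pattern, as a minimal sketch (illustrative names only, not the 
actual MessagingService change):
{code}
final class LatencyToy
{
    // Instead of reading the clock inside the method, take "now" as a
    // parameter so a unit test fully controls both inputs and is not exposed
    // to the window between two different clock sources.
    static long latencyNanos(long sentAtNanos, long nowNanos)
    {
        return nowNanos - sentAtNanos;
    }

    public static void main(String[] args)
    {
        // Production would pass the real clock: latencyNanos(sentAt, System.nanoTime());
        // a test pins both values instead:
        assert latencyNanos(0L, 1_000L) == 1_000L;
        System.out.println(latencyNanos(0L, 1_000L));
    }
}
{code}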

> testall failure in 
> org.apache.cassandra.net.MessagingServiceTest.testDCLatency-compression
> --
>
> Key: CASSANDRA-12875
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12875
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Chris Lohfink
>  Labels: test-failure, testall
> Attachments: 12875.patch
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.X_testall/54/testReport/org.apache.cassandra.net/MessagingServiceTest/testDCLatency_compression/
> {code}
> Error Message
> expected:<107964792> but was:<129557750>
> {code}{code}
> Stacktrace
> junit.framework.AssertionFailedError: expected:<107964792> but was:<129557750>
>   at 
> org.apache.cassandra.net.MessagingServiceTest.testDCLatency(MessagingServiceTest.java:115)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12875) testall failure in org.apache.cassandra.net.MessagingServiceTest.testDCLatency-compression

2016-11-09 Thread Chris Lohfink (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Lohfink updated CASSANDRA-12875:
--
Attachment: 12875.patch

> testall failure in 
> org.apache.cassandra.net.MessagingServiceTest.testDCLatency-compression
> --
>
> Key: CASSANDRA-12875
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12875
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Chris Lohfink
>  Labels: test-failure, testall
> Attachments: 12875.patch
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.X_testall/54/testReport/org.apache.cassandra.net/MessagingServiceTest/testDCLatency_compression/
> {code}
> Error Message
> expected:<107964792> but was:<129557750>
> {code}{code}
> Stacktrace
> junit.framework.AssertionFailedError: expected:<107964792> but was:<129557750>
>   at 
> org.apache.cassandra.net.MessagingServiceTest.testDCLatency(MessagingServiceTest.java:115)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12485) Always require replace_address to replace existing token

2016-11-09 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15651125#comment-15651125
 ] 

Paulo Motta commented on CASSANDRA-12485:
-

[~cmlicata] overall the approach looks good, but it should only be done when 
the node is already bootstrapped ({{!shouldBootstrap()}}), and in that case you 
can get the node's tokens from {{SystemKeyspace.getSavedTokens()}}.

Could you also add a dtest? You can try to reproduce this scenario:
{noformat}
replace a node with another node with a different IP, and after some time you 
restart the original node by mistake. The original node will then take over the 
tokens of the replaced node (since it has a newer gossip generation).
{noformat}

Next time, set the JIRA ticket to "Patch Available" when you have a patch, 
otherwise it may never get picked up for review. 
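
To make the suggested shape concrete, a rough sketch only (parameter names are 
illustrative and this is not the submitted patch): when the node is already 
bootstrapped, refuse to start if one of its saved tokens is currently owned by 
a different endpoint, unless replacement was explicitly requested.
{code}
import java.net.InetAddress;
import java.util.Collection;
import java.util.Map;

final class TokenTakeoverCheck
{
    static <T> void check(Collection<T> savedTokens,          // e.g. SystemKeyspace.getSavedTokens()
                          Map<T, InetAddress> currentOwners,  // hypothetical view of token ownership
                          InetAddress self,
                          boolean replaceRequested)           // replace_address / allow_unsafe_replace
    {
        if (replaceRequested)
            return;
        for (T token : savedTokens)
        {
            InetAddress owner = currentOwners.get(token);
            if (owner != null && !owner.equals(self))
                throw new RuntimeException("Token " + token + " is already owned by " + owner
                    + "; use -Dcassandra.replace_address or -Dcassandra.allow_unsafe_replace=true");
        }
    }
}
{code}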

> Always require replace_address to replace existing token
> 
>
> Key: CASSANDRA-12485
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12485
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Distributed Metadata
>Reporter: Paulo Motta
>Priority: Minor
>  Labels: lhf
>
> CASSANDRA-10134 prevented replacing an existing node unless 
> {{\-Dcassandra.replace_address}} or 
> {{\-Dcassandra.allow_unsafe_replace=true}} is specified.
> We should extend this behavior to tokens, preventing a node from joining the 
> ring if another node with the same token already exists in the ring, unless 
> {{\-Dcassandra.replace_address}} or 
> {{\-Dcassandra.allow_unsafe_replace=true}} is specified, in order to avoid 
> catastrophic scenarios.
> One scenario where this can easily happen is if you replace a node with 
> another node with a different IP, and after some time you restart the 
> original node by mistake. The original node will then take over the tokens of 
> the replaced node (since it has a newer gossip generation).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12485) Always require replace_address to replace existing token

2016-11-09 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-12485:

Reviewer: Paulo Motta

> Always require replace_address to replace existing token
> 
>
> Key: CASSANDRA-12485
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12485
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Distributed Metadata
>Reporter: Paulo Motta
>Priority: Minor
>  Labels: lhf
>
> CASSANDRA-10134 prevented replacing an existing node unless 
> {{\-Dcassandra.replace_address}} or 
> {{\-Dcassandra.allow_unsafe_replace=true}} is specified.
> We should extend this behavior to tokens, preventing a node from joining the 
> ring if another node with the same token already exists in the ring, unless 
> {{\-Dcassandra.replace_address}} or 
> {{\-Dcassandra.allow_unsafe_replace=true}} is specified, in order to avoid 
> catastrophic scenarios.
> One scenario where this can easily happen is if you replace a node with 
> another node with a different IP, and after some time you restart the 
> original node by mistake. The original node will then take over the tokens of 
> the replaced node (since it has a newer gossip generation).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-12861) example/triggers build fail.

2016-11-09 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne reassigned CASSANDRA-12861:


Assignee: Sylvain Lebresne  (was: Yasuharu Goto)

> example/triggers build fail.
> 
>
> Key: CASSANDRA-12861
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12861
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Yasuharu Goto
>Assignee: Sylvain Lebresne
>Priority: Trivial
>
> When I tried to build examples/triggers on the trunk branch, I found that 
> "ant jar" fails with an error like the one below.
> (The error indicated here is a "cannot find symbol" error for 
> RowUpdateBuilder.)
> {code}
> Buildfile: /Users/yasuharu/git/cassandra/examples/triggers/build.xml
> init:
> [mkdir] Created dir: 
> /Users/yasuharu/git/cassandra/examples/triggers/build/classes
> build:
> [javac] Compiling 1 source file to 
> /Users/yasuharu/git/cassandra/examples/triggers/build/classes
> [javac] warning: Supported source version 'RELEASE_6' from annotation 
> processor 'org.openjdk.jmh.generators.BenchmarkProcessor' less than -source 
> '1.8'
> [javac] 
> /Users/yasuharu/git/cassandra/examples/triggers/src/org/apache/cassandra/triggers/AuditTrigger.java:27:
>  error: cannot find symbol
> [javac] import org.apache.cassandra.db.RowUpdateBuilder;
> [javac]   ^
> [javac]   symbol:   class RowUpdateBuilder
> [javac]   location: package org.apache.cassandra.db
> [javac] 1 error
> [javac] 1 warning
> BUILD FAILED
> /Users/yasuharu/git/cassandra/examples/triggers/build.xml:45: Compile failed; 
> see the compiler error output for details.
> Total time: 1 second
> {code}
> I think the move of RowUpdateBuilder to the test sources has broken this 
> build.
> https://github.com/apache/cassandra/commit/26838063de6246e3a1e18062114ca92fb81c00cf
> In order to fix this, I moved RowUpdateBuilder.java back to src in my patch.
> https://github.com/apache/cassandra/commit/d133eefe9c5fbebd8d389a9397c3948b8c36bd06
> Could you please review my patch?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12861) example/triggers build fail.

2016-11-09 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-12861:
-
Reviewer:   (was: Sylvain Lebresne)

> example/triggers build fail.
> 
>
> Key: CASSANDRA-12861
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12861
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Yasuharu Goto
>Assignee: Sylvain Lebresne
>Priority: Trivial
>
> When I tried to build examples/triggers on the trunk branch, I found that 
> "ant jar" fails with an error like the one below.
> (The error indicated here is a "cannot find symbol" error for 
> RowUpdateBuilder.)
> {code}
> Buildfile: /Users/yasuharu/git/cassandra/examples/triggers/build.xml
> init:
> [mkdir] Created dir: 
> /Users/yasuharu/git/cassandra/examples/triggers/build/classes
> build:
> [javac] Compiling 1 source file to 
> /Users/yasuharu/git/cassandra/examples/triggers/build/classes
> [javac] warning: Supported source version 'RELEASE_6' from annotation 
> processor 'org.openjdk.jmh.generators.BenchmarkProcessor' less than -source 
> '1.8'
> [javac] 
> /Users/yasuharu/git/cassandra/examples/triggers/src/org/apache/cassandra/triggers/AuditTrigger.java:27:
>  error: cannot find symbol
> [javac] import org.apache.cassandra.db.RowUpdateBuilder;
> [javac]   ^
> [javac]   symbol:   class RowUpdateBuilder
> [javac]   location: package org.apache.cassandra.db
> [javac] 1 error
> [javac] 1 warning
> BUILD FAILED
> /Users/yasuharu/git/cassandra/examples/triggers/build.xml:45: Compile failed; 
> see the compiler error output for details.
> Total time: 1 second
> {code}
> I think the move of RowUpdateBuilder to the test sources has broken this 
> build.
> https://github.com/apache/cassandra/commit/26838063de6246e3a1e18062114ca92fb81c00cf
> In order to fix this, I moved RowUpdateBuilder.java back to src in my patch.
> https://github.com/apache/cassandra/commit/d133eefe9c5fbebd8d389a9397c3948b8c36bd06
> Could you please review my patch?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12861) example/triggers build fail.

2016-11-09 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1565#comment-1565
 ] 

Sylvain Lebresne commented on CASSANDRA-12861:
--

Pushed patch related to my comment above 
[here|https://github.com/pcmanus/cassandra/commits/12861] (not triggering CI 
since it only changes the example file which tests don't exercise at all).

> example/triggers build fail.
> 
>
> Key: CASSANDRA-12861
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12861
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Yasuharu Goto
>Assignee: Yasuharu Goto
>Priority: Trivial
>
> When I tried to build examples/triggers on the trunk branch, I found that 
> "ant jar" fails with an error like the one below.
> (The error indicated here is a "cannot find symbol" error for 
> RowUpdateBuilder.)
> {code}
> Buildfile: /Users/yasuharu/git/cassandra/examples/triggers/build.xml
> init:
> [mkdir] Created dir: 
> /Users/yasuharu/git/cassandra/examples/triggers/build/classes
> build:
> [javac] Compiling 1 source file to 
> /Users/yasuharu/git/cassandra/examples/triggers/build/classes
> [javac] warning: Supported source version 'RELEASE_6' from annotation 
> processor 'org.openjdk.jmh.generators.BenchmarkProcessor' less than -source 
> '1.8'
> [javac] 
> /Users/yasuharu/git/cassandra/examples/triggers/src/org/apache/cassandra/triggers/AuditTrigger.java:27:
>  error: cannot find symbol
> [javac] import org.apache.cassandra.db.RowUpdateBuilder;
> [javac]   ^
> [javac]   symbol:   class RowUpdateBuilder
> [javac]   location: package org.apache.cassandra.db
> [javac] 1 error
> [javac] 1 warning
> BUILD FAILED
> /Users/yasuharu/git/cassandra/examples/triggers/build.xml:45: Compile failed; 
> see the compiler error output for details.
> Total time: 1 second
> {code}
> I think the move of RowUpdateBuilder to the test sources has broken this 
> build.
> https://github.com/apache/cassandra/commit/26838063de6246e3a1e18062114ca92fb81c00cf
> In order to fix this, I moved RowUpdateBuilder.java back to src in my patch.
> https://github.com/apache/cassandra/commit/d133eefe9c5fbebd8d389a9397c3948b8c36bd06
> Could you please review my patch?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (CASSANDRA-12834) testall failure in org.apache.cassandra.index.internal.CassandraIndexTest.indexOnFirstClusteringColumn

2016-11-09 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-12834:

Comment: was deleted

(was: +1)

> testall failure in 
> org.apache.cassandra.index.internal.CassandraIndexTest.indexOnFirstClusteringColumn
> --
>
> Key: CASSANDRA-12834
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12834
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Sylvain Lebresne
>  Labels: test-failure
> Fix For: 3.0.11, 3.11
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_testall/1250/testReport/org.apache.cassandra.index.internal/CassandraIndexTest/indexOnFirstClusteringColumn/
> {code}
> Error Message
> Error setting schema for test (query was: CREATE INDEX c_index ON 
> cql_test_keyspace.table_20(c))
> {code}{code}
> Stacktrace
> java.lang.RuntimeException: Error setting schema for test (query was: CREATE 
> INDEX c_index ON cql_test_keyspace.table_20(c))
>   at org.apache.cassandra.cql3.CQLTester.schemaChange(CQLTester.java:705)
>   at org.apache.cassandra.cql3.CQLTester.createIndex(CQLTester.java:627)
>   at 
> org.apache.cassandra.index.internal.CassandraIndexTest.access$400(CassandraIndexTest.java:56)
>   at 
> org.apache.cassandra.index.internal.CassandraIndexTest$TestScript.run(CassandraIndexTest.java:626)
>   at 
> org.apache.cassandra.index.internal.CassandraIndexTest.indexOnFirstClusteringColumn(CassandraIndexTest.java:86)
> Caused by: org.apache.cassandra.exceptions.InvalidRequestException: Index 
> c_index already exists
>   at 
> org.apache.cassandra.cql3.statements.CreateIndexStatement.validate(CreateIndexStatement.java:133)
>   at org.apache.cassandra.cql3.CQLTester.schemaChange(CQLTester.java:696)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12834) testall failure in org.apache.cassandra.index.internal.CassandraIndexTest.indexOnFirstClusteringColumn

2016-11-09 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15651097#comment-15651097
 ] 

Sam Tunnicliffe commented on CASSANDRA-12834:
-

+1

> testall failure in 
> org.apache.cassandra.index.internal.CassandraIndexTest.indexOnFirstClusteringColumn
> --
>
> Key: CASSANDRA-12834
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12834
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Sylvain Lebresne
>  Labels: test-failure
> Fix For: 3.0.11, 3.11
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_testall/1250/testReport/org.apache.cassandra.index.internal/CassandraIndexTest/indexOnFirstClusteringColumn/
> {code}
> Error Message
> Error setting schema for test (query was: CREATE INDEX c_index ON 
> cql_test_keyspace.table_20(c))
> {code}{code}
> Stacktrace
> java.lang.RuntimeException: Error setting schema for test (query was: CREATE 
> INDEX c_index ON cql_test_keyspace.table_20(c))
>   at org.apache.cassandra.cql3.CQLTester.schemaChange(CQLTester.java:705)
>   at org.apache.cassandra.cql3.CQLTester.createIndex(CQLTester.java:627)
>   at 
> org.apache.cassandra.index.internal.CassandraIndexTest.access$400(CassandraIndexTest.java:56)
>   at 
> org.apache.cassandra.index.internal.CassandraIndexTest$TestScript.run(CassandraIndexTest.java:626)
>   at 
> org.apache.cassandra.index.internal.CassandraIndexTest.indexOnFirstClusteringColumn(CassandraIndexTest.java:86)
> Caused by: org.apache.cassandra.exceptions.InvalidRequestException: Index 
> c_index already exists
>   at 
> org.apache.cassandra.cql3.statements.CreateIndexStatement.validate(CreateIndexStatement.java:133)
>   at org.apache.cassandra.cql3.CQLTester.schemaChange(CQLTester.java:696)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12834) testall failure in org.apache.cassandra.index.internal.CassandraIndexTest.indexOnFirstClusteringColumn

2016-11-09 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-12834:

Status: Ready to Commit  (was: Patch Available)

> testall failure in 
> org.apache.cassandra.index.internal.CassandraIndexTest.indexOnFirstClusteringColumn
> --
>
> Key: CASSANDRA-12834
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12834
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Sylvain Lebresne
>  Labels: test-failure
> Fix For: 3.0.11, 3.11
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_testall/1250/testReport/org.apache.cassandra.index.internal/CassandraIndexTest/indexOnFirstClusteringColumn/
> {code}
> Error Message
> Error setting schema for test (query was: CREATE INDEX c_index ON 
> cql_test_keyspace.table_20(c))
> {code}{code}
> Stacktrace
> java.lang.RuntimeException: Error setting schema for test (query was: CREATE 
> INDEX c_index ON cql_test_keyspace.table_20(c))
>   at org.apache.cassandra.cql3.CQLTester.schemaChange(CQLTester.java:705)
>   at org.apache.cassandra.cql3.CQLTester.createIndex(CQLTester.java:627)
>   at 
> org.apache.cassandra.index.internal.CassandraIndexTest.access$400(CassandraIndexTest.java:56)
>   at 
> org.apache.cassandra.index.internal.CassandraIndexTest$TestScript.run(CassandraIndexTest.java:626)
>   at 
> org.apache.cassandra.index.internal.CassandraIndexTest.indexOnFirstClusteringColumn(CassandraIndexTest.java:86)
> Caused by: org.apache.cassandra.exceptions.InvalidRequestException: Index 
> c_index already exists
>   at 
> org.apache.cassandra.cql3.statements.CreateIndexStatement.validate(CreateIndexStatement.java:133)
>   at org.apache.cassandra.cql3.CQLTester.schemaChange(CQLTester.java:696)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12834) testall failure in org.apache.cassandra.index.internal.CassandraIndexTest.indexOnFirstClusteringColumn

2016-11-09 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15651096#comment-15651096
 ] 

Sam Tunnicliffe commented on CASSANDRA-12834:
-

+1

> testall failure in 
> org.apache.cassandra.index.internal.CassandraIndexTest.indexOnFirstClusteringColumn
> --
>
> Key: CASSANDRA-12834
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12834
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Sylvain Lebresne
>  Labels: test-failure
> Fix For: 3.0.11, 3.11
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_testall/1250/testReport/org.apache.cassandra.index.internal/CassandraIndexTest/indexOnFirstClusteringColumn/
> {code}
> Error Message
> Error setting schema for test (query was: CREATE INDEX c_index ON 
> cql_test_keyspace.table_20(c))
> {code}{code}
> Stacktrace
> java.lang.RuntimeException: Error setting schema for test (query was: CREATE 
> INDEX c_index ON cql_test_keyspace.table_20(c))
>   at org.apache.cassandra.cql3.CQLTester.schemaChange(CQLTester.java:705)
>   at org.apache.cassandra.cql3.CQLTester.createIndex(CQLTester.java:627)
>   at 
> org.apache.cassandra.index.internal.CassandraIndexTest.access$400(CassandraIndexTest.java:56)
>   at 
> org.apache.cassandra.index.internal.CassandraIndexTest$TestScript.run(CassandraIndexTest.java:626)
>   at 
> org.apache.cassandra.index.internal.CassandraIndexTest.indexOnFirstClusteringColumn(CassandraIndexTest.java:86)
> Caused by: org.apache.cassandra.exceptions.InvalidRequestException: Index 
> c_index already exists
>   at 
> org.apache.cassandra.cql3.statements.CreateIndexStatement.validate(CreateIndexStatement.java:133)
>   at org.apache.cassandra.cql3.CQLTester.schemaChange(CQLTester.java:696)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12023) Schema upgrade bug with super columns

2016-11-09 Thread Oleg Tarakanov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15651068#comment-15651068
 ] 

Oleg Tarakanov commented on CASSANDRA-12023:


Cassandra 2.1.x does not seem to work on Windows 
(https://issues.apache.org/jira/browse/CASSANDRA-8390 and similar)

> Schema upgrade bug with super columns
> -
>
> Key: CASSANDRA-12023
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12023
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: Jeremiah Jordan
>Assignee: Aleksey Yeschenko
>Priority: Critical
> Fix For: 3.0.8, 3.8
>
>
> Doing some upgrade tests going from 2.0 to 2.1 to 3.0, we hit the following 
> bug, which prevents 3.0 nodes from starting. Running the test a few times 
> with different waits, and flushing sometimes or not, I have seen the 
> following errors:
> {code}
> ERROR [main] 2016-06-17 10:42:40,112 CassandraDaemon.java:698 - Exception 
> encountered during startup
> org.apache.cassandra.serializers.MarshalException: cannot parse 'value' as 
> hex bytes
>   at 
> org.apache.cassandra.db.marshal.BytesType.fromString(BytesType.java:45) 
> ~[apache-cassandra-3.0.7.jar:3.0.7]
>   at 
> org.apache.cassandra.schema.LegacySchemaMigrator.createColumnFromColumnRow(LegacySchemaMigrator.java:682)
>  ~[apache-cassandra-3.0.7.jar:3.0.7]
>   at 
> org.apache.cassandra.schema.LegacySchemaMigrator.createColumnsFromColumnRows(LegacySchemaMigrator.java:641)
>  ~[apache-cassandra-3.0.7.jar:3.0.7]
>   at 
> org.apache.cassandra.schema.LegacySchemaMigrator.decodeTableMetadata(LegacySchemaMigrator.java:316)
>  ~[apache-cassandra-3.0.7.jar:3.0.7]
>   at 
> org.apache.cassandra.schema.LegacySchemaMigrator.readTableMetadata(LegacySchemaMigrator.java:273)
>  ~[apache-cassandra-3.0.7.jar:3.0.7]
>   at 
> org.apache.cassandra.schema.LegacySchemaMigrator.readTable(LegacySchemaMigrator.java:244)
>  ~[apache-cassandra-3.0.7.jar:3.0.7]
>   at 
> org.apache.cassandra.schema.LegacySchemaMigrator.lambda$readTables$7(LegacySchemaMigrator.java:237)
>  ~[apache-cassandra-3.0.7.jar:3.0.7]
>   at java.util.ArrayList.forEach(ArrayList.java:1249) ~[na:1.8.0_66]
>   at 
> org.apache.cassandra.schema.LegacySchemaMigrator.readTables(LegacySchemaMigrator.java:237)
>  ~[apache-cassandra-3.0.7.jar:3.0.7]
>   at 
> org.apache.cassandra.schema.LegacySchemaMigrator.readKeyspace(LegacySchemaMigrator.java:186)
>  ~[apache-cassandra-3.0.7.jar:3.0.7]
>   at 
> org.apache.cassandra.schema.LegacySchemaMigrator.lambda$readSchema$4(LegacySchemaMigrator.java:177)
>  ~[apache-cassandra-3.0.7.jar:3.0.7]
>   at java.util.ArrayList.forEach(ArrayList.java:1249) ~[na:1.8.0_66]
>   at 
> org.apache.cassandra.schema.LegacySchemaMigrator.readSchema(LegacySchemaMigrator.java:177)
>  ~[apache-cassandra-3.0.7.jar:3.0.7]
>   at 
> org.apache.cassandra.schema.LegacySchemaMigrator.migrate(LegacySchemaMigrator.java:77)
>  ~[apache-cassandra-3.0.7.jar:3.0.7]
>   at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:229) 
> [apache-cassandra-3.0.7.jar:3.0.7]
>   at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:557)
>  [apache-cassandra-3.0.7.jar:3.0.7]
>   at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:685) 
> [apache-cassandra-3.0.7.jar:3.0.7]
> Caused by: java.lang.NumberFormatException: An hex string representing bytes 
> must have an even length
>   at org.apache.cassandra.utils.Hex.hexToBytes(Hex.java:57) 
> ~[apache-cassandra-3.0.7.jar:3.0.7]
>   at 
> org.apache.cassandra.db.marshal.BytesType.fromString(BytesType.java:41) 
> ~[apache-cassandra-3.0.7.jar:3.0.7]
>   ... 16 common frames omitted
> {code}
> {code}
> ERROR [main] 2016-06-17 10:49:21,326 CassandraDaemon.java:698 - Exception 
> encountered during startup
> java.lang.RuntimeException: org.codehaus.jackson.JsonParseException: 
> Unexpected character ('K' (code 75)): expected a valid value (number, String, 
> array, object, 'true', 'false' or 'null')
>  at [Source: java.io.StringReader@60d4475f; line: 1, column: 2]
>   at 
> org.apache.cassandra.utils.FBUtilities.fromJsonMap(FBUtilities.java:561) 
> ~[apache-cassandra-3.0.7.jar:3.0.7]
>   at 
> org.apache.cassandra.schema.LegacySchemaMigrator.decodeTableParams(LegacySchemaMigrator.java:442)
>  ~[apache-cassandra-3.0.7.jar:3.0.7]
>   at 
> org.apache.cassandra.schema.LegacySchemaMigrator.decodeTableMetadata(LegacySchemaMigrator.java:365)
>  ~[apache-cassandra-3.0.7.jar:3.0.7]
>   at 
> org.apache.cassandra.schema.LegacySchemaMigrator.readTableMetadata(LegacySchemaMigrator.java:273)
>  ~[apache-cassandra-3.0.7.jar:3.0.7]
>   at 
> org.apache.cassandra.schema.LegacySchemaMigr

[jira] [Updated] (CASSANDRA-12834) testall failure in org.apache.cassandra.index.internal.CassandraIndexTest.indexOnFirstClusteringColumn

2016-11-09 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-12834:
-
 Reviewer: Sam Tunnicliffe
Fix Version/s: 3.11
   3.0.11
   Status: Patch Available  (was: Open)

This doesn't really reproduce locally, but given the error message, and given 
that the file in question has 2 different tests that name their index 
{{c_index}}, I believe this is a relatively simple name clash. Especially 
since, while both tests clean up after themselves, that cleaning is done 
asynchronously (the table used by each test is different, so the asynchronous 
part is not supposed to be an issue, but index names are checked for conflicts 
globally), which also explains why this is an intermittent failure.

Assuming I'm correct, as one of those tests doesn't really use the name of the 
index, the simplest fix is to just remove the name in that test, letting the 
index name be generated automatically. That trivial fix is 
[here|https://github.com/pcmanus/cassandra/commit/60f9949d6ddafd87f26e1157d0e3c5b28a08f552].
 I tested this locally (to make doubly sure the test was indeed not relying on 
the index name), but I don't feel it's worth wasting CI resources on this, 
unless someone has an issue with that. Also, the patch is on trunk since that's 
where the report is from, but it seems this should be fixed from 3.0 onwards, 
so I'll commit there if we're good with the patch.

Putting [~beobal] as reviewer since he originated the test (not that the fix 
requires any particular expertise on the test itself).
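
For illustration only (this is the idea, not the linked commit verbatim), the 
clash and the fix look roughly like this in {{CQLTester}} terms:
{code}
// Two tests in the same class both name their index "c_index"; index names are
// checked for conflicts globally, and the async cleanup of one test can race
// the other, so the second CREATE INDEX intermittently fails.
createIndex("CREATE INDEX c_index ON %s(c)");   // test A
createIndex("CREATE INDEX c_index ON %s(c)");   // test B -> "Index c_index already exists"

// Fix: the test that never refers to the name just drops it, letting the index
// name be generated automatically, so there is nothing to clash on.
createIndex("CREATE INDEX ON %s(c)");
{code}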


> testall failure in 
> org.apache.cassandra.index.internal.CassandraIndexTest.indexOnFirstClusteringColumn
> --
>
> Key: CASSANDRA-12834
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12834
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Sylvain Lebresne
>  Labels: test-failure
> Fix For: 3.0.11, 3.11
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_testall/1250/testReport/org.apache.cassandra.index.internal/CassandraIndexTest/indexOnFirstClusteringColumn/
> {code}
> Error Message
> Error setting schema for test (query was: CREATE INDEX c_index ON 
> cql_test_keyspace.table_20(c))
> {code}{code}
> Stacktrace
> java.lang.RuntimeException: Error setting schema for test (query was: CREATE 
> INDEX c_index ON cql_test_keyspace.table_20(c))
>   at org.apache.cassandra.cql3.CQLTester.schemaChange(CQLTester.java:705)
>   at org.apache.cassandra.cql3.CQLTester.createIndex(CQLTester.java:627)
>   at 
> org.apache.cassandra.index.internal.CassandraIndexTest.access$400(CassandraIndexTest.java:56)
>   at 
> org.apache.cassandra.index.internal.CassandraIndexTest$TestScript.run(CassandraIndexTest.java:626)
>   at 
> org.apache.cassandra.index.internal.CassandraIndexTest.indexOnFirstClusteringColumn(CassandraIndexTest.java:86)
> Caused by: org.apache.cassandra.exceptions.InvalidRequestException: Index 
> c_index already exists
>   at 
> org.apache.cassandra.cql3.statements.CreateIndexStatement.validate(CreateIndexStatement.java:133)
>   at org.apache.cassandra.cql3.CQLTester.schemaChange(CQLTester.java:696)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9143) Improving consistency of repairAt field across replicas

2016-11-09 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-9143:
---
Reviewer: Marcus Eriksson

> Improving consistency of repairAt field across replicas 
> 
>
> Key: CASSANDRA-9143
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9143
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: Blake Eggleston
>Priority: Minor
>
> We currently send an anticompaction request to all replicas. During this, a 
> node will split sstables and mark the appropriate ones repaired. 
> The problem is that this could fail on some replicas for many reasons, 
> leading to problems in the next repair. 
> This is what I am suggesting to improve it: 
> 1) Send the anticompaction request to all replicas. This can be done at the 
> session level. 
> 2) During anticompaction, sstables are split but not marked repaired. 
> 3) When we get a positive ack from all replicas, the coordinator will send 
> another message called markRepaired. 
> 4) On getting this message, replicas will mark the appropriate sstables as 
> repaired. 
> This will reduce the window of failure. We can also think of "hinting" the 
> markRepaired message if required. 
> Also, the sstables which are streamed can be marked as repaired like it is 
> done now. 
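
A coordinator-side sketch of steps 1-4 above, purely for illustration (all 
names are hypothetical, this is not an actual implementation):
{code}
import java.net.InetAddress;
import java.util.Collection;
import java.util.UUID;

final class MarkRepairedFlowSketch
{
    // Phase 1: every replica splits sstables, but nothing is marked repaired yet.
    // Phase 2: only after all replicas ack, flip repairedAt via a second message.
    static void run(Collection<InetAddress> replicas, UUID sessionId, RepairMessaging messaging)
    {
        boolean allSplit = true;
        for (InetAddress replica : replicas)
            allSplit &= messaging.sendAnticompactionRequest(replica, sessionId);

        if (allSplit)
            for (InetAddress replica : replicas)
                messaging.sendMarkRepaired(replica, sessionId);
        // else: nothing was marked repaired, so the next repair is not confused
    }

    // Hypothetical transport interface, standing in for whatever messaging is used.
    interface RepairMessaging
    {
        boolean sendAnticompactionRequest(InetAddress replica, UUID sessionId);
        void sendMarkRepaired(InetAddress replica, UUID sessionId);
    }
}
{code}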



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12849) The parameter -XX:HeapDumpPath is not overwritten by cassandra-env.sh

2016-11-09 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-12849:

Labels: lhf  (was: )

> The parameter -XX:HeapDumpPath is not overwritten by cassandra-env.sh
> -
>
> Key: CASSANDRA-12849
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12849
> Project: Cassandra
>  Issue Type: Bug
>Reporter: jean carlo rivera ura
>  Labels: lhf
>
> The parameter  -XX:HeapDumpPath appears twice in the java process 
> {panel}
> user@node:~$ sudo ps aux | grep --color  HeapDumpPath
> java -ea -javaagent:/usr/share/cassandra/lib/jamm-0.3.0.jar 
> -XX:+CMSClassUnloadingEnabled -XX:+UseThreadPriorities 
> -XX:ThreadPriorityPolicy=42 -Xms1024M -Xmx1024M -Xmn200M 
> -XX:+HeapDumpOnOutOfMemoryError 
> -XX:*HeapDumpPath*=/var/lib/cassandra-1477577769-pid1516.hprof -Xss256k 
> ...
> -XX:*HeapDumpPath*=/home/cassandra/java_1477577769.hprof 
> -XX:ErrorFile=/var/lib/cassandra/hs_err_1477577769.log 
> org.apache.cassandra.service.CassandraDaemon
> {panel}
> The problem is that when we have an OOM error, the JVM dump goes to 
> */home/cassandra/java_1477577769.hprof* when the correct behavior is to go 
> to the path defined by cassandra-env.sh, 
> */var/lib/cassandra-1477577769-pid1516.hprof*.
> This is quite annoying because Cassandra takes into account only the path 
> defined by the init script (usually that disk is not big enough to keep an 
> 8 GB heap dump) and not the path defined in cassandra-env.sh.
> {noformat}
> user@node:~$ jmx4perl http://localhost:8523/jolokia read 
> com.sun.management:type=HotSpotDiagnostic DiagnosticOptions
>  {
> name => 'HeapDumpPath',
> origin => 'VM_CREATION',
> value => '/home/cassandra/java_1477043835.hprof',
> writeable => '[true]'
>   },
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-12846) Repair is taking too much time

2016-11-09 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta resolved CASSANDRA-12846.
-
Resolution: Duplicate

It's very likely that this is caused by CASSANDRA-12580, as that may increase 
repair times significantly, so I suggest upgrading to a version with that fix; 
if the long repair times continue, please reopen this ticket.

> Repair is taking too much time 
> ---
>
> Key: CASSANDRA-12846
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12846
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Nanda Kishore Tokala
>
> We have only 300 MB of data on each node, but repair is taking nearly 3 
> hours. Can you please suggest what needs to be checked and improved? 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (CASSANDRA-12830) Stream failed during nodetool REBUILD in a multi-dc AWS env C* version 3.5

2016-11-09 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta reopened CASSANDRA-12830:
-

> Stream failed during nodetool REBUILD in a multi-dc AWS env C* version 3.5
> --
>
> Key: CASSANDRA-12830
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12830
> Project: Cassandra
>  Issue Type: Bug
> Environment: CentOS release 6.7
> Linux 2.6.32-573.1.1.el6.x86_64
>Reporter: Bing Wu
>
> We are running a multi-DC (i.e. region) cluster in AWS. One of the nodes in 
> "us-west" appeared to have corrupted SSTables, and after multiple attempts to 
> sstablescrub failed, I decided to clean up the data and commitlog contents, 
> restart it and launch a rebuild task:
> {code}
> sudo nodetool rebuild us-east
> {code}
> Note I tried to rebuild from a different DC/AWS region.
> However, about 2/3 of the way through, the process failed and the error from 
> the nodetool command's stderr output is
> {noformat}
> error: Error while rebuilding node: Stream failed
> -- StackTrace --
> java.lang.RuntimeException: Error while rebuilding node: Stream failed
>   at 
> org.apache.cassandra.service.StorageService.rebuild(StorageService.java:1172)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71)
>   at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:275)
>   at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
>   at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
>   at 
> com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
>   at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
>   at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
>   at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1468)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:76)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1309)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1401)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:829)
>   at sun.reflect.GeneratedMethodAccessor39.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:324)
>   at sun.rmi.transport.Transport$1.run(Transport.java:200)
>   at sun.rmi.transport.Transport$1.run(Transport.java:197)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at sun.rmi.transport.Transport.serviceCall(Transport.java:196)
>   at 
> sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:568)
>   at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:826)
>   at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.lambda$run$0(TCPTransport.java:683)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:682)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> In /var/log/cassandra/system.log: {noformat}
> INFO  [StreamReceiveTask:4] 2016-10-23 08:05:08,843 
> StreamResultFuture.java:185 - [Stream #5f22eed0-98bb-11e6-8bac-8d90ab5dafcf] 
> Session with /54.82.131.4 is complete
> ERROR [STREAM-OUT-/54.82.131.4] 2016-10-23 08:05:08,844 
> StreamSession.java:519 - [Stream #5f22eed0-98bb-11e6-8bac-8d90ab5dafcf] 
> Streaming error occurred
> java.net.SocketException: Broken pipe
> at java.n

[jira] [Resolved] (CASSANDRA-12830) Stream failed during nodetool REBUILD in a multi-dc AWS env C* version 3.5

2016-11-09 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta resolved CASSANDRA-12830.
-
Resolution: Not A Problem

> Stream failed during nodetool REBUILD in a multi-dc AWS env C* version 3.5
> --
>
> Key: CASSANDRA-12830
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12830
> Project: Cassandra
>  Issue Type: Bug
> Environment: CentOS release 6.7
> Linux 2.6.32-573.1.1.el6.x86_64
>Reporter: Bing Wu
>
> We are running a multi-DC (i.e. region) cluster in AWS. One of the nodes in 
> "us-west" appeared to have corrupted SSTables, and after multiple attempts to 
> sstablescrub failed, I decided to clean up the data and commitlog contents, 
> restart it and launch a rebuild task:
> {code}
> sudo nodetool rebuild us-east
> {code}
> Note I tried to rebuild from a different DC/AWS region.
> However, about 2/3 of the way through, the process failed and the error from 
> the nodetool command's stderr output is
> {noformat}
> error: Error while rebuilding node: Stream failed
> -- StackTrace --
> java.lang.RuntimeException: Error while rebuilding node: Stream failed
>   at 
> org.apache.cassandra.service.StorageService.rebuild(StorageService.java:1172)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71)
>   at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:275)
>   at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
>   at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
>   at 
> com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
>   at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
>   at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
>   at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1468)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:76)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1309)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1401)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:829)
>   at sun.reflect.GeneratedMethodAccessor39.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:324)
>   at sun.rmi.transport.Transport$1.run(Transport.java:200)
>   at sun.rmi.transport.Transport$1.run(Transport.java:197)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at sun.rmi.transport.Transport.serviceCall(Transport.java:196)
>   at 
> sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:568)
>   at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:826)
>   at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.lambda$run$0(TCPTransport.java:683)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:682)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> In /var/log/cassandra/system.log: {noformat}
> INFO  [StreamReceiveTask:4] 2016-10-23 08:05:08,843 
> StreamResultFuture.java:185 - [Stream #5f22eed0-98bb-11e6-8bac-8d90ab5dafcf] 
> Session with /54.82.131.4 is complete
> ERROR [STREAM-OUT-/54.82.131.4] 2016-10-23 08:05:08,844 
> StreamSession.java:519 - [Stream #5f22eed0-98bb-11e6-8bac-8d90ab5dafcf] 
> Streaming error occurred
> java.net.SocketException: B

[jira] [Resolved] (CASSANDRA-12830) Stream failed during nodetool REBUILD in a multi-dc AWS env C* version 3.5

2016-11-09 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta resolved CASSANDRA-12830.
-
Resolution: Fixed

I'm closing this since this is not a Cassandra bug, but instead idle streaming 
connections being shut down by external routers before the rebuild stream 
session is finished. In order to fix this you can:
1) Tune your tcp_keepalive settings: 
http://docs.datastax.com/en/cassandra/2.0/cassandra/troubleshooting/trblshootIdleFirewall.html
2) Resume the rebuild operation (CASSANDRA-10810) by just triggering the 
rebuild command again, and it will resume from where it left off.
3) Upgrade to Cassandra >= 3.10, where application-level keep-alives are 
supported (CASSANDRA-11841).

> Stream failed during nodetool REBUILD in a multi-dc AWS env C* version 3.5
> --
>
> Key: CASSANDRA-12830
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12830
> Project: Cassandra
>  Issue Type: Bug
> Environment: CentOS release 6.7
> Linux 2.6.32-573.1.1.el6.x86_64
>Reporter: Bing Wu
>
> We are running a multi-DC (i.e. region) cluster in AWS. One of the nodes in 
> "us-west" appeared to have corrupted SSTables, and after multiple attempts to 
> sstablescrub failed, I decided to clean up the data and commitlog contents, 
> restart it and launch a rebuild task:
> {code}
> sudo nodetool rebuild us-east
> {code}
> Note I tried to rebuild from a different DC/AWS region.
> However, about 2/3 of the way through, the process failed and the error from 
> the nodetool command's stderr output is
> {noformat}
> error: Error while rebuilding node: Stream failed
> -- StackTrace --
> java.lang.RuntimeException: Error while rebuilding node: Stream failed
>   at 
> org.apache.cassandra.service.StorageService.rebuild(StorageService.java:1172)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71)
>   at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:275)
>   at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
>   at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
>   at 
> com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
>   at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
>   at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
>   at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1468)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:76)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1309)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1401)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:829)
>   at sun.reflect.GeneratedMethodAccessor39.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:324)
>   at sun.rmi.transport.Transport$1.run(Transport.java:200)
>   at sun.rmi.transport.Transport$1.run(Transport.java:197)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at sun.rmi.transport.Transport.serviceCall(Transport.java:196)
>   at 
> sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:568)
>   at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:826)
>   at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.lambda$run$0(TCPTransport.java:683)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:682)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   

[jira] [Commented] (CASSANDRA-12866) dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_x.bug_5732_test

2016-11-09 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15650816#comment-15650816
 ] 

Marcus Eriksson commented on CASSANDRA-12866:
-

Hmm, it looks to me like the errors are in 2.1 before the upgrade, so the question 
from [~Stefania] at the end of CASSANDRA-12457 is still valid - is there a way 
to ignore these messages only on 2.1 nodes?

> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_x.bug_5732_test
> --
>
> Key: CASSANDRA-12866
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12866
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Marcus Eriksson
>  Labels: dtest, test-failure
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.X_dtest_upgrade/17/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_x/bug_5732_test
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 358, in run
> self.tearDown()
>   File "/home/automaton/cassandra-dtest/upgrade_tests/upgrade_base.py", line 
> 214, in tearDown
> super(UpgradeTester, self).tearDown()
>   File "/home/automaton/cassandra-dtest/dtest.py", line 581, in tearDown
> raise AssertionError('Unexpected error in log, see stdout')
> {code}{code}
> Standard Output
> http://git-wip-us.apache.org/repos/asf/cassandra.git 
> git:0a1f1c81e641039ca9fd573d5217b6b6f2ad8fb8
> Unexpected error in node1 log, error: 
> ERROR [Reference-Reaper:1] 2016-10-30 15:47:36,749 Ref.java:199 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@4f5697fa) to class 
> org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Cleanup@1100050528:/tmp/dtest-A15hEO/test/node1/data2/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-1-Data.db
>  was not released before the reference was garbage collected
> Unexpected error in node1 log, error: 
> ERROR [Reference-Reaper:1] 2016-10-30 15:47:36,750 Ref.java:199 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@45aefc8a) to class 
> org.apache.cassandra.utils.concurrent.WrappedSharedCloseable$1@11303515:[[OffHeapBitSet]]
>  was not released before the reference was garbage collected
> Unexpected error in node1 log, error: 
> ERROR [Reference-Reaper:1] 2016-10-30 15:47:36,750 Ref.java:199 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@7b3ed4f3) to class 
> org.apache.cassandra.io.util.MmappedSegmentedFile$Cleanup@837204356:/tmp/dtest-A15hEO/test/node1/data2/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-1-Index.db
>  was not released before the reference was garbage collected
> Unexpected error in node1 log, error: 
> ERROR [Reference-Reaper:1] 2016-10-30 15:47:36,752 Ref.java:199 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@39e499e) to class 
> org.apache.cassandra.io.sstable.SSTableReader$DescriptorTypeTidy@1619232020:/tmp/dtest-A15hEO/test/node1/data2/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-1
>  was not released before the reference was garbage collected
> Unexpected error in node1 log, error: 
> ERROR [Reference-Reaper:1] 2016-10-30 15:47:36,752 Ref.java:199 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@6d974cbb) to class 
> org.apache.cassandra.utils.concurrent.WrappedSharedCloseable$1@1765405204:[Memory@[0..4),
>  Memory@[0..e)] was not released before the reference was garbage collected
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11935) Add support for arithmetic operators

2016-11-09 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15650717#comment-15650717
 ] 

Sylvain Lebresne commented on CASSANDRA-11935:
--

bq.  {{c = 1 + 1}} will nearly always require a cast. Consequently, I do not 
think that we can base our decision on this example.

As of this patch, yes, but that's more a bug than a feature imo. We have the 
type of {{c}} in this context, so there is no excuse for requiring a cast, and 
that's exactly why I created CASSANDRA-11946 which would solve that.

bq. In fact most relational databases, including PostgreSQL, will 
convert literals like 1 to integers.

I'll admit I don't know how other DB type systems work, but I strongly believe 
our type system should have some form of coherence. Let's say we're ok with a 
system where literals are always at least {{int}} (in practice, I think it 
would be a bit of a pain to work with {{tinyint}} and {{smallint}}, but let's 
assume we're fine with that for a minute): in that case {{c = 2}} shouldn't 
type-check when {{c}} is a {{tinyint}}, yet it does (and we cannot change that 
without breaking backward compatibility). So basically, I just think any system 
where {{c = 2}} works but {{c = 1 + 1}} doesn't is kind of obviously broken, and 
I prefer not-broken systems :).

bq. On the other hand, I fear that most users will be surprised by the 
result of {{100 + 50}} if we narrow the type of {{100}} and {{50}} to tinyint.

I totally agree in the sense that it's not ok if the value overflows when you 
do {{i = 100 + 50}} where {{i}} is an {{int}}: you should get {{150}} in that 
case, and we can't have a system that doesn't do that; it's too surprising. That 
said, I "think" CASSANDRA-11946 also helps here, assuming we restrict function 
overloads by return type first. In that case, we'd create overloads with the same 
argument types but different return types (so {{tinyint add(tinyint, tinyint)}}, 
{{int add(tinyint, tinyint)}}, {{bigint add(tinyint, tinyint)}}, etc...) and 
{{i = 100 + 50}} would use the proper one. If you do {{SELECT 100 + 50}}, we'd 
use our "preferred type" system to pick the most precise function, and the result 
would be a {{tinyint}}, but that's pretty consistent really.

Note that unless I've missed some subtlety, CASSANDRA-11946 is pretty simple and 
I'm happy to include it here if it makes sense.

Outside of this problem, the rest of the fixes lgtm, thanks.
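
To make the desired behaviour concrete, here is a hypothetical cqlsh session 
(keyspace and table are made up; it assumes this operator patch plus the 
literal-type inference of CASSANDRA-11946, so it is not runnable on current 
releases):

{code}
cqlsh <<'CQL'
CREATE KEYSPACE IF NOT EXISTS ks WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};
CREATE TABLE IF NOT EXISTS ks.nums (k int PRIMARY KEY, c tinyint, i int);

-- c = 2 already type-checks even though 2 is a bare literal...
UPDATE ks.nums SET c = 2 WHERE k = 0;

-- ...so c = 1 + 1 should type-check the same way once the receiver's type
-- is used to infer the literal types, instead of requiring a cast.
UPDATE ks.nums SET c = 1 + 1 WHERE k = 0;

-- i = 100 + 50 must yield 150: with overloads restricted by return type first,
-- the int-returning add() is chosen, so the sum is not narrowed to tinyint.
UPDATE ks.nums SET i = 100 + 50 WHERE k = 0;
CQL
{code}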


> Add support for arithmetic operators
> 
>
> Key: CASSANDRA-11935
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11935
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: CQL
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
> Fix For: 3.x
>
>
> The goal of this ticket is to add support for arithmetic operators:
> * {{-}}: Change the sign of the argument
> * {{+}}: Addition operator
> * {{-}}: Minus operator
> * {{*}}: Multiplication operator
> * {{/}}: Division operator
> * {{%}}: Modulo operator
> In this ticket we should focus on adding operators only for numeric types to keep 
> the scope as small as possible. Date and string operations will be addressed 
> in follow-up tickets.
> The operation precedence should be:
> # {{*}}, {{/}}, {{%}}
> # {{+}}, {{-}}
> Some implicit data conversion should be performed when operations are 
> performed on different types (e.g. double + int).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12887) /etc/security/limits.d/cassandra.conf nofile overridden by initscript

2016-11-09 Thread Rolf Larsen (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rolf Larsen updated CASSANDRA-12887:

Description: 
After trying to adjust the max open files limit for Cassandra in 
/etc/security/limits.d/cassandra.conf and not understanding why my changes 
didn't work, I found this in the initscript:

{code}
FD_LIMIT=10
ulimit -n "$FD_LIMIT"
{code}

The "Recommended production settings" documentation says that one 
should modify /etc/security/limits.d/cassandra.conf, but these settings are 
overridden by the initscript.
https://docs.datastax.com/en/landing_page/doc/landing_page/recommendedSettingsLinux.html

  was:
After trying to adjust the max open files limit for Cassandra in 
/etc/security/limits.d/cassandra.conf and not understanding why my changes 
didn't work, I found this in the initscript:

{code}
FD_LIMIT=10
ulimit -n "$FD_LIMIT"
{code}

The "Recommended production settings" documentation says that one 
should modify /etc/security/limits.d/cassandra.conf, but these settings are 
overridden by the initscript.


> /etc/security/limits.d/cassandra.conf nofile overridden by initscript
> -
>
> Key: CASSANDRA-12887
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12887
> Project: Cassandra
>  Issue Type: Bug
>  Components: Packaging
> Environment: Ubuntu 14.04
>Reporter: Rolf Larsen
> Fix For: 3.0.x
>
>
> After trying to adjust the max open files limit for Cassandra in 
> /etc/security/limits.d/cassandra.conf and not understanding why my changes 
> didn't work, I found this in the initscript:
> {code}
> FD_LIMIT=10
> ulimit -n "$FD_LIMIT"
> {code}
> The "Recommended production settings" documentation says that one 
> should modify /etc/security/limits.d/cassandra.conf, but these settings are 
> overridden by the initscript.
> https://docs.datastax.com/en/landing_page/doc/landing_page/recommendedSettingsLinux.html
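
For reference, a quick way to check which limit the running daemon actually got, 
and where the initscript pins it (the pid-file path is an assumption based on the 
Debian/Ubuntu package layout); because the initscript calls {{ulimit -n}} 
explicitly, the value has to be raised there rather than in limits.d, which only 
applies to PAM login sessions:

{code}
# Limit the running Cassandra process actually received
# (pid-file path assumed from the Debian/Ubuntu package layout)
grep 'open files' /proc/"$(cat /var/run/cassandra/cassandra.pid)"/limits

# The explicit ulimit call in the initscript wins over limits.d,
# so this is where the value needs to change:
grep -n 'FD_LIMIT\|ulimit -n' /etc/init.d/cassandra
{code}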



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12887) /etc/security/limits.d/cassandra.conf nofile overridden by initscript

2016-11-09 Thread Rolf Larsen (JIRA)
Rolf Larsen created CASSANDRA-12887:
---

 Summary: /etc/security/limits.d/cassandra.conf nofile overridden 
by initscript
 Key: CASSANDRA-12887
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12887
 Project: Cassandra
  Issue Type: Bug
  Components: Packaging
 Environment: Ubuntu 14.04
Reporter: Rolf Larsen
 Fix For: 3.0.x


After trying to adjust the max open files limit for Cassandra in 
/etc/security/limits.d/cassandra.conf and not understanding why my changes 
didn't work, I found this in the initscript:

{code}
FD_LIMIT=10
ulimit -n "$FD_LIMIT"
{code}

The "Recommended production settings" documentation says that one 
should modify /etc/security/limits.d/cassandra.conf, but these settings are 
overridden by the initscript.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12859) Column-level permissions

2016-11-09 Thread Boris Melamed (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Boris Melamed updated CASSANDRA-12859:
--
Description: 
h4. Here is a draft of: 
Cassandra Proposal - Column-level permissions.docx (attached)

h4. Quoting the 'Overview' section:

The purpose of this proposal is to add column-level (field-level) permissions 
to Cassandra. It is my intent to soon start implementing this feature in a 
fork, and to submit a pull request once it’s ready.
h4. Motivation
Cassandra already supports permissions on keyspace and table (column family) 
level. Sources:
* http://www.datastax.com/dev/blog/role-based-access-control-in-cassandra
* https://cassandra.apache.org/doc/latest/cql/security.html#data-control

At IBM, we have use cases in the area of big data analytics where column-level 
access permissions are also a requirement. All major industry RDBMS products 
support this level of permission control, and regulators expect it 
from all data-based systems.
h4. Main day-one requirements
# Extend CQL (Cassandra Query Language) to be able to optionally specify a list 
of individual columns, in the {{GRANT}} statement. The relevant permission 
types are: {{MODIFY}} (for {{UPDATE}} and {{INSERT}}) and {{SELECT}}.
# Persist the optional information in the appropriate system table 
‘system_auth.role_permissions’.
# Enforce the column access restrictions during execution. Details:
#* Should fit with the existing permission propagation down a role chain.
#* Proposed message format when a user’s roles give access to the queried table 
but not to all of the selected, inserted, or updated columns:
  "User %s has no %s permission on column %s of table %s"
#* Error will report only the first checked column. 
Nice to have: list all inaccessible columns.
#* Error code is the same as for table access denial: 2100.

h4. Additional day-one requirements
# Reflect the column-level permissions in statements of type 
{{LIST ALL PERMISSIONS OF someuser;}}
# When columns are dropped or renamed, trigger purging or adapting of their 
permissions
# Performance should not degrade in any significant way.
# Backwards compatibility
#* Permission enforcement for DBs created before the upgrade should continue to 
work with the same behavior after upgrading to a version that allows 
column-level permissions.
#* Previous CQL syntax will remain valid, and have the same effect as before.

h4. Documentation
* 
https://cassandra.apache.org/doc/latest/cql/security.html#grammar-token-permission
* Feedback request: any others?
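
A hypothetical sketch of the CQL shape this proposal describes (the column-list 
syntax, keyspace, table, and role names are illustrative only and not part of any 
released Cassandra grammar):

{code}
# Proposed syntax only -- not valid on current Cassandra releases.
cqlsh <<'CQL'
-- Let analyst_role read just two columns of ks.patients
GRANT SELECT (name, visit_date) ON ks.patients TO analyst_role;

-- Let writer_role INSERT/UPDATE only the notes column
GRANT MODIFY (notes) ON ks.patients TO writer_role;

-- Existing table-level grants keep their current meaning
GRANT SELECT ON ks.patients TO auditor_role;
CQL
{code}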


  was:
h4. Here is a draft of: 
Cassandra Proposal - Column-level permissions.docx (attached)

h4. Quoting the 'Overview' section:

The purpose of this proposal is to add column-level (field-level) permissions 
to Cassandra. It is my intent to soon start implementing this feature in a 
fork, and to submit a pull request once it’s ready.
h4. Motivation
Cassandra already supports permissions on keyspace and table (column family) 
level. Sources:
* http://www.datastax.com/dev/blog/role-based-access-control-in-cassandra
* https://cassandra.apache.org/doc/latest/cql/security.html#data-control

At IBM, we have use cases in the area of big data analytics where column-level 
access permissions are also a requirement. All major industry RDBMS products 
support this level of permission control, and regulators expect it 
from all data-based systems.
h4. Main day-one requirements
# Extend CQL (Cassandra Query Language) to be able to optionally specify a list 
of individual columns, in the {{GRANT}} statement. The relevant permission 
types are: {{MODIFY}} (for {{UPDATE}} and {{INSERT}}) and {{SELECT}}.
# Persist the optional information in the appropriate system table 
‘system_auth.role_permissions’.
# Enforce the column access restrictions during execution. Details:
#* Should fit with the existing permission propagation down a role chain.
#* Proposed message format when a user’s roles give access to the queried table 
but not to all of the selected, inserted, or updated columns:
  "User %s has no %s permission on column %s of table %s"
#* Error will report only the first checked column. 
Nice to have: list all inaccessible columns.
#* Error code is the same as for table access denial: 2100.

h4. Additional day-one requirements
# Reflect the column-level permissions in statements of type 
{{LIST ALL PERMISSIONS OF someuser;}}
# Performance should not degrade in any significant way.
# Backwards compatibility
#* Permission enforcement for DBs created before the upgrade should continue to 
work with the same behavior after upgrading to a version that allows 
column-level permissions.
#* Previous CQL syntax will remain valid, and have the same effect as before.

h4. Documentation
* 
https://cassandra.apache.org/doc/latest/cql/security.html#grammar-token-permission
* Feedback request: any others?



> Column-level permission

[jira] [Comment Edited] (CASSANDRA-12847) cqlsh DESCRIBE output doesn't properly quote index names

2016-11-09 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15647014#comment-15647014
 ] 

Alex Petrov edited comment on CASSANDRA-12847 at 11/9/16 10:33 AM:
---

I've triggered a CI for the dtest corrections:

|[2.1|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-2.1-dtest/]|[2.2|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-2.2-dtest/]|[trunk|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-_trunk-dtest/]|

UPD: I've triggered another build, as the previous one was using the Python driver 
without the committed changes ({{cassandra-test}} tag _and_ without [~beobal]'s 
changes; it looks like the {{DTEST_REPO}} and {{DTEST_BRANCH}} variables aren't 
getting set on non-autojob builds).


was (Author: ifesdjeen):
I've triggered a CI for the dtest corrections:

|[2.1|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-2.1-dtest/]|[2.2|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-2.2-dtest/]|[trunk|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-_trunk-dtest/]|

UPD: I've triggered another build, as the previous one was using the Python driver 
without the committed changes ({{cassandra-test}} tag _and_ without @beobal's 
changes; it looks like the {{DTEST_REPO}} and {{DTEST_BRANCH}} variables aren't 
getting set on non-autojob builds).

> cqlsh DESCRIBE output doesn't properly quote index names
> 
>
> Key: CASSANDRA-12847
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12847
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
>  Labels: cqlsh
> Fix For: 2.2.x, 3.0.x, 3.x
>
>
> CASSANDRA-8365 fixed the CQL grammar so that quoting index names preserves 
> case. The output of DESCRIBE in cqlsh wasn't updated however so this doesn't 
> round-trip properly. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-12847) cqlsh DESCRIBE output doesn't properly quote index names

2016-11-09 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15647014#comment-15647014
 ] 

Alex Petrov edited comment on CASSANDRA-12847 at 11/9/16 10:32 AM:
---

I've triggered a CI for the dtest corrections:

|[2.1|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-2.1-dtest/]|[2.2|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-2.2-dtest/]|[trunk|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-_trunk-dtest/]|

UPD: I've triggered another build, as the previous one was using the Python driver 
without the committed changes ({{cassandra-test}} tag _and_ without @beobal's 
changes; it looks like the {{DTEST_REPO}} and {{DTEST_BRANCH}} variables aren't 
getting set on non-autojob builds).


was (Author: ifesdjeen):
I've triggered a CI for the dtest corrections:

|[2.1|https://cassci.datastax.com/job/cassandra-2.1_dtest/519/]|[2.2|https://cassci.datastax.com/job/cassandra-2.2_dtest/707/]|[trunk|https://cassci.datastax.com/job/trunk_dtest/1417/]|

> cqlsh DESCRIBE output doesn't properly quote index names
> 
>
> Key: CASSANDRA-12847
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12847
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
>  Labels: cqlsh
> Fix For: 2.2.x, 3.0.x, 3.x
>
>
> CASSANDRA-8365 fixed the CQL grammar so that quoting index names preserves 
> case. The output of DESCRIBE in cqlsh wasn't updated however so this doesn't 
> round-trip properly. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10979) LCS doesn't do L0 STC on new tables while an L0->L1 compaction is in progress

2016-11-09 Thread Kurt Greaves (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Greaves updated CASSANDRA-10979:
-
Fix Version/s: 2.1.14

> LCS doesn't do L0 STC on new tables while an L0->L1 compaction is in progress
> -
>
> Key: CASSANDRA-10979
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10979
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: 2.1.11 / 4.8.3 DSE.
>Reporter: Jeff Ferland
>Assignee: Carl Yeksigian
>  Labels: compaction, lcs, leveled
> Fix For: 2.1.14, 2.2.5, 3.0.3, 3.3
>
> Attachments: 10979-2.1.txt
>
>
> Reading code from 
> https://github.com/apache/cassandra/blob/cassandra-2.1/src/java/org/apache/cassandra/db/compaction/LeveledManifest.java
>  and comparing with behavior shown in 
> https://gist.github.com/autocracy/c95aca6b00e42215daaf, the following happens:
> Scores for L1, L2, and L3 are all < 1 (the paste shows 20/10 and 200/100, due to 
> incremental repair).
> The relevant code here is
> if (Sets.intersection(l1overlapping, compacting).size() > 0)
> return Collections.emptyList();
> Since there will be overlap between what is compacting and L1 (in my case, 
> pushing over 1,000 tables into L1 from L0 STCS), I get a pile-up of 1,000 
> smaller tables in L0 while awaiting the transition from L0 to L1, which destroys 
> my performance.
> The requested outcome is to continue performing STCS on non-compacting L0 tables.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10979) LCS doesn't do L0 STC on new tables while an L0->L1 compaction is in progress

2016-11-09 Thread Kurt Greaves (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Greaves updated CASSANDRA-10979:
-
Fix Version/s: (was: 2.1.14)

> LCS doesn't do L0 STC on new tables while an L0->L1 compaction is in progress
> -
>
> Key: CASSANDRA-10979
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10979
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: 2.1.11 / 4.8.3 DSE.
>Reporter: Jeff Ferland
>Assignee: Carl Yeksigian
>  Labels: compaction, lcs, leveled
> Fix For: 2.2.5, 3.0.3, 3.3
>
> Attachments: 10979-2.1.txt
>
>
> Reading code from 
> https://github.com/apache/cassandra/blob/cassandra-2.1/src/java/org/apache/cassandra/db/compaction/LeveledManifest.java
>  and comparing with behavior shown in 
> https://gist.github.com/autocracy/c95aca6b00e42215daaf, the following happens:
> Scores for L1, L2, and L3 are all < 1 (the paste shows 20/10 and 200/100, due to 
> incremental repair).
> The relevant code here is
> if (Sets.intersection(l1overlapping, compacting).size() > 0)
> return Collections.emptyList();
> Since there will be overlap between what is compacting and L1 (in my case, 
> pushing over 1,000 tables into L1 from L0 STCS), I get a pile-up of 1,000 
> smaller tables in L0 while awaiting the transition from L0 to L1, which destroys 
> my performance.
> The requested outcome is to continue performing STCS on non-compacting L0 tables.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12841) testall failure in org.apache.cassandra.db.compaction.NeverPurgeTest.minorNeverPurgeTombstonesTest-compression

2016-11-09 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-12841:

Fix Version/s: 3.x
   3.0.x
   2.2.x
   Status: Patch Available  (was: Open)

The problem is that there is a small window, while we finish the compaction, where 
all sstables are opened early, so in this test we need to wait until the compaction 
is actually done to avoid starting to iterate over the early-opened file:

||branch||testall||dtest||
|[marcuse/12841-2.2|https://github.com/krummas/cassandra/tree/marcuse/12841-2.2]|[testall|http://cassci.datastax.com/view/Dev/view/krummas/job/krummas-marcuse-12841-2.2-testall]|[dtest|http://cassci.datastax.com/view/Dev/view/krummas/job/krummas-marcuse-12841-2.2-dtest]|
|[marcuse/12841-3.0|https://github.com/krummas/cassandra/tree/marcuse/12841-3.0]|[testall|http://cassci.datastax.com/view/Dev/view/krummas/job/krummas-marcuse-12841-3.0-testall]|[dtest|http://cassci.datastax.com/view/Dev/view/krummas/job/krummas-marcuse-12841-3.0-dtest]|
|[marcuse/12841-3.X|https://github.com/krummas/cassandra/tree/marcuse/12841-3.X]|[testall|http://cassci.datastax.com/view/Dev/view/krummas/job/krummas-marcuse-12841-3.X-testall]|[dtest|http://cassci.datastax.com/view/Dev/view/krummas/job/krummas-marcuse-12841-3.X-dtest]|
|[marcuse/12841-trunk|https://github.com/krummas/cassandra/tree/marcuse/12841-trunk]|[testall|http://cassci.datastax.com/view/Dev/view/krummas/job/krummas-marcuse-12841-trunk-testall]|[dtest|http://cassci.datastax.com/view/Dev/view/krummas/job/krummas-marcuse-12841-trunk-dtest]|
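
The fix itself is in the unit test, but the same idea at an operational level is to 
let in-flight compactions drain before inspecting sstables directly; a rough sketch 
(the exact {{nodetool compactionstats}} wording can vary between versions):

{code}
# Rough analogue of "wait until the compaction is actually done":
# poll nodetool until no compaction tasks remain pending.
while ! nodetool compactionstats | grep -q 'pending tasks: 0'; do
    sleep 1
done
{code}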

> testall failure in 
> org.apache.cassandra.db.compaction.NeverPurgeTest.minorNeverPurgeTombstonesTest-compression
> --
>
> Key: CASSANDRA-12841
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12841
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Marcus Eriksson
>  Labels: test-failure, testall
> Fix For: 2.2.x, 3.0.x, 3.x
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_testall/597/testReport/org.apache.cassandra.db.compaction/NeverPurgeTest/minorNeverPurgeTombstonesTest_compression/
> {code}
> Error Message
> Memory was freed by Thread[NonPeriodicTasks:1,5,main]
> {code}{code}
> Stacktrace
> junit.framework.AssertionFailedError: Memory was freed by 
> Thread[NonPeriodicTasks:1,5,main]
>   at 
> org.apache.cassandra.io.util.SafeMemory.checkBounds(SafeMemory.java:103)
>   at org.apache.cassandra.io.util.Memory.getLong(Memory.java:260)
>   at 
> org.apache.cassandra.io.compress.CompressionMetadata.chunkFor(CompressionMetadata.java:223)
>   at 
> org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBufferMmap(CompressedRandomAccessReader.java:168)
>   at 
> org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBuffer(CompressedRandomAccessReader.java:226)
>   at 
> org.apache.cassandra.io.util.RandomAccessReader.read(RandomAccessReader.java:303)
>   at 
> org.apache.cassandra.io.util.AbstractDataInput.readInt(AbstractDataInput.java:202)
>   at 
> org.apache.cassandra.io.util.AbstractDataInput.readLong(AbstractDataInput.java:264)
>   at 
> org.apache.cassandra.db.ColumnSerializer.deserializeColumnBody(ColumnSerializer.java:131)
>   at 
> org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:92)
>   at 
> org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:52)
>   at 
> org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:46)
>   at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>   at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
>   at 
> org.apache.cassandra.io.sstable.SSTableIdentityIterator.hasNext(SSTableIdentityIterator.java:169)
>   at 
> org.apache.cassandra.db.compaction.NeverPurgeTest.verifyContainsTombstones(NeverPurgeTest.java:114)
>   at 
> org.apache.cassandra.db.compaction.NeverPurgeTest.minorNeverPurgeTombstonesTest(NeverPurgeTest.java:85)
> {code}
> Related failure:
> http://cassci.datastax.com/job/cassandra-2.2_testall/598/testReport/org.apache.cassandra.db.compaction/NeverPurgeTest/minorNeverPurgeTombstonesTest/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12825) testall failure in org.apache.cassandra.db.compaction.CompactionsCQLTest.testTriggerMinorCompactionDTCS-compression

2016-11-09 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-12825:

Fix Version/s: 3.x
   3.0.x
   2.2.x
   Status: Patch Available  (was: Open)

The problem is that we can end up with sstables in different time windows; using the 
same timestamp for both inserts should fix it:

||branch||testall||dtest||
|[marcuse/12825-2.2|https://github.com/krummas/cassandra/tree/marcuse/12825-2.2]|[testall|http://cassci.datastax.com/view/Dev/view/krummas/job/krummas-marcuse-12825-2.2-testall]|[dtest|http://cassci.datastax.com/view/Dev/view/krummas/job/krummas-marcuse-12825-2.2-dtest]|
|[marcuse/12825-3.0|https://github.com/krummas/cassandra/tree/marcuse/12825-3.0]|[testall|http://cassci.datastax.com/view/Dev/view/krummas/job/krummas-marcuse-12825-3.0-testall]|[dtest|http://cassci.datastax.com/view/Dev/view/krummas/job/krummas-marcuse-12825-3.0-dtest]|
|[marcuse/12825-3.X|https://github.com/krummas/cassandra/tree/marcuse/12825-3.X]|[testall|http://cassci.datastax.com/view/Dev/view/krummas/job/krummas-marcuse-12825-3.X-testall]|[dtest|http://cassci.datastax.com/view/Dev/view/krummas/job/krummas-marcuse-12825-3.X-dtest]|
|[marcuse/12825-trunk|https://github.com/krummas/cassandra/tree/marcuse/12825-trunk]|[testall|http://cassci.datastax.com/view/Dev/view/krummas/job/krummas-marcuse-12825-trunk-testall]|[dtest|http://cassci.datastax.com/view/Dev/view/krummas/job/krummas-marcuse-12825-trunk-dtest]|
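
For reference, pinning both writes to the same write timestamp is plain CQL; in a 
sketch like the one below (keyspace, table, and timestamp value are made up for the 
example), DTCS derives its windows from the data's timestamps, so identical 
timestamps keep the flushed sstables in the same window:

{code}
# Illustrative only: give both inserts an identical client-supplied timestamp.
cqlsh <<'CQL'
CREATE KEYSPACE IF NOT EXISTS ks WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};
CREATE TABLE IF NOT EXISTS ks.tbl (id int PRIMARY KEY, val text);
INSERT INTO ks.tbl (id, val) VALUES (1, 'a') USING TIMESTAMP 1478649600000000;
INSERT INTO ks.tbl (id, val) VALUES (2, 'b') USING TIMESTAMP 1478649600000000;
CQL
{code}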


> testall failure in 
> org.apache.cassandra.db.compaction.CompactionsCQLTest.testTriggerMinorCompactionDTCS-compression
> ---
>
> Key: CASSANDRA-12825
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12825
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Marcus Eriksson
>  Labels: test-failure
> Fix For: 2.2.x, 3.0.x, 3.x
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_testall/1243/testReport/org.apache.cassandra.db.compaction/CompactionsCQLTest/testTriggerMinorCompactionDTCS_compression/
> {code}
> Error Message
> No minor compaction triggered in 5000ms
> {code}{code}
> Stacktrace
> junit.framework.AssertionFailedError: No minor compaction triggered in 5000ms
>   at 
> org.apache.cassandra.db.compaction.CompactionsCQLTest.waitForMinor(CompactionsCQLTest.java:247)
>   at 
> org.apache.cassandra.db.compaction.CompactionsCQLTest.testTriggerMinorCompactionDTCS(CompactionsCQLTest.java:72)
> {code}
> Related failure:
> http://cassci.datastax.com/job/cassandra-3.X_testall/47/testReport/org.apache.cassandra.db.compaction/CompactionsCQLTest/testTriggerMinorCompactionDTCS/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)