[jira] [Commented] (CASSANDRA-7486) Migrate to G1GC by default

2015-09-21 Thread Ryan McGuire (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14901846#comment-14901846
 ] 

Ryan McGuire commented on CASSANDRA-7486:
-

[~benedict] I modified the schedule GUI to allow you to change the version of 
stress per operation. Just change the default 'apache/trunk' to 
'enigmacurry/stress-report-interval', where I took your branch and applied a 4G 
stress heap. If you want to tweak that, you can put your own branch name in 
instead. The GC logs have been collected for a while now; they're wrapped up in 
the same tarball as the other logs that you can download.

See this example for how to specify your test: 
http://cstar.datastax.com/schedule?clone=0c2efd50-60d5-11e5-b6a8-42010af0688f

> Migrate to G1GC by default
> --
>
> Key: CASSANDRA-7486
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7486
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Config
>Reporter: Jonathan Ellis
> Fix For: 3.0 alpha 1
>
>
> See 
> http://www.slideshare.net/MonicaBeckwith/garbage-first-garbage-collector-g1-7486gc-migration-to-expectations-and-advanced-tuning
>  and https://twitter.com/rbranson/status/482113561431265281
> May want to default 2.1 to G1.
> 2.1 is a different animal from 2.0 after moving most of memtables off heap.  
> Suspect this will help G1 even more than CMS.  (NB this is off by default but 
> needs to be part of the test.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9258) Range movement causes CPU & performance impact

2015-09-21 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14901702#comment-14901702
 ] 

Benedict commented on CASSANDRA-9258:
-

I've added you to the contributor list and assigned you the ticket (you should 
be able to assign yourself from now on).

> Range movement causes CPU & performance impact
> --
>
> Key: CASSANDRA-9258
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9258
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 2.1.4
>Reporter: Rick Branson
>Assignee: Dikang Gu
> Fix For: 2.1.x
>
>
> Observing big CPU & latency regressions when doing range movements on 
> clusters with many tens of thousands of vnodes. See CPU usage increase by 
> ~80% when a single node is being replaced.
> Top methods are:
> 1) Ljava/math/BigInteger;.compareTo in 
> Lorg/apache/cassandra/dht/ComparableObjectToken;.compareTo 
> 2) Lcom/google/common/collect/AbstractMapBasedMultimap;.wrapCollection in 
> Lcom/google/common/collect/AbstractMapBasedMultimap$AsMap$AsMapIterator;.next
> 3) Lorg/apache/cassandra/db/DecoratedKey;.compareTo in 
> Lorg/apache/cassandra/dht/Range;.contains
> Here's a sample stack from a thread dump:
> {code}
> "Thrift:50673" daemon prio=10 tid=0x7f2f20164800 nid=0x3a04af runnable 
> [0x7f2d878d]
>java.lang.Thread.State: RUNNABLE
>   at org.apache.cassandra.dht.Range.isWrapAround(Range.java:260)
>   at org.apache.cassandra.dht.Range.contains(Range.java:51)
>   at org.apache.cassandra.dht.Range.contains(Range.java:110)
>   at 
> org.apache.cassandra.locator.TokenMetadata.pendingEndpointsFor(TokenMetadata.java:916)
>   at 
> org.apache.cassandra.service.StorageProxy.performWrite(StorageProxy.java:775)
>   at 
> org.apache.cassandra.service.StorageProxy.mutate(StorageProxy.java:541)
>   at 
> org.apache.cassandra.service.StorageProxy.mutateWithTriggers(StorageProxy.java:616)
>   at 
> org.apache.cassandra.thrift.CassandraServer.doInsert(CassandraServer.java:1101)
>   at 
> org.apache.cassandra.thrift.CassandraServer.doInsert(CassandraServer.java:1083)
>   at 
> org.apache.cassandra.thrift.CassandraServer.batch_mutate(CassandraServer.java:976)
>   at 
> org.apache.cassandra.thrift.Cassandra$Processor$batch_mutate.getResult(Cassandra.java:3996)
>   at 
> org.apache.cassandra.thrift.Cassandra$Processor$batch_mutate.getResult(Cassandra.java:3980)
>   at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
>   at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
>   at 
> org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:205)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745){code}
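The hot path above is easy to model: pendingEndpointsFor scans every pending range for every mutation, doing a token comparison per range, so cost grows linearly with the number of vnode ranges. Below is a toy sketch of that scan next to a sorted-lookup alternative; the names and data layout are hypothetical, not Cassandra's actual structures, and the sorted version is only an illustration of why the linear scan is avoidable.

```python
from bisect import bisect_left

# Each pending range is modeled as ((left, right), endpoint); contains()
# for a non-wrapping range is left < token <= right, mirroring
# Range.contains in the stack trace above.
def pending_endpoints_linear(token, pending_ranges):
    # O(n) per mutation: the shape of the hot loop reported here.
    return [ep for (left, right), ep in pending_ranges
            if left < token <= right]

def pending_endpoints_sorted(token, sorted_ranges):
    # With non-overlapping ranges sorted by right bound, one binary
    # search replaces the scan: O(log n) per mutation.
    rights = [right for (_, right), _ in sorted_ranges]
    i = bisect_left(rights, token)
    if i < len(sorted_ranges):
        (left, right), ep = sorted_ranges[i]
        if left < token <= right:
            return [ep]
    return []
```

With tens of thousands of vnodes times the replication factor, the difference between the two shapes per write is roughly the ~80% CPU overhead described above versus noise.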



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9258) Range movement causes CPU & performance impact

2015-09-21 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-9258:

Assignee: Dikang Gu  (was: Benedict)

> Range movement causes CPU & performance impact
> --
>
> Key: CASSANDRA-9258
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9258
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 2.1.4
>Reporter: Rick Branson
>Assignee: Dikang Gu
> Fix For: 2.1.x
>
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9258) Range movement causes CPU & performance impact

2015-09-21 Thread Dikang Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14901655#comment-14901655
 ] 

Dikang Gu commented on CASSANDRA-9258:
--

[~benedict], I'm going to work on this since it has caused problems for us. 
Could you please assign it to me? (It seems I cannot assign it to myself.)

> Range movement causes CPU & performance impact
> --
>
> Key: CASSANDRA-9258
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9258
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 2.1.4
>Reporter: Rick Branson
>Assignee: Benedict
> Fix For: 2.1.x
>
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10246) Named values don't work with batches

2015-09-21 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-10246:

Reproduced In: 2.2.1, 2.1.9  (was: 2.1.9, 2.2.1)
Fix Version/s: 3.0.x
   2.2.x
   2.1.x

> Named values don't work with batches
> 
>
> Key: CASSANDRA-10246
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10246
> Project: Cassandra
>  Issue Type: Bug
>  Components: API
>Reporter: Michael Penick
>  Labels: client-impacting
> Fix For: 2.1.x, 2.2.x, 3.0.x
>
>
> This is broken at the protocol-level and in the implementation.
> At the protocol level, the {{}} component of the batch comes after the 
> queries. That means the protocol parser would need to read ahead (and 
> backtrack) to determine the values encoding and correctly read the values 
> from the query entries. Also, a batch-level setting for named values forces 
> all queries to use the same encoding. Should batches force a single, 
> homogeneous query value encoding? (This is confusing.)
> In the implementation, values are indiscriminately read using 
> {{CBUtil.readValueList()}}, and the batch flags are never checked (for 
> {{(Flag.NAMES_FOR_VALUES}}) to see if {{CBUtil.readNameAndValueList()}} 
> should be called instead: 
> https://github.com/apache/cassandra/blob/cassandra-2.1/src/java/org/apache/cassandra/transport/messages/BatchMessage.java#L64
> Proposed solution: CASSANDRA-10247
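The missing dispatch the description points at can be sketched in a few lines. This is a hypothetical model, not the actual BatchMessage code: the function names echo CBUtil.readValueList / readNameAndValueList, and the flag value is taken from the v3 native-protocol QueryOptions flags.

```python
NAMES_FOR_VALUES = 0x40  # QueryOptions flag in the v3 native protocol

def read_value_list(items):
    # Positional encoding: a flat sequence of values.
    return list(items)

def read_name_and_value_list(items):
    # Named encoding: (name, value) pairs.
    return dict(items)

def read_batch_query_values(flags, items):
    # The reported bug: BatchMessage unconditionally calls the
    # positional reader. The flag check below is what the description
    # says is missing.
    if flags & NAMES_FOR_VALUES:
        return read_name_and_value_list(items)
    return read_value_list(items)
```

Even with this check, a single batch-level flag forces every statement in the batch into the same encoding, which is the design problem the description raises and why a separate solution is proposed in CASSANDRA-10247.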



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10074) cqlsh HELP SELECT_EXPR gives outdated incorrect information

2015-09-21 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-10074:

Labels: cqlsh lhf  (was: )

> cqlsh HELP SELECT_EXPR gives outdated incorrect information
> ---
>
> Key: CASSANDRA-10074
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10074
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: 3.0.0-alpha1-SNAPSHOT
>Reporter: Jim Meyer
>Priority: Trivial
>  Labels: cqlsh, lhf
> Fix For: 3.x
>
>
> Within cqlsh, the HELP SELECT_EXPR states that COUNT is the only function 
> supported by CQL.
> It is missing a description of the SUM, AVG, MIN, and MAX built-in functions.
> It should probably also mention that user-defined functions can be invoked 
> via SELECT.
> The outdated text is in pylib/cqlshlib/helptopics.py under def 
> help_select_expr



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10381) NullPointerException in cqlsh paging through CF with static columns

2015-09-21 Thread Michael Keeney (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14901574#comment-14901574
 ] 

Michael Keeney commented on CASSANDRA-10381:


The latter, when running 2.0.14 cqlsh against a 2.1.8 node.

> NullPointerException in cqlsh paging through CF with static columns
> ---
>
> Key: CASSANDRA-10381
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10381
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Michael Keeney
>Assignee: Benjamin Lerer
>  Labels: cqlsh, nullpointerexception, range
> Fix For: 2.1.x
>
>
> When running select count( * ) from cqlsh with limit, the following NPE 
> occurs:
> select count( * ) from tbl1 limit 5 ; 
> {code}
> ERROR [SharedPool-Worker-4] 2015-09-16 14:49:43,480 QueryMessage.java:132 - 
> Unexpected error during query
> java.lang.NullPointerException: null
> at 
> org.apache.cassandra.service.pager.RangeSliceQueryPager.containsPreviousLast(RangeSliceQueryPager.java:99)
>  ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
> at 
> org.apache.cassandra.service.pager.AbstractQueryPager.fetchPage(AbstractQueryPager.java:119)
>  ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
> at 
> org.apache.cassandra.service.pager.RangeSliceQueryPager.fetchPage(RangeSliceQueryPager.java:37)
>  ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.pageCountQuery(SelectStatement.java:286)
>  ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:230)
>  ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:67)
>  ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
> at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:238)
>  ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
> at 
> com.datastax.bdp.cassandra.cql3.DseQueryHandler$StatementExecution.execute(DseQueryHandler.java:291)
>  ~[dse-4.7.2.jar:4.7.2]
> at 
> com.datastax.bdp.cassandra.cql3.DseQueryHandler$Operation.executeWithTiming(DseQueryHandler.java:223)
>  ~[dse-4.7.2.jar:4.7.2]
> at 
> com.datastax.bdp.cassandra.cql3.DseQueryHandler$Operation.executeWithAuditLogging(DseQueryHandler.java:259)
>  ~[dse-4.7.2.jar:4.7.2]
> at 
> com.datastax.bdp.cassandra.cql3.DseQueryHandler.process(DseQueryHandler.java:94)
>  ~[dse-4.7.2.jar:4.7.2]
> at 
> org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:119)
>  ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:439)
>  [cassandra-all-2.1.8.621.jar:2.1.8.621]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:335)
>  [cassandra-all-2.1.8.621.jar:2.1.8.621]
> at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> [na:1.7.0_75]
> at 
> org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
>  [cassandra-all-2.1.8.621.jar:2.1.8.621]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [cassandra-all-2.1.8.621.jar:2.1.8.621]
> at java.lang.Thread.run(Thread.java:745) [na:1.7.0_75]
> {code}
> Table definition looks something like:
> {code}
> CREATE TABLE tbl1 (
> field1 bigint,
> field2 int,
> field3 timestamp,
> field4 map,
> field5 text static,
> field6 text static,
> field7 text static
> PRIMARY KEY (field1, field2, field3)
> ) WITH CLUSTERING ORDER BY (field2 ASC, field3 ASC)
> AND bloom_filter_fp_chance = 0.1
> AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'}
> AND compression = {'sstable_compression': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
>...
> {code}
> Following appears in debug log leading up to the error:
> {code}
> DEBUG [SharedPool-Worker-1] 2015-09-17 15:32:06,484  
> AbstractQueryPager.java:95 - Fetched 101 live rows
> DEBUG [SharedPool-W

[jira] [Comment Edited] (CASSANDRA-10381) NullPointerException in cqlsh paging through CF with static columns

2015-09-21 Thread Michael Keeney (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14901574#comment-14901574
 ] 

Michael Keeney edited comment on CASSANDRA-10381 at 9/21/15 10:52 PM:
--

The latter, when running 2.0.14 cqlsh against a 2.1.8 node.  Edit: just saw 
your comment. Makes sense thanks


was (Author: michael keeney):
The latter, when running 2.0.14 cqlsh against a 2.1.8 node.

> NullPointerException in cqlsh paging through CF with static columns
> ---
>
> Key: CASSANDRA-10381
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10381
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Michael Keeney
>Assignee: Benjamin Lerer
>  Labels: cqlsh, nullpointerexception, range
> Fix For: 2.1.x
>
>

[jira] [Updated] (CASSANDRA-10381) NullPointerException in cqlsh paging through CF with static columns

2015-09-21 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-10381:

Assignee: Benjamin Lerer

> NullPointerException in cqlsh paging through CF with static columns
> ---
>
> Key: CASSANDRA-10381
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10381
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Michael Keeney
>Assignee: Benjamin Lerer
>  Labels: cqlsh, nullpointerexception, range
> Fix For: 2.1.x
>
>

[jira] [Commented] (CASSANDRA-10381) NullPointerException in cqlsh paging through CF with static columns

2015-09-21 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14901573#comment-14901573
 ] 

Philip Thompson commented on CASSANDRA-10381:
-

Okay, I believe the 2.0.14 cqlsh works fine because it connects via Thrift, 
and this NPE is in the CQL3 server code. That would explain why the 2.1.8 
cqlsh sees the issue, as it uses the native Python driver. I suspect the query 
will fail if you connect via any native driver.

> NullPointerException in cqlsh paging through CF with static columns
> ---
>
> Key: CASSANDRA-10381
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10381
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Michael Keeney
>  Labels: cqlsh, nullpointerexception, range
> Fix For: 2.1.x
>
>
> When running select count( * ) from cqlsh with limit, the following NPE 
> occurs:
> select count( * ) from tbl1 limit 5 ; 
> {code}
> ERROR [SharedPool-Worker-4] 2015-09-16 14:49:43,480 QueryMessage.java:132 - 
> Unexpected error during query
> java.lang.NullPointerException: null
> at 
> org.apache.cassandra.service.pager.RangeSliceQueryPager.containsPreviousLast(RangeSliceQueryPager.java:99)
>  ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
> at 
> org.apache.cassandra.service.pager.AbstractQueryPager.fetchPage(AbstractQueryPager.java:119)
>  ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
> at 
> org.apache.cassandra.service.pager.RangeSliceQueryPager.fetchPage(RangeSliceQueryPager.java:37)
>  ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.pageCountQuery(SelectStatement.java:286)
>  ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:230)
>  ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:67)
>  ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
> at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:238)
>  ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
> at 
> com.datastax.bdp.cassandra.cql3.DseQueryHandler$StatementExecution.execute(DseQueryHandler.java:291)
>  ~[dse-4.7.2.jar:4.7.2]
> at 
> com.datastax.bdp.cassandra.cql3.DseQueryHandler$Operation.executeWithTiming(DseQueryHandler.java:223)
>  ~[dse-4.7.2.jar:4.7.2]
> at 
> com.datastax.bdp.cassandra.cql3.DseQueryHandler$Operation.executeWithAuditLogging(DseQueryHandler.java:259)
>  ~[dse-4.7.2.jar:4.7.2]
> at 
> com.datastax.bdp.cassandra.cql3.DseQueryHandler.process(DseQueryHandler.java:94)
>  ~[dse-4.7.2.jar:4.7.2]
> at 
> org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:119)
>  ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:439)
>  [cassandra-all-2.1.8.621.jar:2.1.8.621]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:335)
>  [cassandra-all-2.1.8.621.jar:2.1.8.621]
> at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> [na:1.7.0_75]
> at 
> org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
>  [cassandra-all-2.1.8.621.jar:2.1.8.621]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [cassandra-all-2.1.8.621.jar:2.1.8.621]
> at java.lang.Thread.run(Thread.java:745) [na:1.7.0_75]
> {code}
> Table definition looks something like:
> {code}
> CREATE TABLE tbl1 (
> field1 bigint,
> field2 int,
> field3 timestamp,
> field4 map,
> field5 text static,
> field6 text static,
> field7 text static
> PRIMARY KEY (field1, field2, field3)
> ) WITH CLUSTERING ORDER BY (field2 ASC, field3 ASC)
> AND bloom_filter_fp_chance = 0.1
> AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'}
> AND compression = {'sstable_compression': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
>...
> {code}

[jira] [Commented] (CASSANDRA-10380) SELECT count within a partition does not respect LIMIT

2015-09-21 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14901561#comment-14901561
 ] 

Philip Thompson commented on CASSANDRA-10380:
-

[~blerer] can confirm, but I think this is working as intended.

> SELECT count within a partition does not respect LIMIT
> --
>
> Key: CASSANDRA-10380
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10380
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Adam Holmberg
>Priority: Minor
>
> {code}
> cassandra@cqlsh> create KEYSPACE test WITH replication = {'class': 
> 'SimpleStrategy', 'replication_factor': '1'};
> cassandra@cqlsh> use test;
> cassandra@cqlsh:test> create table t (k int, c int, v int, primary key (k, 
> c));
> cassandra@cqlsh:test> INSERT INTO t (k, c, v) VALUES (0, 0, 0);
> cassandra@cqlsh:test> INSERT INTO t (k, c, v) VALUES (0, 1, 0);
> cassandra@cqlsh:test> INSERT INTO t (k, c, v) VALUES (0, 2, 0);
> cassandra@cqlsh:test> select * from t where k = 0;
>  k | c | v
> ---+---+---
>  0 | 0 | 0
>  0 | 1 | 0
>  0 | 2 | 0
> (3 rows)
> cassandra@cqlsh:test> select count(*) from t where k = 0 limit 2;
>  count
> ---
>  3
> (1 rows)
> {code}
> Expected: count should return 2, respecting the LIMIT.
> Actual: count of all rows in the partition.
> This manifests in 3.0; it does not appear in 2.2.
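The semantics the reporter expected (LIMIT applied first, then the count taken over the limited rows) can be modeled in plain Python. This is only an illustration of the two interpretations using the rows from the reproduction above; it is not Cassandra code:

```python
# Rows in partition k=0, as inserted in the reproduction above: (k, c, v).
rows = [(0, 0, 0), (0, 1, 0), (0, 2, 0)]

limit = 2

# Expected behavior: LIMIT restricts the rows, then count(*) counts them.
count_expected = len(rows[:limit])   # 2

# Reported 3.0 behavior: count(*) counts every row in the partition.
count_reported = len(rows)           # 3
```

Whether LIMIT should bound the rows fed into the aggregate or the number of result rows returned (always 1 for a plain count) is exactly the ambiguity under discussion.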



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10381) NullPointerException in cqlsh paging through CF with static columns

2015-09-21 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14901557#comment-14901557
 ] 

Philip Thompson commented on CASSANDRA-10381:
-

This seems like an NPE in CQL, not in cqlsh. I'm confused by your last 
sentence. Are you saying when you run the 2.0.14 cqlsh against a 2.0.14 node, 
the problem does not appear? Or are you running the 2.0.14 cqlsh against a 
2.1.8 node?

> NullPointerException in cqlsh paging through CF with static columns
> ---
>
> Key: CASSANDRA-10381
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10381
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Michael Keeney
>  Labels: cqlsh, nullpointerexception, range
> Fix For: 2.1.x
>
>
> When running select count( * ) from cqlsh with limit, the following NPE 
> occurs:
> select count( * ) from tbl1 limit 5 ; 
> {code}
> ERROR [SharedPool-Worker-4] 2015-09-16 14:49:43,480 QueryMessage.java:132 - 
> Unexpected error during query
> java.lang.NullPointerException: null
> at 
> org.apache.cassandra.service.pager.RangeSliceQueryPager.containsPreviousLast(RangeSliceQueryPager.java:99)
>  ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
> at 
> org.apache.cassandra.service.pager.AbstractQueryPager.fetchPage(AbstractQueryPager.java:119)
>  ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
> at 
> org.apache.cassandra.service.pager.RangeSliceQueryPager.fetchPage(RangeSliceQueryPager.java:37)
>  ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.pageCountQuery(SelectStatement.java:286)
>  ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:230)
>  ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:67)
>  ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
> at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:238)
>  ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
> at 
> com.datastax.bdp.cassandra.cql3.DseQueryHandler$StatementExecution.execute(DseQueryHandler.java:291)
>  ~[dse-4.7.2.jar:4.7.2]
> at 
> com.datastax.bdp.cassandra.cql3.DseQueryHandler$Operation.executeWithTiming(DseQueryHandler.java:223)
>  ~[dse-4.7.2.jar:4.7.2]
> at 
> com.datastax.bdp.cassandra.cql3.DseQueryHandler$Operation.executeWithAuditLogging(DseQueryHandler.java:259)
>  ~[dse-4.7.2.jar:4.7.2]
> at 
> com.datastax.bdp.cassandra.cql3.DseQueryHandler.process(DseQueryHandler.java:94)
>  ~[dse-4.7.2.jar:4.7.2]
> at 
> org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:119)
>  ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:439)
>  [cassandra-all-2.1.8.621.jar:2.1.8.621]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:335)
>  [cassandra-all-2.1.8.621.jar:2.1.8.621]
> at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> [na:1.7.0_75]
> at 
> org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
>  [cassandra-all-2.1.8.621.jar:2.1.8.621]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [cassandra-all-2.1.8.621.jar:2.1.8.621]
> at java.lang.Thread.run(Thread.java:745) [na:1.7.0_75]
> {code}
> Table definition looks something like:
> {code}
> CREATE TABLE tbl1 (
> field1 bigint,
> field2 int,
> field3 timestamp,
> field4 map,
> field5 text static,
> field6 text static,
> field7 text static
> PRIMARY KEY (field1, field2, field3)
> ) WITH CLUSTERING ORDER BY (field2 ASC, field3 ASC)
> AND bloom_filter_fp_chance = 0.1
> AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'}
> AND compression = {'sstable_compression': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
>...
> {code}
> Following appears in the debug log leading up to the error:

[jira] [Updated] (CASSANDRA-10381) NullPointerException in cqlsh paging through CF with static columns

2015-09-21 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-10381:

Description: 
When running select count( * ) from cqlsh with limit, the following NPE occurs:

select count( * ) from tbl1 limit 5 ; 
{code}
ERROR [SharedPool-Worker-4] 2015-09-16 14:49:43,480 QueryMessage.java:132 - 
Unexpected error during query
java.lang.NullPointerException: null
at 
org.apache.cassandra.service.pager.RangeSliceQueryPager.containsPreviousLast(RangeSliceQueryPager.java:99)
 ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
at 
org.apache.cassandra.service.pager.AbstractQueryPager.fetchPage(AbstractQueryPager.java:119)
 ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
at 
org.apache.cassandra.service.pager.RangeSliceQueryPager.fetchPage(RangeSliceQueryPager.java:37)
 ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
at 
org.apache.cassandra.cql3.statements.SelectStatement.pageCountQuery(SelectStatement.java:286)
 ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
at 
org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:230)
 ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
at 
org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:67)
 ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
at 
org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:238)
 ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
at 
com.datastax.bdp.cassandra.cql3.DseQueryHandler$StatementExecution.execute(DseQueryHandler.java:291)
 ~[dse-4.7.2.jar:4.7.2]
at 
com.datastax.bdp.cassandra.cql3.DseQueryHandler$Operation.executeWithTiming(DseQueryHandler.java:223)
 ~[dse-4.7.2.jar:4.7.2]
at 
com.datastax.bdp.cassandra.cql3.DseQueryHandler$Operation.executeWithAuditLogging(DseQueryHandler.java:259)
 ~[dse-4.7.2.jar:4.7.2]
at 
com.datastax.bdp.cassandra.cql3.DseQueryHandler.process(DseQueryHandler.java:94)
 ~[dse-4.7.2.jar:4.7.2]
at 
org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:119)
 ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
at 
org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:439)
 [cassandra-all-2.1.8.621.jar:2.1.8.621]
at 
org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:335)
 [cassandra-all-2.1.8.621.jar:2.1.8.621]
at 
io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
[na:1.7.0_75]
at 
org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
 [cassandra-all-2.1.8.621.jar:2.1.8.621]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
[cassandra-all-2.1.8.621.jar:2.1.8.621]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_75]
{code}

Table definition looks something like:
{code}
CREATE TABLE tbl1 (
field1 bigint,
field2 int,
field3 timestamp,
field4 map,
field5 text static,
field6 text static,
field7 text static
PRIMARY KEY (field1, field2, field3)
) WITH CLUSTERING ORDER BY (field2 ASC, field3 ASC)
AND bloom_filter_fp_chance = 0.1
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = ''
AND compaction = {'class': 
'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'}
AND compression = {'sstable_compression': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
   ...
{code}
Following appears in debug log leading up to the error:
{code}
DEBUG [SharedPool-Worker-1] 2015-09-17 15:32:06,484  AbstractQueryPager.java:95 
- Fetched 101 live rows
DEBUG [SharedPool-Worker-1] 2015-09-17 15:32:06,484  
AbstractQueryPager.java:133 - Remaining rows to page: 1
DEBUG [SharedPool-Worker-1] 2015-09-17 15:32:06,485  SelectStatement.java:285 - 
New maxLimit for paged count query is 1
DEBUG [SharedPool-Worker-1] 2015-09-17 15:32:06,486  StorageProxy.java:1646 - 
Estimated result rows per range: 2586.375; requested rows: 2, ranges.size(): 
762; concurrent range requests: 1
DEBUG [SharedPool-Worker-1] 2015-09-17 15:32:06,487  AbstractQueryPager.java:95 
- Fetched 2 live rows
ERROR [SharedPool-Worker-1] 2015-09-17 15:32:06,487  QueryMessage.java:132 - 
Unexpected error during query
java.lang.NullPointerException: null
{code}

I'm working on recreating this with a workable dataset. When running cqlsh from 
a remote node on version 2.0.14, the query returns successfully.

  was:
W

[jira] [Updated] (CASSANDRA-10381) NullPointerException in cqlsh paging through CF with static columns

2015-09-21 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-10381:

Fix Version/s: 2.1.x

> NullPointerException in cqlsh paging through CF with static columns
> ---
>
> Key: CASSANDRA-10381
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10381
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Michael Keeney
>  Labels: cqlsh, nullpointerexception, range
> Fix For: 2.1.x
>
>
> When running select count( * ) from cqlsh with limit, the following NPE 
> occurs:
> select count( * ) from tbl1 limit 5 ; 
> {code}
> ERROR [SharedPool-Worker-4] 2015-09-16 14:49:43,480 QueryMessage.java:132 - 
> Unexpected error during query
> java.lang.NullPointerException: null
> at 
> org.apache.cassandra.service.pager.RangeSliceQueryPager.containsPreviousLast(RangeSliceQueryPager.java:99)
>  ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
> at 
> org.apache.cassandra.service.pager.AbstractQueryPager.fetchPage(AbstractQueryPager.java:119)
>  ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
> at 
> org.apache.cassandra.service.pager.RangeSliceQueryPager.fetchPage(RangeSliceQueryPager.java:37)
>  ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.pageCountQuery(SelectStatement.java:286)
>  ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:230)
>  ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:67)
>  ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
> at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:238)
>  ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
> at 
> com.datastax.bdp.cassandra.cql3.DseQueryHandler$StatementExecution.execute(DseQueryHandler.java:291)
>  ~[dse-4.7.2.jar:4.7.2]
> at 
> com.datastax.bdp.cassandra.cql3.DseQueryHandler$Operation.executeWithTiming(DseQueryHandler.java:223)
>  ~[dse-4.7.2.jar:4.7.2]
> at 
> com.datastax.bdp.cassandra.cql3.DseQueryHandler$Operation.executeWithAuditLogging(DseQueryHandler.java:259)
>  ~[dse-4.7.2.jar:4.7.2]
> at 
> com.datastax.bdp.cassandra.cql3.DseQueryHandler.process(DseQueryHandler.java:94)
>  ~[dse-4.7.2.jar:4.7.2]
> at 
> org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:119)
>  ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:439)
>  [cassandra-all-2.1.8.621.jar:2.1.8.621]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:335)
>  [cassandra-all-2.1.8.621.jar:2.1.8.621]
> at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> [na:1.7.0_75]
> at 
> org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
>  [cassandra-all-2.1.8.621.jar:2.1.8.621]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [cassandra-all-2.1.8.621.jar:2.1.8.621]
> at java.lang.Thread.run(Thread.java:745) [na:1.7.0_75]
> {code}
> Table definition looks something like:
> {code}
> CREATE TABLE tbl1 (
> field1 bigint,
> field2 int,
> field3 timestamp,
> field4 map,
> field5 text static,
> field6 text static,
> field7 text static
> PRIMARY KEY (field1, field2, field3)
> ) WITH CLUSTERING ORDER BY (field2 ASC, field3 ASC)
> AND bloom_filter_fp_chance = 0.1
> AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'}
> AND compression = {'sstable_compression': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
>...
> {code}
> Following appears in debug log leading up to the error:
> {code}
> DEBUG [SharedPool-Worker-1] 2015-09-17 15:32:06,484  
> AbstractQueryPager.java:95 - Fetched 101 live rows
> DEBUG [SharedPool-Worker-1] 2015-09-17 15:32:06,484  
> AbstractQueryPager.java:133 - Remaining rows to page: 1
> DEBUG [SharedPool-Worker-1] 2015-0

[jira] [Commented] (CASSANDRA-10246) Named values don't work with batches

2015-09-21 Thread Michael Penick (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14901477#comment-14901477
 ] 

Michael Penick commented on CASSANDRA-10246:


[~mambocab] It affects protocol version 3 and higher. Yes, it also affects C* 
3.0+.

> Named values don't work with batches
> 
>
> Key: CASSANDRA-10246
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10246
> Project: Cassandra
>  Issue Type: Bug
>  Components: API
>Reporter: Michael Penick
>  Labels: client-impacting
>
> This is broken at the protocol-level and in the implementation.
> At the protocol-level the {{}} component of the batch comes after the 
> queries. That means the protocol parser would need to read ahead (and 
> backtrack) to determine the values encoding and correctly read the values from 
> the query entries. Also, a batch-level setting for named values forces all 
> queries to use the same encoding. Should batches force a single, homogeneous 
> query value encoding? (This is confusing.)
> In the implementation, values are indiscriminately read using 
> {{CBUtil.readValueList()}}, and the batch flags are never checked (for 
> {{(Flag.NAMES_FOR_VALUES}}) to see if {{CBUtil.readNameAndValueList()}} 
> should be called instead: 
> https://github.com/apache/cassandra/blob/cassandra-2.1/src/java/org/apache/cassandra/transport/messages/BatchMessage.java#L64
> Proposed solution: CASSANDRA-10247
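The implementation-side fix described above (dispatch on the batch flags instead of unconditionally reading positional values) can be sketched in Python. The flag bit and reader callbacks here are hypothetical stand-ins for illustration, not the actual protocol constants or Cassandra code:

```python
NAMES_FOR_VALUES = 0x40  # hypothetical flag bit, for illustration only


def read_value_list(flags, read_named, read_positional):
    """Choose the decoder based on the batch flags, rather than always
    reading a positional value list -- the omission described above."""
    if flags & NAMES_FOR_VALUES:
        return read_named()       # analogue of CBUtil.readNameAndValueList()
    return read_positional()      # analogue of CBUtil.readValueList()


# Usage sketch with stub decoders:
named = read_value_list(NAMES_FOR_VALUES,
                        read_named=lambda: {"k": 0},
                        read_positional=lambda: [0])
```

The protocol-level ordering problem (the flags arriving after the query entries) is not solved by this dispatch alone, which is why a protocol change is proposed in CASSANDRA-10247.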



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10381) NullPointerException in cqlsh paging through CF with static columns

2015-09-21 Thread Michael Keeney (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Keeney updated CASSANDRA-10381:
---
Description: 
When running select count(*) from cqlsh with limit, the following NPE occurs:

select count(*) from tbl1 limit 5 ; 

ERROR [SharedPool-Worker-4] 2015-09-16 14:49:43,480 QueryMessage.java:132 - 
Unexpected error during query
java.lang.NullPointerException: null
at 
org.apache.cassandra.service.pager.RangeSliceQueryPager.containsPreviousLast(RangeSliceQueryPager.java:99)
 ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
at 
org.apache.cassandra.service.pager.AbstractQueryPager.fetchPage(AbstractQueryPager.java:119)
 ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
at 
org.apache.cassandra.service.pager.RangeSliceQueryPager.fetchPage(RangeSliceQueryPager.java:37)
 ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
at 
org.apache.cassandra.cql3.statements.SelectStatement.pageCountQuery(SelectStatement.java:286)
 ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
at 
org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:230)
 ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
at 
org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:67)
 ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
at 
org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:238)
 ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
at 
com.datastax.bdp.cassandra.cql3.DseQueryHandler$StatementExecution.execute(DseQueryHandler.java:291)
 ~[dse-4.7.2.jar:4.7.2]
at 
com.datastax.bdp.cassandra.cql3.DseQueryHandler$Operation.executeWithTiming(DseQueryHandler.java:223)
 ~[dse-4.7.2.jar:4.7.2]
at 
com.datastax.bdp.cassandra.cql3.DseQueryHandler$Operation.executeWithAuditLogging(DseQueryHandler.java:259)
 ~[dse-4.7.2.jar:4.7.2]
at 
com.datastax.bdp.cassandra.cql3.DseQueryHandler.process(DseQueryHandler.java:94)
 ~[dse-4.7.2.jar:4.7.2]
at 
org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:119)
 ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
at 
org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:439)
 [cassandra-all-2.1.8.621.jar:2.1.8.621]
at 
org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:335)
 [cassandra-all-2.1.8.621.jar:2.1.8.621]
at 
io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
[na:1.7.0_75]
at 
org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
 [cassandra-all-2.1.8.621.jar:2.1.8.621]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
[cassandra-all-2.1.8.621.jar:2.1.8.621]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_75]


Table definition looks something like:

CREATE TABLE tbl1 (
field1 bigint,
field2 int,
field3 timestamp,
field4 map,
field5 text static,
field6 text static,
field7 text static
PRIMARY KEY (field1, field2, field3)
) WITH CLUSTERING ORDER BY (field2 ASC, field3 ASC)
AND bloom_filter_fp_chance = 0.1
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = ''
AND compaction = {'class': 
'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'}
AND compression = {'sstable_compression': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
   ...

Following appears in debug log leading up to the error:

DEBUG [SharedPool-Worker-1] 2015-09-17 15:32:06,484  AbstractQueryPager.java:95 
- Fetched 101 live rows
DEBUG [SharedPool-Worker-1] 2015-09-17 15:32:06,484  
AbstractQueryPager.java:133 - Remaining rows to page: 1
DEBUG [SharedPool-Worker-1] 2015-09-17 15:32:06,485  SelectStatement.java:285 - 
New maxLimit for paged count query is 1
DEBUG [SharedPool-Worker-1] 2015-09-17 15:32:06,486  StorageProxy.java:1646 - 
Estimated result rows per range: 2586.375; requested rows: 2, ranges.size(): 
762; concurrent range requests: 1
DEBUG [SharedPool-Worker-1] 2015-09-17 15:32:06,487  AbstractQueryPager.java:95 
- Fetched 2 live rows
ERROR [SharedPool-Worker-1] 2015-09-17 15:32:06,487  QueryMessage.java:132 - 
Unexpected error during query
java.lang.NullPointerException: null


I'm working on recreating this with a workable dataset. When running cqlsh from 
a remote node on version 2.0.14, the query returns successfully.

  was:
When running select * from cqlsh with limit

[jira] [Updated] (CASSANDRA-10381) NullPointerException in cqlsh paging through CF with static columns

2015-09-21 Thread Michael Keeney (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Keeney updated CASSANDRA-10381:
---
Description: 
When running select count( * ) from cqlsh with limit, the following NPE occurs:

select count( * ) from tbl1 limit 5 ; 

ERROR [SharedPool-Worker-4] 2015-09-16 14:49:43,480 QueryMessage.java:132 - 
Unexpected error during query
java.lang.NullPointerException: null
at 
org.apache.cassandra.service.pager.RangeSliceQueryPager.containsPreviousLast(RangeSliceQueryPager.java:99)
 ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
at 
org.apache.cassandra.service.pager.AbstractQueryPager.fetchPage(AbstractQueryPager.java:119)
 ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
at 
org.apache.cassandra.service.pager.RangeSliceQueryPager.fetchPage(RangeSliceQueryPager.java:37)
 ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
at 
org.apache.cassandra.cql3.statements.SelectStatement.pageCountQuery(SelectStatement.java:286)
 ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
at 
org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:230)
 ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
at 
org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:67)
 ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
at 
org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:238)
 ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
at 
com.datastax.bdp.cassandra.cql3.DseQueryHandler$StatementExecution.execute(DseQueryHandler.java:291)
 ~[dse-4.7.2.jar:4.7.2]
at 
com.datastax.bdp.cassandra.cql3.DseQueryHandler$Operation.executeWithTiming(DseQueryHandler.java:223)
 ~[dse-4.7.2.jar:4.7.2]
at 
com.datastax.bdp.cassandra.cql3.DseQueryHandler$Operation.executeWithAuditLogging(DseQueryHandler.java:259)
 ~[dse-4.7.2.jar:4.7.2]
at 
com.datastax.bdp.cassandra.cql3.DseQueryHandler.process(DseQueryHandler.java:94)
 ~[dse-4.7.2.jar:4.7.2]
at 
org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:119)
 ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
at 
org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:439)
 [cassandra-all-2.1.8.621.jar:2.1.8.621]
at 
org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:335)
 [cassandra-all-2.1.8.621.jar:2.1.8.621]
at 
io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
[na:1.7.0_75]
at 
org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
 [cassandra-all-2.1.8.621.jar:2.1.8.621]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
[cassandra-all-2.1.8.621.jar:2.1.8.621]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_75]


Table definition looks something like:

CREATE TABLE tbl1 (
field1 bigint,
field2 int,
field3 timestamp,
field4 map,
field5 text static,
field6 text static,
field7 text static
PRIMARY KEY (field1, field2, field3)
) WITH CLUSTERING ORDER BY (field2 ASC, field3 ASC)
AND bloom_filter_fp_chance = 0.1
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = ''
AND compaction = {'class': 
'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'}
AND compression = {'sstable_compression': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
   ...

Following appears in debug log leading up to the error:

DEBUG [SharedPool-Worker-1] 2015-09-17 15:32:06,484  AbstractQueryPager.java:95 
- Fetched 101 live rows
DEBUG [SharedPool-Worker-1] 2015-09-17 15:32:06,484  
AbstractQueryPager.java:133 - Remaining rows to page: 1
DEBUG [SharedPool-Worker-1] 2015-09-17 15:32:06,485  SelectStatement.java:285 - 
New maxLimit for paged count query is 1
DEBUG [SharedPool-Worker-1] 2015-09-17 15:32:06,486  StorageProxy.java:1646 - 
Estimated result rows per range: 2586.375; requested rows: 2, ranges.size(): 
762; concurrent range requests: 1
DEBUG [SharedPool-Worker-1] 2015-09-17 15:32:06,487  AbstractQueryPager.java:95 
- Fetched 2 live rows
ERROR [SharedPool-Worker-1] 2015-09-17 15:32:06,487  QueryMessage.java:132 - 
Unexpected error during query
java.lang.NullPointerException: null


I'm working on recreating this with a workable dataset. When running cqlsh from 
a remote node on version 2.0.14, the query returns successfully.

  was:
When running select count(*) from cqlsh

[jira] [Created] (CASSANDRA-10381) NullPointerException in cqlsh paging through CF with static columns

2015-09-21 Thread Michael Keeney (JIRA)
Michael Keeney created CASSANDRA-10381:
--

 Summary: NullPointerException in cqlsh paging through CF with 
static columns
 Key: CASSANDRA-10381
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10381
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Michael Keeney


When running select * from cqlsh with limit, the following NPE occurs:

select count(*) from tbl1 limit 5 ; 

ERROR [SharedPool-Worker-4] 2015-09-16 14:49:43,480 QueryMessage.java:132 - 
Unexpected error during query
java.lang.NullPointerException: null
at 
org.apache.cassandra.service.pager.RangeSliceQueryPager.containsPreviousLast(RangeSliceQueryPager.java:99)
 ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
at 
org.apache.cassandra.service.pager.AbstractQueryPager.fetchPage(AbstractQueryPager.java:119)
 ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
at 
org.apache.cassandra.service.pager.RangeSliceQueryPager.fetchPage(RangeSliceQueryPager.java:37)
 ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
at 
org.apache.cassandra.cql3.statements.SelectStatement.pageCountQuery(SelectStatement.java:286)
 ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
at 
org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:230)
 ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
at 
org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:67)
 ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
at 
org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:238)
 ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
at 
com.datastax.bdp.cassandra.cql3.DseQueryHandler$StatementExecution.execute(DseQueryHandler.java:291)
 ~[dse-4.7.2.jar:4.7.2]
at 
com.datastax.bdp.cassandra.cql3.DseQueryHandler$Operation.executeWithTiming(DseQueryHandler.java:223)
 ~[dse-4.7.2.jar:4.7.2]
at 
com.datastax.bdp.cassandra.cql3.DseQueryHandler$Operation.executeWithAuditLogging(DseQueryHandler.java:259)
 ~[dse-4.7.2.jar:4.7.2]
at 
com.datastax.bdp.cassandra.cql3.DseQueryHandler.process(DseQueryHandler.java:94)
 ~[dse-4.7.2.jar:4.7.2]
at 
org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:119)
 ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
at 
org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:439)
 [cassandra-all-2.1.8.621.jar:2.1.8.621]
at 
org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:335)
 [cassandra-all-2.1.8.621.jar:2.1.8.621]
at 
io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
at 
io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324)
 [netty-all-4.0.23.Final.jar:4.0.23.Final]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
[na:1.7.0_75]
at 
org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
 [cassandra-all-2.1.8.621.jar:2.1.8.621]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
[cassandra-all-2.1.8.621.jar:2.1.8.621]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_75]


Table definition looks something like:

CREATE TABLE tbl1 (
field1 bigint,
field2 int,
field3 timestamp,
field4 map,
field5 text static,
field6 text static,
field7 text static,
PRIMARY KEY (field1, field2, field3)
) WITH CLUSTERING ORDER BY (field2 ASC, field3 ASC)
AND bloom_filter_fp_chance = 0.1
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = ''
AND compaction = {'class': 
'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'}
AND compression = {'sstable_compression': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
   ...

The following appears in the debug log leading up to the error:

DEBUG [SharedPool-Worker-1] 2015-09-17 15:32:06,484  AbstractQueryPager.java:95 
- Fetched 101 live rows
DEBUG [SharedPool-Worker-1] 2015-09-17 15:32:06,484  
AbstractQueryPager.java:133 - Remaining rows to page: 1
DEBUG [SharedPool-Worker-1] 2015-09-17 15:32:06,485  SelectStatement.java:285 - 
New maxLimit for paged count query is 1
DEBUG [SharedPool-Worker-1] 2015-09-17 15:32:06,486  StorageProxy.java:1646 - 
Estimated result rows per range: 2586.375; requested rows: 2, ranges.size(): 
762; concurrent range requests: 1
DEBUG [SharedPool-Worker-1] 2015-09-17 15:32:06,487  AbstractQueryPager.java:95 
- Fetched 2 live rows
ERROR [SharedPool-Worker-1] 2015-09-17 15:32:06,487  QueryMessage.java:132 - 
Unexpected error during query
java.lang.NullPointerException: null
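For reference, the "concurrent range requests: 1" figure in the log above can be reproduced with the range-scan concurrency heuristic the log suggests (an illustrative Python sketch; the exact Java code may differ):

```python
def concurrency_factor(estimated_rows_per_range, requested_rows, total_ranges):
    """Sketch of the range-scan concurrency heuristic implied by the log
    line above: query just enough ranges to satisfy the requested rows,
    clamped to the interval [1, total_ranges]."""
    if estimated_rows_per_range <= 0:
        return total_ranges  # no estimate: query all ranges concurrently
    needed = int(requested_rows / estimated_rows_per_range)
    return max(1, min(total_ranges, needed))

# Values from the debug log: 2586.375 rows/range, 2 requested rows, 762 ranges.
print(concurrency_factor(2586.375, 2, 762))  # 1
```

With an estimate of ~2586 rows per range and only 2 rows requested, a single concurrent range request is enough, matching the logged value.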


I'm working

[jira] [Commented] (CASSANDRA-7486) Migrate to G1GC by default

2015-09-21 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14901127#comment-14901127
 ] 

Benedict commented on CASSANDRA-7486:
-

[~enigmacurry]: could you upgrade stress to run from [this 
branch|https://github.com/belliottsmith/cassandra/tree/stress-report-interval]?

Could you also ensure it's running with a largish heap (at least a couple of 
GB)? I'll file tickets to update the mainline source tree on both counts. 
We should start enabling stress gc logging in cstar at some point as well, so 
we can diagnose issues in the run, and see if they're attributable to stress 
itself.

> Migrate to G1GC by default
> --
>
> Key: CASSANDRA-7486
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7486
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Config
>Reporter: Jonathan Ellis
> Fix For: 3.0 alpha 1
>
>
> See 
> http://www.slideshare.net/MonicaBeckwith/garbage-first-garbage-collector-g1-7486gc-migration-to-expectations-and-advanced-tuning
>  and https://twitter.com/rbranson/status/482113561431265281
> May want to default 2.1 to G1.
> 2.1 is a different animal from 2.0 after moving most of memtables off heap.  
> Suspect this will help G1 even more than CMS.  (NB this is off by default but 
> needs to be part of the test.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-10380) SELECT count within a partition does not respect LIMIT

2015-09-21 Thread Adam Holmberg (JIRA)
Adam Holmberg created CASSANDRA-10380:
-

 Summary: SELECT count within a partition does not respect LIMIT
 Key: CASSANDRA-10380
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10380
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Adam Holmberg
Priority: Minor


{code}
cassandra@cqlsh> create KEYSPACE test WITH replication = {'class': 
'SimpleStrategy', 'replication_factor': '1'};
cassandra@cqlsh> use test;
cassandra@cqlsh:test> create table t (k int, c int, v int, primary key (k, c));
cassandra@cqlsh:test> INSERT INTO t (k, c, v) VALUES (0, 0, 0);
cassandra@cqlsh:test> INSERT INTO t (k, c, v) VALUES (0, 1, 0);
cassandra@cqlsh:test> INSERT INTO t (k, c, v) VALUES (0, 2, 0);
cassandra@cqlsh:test> select * from t where k = 0;

 k | c | v
---+---+---
 0 | 0 | 0
 0 | 1 | 0
 0 | 2 | 0

(3 rows)
cassandra@cqlsh:test> select count(*) from t where k = 0 limit 2;

 count
---
 3

(1 rows)
{code}

Expected: count should return 2, according to limit.
Actual: count of all rows in partition

This manifests in 3.0; it does not appear in 2.2.
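The expected semantics can be sketched in a few lines of plain Python (illustrative only, nothing from the Cassandra codebase): the LIMIT caps how many rows the count may consume.

```python
def count_with_limit(rows, limit=None):
    """Count rows the way the reporter expects: stop counting once the
    LIMIT is reached, so the result never exceeds it."""
    n = 0
    for _ in rows:
        n += 1
        if limit is not None and n >= limit:
            break
    return n

# The partition above holds three rows; LIMIT 2 should cap the count.
rows = [(0, 0, 0), (0, 1, 0), (0, 2, 0)]
print(count_with_limit(rows, limit=2))  # 2, not 3
print(count_with_limit(rows))           # 3
```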





[jira] [Commented] (CASSANDRA-7715) Add a credentials cache to the PasswordAuthenticator

2015-09-21 Thread sankalp kohli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14901071#comment-14901071
 ] 

sankalp kohli commented on CASSANDRA-7715:
--

cc [~iamaleksey] Can you please take this?

> Add a credentials cache to the PasswordAuthenticator
> 
>
> Key: CASSANDRA-7715
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7715
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Core
>Reporter: Mike Adamson
>Priority: Minor
> Fix For: 3.x
>
>
> If the PasswordAuthenticator cached credentials for a short time it would 
> reduce the overhead of user journeys when they need to do multiple 
> authentications in quick succession.
> This cache should work in the same way as the cache in CassandraAuthorizer, in 
> that if its TTL is set to 0 the cache will be disabled.
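A rough sketch of the proposed behavior (hypothetical Python, not an actual Cassandra API; the class and method names are invented for illustration): entries expire after a TTL, and a TTL of 0 disables the cache entirely, mirroring the CassandraAuthorizer cache described above.

```python
import time

class CredentialsCache:
    """Hypothetical sketch of the proposed cache: entries expire after
    ttl_seconds, and a TTL of 0 disables caching entirely."""
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._entries = {}  # username -> (hashed_password, expiry time)

    def put(self, username, hashed_password, now=None):
        if self.ttl == 0:
            return  # cache disabled
        now = time.time() if now is None else now
        self._entries[username] = (hashed_password, now + self.ttl)

    def get(self, username, now=None):
        if self.ttl == 0:
            return None  # cache disabled
        now = time.time() if now is None else now
        entry = self._entries.get(username)
        if entry is None or entry[1] <= now:
            return None  # missing or expired: fall back to the slow path
        return entry[0]
```

A miss (or TTL 0) would fall through to the normal credential lookup and password check.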





[jira] [Updated] (CASSANDRA-7715) Add a credentials cache to the PasswordAuthenticator

2015-09-21 Thread sankalp kohli (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sankalp kohli updated CASSANDRA-7715:
-
Assignee: (was: sankalp kohli)

> Add a credentials cache to the PasswordAuthenticator
> 
>
> Key: CASSANDRA-7715
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7715
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Core
>Reporter: Mike Adamson
>Priority: Minor
> Fix For: 3.x
>
>
> If the PasswordAuthenticator cached credentials for a short time it would 
> reduce the overhead of user journeys when they need to do multiple 
> authentications in quick succession.
> This cache should work in the same way as the cache in CassandraAuthorizer, in 
> that if its TTL is set to 0 the cache will be disabled.





[jira] [Commented] (CASSANDRA-10357) mmap file boundary selection is broken for some large files

2015-09-21 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14901030#comment-14901030
 ] 

Benedict commented on CASSANDRA-10357:
--

Sorry, I renamed the branch now that we have a ticket number. It's 
[here|https://github.com/belliottsmith/cassandra/tree/10357].

It just failed a bunch of dtests; however, everything is fine when I run them 
locally, so I'm going to kick them off again.

> mmap file boundary selection is broken for some large files 
> 
>
> Key: CASSANDRA-10357
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10357
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Benedict
>Assignee: Benedict
> Fix For: 2.1.10
>
>
> If an early open interval occurs too close to an mmap boundary, the boundary 
> can be lost. Patch available 
> [here|https://github.com/belliottsmith/cassandra/tree/mmap-boundaries].
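The invariant behind this bug can be illustrated with a toy boundary chooser (hypothetical Python, not Cassandra's actual code; the greedy rule and position values are assumptions): each mmap segment is capped at Integer.MAX_VALUE bytes, and boundaries may only fall on "safe" positions, so losing one candidate boundary can leave a stretch of file larger than the cap.

```python
# A Java int caps an mmap segment at ~2GB.
MAX_SEGMENT = 2**31 - 1

def pick_boundaries(safe_positions, file_length, max_segment=MAX_SEGMENT):
    """Greedy boundary selection: walk the safe positions in order and emit
    the furthest one reachable within max_segment of the previous boundary."""
    boundaries = []
    start = 0
    last_ok = None
    for pos in sorted(set(safe_positions)) + [file_length]:
        if pos - start <= max_segment:
            last_ok = pos
            continue
        if last_ok is None:
            raise ValueError("no safe boundary within the segment limit")
        boundaries.append(last_ok)
        start = last_ok
        last_ok = pos if pos - start <= max_segment else None
    if last_ok is None:
        raise ValueError("no safe boundary within the segment limit")
    boundaries.append(last_ok)
    return boundaries

# Toy file: 10 bytes, safe positions at 4, 7 and 9, 5-byte segment cap.
print(pick_boundaries([4, 7, 9], 10, max_segment=5))  # [4, 9, 10]
```

If the chooser drops a boundary candidate (the failure mode this ticket describes), the next segment can exceed the cap and the file is no longer fully mappable.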





[jira] [Commented] (CASSANDRA-10280) Make DTCS work well with old data

2015-09-21 Thread Jonathan Shook (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14901028#comment-14901028
 ] 

Jonathan Shook commented on CASSANDRA-10280:


I've read the patch and the comments. Deprecating max_sstable_age_days in favor 
of the max window size is a good simplification. It also does what I originally 
had hoped max_sstable_age_days would do. So +1 on all of that.
Just to make sure: can we identify whether or not this might affect tombstone 
compaction scheduling? As in, could it prevent tombstone compactions that would 
otherwise happen?

> Make DTCS work well with old data
> -
>
> Key: CASSANDRA-10280
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10280
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Core
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
> Fix For: 3.x, 2.1.x, 2.2.x
>
>
> Operational tasks become incredibly expensive if you keep around a long 
> timespan of data with DTCS - with default settings and 1 year of data, the 
> oldest window covers about 180 days. Bootstrapping a node with vnodes with 
> this data layout will force cassandra to compact very many sstables in this 
> window.
> We should probably put a cap on how big the biggest windows can get. We could 
> probably default this to something sane based on max_sstable_age (ie, say we 
> can reasonably handle 1000 sstables per node, then we can calculate how big 
> the windows should be to allow that)





[jira] [Comment Edited] (CASSANDRA-10280) Make DTCS work well with old data

2015-09-21 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14900985#comment-14900985
 ] 

Björn Hegerfors edited comment on CASSANDRA-10280 at 9/21/15 5:06 PM:
--

Yes, I'm absolutely in favor of expressing this in terms of max window size 
instead of max SSTable age. And it's also become more and more clear to me that 
rather than never compacting SSTables that are too old, we should just keep 
fixed size windows around, so that if SSTables come in there (bootstrap, 
repairs), compaction will happen.

I haven't looked at the patch, but is there a clear way to express maximum 
window size? If base_time_seconds=1 do you then say something like 
max_window_seconds=10? And in that case, will the largest windows be 4 or 16? I 
guess only 4 would make sense with that name...

I've suggested before declaring how many times a window will be coalesced. But 
that might sound really complicated to users. What I mean is a setting like 
"window_coalitions" or "write_amplification" which you can set to 5 in order to 
get a maximum window size of 4^5=1024 times the base window. But let's go with 
whatever is easiest to understand.

EDIT: never mind, I looked at the patch, and it's done exactly how I would have 
done it. So +1.


was (Author: bj0rn):
Yes, I'm absolutely in favor of expressing this in terms of max window size 
instead of max SSTable age. And it's also become more and more clear to me that 
rather than never compacting SSTables that are too old, we should just keep 
fixed size windows around, so that if SSTables come in there (bootstrap, 
repairs), compaction will happen.

I haven't looked at the patch, but is there a clear way to express maximum 
window size? If base_time_seconds=1 do you then say something like 
max_window_seconds=10? And in that case, will the largest windows be 4 or 16? I 
guess only 4 would make sense with that name...

I've suggested before declaring how many times a window will be coalesced. But 
that might sound really complicated to users. What I mean is a setting like 
"window_coalitions" or "write_amplification" which you can set to 5 in order to 
get a maximum window size of 4^5=1024 times the base window. But let's go with 
whatever is easiest to understand.
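The window arithmetic in the comment above (growth by a factor of 4, capped after five coalescings, i.e. 4^5 = 1024 times the base window) can be sketched as a hypothetical helper (the parameter names are illustrative, not actual DTCS options):

```python
def dtcs_window_sizes(base_seconds, growth_factor=4, max_coalescings=5):
    """Sketch of capped DTCS-style windows: each tier is growth_factor
    times the previous, and growth stops after max_coalescings tiers,
    as in the 'write_amplification' idea above."""
    sizes = [base_seconds]
    for _ in range(max_coalescings):
        sizes.append(sizes[-1] * growth_factor)
    return sizes

print(dtcs_window_sizes(1))  # [1, 4, 16, 64, 256, 1024]
```

With a setting of 5 the largest window is 4^5 = 1024 times the base, matching the figure in the comment.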

> Make DTCS work well with old data
> -
>
> Key: CASSANDRA-10280
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10280
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Core
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
> Fix For: 3.x, 2.1.x, 2.2.x
>
>
> Operational tasks become incredibly expensive if you keep around a long 
> timespan of data with DTCS - with default settings and 1 year of data, the 
> oldest window covers about 180 days. Bootstrapping a node with vnodes with 
> this data layout will force cassandra to compact very many sstables in this 
> window.
> We should probably put a cap on how big the biggest windows can get. We could 
> probably default this to something sane based on max_sstable_age (ie, say we 
> can reasonably handle 1000 sstables per node, then we can calculate how big 
> the windows should be to allow that)





[jira] [Comment Edited] (CASSANDRA-10342) Read defragmentation can cause unnecessary repairs

2015-09-21 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14900984#comment-14900984
 ] 

Aleksey Yeschenko edited comment on CASSANDRA-10342 at 9/21/15 5:05 PM:


See [this comment from Sylvain|https://issues.apache.org/jira/browse/CASSANDRA-7085?focusedCommentId=14593427&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14593427].


was (Author: iamaleksey):
See [this comment from Sylvain|https://issues.apache.org/jira/browse/CASSANDRA-7085?focusedCommentId=14594456&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14594456].

> Read defragmentation can cause unnecessary repairs
> --
>
> Key: CASSANDRA-10342
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10342
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Marcus Olsson
>Assignee: Marcus Eriksson
>Priority: Minor
>
> After applying the fix from CASSANDRA-10299 to the cluster we started having 
> a problem of ~20k small sstables appearing for the table with static data 
> when running incremental repair.
> In the logs there were several messages about flushes for that table, one for 
> each repaired range. The flushed sstables were 0.000kb in size with < 100 ops 
> in each. When checking cfstats there were several writes to that table, even 
> though we were only reading from it and read repair did not repair anything.
> After digging around in the codebase I noticed that defragmentation of data 
> can occur while reading, depending on the query and some other conditions. 
> This causes the read data to be inserted again to have it in a more recent 
> sstable, which can be a problem if that data was repaired using incremental 
> repair. The defragmentation is done in 
> [CollationController.java|https://github.com/apache/cassandra/blob/cassandra-2.1/src/java/org/apache/cassandra/db/CollationController.java#L151].
> I guess this wasn't a problem with full repairs since I assume that the 
> digest should be the same even if you have two copies of the same data. But 
> with incremental repair this will most probably cause a mismatch between 
> nodes if that data already was repaired, since the other nodes probably won't 
> have that data in their unrepaired set.
> --
> I can add that the problems on our cluster was probably due to the fact that 
> CASSANDRA-10299 caused the same data to be streamed multiple times and ending 
> up in several sstables. One of the conditions for the defragmentation is that 
> the number of sstables read during a read request has to be more than the 
> minimum number of sstables needed for a compaction (> 4 in our case). So 
> normally I don't think this would cause ~20k sstables to appear; we probably 
> hit an extreme.
> One workaround for this is to use a compaction strategy other than STCS (it 
> seems to be the only affected strategy, at least in 2.1), but the solution 
> might be to either make defragmentation configurable per table or avoid 
> reinserting the data if any of the sstables involved in the read are repaired.
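The trigger described above, plus the proposed guard, can be sketched as follows (hypothetical Python; the function name and dict shape are illustrative, not Cassandra's API):

```python
def should_defragment(sstables_read, min_compaction_threshold=4,
                      skip_if_any_repaired=True):
    """Sketch of the read-defragmentation decision described above:
    reinsert hot data only when the read touched more sstables than the
    minimum needed for a compaction, and (per the proposed fix) never
    when any of those sstables has already been incrementally repaired."""
    if len(sstables_read) <= min_compaction_threshold:
        return False
    if skip_if_any_repaired and any(s["repaired"] for s in sstables_read):
        return False
    return True
```

Under this guard, reads hitting many unrepaired sstables still defragment, while data in the repaired set is left alone so incremental repair does not see spurious unrepaired copies.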





[jira] [Commented] (CASSANDRA-10342) Read defragmentation can cause unnecessary repairs

2015-09-21 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14900984#comment-14900984
 ] 

Aleksey Yeschenko commented on CASSANDRA-10342:
---

See [this comment from Sylvain|https://issues.apache.org/jira/browse/CASSANDRA-7085?focusedCommentId=14594456&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14594456].

> Read defragmentation can cause unnecessary repairs
> --
>
> Key: CASSANDRA-10342
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10342
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Marcus Olsson
>Assignee: Marcus Eriksson
>Priority: Minor
>
> After applying the fix from CASSANDRA-10299 to the cluster we started having 
> a problem of ~20k small sstables appearing for the table with static data 
> when running incremental repair.
> In the logs there were several messages about flushes for that table, one for 
> each repaired range. The flushed sstables were 0.000kb in size with < 100 ops 
> in each. When checking cfstats there were several writes to that table, even 
> though we were only reading from it and read repair did not repair anything.
> After digging around in the codebase I noticed that defragmentation of data 
> can occur while reading, depending on the query and some other conditions. 
> This causes the read data to be inserted again to have it in a more recent 
> sstable, which can be a problem if that data was repaired using incremental 
> repair. The defragmentation is done in 
> [CollationController.java|https://github.com/apache/cassandra/blob/cassandra-2.1/src/java/org/apache/cassandra/db/CollationController.java#L151].
> I guess this wasn't a problem with full repairs since I assume that the 
> digest should be the same even if you have two copies of the same data. But 
> with incremental repair this will most probably cause a mismatch between 
> nodes if that data already was repaired, since the other nodes probably won't 
> have that data in their unrepaired set.
> --
> I can add that the problems on our cluster was probably due to the fact that 
> CASSANDRA-10299 caused the same data to be streamed multiple times and ending 
> up in several sstables. One of the conditions for the defragmentation is that 
> the number of sstables read during a read request has to be more than the 
> minimum number of sstables needed for a compaction (> 4 in our case). So 
> normally I don't think this would cause ~20k sstables to appear; we probably 
> hit an extreme.
> One workaround for this is to use a compaction strategy other than STCS (it 
> seems to be the only affected strategy, at least in 2.1), but the solution 
> might be to either make defragmentation configurable per table or avoid 
> reinserting the data if any of the sstables involved in the read are repaired.





[jira] [Commented] (CASSANDRA-10280) Make DTCS work well with old data

2015-09-21 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14900985#comment-14900985
 ] 

Björn Hegerfors commented on CASSANDRA-10280:
-

Yes, I'm absolutely in favor of expressing this in terms of max window size 
instead of max SSTable age. And it's also become more and more clear to me that 
rather than never compacting SSTables that are too old, we should just keep 
fixed size windows around, so that if SSTables come in there (bootstrap, 
repairs), compaction will happen.

I haven't looked at the patch, but is there a clear way to express maximum 
window size? If base_time_seconds=1 do you then say something like 
max_window_seconds=10? And in that case, will the largest windows be 4 or 16? I 
guess only 4 would make sense with that name...

I've suggested before declaring how many times a window will be coalesced. But 
that might sound really complicated to users. What I mean is a setting like 
"window_coalitions" or "write_amplification" which you can set to 5 in order to 
get a maximum window size of 4^5=1024 times the base window. But let's go with 
whatever is easiest to understand.

> Make DTCS work well with old data
> -
>
> Key: CASSANDRA-10280
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10280
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Core
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
> Fix For: 3.x, 2.1.x, 2.2.x
>
>
> Operational tasks become incredibly expensive if you keep around a long 
> timespan of data with DTCS - with default settings and 1 year of data, the 
> oldest window covers about 180 days. Bootstrapping a node with vnodes with 
> this data layout will force cassandra to compact very many sstables in this 
> window.
> We should probably put a cap on how big the biggest windows can get. We could 
> probably default this to something sane based on max_sstable_age (ie, say we 
> can reasonably handle 1000 sstables per node, then we can calculate how big 
> the windows should be to allow that)





[jira] [Commented] (CASSANDRA-10342) Read defragmentation can cause unnecessary repairs

2015-09-21 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14900982#comment-14900982
 ] 

Aleksey Yeschenko commented on CASSANDRA-10342:
---

bq. we can leverage the time-ordered path for a lot more queries in 3.x

This is not actually true at the moment.

> Read defragmentation can cause unnecessary repairs
> --
>
> Key: CASSANDRA-10342
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10342
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Marcus Olsson
>Assignee: Marcus Eriksson
>Priority: Minor
>
> After applying the fix from CASSANDRA-10299 to the cluster we started having 
> a problem of ~20k small sstables appearing for the table with static data 
> when running incremental repair.
> In the logs there were several messages about flushes for that table, one for 
> each repaired range. The flushed sstables were 0.000kb in size with < 100 ops 
> in each. When checking cfstats there were several writes to that table, even 
> though we were only reading from it and read repair did not repair anything.
> After digging around in the codebase I noticed that defragmentation of data 
> can occur while reading, depending on the query and some other conditions. 
> This causes the read data to be inserted again to have it in a more recent 
> sstable, which can be a problem if that data was repaired using incremental 
> repair. The defragmentation is done in 
> [CollationController.java|https://github.com/apache/cassandra/blob/cassandra-2.1/src/java/org/apache/cassandra/db/CollationController.java#L151].
> I guess this wasn't a problem with full repairs since I assume that the 
> digest should be the same even if you have two copies of the same data. But 
> with incremental repair this will most probably cause a mismatch between 
> nodes if that data already was repaired, since the other nodes probably won't 
> have that data in their unrepaired set.
> --
> I can add that the problems on our cluster was probably due to the fact that 
> CASSANDRA-10299 caused the same data to be streamed multiple times and ending 
> up in several sstables. One of the conditions for the defragmentation is that 
> the number of sstables read during a read request has to be more than the 
> minimum number of sstables needed for a compaction (> 4 in our case). So 
> normally I don't think this would cause ~20k sstables to appear; we probably 
> hit an extreme.
> One workaround for this is to use a compaction strategy other than STCS (it 
> seems to be the only affected strategy, at least in 2.1), but the solution 
> might be to either make defragmentation configurable per table or avoid 
> reinserting the data if any of the sstables involved in the read are repaired.





[jira] [Commented] (CASSANDRA-9324) Map Mutation rejected by Cassandra: IllegalArgumentException

2015-09-21 Thread David Loegering (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14900979#comment-14900979
 ] 

David Loegering commented on CASSANDRA-9324:


Made the change, but now I see a different but similar error:
[CassandraConnection.cpp][613] Write InvalidRequestException: Default 
TException. [Not enough bytes to read value of component 2]



> Map Mutation rejected by Cassandra: IllegalArgumentException
> 
>
> Key: CASSANDRA-9324
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9324
> Project: Cassandra
>  Issue Type: Bug
>  Components: API
> Environment: Windows 7, Cassandra 2.1.5
>Reporter: Mark Wick
>Assignee: Tyler Hobbs
>Priority: Minor
>
> We use a collection (map) in a CQL3 table. We write into that 
> cql3 table using thrift mutations, from a c++ application. We are prototyping 
> migrating from our current Cassandra (2.0.7) to 2.1.5, and are unable to 
> write rows to this cql3 table. We have no problems when we remove the writes 
> to the map column, and all other writes succeed in this case. Cassandra is 
> rejecting our writes and we are catching a TTransportException (no more data 
> to read). The below call stack is from the Cassandra instance that is 
> rejecting the write.
> {code}
> ERROR 14:08:10 Error occurred during processing of message.
> java.lang.IllegalArgumentException: null
> at java.nio.Buffer.limit(Unknown Source) ~[na:1.7.0_71]
> at 
> org.apache.cassandra.utils.ByteBufferUtil.readBytes(ByteBufferUtil.java:543) 
> ~[apache-cassandra-2.1.5.jar:2.1.5]
> at 
> org.apache.cassandra.serializers.CollectionSerializer.readValue(CollectionSerializer.java:124)
>  ~[apache-cassandra-2.1.5.jar:2.1.5]
> at 
> org.apache.cassandra.serializers.MapSerializer.validateForNativeProtocol(MapSerializer.java:80)
>  ~[apache-cassandra-2.1.5.jar:2.1.5]
> at 
> org.apache.cassandra.serializers.CollectionSerializer.validate(CollectionSerializer.java:61)
>  ~[apache-cassandra-2.1.5.jar:2.1.5]
> at 
> org.apache.cassandra.db.marshal.AbstractType.validate(AbstractType.java:97) 
> ~[apache-cassandra-2.1.5.jar:2.1.5]
> at 
> org.apache.cassandra.thrift.ThriftValidation.validateColumnData(ThriftValidation.java:449)
>  ~[apache-cassandra-2.1.5.jar:2.1.5]
> at 
> org.apache.cassandra.thrift.ThriftValidation.validateColumnOrSuperColumn(ThriftValidation.java:318)
>  ~[apache-cassandra-2.1.5.jar:2.1.5]
> at 
> org.apache.cassandra.thrift.ThriftValidation.validateMutation(ThriftValidation.java:385)
>  ~[apache-cassandra-2.1.5.jar:2.1.5]
> at 
> org.apache.cassandra.thrift.CassandraServer.createMutationList(CassandraServer.java:861)
>  ~[apache-cassandra-2.1.5.jar:2.1.5]
> at 
> org.apache.cassandra.thrift.CassandraServer.batch_mutate(CassandraServer.java:976)
>  ~[apache-cassandra-2.1.5.jar:2.1.5]
> at 
> org.apache.cassandra.thrift.Cassandra$Processor$batch_mutate.getResult(Cassandra.java:3996)
>  ~[apache-cassandra-thrift-2.1.5.jar:2.1.5]
> at 
> org.apache.cassandra.thrift.Cassandra$Processor$batch_mutate.getResult(Cassandra.java:3980)
>  ~[apache-cassandra-thrift-2.1.5.jar:2.1.5]
> at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) 
> ~[libthrift-0.9.2.jar:0.9.2]
> at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) 
> ~[libthrift-0.9.2.jar:0.9.2]
> at 
> org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:205)
>  ~[apache-cassandra-2.1.5.jar:2.1.5]
> at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) 
> [na:1.7.0_71]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) 
> [na:1.7.0_71]
> at java.lang.Thread.run(Unknown Source) [na:1.7.0_71]{code}





[jira] [Commented] (CASSANDRA-10357) mmap file boundary selection is broken for some large files

2015-09-21 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14900975#comment-14900975
 ] 

T Jake Luciani commented on CASSANDRA-10357:


I don't see it on the linked branch

> mmap file boundary selection is broken for some large files 
> 
>
> Key: CASSANDRA-10357
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10357
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Benedict
>Assignee: Benedict
> Fix For: 2.1.10
>
>
> If an early open interval occurs too close to an mmap boundary, the boundary 
> can be lost. Patch available 
> [here|https://github.com/belliottsmith/cassandra/tree/mmap-boundaries].





[jira] [Commented] (CASSANDRA-7486) Migrate to G1GC by default

2015-09-21 Thread Jeremy Hanna (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14900957#comment-14900957
 ] 

Jeremy Hanna commented on CASSANDRA-7486:
-

I was curious whether the 2.1 or 2.2 branches exhibit the same behavior with 
the same GC settings, as additional data points.

> Migrate to G1GC by default
> --
>
> Key: CASSANDRA-7486
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7486
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Config
>Reporter: Jonathan Ellis
> Fix For: 3.0 alpha 1
>
>
> See 
> http://www.slideshare.net/MonicaBeckwith/garbage-first-garbage-collector-g1-7486gc-migration-to-expectations-and-advanced-tuning
>  and https://twitter.com/rbranson/status/482113561431265281
> May want to default 2.1 to G1.
> 2.1 is a different animal from 2.0 after moving most of memtables off heap.  
> Suspect this will help G1 even more than CMS.  (NB this is off by default but 
> needs to be part of the test.)





[jira] [Commented] (CASSANDRA-7486) Migrate to G1GC by default

2015-09-21 Thread Ryan McGuire (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14900923#comment-14900923
 ] 

Ryan McGuire commented on CASSANDRA-7486:
-

Here's the logs: http://scp.datastax.com/~ryan/cstar_perf/01b714d8_logs.tar.bz2

> Migrate to G1GC by default
> --
>
> Key: CASSANDRA-7486
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7486
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Config
>Reporter: Jonathan Ellis
> Fix For: 3.0 alpha 1
>
>
> See 
> http://www.slideshare.net/MonicaBeckwith/garbage-first-garbage-collector-g1-7486gc-migration-to-expectations-and-advanced-tuning
>  and https://twitter.com/rbranson/status/482113561431265281
> May want to default 2.1 to G1.
> 2.1 is a different animal from 2.0 after moving most of memtables off heap.  
> Suspect this will help G1 even more than CMS.  (NB this is off by default but 
> needs to be part of the test.)





[jira] [Comment Edited] (CASSANDRA-7486) Migrate to G1GC by default

2015-09-21 Thread Ryan McGuire (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14900923#comment-14900923
 ] 

Ryan McGuire edited comment on CASSANDRA-7486 at 9/21/15 4:23 PM:
--

[~benedict] Here's the logs: 
http://scp.datastax.com/~ryan/cstar_perf/01b714d8_logs.tar.bz2


was (Author: enigmacurry):
Here's the logs: http://scp.datastax.com/~ryan/cstar_perf/01b714d8_logs.tar.bz2

> Migrate to G1GC by default
> --
>
> Key: CASSANDRA-7486
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7486
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Config
>Reporter: Jonathan Ellis
> Fix For: 3.0 alpha 1
>
>
> See 
> http://www.slideshare.net/MonicaBeckwith/garbage-first-garbage-collector-g1-7486gc-migration-to-expectations-and-advanced-tuning
>  and https://twitter.com/rbranson/status/482113561431265281
> May want to default 2.1 to G1.
> 2.1 is a different animal from 2.0 after moving most of memtables off heap.  
> Suspect this will help G1 even more than CMS.  (NB this is off by default but 
> needs to be part of the test.)





[jira] [Commented] (CASSANDRA-9324) Map Mutation rejected by Cassandra: IllegalArgumentException

2015-09-21 Thread David Loegering (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14900906#comment-14900906
 ] 

David Loegering commented on CASSANDRA-9324:


OK, I found where we are encoding the values. We use the same encoding, with 
16-bit sizes, for composite keys. I assume the new sizes only pertain to 
collections/maps? So I need two ways to encode: one for collections and the 
old way for columns?

> Map Mutation rejected by Cassandra: IllegalArgumentException
> 
>
> Key: CASSANDRA-9324
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9324
> Project: Cassandra
>  Issue Type: Bug
>  Components: API
> Environment: Windows 7, Cassandra 2.1.5
>Reporter: Mark Wick
>Assignee: Tyler Hobbs
>Priority: Minor
>
> We use a collection (map) in a CQL3 table. We write into that 
> cql3 table using thrift mutations, from a c++ application. We are prototyping 
> migrating from our current Cassandra (2.0.7) to 2.1.5, and are unable to 
> write rows to this cql3 table. We have no problems when we remove the writes 
> to the map column, and all other writes succeed in this case. Cassandra is 
> rejecting our writes and we are catching a TTransportException (no more data 
> to read). The below call stack is from the Cassandra instance that is 
> rejecting the write.
> {code}
> ERROR 14:08:10 Error occurred during processing of message.
> java.lang.IllegalArgumentException: null
> at java.nio.Buffer.limit(Unknown Source) ~[na:1.7.0_71]
> at 
> org.apache.cassandra.utils.ByteBufferUtil.readBytes(ByteBufferUtil.java:543) 
> ~[apache-cassandra-2.1.5.jar:2.1.5]
> at 
> org.apache.cassandra.serializers.CollectionSerializer.readValue(CollectionSerializer.java:124)
>  ~[apache-cassandra-2.1.5.jar:2.1.5]
> at 
> org.apache.cassandra.serializers.MapSerializer.validateForNativeProtocol(MapSerializer.java:80)
>  ~[apache-cassandra-2.1.5.jar:2.1.5]
> at 
> org.apache.cassandra.serializers.CollectionSerializer.validate(CollectionSerializer.java:61)
>  ~[apache-cassandra-2.1.5.jar:2.1.5]
> at 
> org.apache.cassandra.db.marshal.AbstractType.validate(AbstractType.java:97) 
> ~[apache-cassandra-2.1.5.jar:2.1.5]
> at 
> org.apache.cassandra.thrift.ThriftValidation.validateColumnData(ThriftValidation.java:449)
>  ~[apache-cassandra-2.1.5.jar:2.1.5]
> at 
> org.apache.cassandra.thrift.ThriftValidation.validateColumnOrSuperColumn(ThriftValidation.java:318)
>  ~[apache-cassandra-2.1.5.jar:2.1.5]
> at 
> org.apache.cassandra.thrift.ThriftValidation.validateMutation(ThriftValidation.java:385)
>  ~[apache-cassandra-2.1.5.jar:2.1.5]
> at 
> org.apache.cassandra.thrift.CassandraServer.createMutationList(CassandraServer.java:861)
>  ~[apache-cassandra-2.1.5.jar:2.1.5]
> at 
> org.apache.cassandra.thrift.CassandraServer.batch_mutate(CassandraServer.java:976)
>  ~[apache-cassandra-2.1.5.jar:2.1.5]
> at 
> org.apache.cassandra.thrift.Cassandra$Processor$batch_mutate.getResult(Cassandra.java:3996)
>  ~[apache-cassandra-thrift-2.1.5.jar:2.1.5]
> at 
> org.apache.cassandra.thrift.Cassandra$Processor$batch_mutate.getResult(Cassandra.java:3980)
>  ~[apache-cassandra-thrift-2.1.5.jar:2.1.5]
> at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) 
> ~[libthrift-0.9.2.jar:0.9.2]
> at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) 
> ~[libthrift-0.9.2.jar:0.9.2]
> at 
> org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:205)
>  ~[apache-cassandra-2.1.5.jar:2.1.5]
> at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) 
> [na:1.7.0_71]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) 
> [na:1.7.0_71]
> at java.lang.Thread.run(Unknown Source) [na:1.7.0_71]{code}





[jira] [Issue Comment Deleted] (CASSANDRA-9324) Map Mutation rejected by Cassandra: IllegalArgumentException

2015-09-21 Thread David Loegering (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Loegering updated CASSANDRA-9324:
---
Comment: was deleted

(was: Hi Tyler,

Thanks for the explanation, but that is where the confusion is, we are not 
using 2.0 nodes.  We are using 2.1 with the 2.1 generated thrift interface and 
it is not working.  We are not using 2.0 nodes nor a 2.0 interface.  Wouldn’t 
that indicate that there is an issue with the 2.1 interface?  Where in the 2.1 
thrift interface layer do we need to make the change?  Should this issue be 
reopened?

Kind Regards,

David


)

> Map Mutation rejected by Cassandra: IllegalArgumentException
> 
>
> Key: CASSANDRA-9324
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9324
> Project: Cassandra
>  Issue Type: Bug
>  Components: API
> Environment: Windows 7, Cassandra 2.1.5
>Reporter: Mark Wick
>Assignee: Tyler Hobbs
>Priority: Minor
>
> We use a collection (map) in a CQL3 table. We write into that 
> cql3 table using thrift mutations, from a c++ application. We are prototyping 
> migrating from our current Cassandra (2.0.7) to 2.1.5, and are unable to 
> write rows to this cql3 table. We have no problems when we remove the writes 
> to the map column, and all other writes succeed in this case. Cassandra is 
> rejecting our writes and we are catching a TTransportException (no more data 
> to read). The below call stack is from the Cassandra instance that is 
> rejecting the write.
> {code}
> ERROR 14:08:10 Error occurred during processing of message.
> java.lang.IllegalArgumentException: null
> at java.nio.Buffer.limit(Unknown Source) ~[na:1.7.0_71]
> at 
> org.apache.cassandra.utils.ByteBufferUtil.readBytes(ByteBufferUtil.java:543) 
> ~[apache-cassandra-2.1.5.jar:2.1.5]
> at 
> org.apache.cassandra.serializers.CollectionSerializer.readValue(CollectionSerializer.java:124)
>  ~[apache-cassandra-2.1.5.jar:2.1.5]
> at 
> org.apache.cassandra.serializers.MapSerializer.validateForNativeProtocol(MapSerializer.java:80)
>  ~[apache-cassandra-2.1.5.jar:2.1.5]
> at 
> org.apache.cassandra.serializers.CollectionSerializer.validate(CollectionSerializer.java:61)
>  ~[apache-cassandra-2.1.5.jar:2.1.5]
> at 
> org.apache.cassandra.db.marshal.AbstractType.validate(AbstractType.java:97) 
> ~[apache-cassandra-2.1.5.jar:2.1.5]
> at 
> org.apache.cassandra.thrift.ThriftValidation.validateColumnData(ThriftValidation.java:449)
>  ~[apache-cassandra-2.1.5.jar:2.1.5]
> at 
> org.apache.cassandra.thrift.ThriftValidation.validateColumnOrSuperColumn(ThriftValidation.java:318)
>  ~[apache-cassandra-2.1.5.jar:2.1.5]
> at 
> org.apache.cassandra.thrift.ThriftValidation.validateMutation(ThriftValidation.java:385)
>  ~[apache-cassandra-2.1.5.jar:2.1.5]
> at 
> org.apache.cassandra.thrift.CassandraServer.createMutationList(CassandraServer.java:861)
>  ~[apache-cassandra-2.1.5.jar:2.1.5]
> at 
> org.apache.cassandra.thrift.CassandraServer.batch_mutate(CassandraServer.java:976)
>  ~[apache-cassandra-2.1.5.jar:2.1.5]
> at 
> org.apache.cassandra.thrift.Cassandra$Processor$batch_mutate.getResult(Cassandra.java:3996)
>  ~[apache-cassandra-thrift-2.1.5.jar:2.1.5]
> at 
> org.apache.cassandra.thrift.Cassandra$Processor$batch_mutate.getResult(Cassandra.java:3980)
>  ~[apache-cassandra-thrift-2.1.5.jar:2.1.5]
> at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) 
> ~[libthrift-0.9.2.jar:0.9.2]
> at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) 
> ~[libthrift-0.9.2.jar:0.9.2]
> at 
> org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:205)
>  ~[apache-cassandra-2.1.5.jar:2.1.5]
> at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) 
> [na:1.7.0_71]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) 
> [na:1.7.0_71]
> at java.lang.Thread.run(Unknown Source) [na:1.7.0_71]{code}





[jira] [Commented] (CASSANDRA-10357) mmap file boundary selection is broken for some large files

2015-09-21 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14900885#comment-14900885
 ] 

Benedict commented on CASSANDRA-10357:
--

I've pushed a new version, with tests included and the boundary logic 
rewritten. While fixing this I encountered another bug, and decided it was 
time to make the boundary logic consistent across all users. That bug was 
related to the fact that we do not serialize the length into the file: if we 
haven't recorded the length as a "potential boundary", we may not serialize 
the boundary just before the length, in which case, when we deserialize the 
bounds, the last segment may be larger than {{MAX_SEGMENT_SIZE}} even though 
it didn't need to be.
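To make the invariant concrete, here is a hedged sketch (Python, not the actual Java patch; {{finalize_boundaries}} is a hypothetical name) of the property being fixed: the file length must always end up recorded as the final boundary, and no gap between consecutive boundaries may exceed {{MAX_SEGMENT_SIZE}}:

```python
# Hypothetical sketch (not Cassandra's code) of the boundary invariant under
# discussion: every segment spans <= max_segment bytes, and the file length
# itself is always the last boundary. Forgetting to append the length is
# exactly the failure mode described above.
MAX_SEGMENT_SIZE = 2**31 - 1  # a single mmap'd ByteBuffer on the JVM is capped at 2 GB

def finalize_boundaries(candidates, file_length, max_segment=MAX_SEGMENT_SIZE):
    """Return sorted boundaries covering (0, file_length] with no gap > max_segment."""
    bounds = sorted(b for b in set(candidates) if 0 < b < file_length)
    bounds.append(file_length)  # the crucial step: the length is always a boundary
    result, prev = [], 0
    for b in bounds:
        # split any over-long gap into max_segment-sized pieces
        while b - prev > max_segment:
            prev += max_segment
            result.append(prev)
        result.append(b)
        prev = b
    return result
```

Under these (illustrative) rules, {{finalize_boundaries([1000], 2500, max_segment=1024)}} yields boundaries whose last element equals the file length and whose gaps all fit in a segment.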


> mmap file boundary selection is broken for some large files 
> 
>
> Key: CASSANDRA-10357
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10357
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Benedict
>Assignee: Benedict
> Fix For: 2.1.10
>
>
> If an early open interval occurs too close to an mmap boundary, the boundary 
> can be lost. Patch available 
> [here|https://github.com/belliottsmith/cassandra/tree/mmap-boundaries].





svn commit: r1704333 - in /cassandra/site: publish/download/index.html src/settings.py

2015-09-21 Thread jake
Author: jake
Date: Mon Sep 21 15:53:52 2015
New Revision: 1704333

URL: http://svn.apache.org/viewvc?rev=1704333&view=rev
Log:
2.0.17 and 3.0.0-rc1

Modified:
cassandra/site/publish/download/index.html
cassandra/site/src/settings.py

Modified: cassandra/site/publish/download/index.html
URL: 
http://svn.apache.org/viewvc/cassandra/site/publish/download/index.html?rev=1704333&r1=1704332&r2=1704333&view=diff
==
--- cassandra/site/publish/download/index.html (original)
+++ cassandra/site/publish/download/index.html Mon Sep 21 15:53:52 2015
@@ -125,16 +125,16 @@
 
   Development Cassandra Server Releases (not production 
ready)
   
-  The latest development release is 3.0.0-beta2 (released on
-  2015-09-08).
+  The latest development release is 3.0.0-rc1 (released on
+  2015-09-21).
   
 
   
 
-http://www.apache.org/dyn/closer.lua/cassandra/3.0.0/apache-cassandra-3.0.0-beta2-bin.tar.gz";>apache-cassandra-3.0.0-beta2-bin.tar.gz
-[http://www.apache.org/dist/cassandra/3.0.0/apache-cassandra-3.0.0-beta2-bin.tar.gz.asc";>PGP]
-[http://www.apache.org/dist/cassandra/3.0.0/apache-cassandra-3.0.0-beta2-bin.tar.gz.md5";>MD5]
-[http://www.apache.org/dist/cassandra/3.0.0/apache-cassandra-3.0.0-beta2-bin.tar.gz.sha1";>SHA1]
+http://www.apache.org/dyn/closer.lua/cassandra/3.0.0/apache-cassandra-3.0.0-rc1-bin.tar.gz";>apache-cassandra-3.0.0-rc1-bin.tar.gz
+[http://www.apache.org/dist/cassandra/3.0.0/apache-cassandra-3.0.0-rc1-bin.tar.gz.asc";>PGP]
+[http://www.apache.org/dist/cassandra/3.0.0/apache-cassandra-3.0.0-rc1-bin.tar.gz.md5";>MD5]
+[http://www.apache.org/dist/cassandra/3.0.0/apache-cassandra-3.0.0-rc1-bin.tar.gz.sha1";>SHA1]
 
   
   
@@ -146,15 +146,15 @@
   
   
  The latest release on the 2.0 branch is
-  2.0.16 (released on 2015-06-22).
+  2.0.17 (released on 2015-09-21).
   
 
   
 
-http://www.apache.org/dyn/closer.lua/cassandra/2.0.16/apache-cassandra-2.0.16-bin.tar.gz";>apache-cassandra-2.0.16-bin.tar.gz
-[http://www.apache.org/dist/cassandra/2.0.16/apache-cassandra-2.0.16-bin.tar.gz.asc";>PGP]
-[http://www.apache.org/dist/cassandra/2.0.16/apache-cassandra-2.0.16-bin.tar.gz.md5";>MD5]
-[http://www.apache.org/dist/cassandra/2.0.16/apache-cassandra-2.0.16-bin.tar.gz.sha1";>SHA1]
+http://www.apache.org/dyn/closer.lua/cassandra/2.0.17/apache-cassandra-2.0.17-bin.tar.gz";>apache-cassandra-2.0.17-bin.tar.gz
+[http://www.apache.org/dist/cassandra/2.0.17/apache-cassandra-2.0.17-bin.tar.gz.asc";>PGP]
+[http://www.apache.org/dist/cassandra/2.0.17/apache-cassandra-2.0.17-bin.tar.gz.md5";>MD5]
+[http://www.apache.org/dist/cassandra/2.0.17/apache-cassandra-2.0.17-bin.tar.gz.sha1";>SHA1]
 
   
   
@@ -189,18 +189,18 @@
   
   
 
-http://www.apache.org/dyn/closer.lua/cassandra/2.0.16/apache-cassandra-2.0.16-src.tar.gz";>apache-cassandra-2.0.16-src.tar.gz
-[http://www.apache.org/dist/cassandra/2.0.16/apache-cassandra-2.0.16-src.tar.gz.asc";>PGP]
-[http://www.apache.org/dist/cassandra/2.0.16/apache-cassandra-2.0.16-src.tar.gz.md5";>MD5]
-[http://www.apache.org/dist/cassandra/2.0.16/apache-cassandra-2.0.16-src.tar.gz.sha1";>SHA1]
+http://www.apache.org/dyn/closer.lua/cassandra/2.0.17/apache-cassandra-2.0.17-src.tar.gz";>apache-cassandra-2.0.17-src.tar.gz
+[http://www.apache.org/dist/cassandra/2.0.17/apache-cassandra-2.0.17-src.tar.gz.asc";>PGP]
+[http://www.apache.org/dist/cassandra/2.0.17/apache-cassandra-2.0.17-src.tar.gz.md5";>MD5]
+[http://www.apache.org/dist/cassandra/2.0.17/apache-cassandra-2.0.17-src.tar.gz.sha1";>SHA1]
 
   
   
 
-http://www.apache.org/dyn/closer.lua/cassandra/3.0.0/apache-cassandra-3.0.0-beta2-src.tar.gz";>apache-cassandra-3.0.0-beta2-src.tar.gz
-[http://www.apache.org/dist/cassandra/3.0.0/apache-cassandra-3.0.0-beta2-src.tar.gz.asc";>PGP]
-[http://www.apache.org/dist/cassandra/3.0.0/apache-cassandra-3.0.0-beta2-src.tar.gz.md5";>MD5]
-[http://www.apache.org/dist/cassandra/3.0.0/apache-cassandra-3.0.0-beta2-src.tar.gz.sha1";>SHA1]
+http://www.apache.org/dyn/closer.lua/cassandra/3.0.0/apache-cassandra-3.0.0-rc1-src.tar.gz";>apache-cassandra-3.0.0-rc1-src.tar.gz
+[http://www.apache.org/dist/cassandra/3.0.0/apache-cassandra-3.0.0-rc1-src.tar.gz.asc";>PGP]
+[http://www.apache.org/dist/cassandra/3.0.0/apache-cassandra-3.0.0-rc1-src.tar.gz.md5";>MD5]
+[http://www.apache.org/dist/cassandra/3.0.0/apache-cassandra-3.0.0-rc1-src.tar.gz.sha1";>SHA1]
 
   


Modified: cassandra/site/src/settings.py
URL: 
http://svn.apache.org/viewvc/cassandra/site/src/settings.py?rev=1704333&r1=1704332&r2=1704333&view=diff
==
--- cassandra/site/src/settings.py (original)
+++ cassandra/site/src/settings.py Mon Sep 21 15:53:52 2015
@@ -98,11 +98,11 @@ class CassandraDef(object):
 oldstable_version = '2.1.9'

[jira] [Comment Edited] (CASSANDRA-7486) Migrate to G1GC by default

2015-09-21 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14900842#comment-14900842
 ] 

Benedict edited comment on CASSANDRA-7486 at 9/21/15 3:40 PM:
--

Regrettably, that run crashed. Will have to diagnose the logs to see what may 
have happened - [~EnigmaCurry], could you unstick the cstar job, and collect 
the log files?

(The weird thing about CMS on that particular run is that 2.2 does not degrade; 
it is still 40% faster)


was (Author: benedict):
Regrettably, that run crashed. Will have to diagnose the logs to see what may 
have happened - [~EnigmaCurry], could you unstick the cstar job, and collect 
the log files?

> Migrate to G1GC by default
> --
>
> Key: CASSANDRA-7486
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7486
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Config
>Reporter: Jonathan Ellis
> Fix For: 3.0 alpha 1
>
>
> See 
> http://www.slideshare.net/MonicaBeckwith/garbage-first-garbage-collector-g1-7486gc-migration-to-expectations-and-advanced-tuning
>  and https://twitter.com/rbranson/status/482113561431265281
> May want to default 2.1 to G1.
> 2.1 is a different animal from 2.0 after moving most of memtables off heap.  
> Suspect this will help G1 even more than CMS.  (NB this is off by default but 
> needs to be part of the test.)





[jira] [Commented] (CASSANDRA-10280) Make DTCS work well with old data

2015-09-21 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14900847#comment-14900847
 ] 

Marcus Eriksson commented on CASSANDRA-10280:
-

[~jjirsa] I am

[~jshook], [~Bj0rn] [~anissinen] wdyt? The idea is that once we limit window 
sizes we will naturally stop compacting old windows unless we really have to 
(for example, after a bootstrap), so keeping max_sstable_age_days is mostly 
pointless.
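As a rough illustration of why uncapped windows grow so large, and what a cap buys, here is a hypothetical model of DTCS-style geometric window growth (illustrative only, not Cassandra's implementation; the function name and defaults are made up):

```python
# Hypothetical model of DTCS-style window growth: min_threshold windows of
# each size, with sizes growing by a factor of min_threshold, until the
# whole retained timespan is covered. Not Cassandra's actual code.
def oldest_window_days(timespan_days, base_hours=1.0, min_threshold=4,
                       cap_days=None):
    """Approximate size (in days) of the oldest window for a given timespan."""
    remaining = timespan_days * 24.0
    size = base_hours
    cap = cap_days * 24.0 if cap_days is not None else float("inf")
    while True:
        size = min(size, cap)
        remaining -= min_threshold * size   # min_threshold windows of this size
        if remaining <= 0 or size >= cap:
            return size / 24.0
        size *= min_threshold

# Under this model, a year of data with the defaults lands the oldest window
# around 170 days -- in the same ballpark as the "about 180 days" figure in
# the ticket description -- while any cap bounds it outright.
```

With a cap of, say, 10 days, the oldest window stays at 10 days no matter how much history accumulates, so sstable counts per window stay bounded, which is the trade-off described above.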

> Make DTCS work well with old data
> -
>
> Key: CASSANDRA-10280
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10280
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Core
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
> Fix For: 3.x, 2.1.x, 2.2.x
>
>
> Operational tasks become incredibly expensive if you keep around a long 
> timespan of data with DTCS - with default settings and 1 year of data, the 
> oldest window covers about 180 days. Bootstrapping a node with vnodes with 
> this data layout will force cassandra to compact very many sstables in this 
> window.
> We should probably put a cap on how big the biggest windows can get. We could 
> probably default this to something sane based on max_sstable_age (ie, say we 
> can reasonably handle 1000 sstables per node, then we can calculate how big 
> the windows should be to allow that)





[jira] [Commented] (CASSANDRA-7486) Migrate to G1GC by default

2015-09-21 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14900842#comment-14900842
 ] 

Benedict commented on CASSANDRA-7486:
-

Regrettably, that run crashed. Will have to diagnose the logs to see what may 
have happened - [~EnigmaCurry], could you unstick the cstar job, and collect 
the log files?

> Migrate to G1GC by default
> --
>
> Key: CASSANDRA-7486
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7486
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Config
>Reporter: Jonathan Ellis
> Fix For: 3.0 alpha 1
>
>
> See 
> http://www.slideshare.net/MonicaBeckwith/garbage-first-garbage-collector-g1-7486gc-migration-to-expectations-and-advanced-tuning
>  and https://twitter.com/rbranson/status/482113561431265281
> May want to default 2.1 to G1.
> 2.1 is a different animal from 2.0 after moving most of memtables off heap.  
> Suspect this will help G1 even more than CMS.  (NB this is off by default but 
> needs to be part of the test.)





[jira] [Updated] (CASSANDRA-10369) cqlsh prompt includes name of keyspace after failed `use` statement

2015-09-21 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-10369:
-
Reviewer: Aleksey Yeschenko

> cqlsh prompt includes name of keyspace after failed `use` statement
> ---
>
> Key: CASSANDRA-10369
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10369
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jim Witschey
>Assignee: Robert Stupp
>Priority: Minor
>  Labels: cqlsh
> Fix For: 2.2.x, 3.0.x
>
>
> I found this while addressing CASSANDRA-10289.
> In cqlsh, if the user enters {{USE ks}}, but there is no keyspace named 
> {{ks}}, the prompt will read {{cqlsh:ks>}}. It should just read {{cqlsh>}}, 
> since the underlying session did not actually switch to use {{ks}}.
> I believe the bug is in cqlsh and not, e.g., the driver, because the 
> statement, as expected, raises an {{InvalidRequest}} error.
> The behavior shows in a test in the cqlshlib nosetests here:
> https://github.com/apache/cassandra/blob/03f556ffa8718754fe4eb329af2002d83ffc7147/pylib/cqlshlib/test/test_cqlsh_output.py#L545
> An example failure on CassCI is here:
> http://cassci.datastax.com/job/scratch_mambocab-fix_cqlsh/11/testReport/cqlshlib.test.test_cqlsh_output/TestCqlshOutput/test_prompt/
> You can also reproduce it trivially in ccm, or however you choose to run 
> clusters locally:
> {code}
> ccm create cqlsh -v git:trunk -n 1 ; ccm start --wait-for-binary-proto ; ccm 
> node1 cqlsh
> http://git-wip-us.apache.org/repos/asf/cassandra.git git:trunk
> Fetching Cassandra updates...
> Current cluster is now: cqlsh
> Connected to cqlsh at 127.0.0.1:9042.
> [cqlsh 5.0.1 | Cassandra 3.0.0-beta2-SNAPSHOT | CQL spec 3.3.1 | Native 
> protocol v4]
> Use HELP for help.
> cqlsh> use nonexistentkeyspace;
> InvalidRequest: code=2200 [Invalid query] message="Keyspace 
> 'nonexistentkeyspace' does not exist"
> cqlsh:nonexistentkeyspace> 
> {code}
> That last line should read {{cqlsh>}} instead.
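The fix the description implies can be sketched as follows (a minimal hypothetical model, not cqlsh's actual code; the class and method names are made up): update the prompt's keyspace state only after the {{USE}} statement succeeds.

```python
# Minimal sketch of the fix shape: the prompt keyspace is assigned only on
# the success path, so a rejected USE leaves the prompt unchanged.
class InvalidRequest(Exception):
    pass

class Shell:
    def __init__(self, known_keyspaces):
        self.known = known_keyspaces
        self.keyspace = None

    def execute_use(self, ks):
        # stand-in for sending USE to the server, which validates the keyspace
        if ks not in self.known:
            raise InvalidRequest("Keyspace %r does not exist" % ks)
        return ks

    def do_use(self, ks):
        try:
            self.keyspace = self.execute_use(ks)  # only reached on success
        except InvalidRequest:
            pass  # real cqlsh prints the error; keyspace stays as it was

    @property
    def prompt(self):
        return "cqlsh:%s> " % self.keyspace if self.keyspace else "cqlsh> "
```

In this sketch a failed {{USE nonexistentkeyspace}} leaves the prompt as {{cqlsh>}}, while a successful one switches it to {{cqlsh:ks>}}.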





[jira] [Commented] (CASSANDRA-10369) cqlsh prompt includes name of keyspace after failed `use` statement

2015-09-21 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14900792#comment-14900792
 ] 

Aleksey Yeschenko commented on CASSANDRA-10369:
---

+1

> cqlsh prompt includes name of keyspace after failed `use` statement
> ---
>
> Key: CASSANDRA-10369
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10369
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jim Witschey
>Assignee: Robert Stupp
>Priority: Minor
>  Labels: cqlsh
> Fix For: 2.2.x, 3.0.x
>
>
> I found this while addressing CASSANDRA-10289.
> In cqlsh, if the user enters {{USE ks}}, but there is no keyspace named 
> {{ks}}, the prompt will read {{cqlsh:ks>}}. It should just read {{cqlsh>}}, 
> since the underlying session did not actually switch to use {{ks}}.
> I believe the bug is in cqlsh and not, e.g., the driver, because the 
> statement, as expected, raises an {{InvalidRequest}} error.
> The behavior shows in a test in the cqlshlib nosetests here:
> https://github.com/apache/cassandra/blob/03f556ffa8718754fe4eb329af2002d83ffc7147/pylib/cqlshlib/test/test_cqlsh_output.py#L545
> An example failure on CassCI is here:
> http://cassci.datastax.com/job/scratch_mambocab-fix_cqlsh/11/testReport/cqlshlib.test.test_cqlsh_output/TestCqlshOutput/test_prompt/
> You can also reproduce it trivially in ccm, or however you choose to run 
> clusters locally:
> {code}
> ccm create cqlsh -v git:trunk -n 1 ; ccm start --wait-for-binary-proto ; ccm 
> node1 cqlsh
> http://git-wip-us.apache.org/repos/asf/cassandra.git git:trunk
> Fetching Cassandra updates...
> Current cluster is now: cqlsh
> Connected to cqlsh at 127.0.0.1:9042.
> [cqlsh 5.0.1 | Cassandra 3.0.0-beta2-SNAPSHOT | CQL spec 3.3.1 | Native 
> protocol v4]
> Use HELP for help.
> cqlsh> use nonexistentkeyspace;
> InvalidRequest: code=2200 [Invalid query] message="Keyspace 
> 'nonexistentkeyspace' does not exist"
> cqlsh:nonexistentkeyspace> 
> {code}
> That last line should read {{cqlsh>}} instead.





[jira] [Commented] (CASSANDRA-10369) cqlsh prompt includes name of keyspace after failed `use` statement

2015-09-21 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14900727#comment-14900727
 ] 

Robert Stupp commented on CASSANDRA-10369:
--

[~mambocab], wanna review?

> cqlsh prompt includes name of keyspace after failed `use` statement
> ---
>
> Key: CASSANDRA-10369
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10369
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jim Witschey
>Assignee: Robert Stupp
>Priority: Minor
>  Labels: cqlsh
> Fix For: 2.2.x, 3.0.x
>
>
> I found this while addressing CASSANDRA-10289.
> In cqlsh, if the user enters {{USE ks}}, but there is no keyspace named 
> {{ks}}, the prompt will read {{cqlsh:ks>}}. It should just read {{cqlsh>}}, 
> since the underlying session did not actually switch to use {{ks}}.
> I believe the bug is in cqlsh and not, e.g., the driver, because the 
> statement, as expected, raises an {{InvalidRequest}} error.
> The behavior shows in a test in the cqlshlib nosetests here:
> https://github.com/apache/cassandra/blob/03f556ffa8718754fe4eb329af2002d83ffc7147/pylib/cqlshlib/test/test_cqlsh_output.py#L545
> An example failure on CassCI is here:
> http://cassci.datastax.com/job/scratch_mambocab-fix_cqlsh/11/testReport/cqlshlib.test.test_cqlsh_output/TestCqlshOutput/test_prompt/
> You can also reproduce it trivially in ccm, or however you choose to run 
> clusters locally:
> {code}
> ccm create cqlsh -v git:trunk -n 1 ; ccm start --wait-for-binary-proto ; ccm 
> node1 cqlsh
> http://git-wip-us.apache.org/repos/asf/cassandra.git git:trunk
> Fetching Cassandra updates...
> Current cluster is now: cqlsh
> Connected to cqlsh at 127.0.0.1:9042.
> [cqlsh 5.0.1 | Cassandra 3.0.0-beta2-SNAPSHOT | CQL spec 3.3.1 | Native 
> protocol v4]
> Use HELP for help.
> cqlsh> use nonexistentkeyspace;
> InvalidRequest: code=2200 [Invalid query] message="Keyspace 
> 'nonexistentkeyspace' does not exist"
> cqlsh:nonexistentkeyspace> 
> {code}
> That last line should read {{cqlsh>}} instead.





[jira] [Created] (CASSANDRA-10379) Consider using -XX:+TrustFinalNonStaticFields

2015-09-21 Thread Robert Stupp (JIRA)
Robert Stupp created CASSANDRA-10379:


 Summary: Consider using -XX:+TrustFinalNonStaticFields
 Key: CASSANDRA-10379
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10379
 Project: Cassandra
  Issue Type: Improvement
Reporter: Robert Stupp
 Fix For: 3.x


The JVM option {{-XX:+TrustFinalNonStaticFields}}, although experimental, seems 
to improve performance a bit without any code change. Therefore I propose to 
include it in {{cassandra-env.sh/ps1}}.

[cstar perf 
benchmark|http://cstar.datastax.com/graph?stats=a6e75018-5ff4-11e5-bf84-42010af0688f&metric=op_rate&operation=1_user&smoothing=1&show_aggregates=true&xmin=0&xmax=865.59&ymin=0&ymax=145568.5]
The cstar test was run with 8u45.

{noformat}
JVM_OPTS="$JVM_OPTS -XX:+UnlockExperimentalVMOptions"
JVM_OPTS="$JVM_OPTS -XX:+TrustFinalNonStaticFields"
{noformat}






[jira] [Comment Edited] (CASSANDRA-9324) Map Mutation rejected by Cassandra: IllegalArgumentException

2015-09-21 Thread David Loegering (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14900441#comment-14900441
 ] 

David Loegering edited comment on CASSANDRA-9324 at 9/21/15 12:47 PM:
--

Hi Tyler,

Thanks for the explanation, but that is where the confusion is, we are not 
using 2.0 nodes.  We are using 2.1 with the 2.1 generated thrift interface and 
it is not working.  We are not using 2.0 nodes nor a 2.0 interface.  Wouldn’t 
that indicate that there is an issue with the 2.1 interface?  Where in the 2.1 
thrift interface layer do we need to make the change?  Should this issue be 
reopened?

Kind Regards,

David





was (Author: dloegering):
Hi Tyler,

Thanks for the explanation, but that is where the confusion is, we are not 
using 2.0 nodes.  We are using 2.1 with the 2.1 generated thrift interface and 
it is not working.  We are not using 2.0 nodes nor a 2.0 interface.  Wouldn’t 
that indicate that there is an issue with the 2.1 interface?  Where in the 2.1 
thrift interface layer do we need to make the change?  Should this issue be 
reopened?


Kind Regards,


David





From: Tyler Hobbs (JIRA)
Sent: ‎Friday‎, ‎September‎ ‎18‎, ‎2015 ‎4‎:‎33‎ ‎PM
To: dloeger...@comcast.net






[ 
https://issues.apache.org/jira/browse/CASSANDRA-9324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14876478#comment-14876478
 ] 

Tyler Hobbs commented on CASSANDRA-9324:


[~dloegering] no, collections are still supported, you just need to encode 
collections slightly different when talking to 2.1 nodes vs 2.0 nodes.  With 
2.0 nodes, the collection element count and the sizes of individual collection 
elements should be shorts (two bytes).  With 2.1 nodes, the collection element 
count and sizes are ints (four bytes).
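Tyler's point can be illustrated with a hedged sketch (Python's {{struct}} module, not Cassandra or Thrift client code; {{encode_map}} is a made-up helper): the same map serialized with 2.0-style 16-bit sizes versus 2.1-style 32-bit sizes produces different byte streams, and a 2.1 node handed the 16-bit form misreads the length prefixes.

```python
# Illustrative only -- not the actual Thrift/Cassandra wire code. Shows the
# 2.0-style (16-bit) vs 2.1-style (32-bit) framing of a map's element count
# and element sizes.
import struct

def encode_map(entries, width_fmt):
    """Serialize a {bytes: bytes} map with length-prefixed keys and values.

    width_fmt is '>H' (unsigned short, 2.0-style) or '>I' (unsigned int,
    2.1-style), applied to the element count and every size prefix.
    """
    out = struct.pack(width_fmt, len(entries))        # element count
    for k, v in entries.items():
        out += struct.pack(width_fmt, len(k)) + k     # key size + key bytes
        out += struct.pack(width_fmt, len(v)) + v     # value size + value bytes
    return out

m = {b"k1": b"v1", b"k2": b"v22"}
old_style = encode_map(m, ">H")  # what a 2.0 node expects
new_style = encode_map(m, ">I")  # what a 2.1 node expects
# Feeding the ">H" form to a reader expecting ">I" makes it interpret data
# bytes as size bytes -- plausibly how a client ends up triggering the
# IllegalArgumentException inside Buffer.limit() seen in this ticket.
```

The two encodings differ in length as well as layout, so a client migrating from 2.0 to 2.1 must switch its size prefixes, which matches Tyler's explanation above.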






> Map Mutation rejected by Cassandra: IllegalArgumentException
> 
>
> Key: CASSANDRA-9324
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9324
> Project: Cassandra
>  Issue Type: Bug
>  Components: API
> Environment: Windows 7, Cassandra 2.1.5
>Reporter: Mark Wick
>Assignee: Tyler Hobbs
>Priority: Minor
>
> We use a collection (map) in a CQL3 table. We write into that 
> cql3 table using thrift mutations, from a c++ application. We are prototyping 
> migrating from our current Cassandra (2.0.7) to 2.1.5, and are unable to 
> write rows to this cql3 table. We have no problems when we remove the writes 
> to the map column, and all other writes succeed in this case. Cassandra is 
> rejecting our writes and we are catching a TTransportException (no more data 
> to read). The below call stack is from the Cassandra instance that is 
> rejecting the write.
> {code}
> ERROR 14:08:10 Error occurred during processing of message.
> java.lang.IllegalArgumentException: null
> at java.nio.Buffer.limit(Unknown Source) ~[na:1.7.0_71]
> at 
> org.apache.cassandra.utils.ByteBufferUtil.readBytes(ByteBufferUtil.java:543) 
> ~[apache-cassandra-2.1.5.jar:2.1.5]
> at 
> org.apache.cassandra.serializers.CollectionSerializer.readValue(CollectionSerializer.java:124)
>  ~[apache-cassandra-2.1.5.jar:2.1.5]
> at 
> org.apache.cassandra.serializers.MapSerializer.validateForNativeProtocol(MapSerializer.java:80)
>  ~[apache-cassandra-2.1.5.jar:2.1.5]
> at 
> org.apache.cassandra.serializers.CollectionSerializer.validate(CollectionSerializer.java:61)
>  ~[apache-cassandra-2.1.5.jar:2.1.5]
> at 
> org.apache.cassandra.db.marshal.AbstractType.validate(AbstractType.java:97) 
> ~[apache-cassandra-2.1.5.jar:2.1.5]
> at 
> org.apache.cassandra.thrift.ThriftValidation.validateColumnData(ThriftValidation.java:449)
>  ~[apache-cassandra-2.1.5.jar:2.1.5]
> at 
> org.apache.cassandra.thrift.ThriftValidation.validateColumnOrSuperColumn(ThriftValidation.java:318)
>  ~[apache-cassandra-2.1.5.jar:2.1.5]
> at 
> org.apache.cassandra.thrift.ThriftValidation.validateMutation(ThriftValidation.java:385)
>  ~[apache-cassandra-2.1.5.jar:2.1.5]
> at 
> org.apache.cassandra.thrift.CassandraServer.createMutationList(CassandraServer.java:861)
>  ~[apache-cassandra-2.1.5.jar:2.1.5]
> at 
> org.apache.cassandra.thrift.CassandraServer.batch_mutate(CassandraServer.java:976)
>  ~[apache-cassandra-2.1.5.jar:2.1.5]
> at 
> org.apache.cassandra.thrift.Cassandra$Processor$batch_mutate.getResult(Cassandra.java:3996)
>  ~[apache-cassandra-thrift-2.1.5.jar:2.1.5]
> at 
> org.apache.cassandra.thrift.Cassandra$Processor$batch_mutate.getResult(Cassandra.java:3980)
>  ~[apache-cassandra-thrift-2.1.5.jar:2.1.5]
> at org.

[jira] [Commented] (CASSANDRA-9324) Map Mutation rejected by Cassandra: IllegalArgumentException

2015-09-21 Thread David Loegering (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14900441#comment-14900441
 ] 

David Loegering commented on CASSANDRA-9324:


Hi Tyler,

Thanks for the explanation, but that is where the confusion lies: we are not 
using 2.0 nodes. We are using 2.1 nodes with the 2.1-generated Thrift interface, 
and it is not working. We are not using 2.0 nodes or a 2.0 interface. Wouldn't 
that indicate there is an issue with the 2.1 interface? Where in the 2.1 
Thrift interface layer do we need to make the change? Should this issue be 
reopened?


Kind Regards,


David

From: Tyler Hobbs (JIRA)
Sent: Friday, September 18, 2015 4:33 PM
To: dloeger...@comcast.net

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14876478#comment-14876478
 ] 

Tyler Hobbs commented on CASSANDRA-9324:


[~dloegering] no, collections are still supported; you just need to encode 
collections slightly differently when talking to 2.1 nodes vs. 2.0 nodes. With 
2.0 nodes, the collection element count and the sizes of individual collection 
elements should be shorts (two bytes). With 2.1 nodes, the collection element 
count and sizes are ints (four bytes).
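The framing difference described above can be sketched with plain struct 
packing (serialize_map is an illustrative helper, not Cassandra's actual 
serializer or client API):

```python
import struct

def serialize_map(entries, size_width):
    """Frame a map value as: element count, then length-prefixed key/value
    bytes. size_width=2 models the 2.0-era (short) framing, size_width=4
    the 2.1-era (int) framing."""
    fmt = ">H" if size_width == 2 else ">I"
    out = struct.pack(fmt, len(entries))             # element count
    for key, value in entries:
        out += struct.pack(fmt, len(key)) + key      # length-prefixed key
        out += struct.pack(fmt, len(value)) + value  # length-prefixed value
    return out

old = serialize_map([(b"k", b"v1")], size_width=2)  # 9 bytes, short-framed
new = serialize_map([(b"k", b"v1")], size_width=4)  # 15 bytes, int-framed
```

A 2.1 node that parses the short-framed payload as int-framed reads a bogus 
length and runs past the end of the buffer, which is consistent with the 
IllegalArgumentException thrown from Buffer.limit in this ticket's stack trace.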

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


> Map Mutation rejected by Cassandra: IllegalArgumentException
> 
>
> Key: CASSANDRA-9324
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9324
> Project: Cassandra
>  Issue Type: Bug
>  Components: API
> Environment: Windows 7, Cassandra 2.1.5
>Reporter: Mark Wick
>Assignee: Tyler Hobbs
>Priority: Minor
>
> We use a collection (map) in a CQL3 table. We write into that 
> CQL3 table using Thrift mutations from a C++ application. We are prototyping 
> migrating from our current Cassandra (2.0.7) to 2.1.5, and are unable to 
> write rows to this CQL3 table. We have no problems when we remove the writes 
> to the map column; all other writes succeed in this case. Cassandra is 
> rejecting our writes, and we are catching a TTransportException (no more data 
> to read). The stack trace below is from the Cassandra instance that is 
> rejecting the write.
> {code}
> ERROR 14:08:10 Error occurred during processing of message.
> java.lang.IllegalArgumentException: null
> at java.nio.Buffer.limit(Unknown Source) ~[na:1.7.0_71]
> at org.apache.cassandra.utils.ByteBufferUtil.readBytes(ByteBufferUtil.java:543) ~[apache-cassandra-2.1.5.jar:2.1.5]
> at org.apache.cassandra.serializers.CollectionSerializer.readValue(CollectionSerializer.java:124) ~[apache-cassandra-2.1.5.jar:2.1.5]
> at org.apache.cassandra.serializers.MapSerializer.validateForNativeProtocol(MapSerializer.java:80) ~[apache-cassandra-2.1.5.jar:2.1.5]
> at org.apache.cassandra.serializers.CollectionSerializer.validate(CollectionSerializer.java:61) ~[apache-cassandra-2.1.5.jar:2.1.5]
> at org.apache.cassandra.db.marshal.AbstractType.validate(AbstractType.java:97) ~[apache-cassandra-2.1.5.jar:2.1.5]
> at org.apache.cassandra.thrift.ThriftValidation.validateColumnData(ThriftValidation.java:449) ~[apache-cassandra-2.1.5.jar:2.1.5]
> at org.apache.cassandra.thrift.ThriftValidation.validateColumnOrSuperColumn(ThriftValidation.java:318) ~[apache-cassandra-2.1.5.jar:2.1.5]
> at org.apache.cassandra.thrift.ThriftValidation.validateMutation(ThriftValidation.java:385) ~[apache-cassandra-2.1.5.jar:2.1.5]
> at org.apache.cassandra.thrift.CassandraServer.createMutationList(CassandraServer.java:861) ~[apache-cassandra-2.1.5.jar:2.1.5]
> at org.apache.cassandra.thrift.CassandraServer.batch_mutate(CassandraServer.java:976) ~[apache-cassandra-2.1.5.jar:2.1.5]
> at org.apache.cassandra.thrift.Cassandra$Processor$batch_mutate.getResult(Cassandra.java:3996) ~[apache-cassandra-thrift-2.1.5.jar:2.1.5]
> at org.apache.cassandra.thrift.Cassandra$Processor$batch_mutate.getResult(Cassandra.java:3980) ~[apache-cassandra-thrift-2.1.5.jar:2.1.5]
> at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) ~[libthrift-0.9.2.jar:0.9.2]
> at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) ~[libthrift-0.9.2.jar:0.9.2]
> at org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:205) ~[apache-cassandra-2.1.5.jar:2.1.5]
> at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) [na:1.7.0_71]
> at java.util.concurrent.ThreadPoolExecutor$Work

[jira] [Commented] (CASSANDRA-9956) Stream failed during a rebuild

2015-09-21 Thread Boudigue (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14900398#comment-14900398
 ] 

Boudigue commented on CASSANDRA-9956:
-

Yes, you may close it.
Thanks.
Didier Boudigue

> Stream failed during a rebuild
> --
>
> Key: CASSANDRA-9956
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9956
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Linux el6.x86_64 #1 SMP Fri May 29 10:16:43 EDT 2015 
> x86_64 x86_64 x86_64 GNU/Linux
> Red Hat Enterprise Linux Server release 6.6 (Santiago)
>Reporter: Boudigue
> Fix For: 2.1.x
>
> Attachments: system.log.tgz
>
>
> In an attempt to rebuild a node of datacenter cass2 from datacenter cass, 
> the stream failed:
> /opt/dse/bin/nodetool rebuild -- cass
> error: Error while rebuilding node: Stream failed
> -- StackTrace --
> java.lang.RuntimeException: Error while rebuilding node: Stream failed
> at org.apache.cassandra.service.StorageService.rebuild(StorageService.java:1048)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> see system.log attached



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)