[jira] [Updated] (CASSANDRA-6075) The token function should allow column identifiers in the correct order only

2014-09-23 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-6075:
--
Attachment: CASSANDRA-2.1-6075-PART2.txt
CASSANDRA-2.0-6075-PART2.txt

I completely forgot the case of a slice with both a start and an end bound (e.g. 
token(key) >= token(1) and token(key) < token(2)).
That is what broke the Pig tests. Sorry.
Here are the patches that fix the problem on the 2.0 and 2.1+ branches.

I also discovered, while trying to add more tests, that the current approach used to 
handle token functions is broken. With the current approach we cannot reject queries 
like:
{{SELECT * FROM %s WHERE token(a, b) > token(?, ?) and token(b) < token(?, ?)}} 
or {{SELECT * FROM %s WHERE token(a) > token(?, ?) and token(b) > token(?, ?)}}.
I will try to find another solution as part of CASSANDRA-7981.
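For context, the basic check this ticket asks for ("allow column identifiers in the 
correct order only") boils down to comparing the identifiers used inside token() with 
the partition key columns in their declared order. A minimal, self-contained sketch in 
plain Java (hypothetical names, not the actual patch):
{code}
import java.util.Arrays;
import java.util.List;

// Sketch of the per-relation check: token() must name exactly the partition key
// columns, in partition key order.
public class TokenColumnOrderCheckSketch
{
    static void validateTokenColumns(List<String> tokenArguments, List<String> partitionKeyColumns)
    {
        if (!tokenArguments.equals(partitionKeyColumns))
            throw new IllegalArgumentException(
                "The token() function must be applied to the partition key columns in the partition key order");
    }

    public static void main(String[] args)
    {
        List<String> partitionKey = Arrays.asList("a", "b");
        validateTokenColumns(Arrays.asList("a", "b"), partitionKey); // accepted
        validateTokenColumns(Arrays.asList("b", "a"), partitionKey); // rejected, as the ticket requires
    }
}
{code}
How several token() relations combine with each other is a separate problem; that is 
the part deferred to CASSANDRA-7981.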

> The token function should allow column identifiers in the correct order only
> 
>
> Key: CASSANDRA-6075
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6075
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 1.2.9
>Reporter: Michaël Figuière
>Assignee: Benjamin Lerer
>Priority: Minor
>  Labels: cql
> Fix For: 2.0.11, 2.1.1
>
> Attachments: CASSANDRA-2.0-6075-PART2.txt, 
> CASSANDRA-2.1-6075-PART2.txt, CASSANDRA-2.1-6075.txt, CASSANDRA-6075.txt
>
>
> Given the following table:
> {code}
> CREATE TABLE t1 (a int, b text, PRIMARY KEY ((a, b)));
> {code}
> The following request returns an error in cqlsh, as the literal arguments are in the 
> incorrect order:
> {code}
> SELECT * FROM t1 WHERE token(a, b) > token('s', 1);
> Bad Request: Type error: 's' cannot be passed as argument 0 of function token 
> of type int
> {code}
> But surprisingly, if we provide the column identifier arguments in the wrong 
> order, no error is returned:
> {code}
> SELECT * FROM t1 WHERE token(a, b) > token(1, 'a'); // correct order is valid
> SELECT * FROM t1 WHERE token(b, a) > token(1, 'a'); // incorrect order is 
> valid as well
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7986) The Pig tests cannot run on Cygwin on Windows

2014-09-23 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14144567#comment-14144567
 ] 

Benjamin Lerer commented on CASSANDRA-7986:
---

The patch worked fine on my environment.

> The Pig tests cannot run on Cygwin on Windows
> -
>
> Key: CASSANDRA-7986
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7986
> Project: Cassandra
>  Issue Type: Bug
> Environment: Windows 8.1, Cygwin 1.7.32 
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
>Priority: Minor
> Fix For: 3.0
>
> Attachments: CASSANDRA-7986.txt
>
>
> When running the Pig tests on Cygwin on Windows I run into 
> https://issues.apache.org/jira/browse/HADOOP-7682. 
> Ideally this issue should be properly fixed in HADOOP, but as that issue has been 
> open since September 2011 it would be good to implement the workaround mentioned 
> by Joshua Caplan for the Pig tests 
> (https://issues.apache.org/jira/browse/HADOOP-7682?focusedCommentId=13440120&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13440120)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7981) Refactor SelectStatement

2014-09-23 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14144570#comment-14144570
 ] 

Benjamin Lerer commented on CASSANDRA-7981:
---

As part of this ticket we should also fix the problem with the token function. 
The current approach cannot detect that queries like the following are invalid:
* SELECT * FROM %s WHERE token(a, b) > token(?, ?) and token(b) < token(?, ?)
* SELECT * FROM %s WHERE token(a) > token(?, ?) and token(b) > token(?, ?)

> Refactor SelectStatement
> 
>
> Key: CASSANDRA-7981
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7981
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
> Fix For: 3.0
>
>
> The current state of the SelectStatement code makes fixing some issues or 
> adding new functionality really hard. It also contains some functionality 
> that we would like to reuse in ModificationStatement but cannot for the moment.
> Ideally I would like to:
> * Perform as much validation as possible on Relations instead of performing 
> it on Restrictions, as it will help with problems like the one in 
> CASSANDRA-6075 (I believe that by clearly separating validation from 
> Restrictions building we will also make the code a lot clearer)
> * Provide a way to easily merge restrictions on the same columns, as needed 
> for CASSANDRA-7016
> * Have a preparation logic (validation + pre-processing) that we can easily 
> reuse for the Delete statement (CASSANDRA-6237)
> * Make the code much easier to read and safer to modify.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-7990) CompoundDenseCellNameType AssertionError and BoundedComposite to CellName ClasCastException

2014-09-23 Thread Christian Spriegel (JIRA)
Christian Spriegel created CASSANDRA-7990:
-

 Summary: CompoundDenseCellNameType AssertionError and 
BoundedComposite to CellName ClasCastException
 Key: CASSANDRA-7990
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7990
 Project: Cassandra
  Issue Type: Bug
Reporter: Christian Spriegel
Priority: Minor


I just updated my laptop to Cassandra 2.1 and created a fresh data folder.

When trying to run my automated tests I get a lot of these exceptions in the 
Cassandra log:

{code}
ERROR [SharedPool-Worker-1] 2014-09-23 12:59:17,812 ErrorMessage.java:218 - 
Unexpected exception during request
java.lang.AssertionError: null
at 
org.apache.cassandra.db.composites.CompoundDenseCellNameType.create(CompoundDenseCellNameType.java:57)
 ~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.cql3.Constants$Setter.execute(Constants.java:313) 
~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.cql3.statements.UpdateStatement.addUpdateForKey(UpdateStatement.java:91)
 ~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.cql3.statements.BatchStatement.addStatementMutations(BatchStatement.java:235)
 ~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.cql3.statements.BatchStatement.getMutations(BatchStatement.java:181)
 ~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.cql3.statements.BatchStatement.execute(BatchStatement.java:283)
 ~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.cql3.statements.BatchStatement.execute(BatchStatement.java:269)
 ~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.cql3.statements.BatchStatement.execute(BatchStatement.java:264)
 ~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:187)
 ~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:206) 
~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:118)
 ~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:422)
 [apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:318)
 [apache-cassandra-2.1.0.jar:2.1.0]
at 
io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:103)
 [netty-all-4.0.20.Final.jar:4.0.20.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:332)
 [netty-all-4.0.20.Final.jar:4.0.20.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:31)
 [netty-all-4.0.20.Final.jar:4.0.20.Final]
at 
io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:323)
 [netty-all-4.0.20.Final.jar:4.0.20.Final]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
[na:1.7.0_67]
at 
org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:163)
 [apache-cassandra-2.1.0.jar:2.1.0]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:103) 
[apache-cassandra-2.1.0.jar:2.1.0]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_67]

ERROR [Thrift:9] 2014-09-23 12:59:17,823 CustomTThreadPoolServer.java:219 - 
Error occurred during processing of message.
java.lang.ClassCastException: 
org.apache.cassandra.db.composites.BoundedComposite cannot be cast to 
org.apache.cassandra.db.composites.CellName
at 
org.apache.cassandra.db.composites.AbstractCellNameType.cellFromByteBuffer(AbstractCellNameType.java:170)
 ~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.thrift.CassandraServer.deleteColumnOrSuperColumn(CassandraServer.java:936)
 ~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.thrift.CassandraServer.createMutationList(CassandraServer.java:860)
 ~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.thrift.CassandraServer.batch_mutate(CassandraServer.java:971)
 ~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.thrift.Cassandra$Processor$batch_mutate.getResult(Cassandra.java:3996)
 ~[apache-cassandra-thrift-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.thrift.Cassandra$Processor$batch_mutate.getResult(Cassandra.java:3980)
 ~[apache-cassandra-thrift-2.1.0.jar:2.1.0]
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) 
~[libthrift-0.9.1.jar:0.9.1]
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) 
~[libthrift-0.9.1.jar:0.9.1]
at 
org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProc

[jira] [Updated] (CASSANDRA-7990) CompoundDenseCellNameType AssertionError and BoundedComposite to CellName ClasCastException

2014-09-23 Thread Christian Spriegel (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christian Spriegel updated CASSANDRA-7990:
--
Environment: Ubuntu, Java 1.7.0_67, Cassandra 2.1.0

> CompoundDenseCellNameType AssertionError and BoundedComposite to CellName 
> ClasCastException
> ---
>
> Key: CASSANDRA-7990
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7990
> Project: Cassandra
>  Issue Type: Bug
> Environment: Ubuntu, Java 1.7.0_67, Cassandra 2.1.0
>Reporter: Christian Spriegel
>Priority: Minor
>
> I just updated my laptop to Cassandra 2.1 and created a fresh data folder.
> When trying to run my automated tests I get a lot of these exceptions in the 
> Cassandra log:
> {code}
> ERROR [SharedPool-Worker-1] 2014-09-23 12:59:17,812 ErrorMessage.java:218 - 
> Unexpected exception during request
> java.lang.AssertionError: null
>   at 
> org.apache.cassandra.db.composites.CompoundDenseCellNameType.create(CompoundDenseCellNameType.java:57)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> org.apache.cassandra.cql3.Constants$Setter.execute(Constants.java:313) 
> ~[apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> org.apache.cassandra.cql3.statements.UpdateStatement.addUpdateForKey(UpdateStatement.java:91)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> org.apache.cassandra.cql3.statements.BatchStatement.addStatementMutations(BatchStatement.java:235)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> org.apache.cassandra.cql3.statements.BatchStatement.getMutations(BatchStatement.java:181)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> org.apache.cassandra.cql3.statements.BatchStatement.execute(BatchStatement.java:283)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> org.apache.cassandra.cql3.statements.BatchStatement.execute(BatchStatement.java:269)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> org.apache.cassandra.cql3.statements.BatchStatement.execute(BatchStatement.java:264)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:187)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:206) 
> ~[apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:118)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:422)
>  [apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:318)
>  [apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:103)
>  [netty-all-4.0.20.Final.jar:4.0.20.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:332)
>  [netty-all-4.0.20.Final.jar:4.0.20.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:31)
>  [netty-all-4.0.20.Final.jar:4.0.20.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:323)
>  [netty-all-4.0.20.Final.jar:4.0.20.Final]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> [na:1.7.0_67]
>   at 
> org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:163)
>  [apache-cassandra-2.1.0.jar:2.1.0]
>   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:103) 
> [apache-cassandra-2.1.0.jar:2.1.0]
>   at java.lang.Thread.run(Thread.java:745) [na:1.7.0_67]
> ERROR [Thrift:9] 2014-09-23 12:59:17,823 CustomTThreadPoolServer.java:219 - 
> Error occurred during processing of message.
> java.lang.ClassCastException: 
> org.apache.cassandra.db.composites.BoundedComposite cannot be cast to 
> org.apache.cassandra.db.composites.CellName
>   at 
> org.apache.cassandra.db.composites.AbstractCellNameType.cellFromByteBuffer(AbstractCellNameType.java:170)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> org.apache.cassandra.thrift.CassandraServer.deleteColumnOrSuperColumn(CassandraServer.java:936)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> org.apache.cassandra.thrift.CassandraServer.createMutationList(CassandraServer.java:860)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> org.apache.cassandra.thrift.CassandraServer.batch_mutate(CassandraServer.java:971)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> org.apache.cassandra.thrift.Cassandra$Processor$batch_mutate.getResult(Cassandra.java:3996)
>  ~[apache-cassandra-thrift-2.1

[jira] [Updated] (CASSANDRA-7990) CompoundDenseCellNameType AssertionError and BoundedComposite to CellName ClasCastException

2014-09-23 Thread Christian Spriegel (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christian Spriegel updated CASSANDRA-7990:
--
Environment: Ubuntu, Java 1.7.0_67, Cassandra 2.1.0,  
cassandra-driver-core:jar:2.0.6  (was: Ubuntu, Java 1.7.0_67, Cassandra 2.1.0)

> CompoundDenseCellNameType AssertionError and BoundedComposite to CellName 
> ClasCastException
> ---
>
> Key: CASSANDRA-7990
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7990
> Project: Cassandra
>  Issue Type: Bug
> Environment: Ubuntu, Java 1.7.0_67, Cassandra 2.1.0,  
> cassandra-driver-core:jar:2.0.6
>Reporter: Christian Spriegel
>Priority: Minor
>
> I just updated my laptop to Cassandra 2.1 and created a fresh data folder.
> When trying to run my automated tests I get a lot of these exceptions in the 
> Cassandra log:
> {code}
> ERROR [SharedPool-Worker-1] 2014-09-23 12:59:17,812 ErrorMessage.java:218 - 
> Unexpected exception during request
> java.lang.AssertionError: null
>   at 
> org.apache.cassandra.db.composites.CompoundDenseCellNameType.create(CompoundDenseCellNameType.java:57)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> org.apache.cassandra.cql3.Constants$Setter.execute(Constants.java:313) 
> ~[apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> org.apache.cassandra.cql3.statements.UpdateStatement.addUpdateForKey(UpdateStatement.java:91)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> org.apache.cassandra.cql3.statements.BatchStatement.addStatementMutations(BatchStatement.java:235)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> org.apache.cassandra.cql3.statements.BatchStatement.getMutations(BatchStatement.java:181)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> org.apache.cassandra.cql3.statements.BatchStatement.execute(BatchStatement.java:283)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> org.apache.cassandra.cql3.statements.BatchStatement.execute(BatchStatement.java:269)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> org.apache.cassandra.cql3.statements.BatchStatement.execute(BatchStatement.java:264)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:187)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:206) 
> ~[apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:118)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:422)
>  [apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:318)
>  [apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:103)
>  [netty-all-4.0.20.Final.jar:4.0.20.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:332)
>  [netty-all-4.0.20.Final.jar:4.0.20.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:31)
>  [netty-all-4.0.20.Final.jar:4.0.20.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:323)
>  [netty-all-4.0.20.Final.jar:4.0.20.Final]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> [na:1.7.0_67]
>   at 
> org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:163)
>  [apache-cassandra-2.1.0.jar:2.1.0]
>   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:103) 
> [apache-cassandra-2.1.0.jar:2.1.0]
>   at java.lang.Thread.run(Thread.java:745) [na:1.7.0_67]
> ERROR [Thrift:9] 2014-09-23 12:59:17,823 CustomTThreadPoolServer.java:219 - 
> Error occurred during processing of message.
> java.lang.ClassCastException: 
> org.apache.cassandra.db.composites.BoundedComposite cannot be cast to 
> org.apache.cassandra.db.composites.CellName
>   at 
> org.apache.cassandra.db.composites.AbstractCellNameType.cellFromByteBuffer(AbstractCellNameType.java:170)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> org.apache.cassandra.thrift.CassandraServer.deleteColumnOrSuperColumn(CassandraServer.java:936)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> org.apache.cassandra.thrift.CassandraServer.createMutationList(CassandraServer.java:860)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> org.apache.cassandra.thrift.CassandraServer.batch_mutate(CassandraServer.java:971)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> org.a

[jira] [Updated] (CASSANDRA-7988) 2.1 broke cqlsh for IPv6

2014-09-23 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-7988:
---
Labels: cqlsh  (was: )

> 2.1 broke cqlsh for IPv6 
> -
>
> Key: CASSANDRA-7988
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7988
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Josh Wright
>  Labels: cqlsh
> Fix For: 2.1.1
>
>
> cqlsh in 2.1 switched to the cassandra-driver Python library, which only 
> recently added IPv6 support. The version bundled with 2.1.0 does not include 
> a sufficiently recent version, so cqlsh is unusable for those of us running 
> IPv6 (us? me...?)
> The fix is to simply upgrade the bundled version of the Python 
> cassandra-driver to at least version 2.1.1



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7983) nodetool repair triggers OOM

2014-09-23 Thread Jimmy Mårdell (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14144774#comment-14144774
 ] 

Jimmy Mårdell commented on CASSANDRA-7983:
--

We got the same problem on Cassandra 2.0.10. I've traced it to a bug in 
StorageService#createRepairRangeFrom which gets stuck in an infinite loop 
allocating memory. This happens when you try to repair (using -st and -et) the 
very "first" range in the ring and the lowest token in the ring is not the 
minimum token for that partitioner. The problem is the following lines:

{code}
Token previous = tokenMetadata.getPredecessor(TokenMetadata.firstToken(tokenMetadata.sortedTokens(), parsedEndToken));
while (parsedBeginToken.compareTo(previous) < 0)
{
    ...
    previous = tokenMetadata.getPredecessor(previous);
}
{code}

{{previous}} will never become less than {{parsedBeginToken}}, so the loop never terminates.

This bug was introduced with CASSANDRA-7317.
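To make the wrap-around concrete, here is a minimal, self-contained sketch (plain 
Java, not the Cassandra code) of the predecessor walk on a small ring whose lowest 
token lies above the requested begin token:
{code}
import java.util.Arrays;

// Sketch of the loop described above: on a ring { -100, 0, 100 } the predecessor of
// the lowest token wraps around to the highest one, so "previous" cycles forever
// instead of dropping below a begin token that lies under the whole ring.
public class RepairRangeLoopSketch
{
    static long predecessor(long[] sortedTokens, long token)
    {
        int idx = Arrays.binarySearch(sortedTokens, token);
        return idx <= 0 ? sortedTokens[sortedTokens.length - 1] : sortedTokens[idx - 1];
    }

    public static void main(String[] args)
    {
        long[] ring = { -100L, 0L, 100L };
        long parsedBeginToken = -9220354588320251877L; // below every token in the ring
        long previous = predecessor(ring, -100L);      // firstToken(end) resolves to -100 here

        int steps = 0;
        while (parsedBeginToken < previous && steps < 9) // the real loop has no step limit
        {
            previous = predecessor(ring, previous);
            steps++;
        }
        // previous cycled 100 -> 0 -> -100 -> 100 -> ... and never went below parsedBeginToken
        System.out.println("stopped only because of the step limit; previous = " + previous);
    }
}
{code}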


> nodetool repair triggers OOM
> 
>
> Key: CASSANDRA-7983
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7983
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment:  
> {noformat}
>  INFO [main] 2014-09-16 14:23:14,621 DseDaemon.java (line 368) DSE version: 
> 4.5.0
>  INFO [main] 2014-09-16 14:23:14,622 DseDaemon.java (line 369) Hadoop 
> version: 1.0.4.13
>  INFO [main] 2014-09-16 14:23:14,627 DseDaemon.java (line 370) Hive version: 
> 0.12.0.3
>  INFO [main] 2014-09-16 14:23:14,628 DseDaemon.java (line 371) Pig version: 
> 0.10.1
>  INFO [main] 2014-09-16 14:23:14,629 DseDaemon.java (line 372) Solr version: 
> 4.6.0.2.4
>  INFO [main] 2014-09-16 14:23:14,630 DseDaemon.java (line 373) Sqoop version: 
> 1.4.4.14.1
>  INFO [main] 2014-09-16 14:23:14,630 DseDaemon.java (line 374) Mahout 
> version: 0.8
>  INFO [main] 2014-09-16 14:23:14,631 DseDaemon.java (line 375) Appender 
> version: 3.0.2
>  INFO [main] 2014-09-16 14:23:14,632 DseDaemon.java (line 376) Spark version: 
> 0.9.1
>  INFO [main] 2014-09-16 14:23:14,632 DseDaemon.java (line 377) Shark version: 
> 0.9.1.1
>  INFO [main] 2014-09-16 14:23:20,270 CassandraDaemon.java (line 160) JVM 
> vendor/version: Java HotSpot(TM) 64-Bit Server VM/1.7.0_51
>  INFO [main] 2014-09-16 14:23:20,270 CassandraDaemon.java (line 188) Heap 
> size: 6316621824/6316621824
> {noformat}
>Reporter: Jose Martinez Poblete
> Attachments: gc.log.0, nbcqa-chc-a01_systemlog.tar.Z, 
> nbcqa-chc-a03_systemlog.tar.Z, system.log
>
>
> Customer has a 3-node cluster with 500 MB of data on each node
> {noformat}
> [cassandra@nbcqa-chc-a02 ~]$ nodetool status
> Note: Ownership information does not include topology; for complete 
> information, specify a keyspace
> Datacenter: CH2
> ===
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  Address        Load       Tokens  Owns   Host ID  
>  Rack
> UN  162.150.4.234  255.26 MB  256 33.2%  
> 4ad1b6a8-8759-4920-b54a-f059126900df  RAC1
> UN  162.150.4.235  318.37 MB  256 32.6%  
> 3eb0ec58-4b81-442e-bee5-4c91da447f38  RAC1
> UN  162.150.4.167  243.7 MB   256 34.2%  
> 5b2c1900-bf03-41c1-bb4e-82df1655b8d8  RAC1
> [cassandra@nbcqa-chc-a02 ~]$
> {noformat}
> After we run the repair command, the system runs into an OOM after some 45 minutes.
> Nothing else is running.
> {noformat}
> [cassandra@nbcqa-chc-a02 ~]$ date
> Fri Sep 19 15:55:33 UTC 2014
> [cassandra@nbcqa-chc-a02 ~]$ nodetool repair -st -9220354588320251877 -et 
> -9220354588320251873
> Sep 19, 2014 4:06:08 PM ClientCommunicatorAdmin Checker-run
> WARNING: Failed to check the connection: java.net.SocketTimeoutException: 
> Read timed out
> {noformat}
> Here is where we hit the OOM:
> {noformat}
> ERROR [ReadStage:28914] 2014-09-19 16:34:50,381 CassandraDaemon.java (line 
> 199) Exception in thread Thread[ReadStage:28914,5,main]
> java.lang.OutOfMemoryError: Java heap space
> at 
> org.apache.cassandra.io.util.RandomAccessReader.<init>(RandomAccessReader.java:69)
> at 
> org.apache.cassandra.io.compress.CompressedRandomAccessReader.<init>(CompressedRandomAccessReader.java:76)
> at 
> org.apache.cassandra.io.compress.CompressedRandomAccessReader.open(CompressedRandomAccessReader.java:43)
> at 
> org.apache.cassandra.io.util.CompressedPoolingSegmentedFile.createReader(CompressedPoolingSegmentedFile.java:48)
> at 
> org.apache.cassandra.io.util.PoolingSegmentedFile.getSegment(PoolingSegmentedFile.java:39)
> at 
> org.apache.cassandra.io.sstable.SSTableReader.getFileDataInput(SSTableReader.java:1195)
> at 
> org.apache.cassandra.db.columniterator.SimpleSliceReader.<init>(SimpleSliceReader.java:57)
> at 
> org.apache.cassandra.db.columniterator.SSTableSliceIterator.createReader(SSTableSliceIterator.java:65)
> at 
> org.apache.cassandra.db.columniterator.SSTableSliceIterator.<init>(SSTableS

[jira] [Commented] (CASSANDRA-7924) Optimization of Java UDFs

2014-09-23 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14144803#comment-14144803
 ] 

Benjamin Lerer commented on CASSANDRA-7924:
---

The code of {{JavaSourceUDFFactory}} is much easier to read. Nice work.

A few minor points:
* Your large comments are good, but they should be in the method javadocs and not in 
block comments. That way they will not interfere with reading the code but are still 
there when the reader needs to refer to them.
* You sometimes chain {{append}} calls and sometimes don't, which is a bit 
distracting. You should pick one style and use it consistently (see the sketch below).
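For illustration only (the snippet below is not taken from {{JavaSourceUDFFactory}}; 
the generated method name is made up), the two styles being mixed look like this:
{code}
// Illustration of the two append styles; the point is only to pick one and stick to it.
public class AppendStyleExample
{
    public static void main(String[] args)
    {
        StringBuilder code = new StringBuilder();
        String returnType = "Object"; // hypothetical value, for the example only

        // style 1: chained appends
        code.append("public ").append(returnType).append(" executeInternal(");

        // style 2: one append per statement, building the same text
        code.append("public ");
        code.append(returnType);
        code.append(" executeInternal(");

        System.out.println(code);
    }
}
{code}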



> Optimization of Java UDFs
> -
>
> Key: CASSANDRA-7924
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7924
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>  Labels: docs, lhf, udf
> Fix For: 3.0
>
> Attachments: 7924.txt, 7924v2.txt
>
>
> Refactor 'java' UDFs to optimize invocation. Goal is to remove reflection 
> code. Implementation uses javassist to generate an instance of {{Function}} 
> that can be directly used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-7990) CompoundDenseCellNameType AssertionError and BoundedComposite to CellName ClasCastException

2014-09-23 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams reassigned CASSANDRA-7990:
---

Assignee: Tyler Hobbs

> CompoundDenseCellNameType AssertionError and BoundedComposite to CellName 
> ClasCastException
> ---
>
> Key: CASSANDRA-7990
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7990
> Project: Cassandra
>  Issue Type: Bug
> Environment: Ubuntu, Java 1.7.0_67, Cassandra 2.1.0,  
> cassandra-driver-core:jar:2.0.6
>Reporter: Christian Spriegel
>Assignee: Tyler Hobbs
>Priority: Minor
>
> I just updated my laptop to Cassandra 2.1 and created a fresh data folder.
> When trying to run my automated tests I get a lot of these exceptions in the 
> Cassandra log:
> {code}
> ERROR [SharedPool-Worker-1] 2014-09-23 12:59:17,812 ErrorMessage.java:218 - 
> Unexpected exception during request
> java.lang.AssertionError: null
>   at 
> org.apache.cassandra.db.composites.CompoundDenseCellNameType.create(CompoundDenseCellNameType.java:57)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> org.apache.cassandra.cql3.Constants$Setter.execute(Constants.java:313) 
> ~[apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> org.apache.cassandra.cql3.statements.UpdateStatement.addUpdateForKey(UpdateStatement.java:91)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> org.apache.cassandra.cql3.statements.BatchStatement.addStatementMutations(BatchStatement.java:235)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> org.apache.cassandra.cql3.statements.BatchStatement.getMutations(BatchStatement.java:181)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> org.apache.cassandra.cql3.statements.BatchStatement.execute(BatchStatement.java:283)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> org.apache.cassandra.cql3.statements.BatchStatement.execute(BatchStatement.java:269)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> org.apache.cassandra.cql3.statements.BatchStatement.execute(BatchStatement.java:264)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:187)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:206) 
> ~[apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:118)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:422)
>  [apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:318)
>  [apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:103)
>  [netty-all-4.0.20.Final.jar:4.0.20.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:332)
>  [netty-all-4.0.20.Final.jar:4.0.20.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:31)
>  [netty-all-4.0.20.Final.jar:4.0.20.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:323)
>  [netty-all-4.0.20.Final.jar:4.0.20.Final]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> [na:1.7.0_67]
>   at 
> org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:163)
>  [apache-cassandra-2.1.0.jar:2.1.0]
>   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:103) 
> [apache-cassandra-2.1.0.jar:2.1.0]
>   at java.lang.Thread.run(Thread.java:745) [na:1.7.0_67]
> ERROR [Thrift:9] 2014-09-23 12:59:17,823 CustomTThreadPoolServer.java:219 - 
> Error occurred during processing of message.
> java.lang.ClassCastException: 
> org.apache.cassandra.db.composites.BoundedComposite cannot be cast to 
> org.apache.cassandra.db.composites.CellName
>   at 
> org.apache.cassandra.db.composites.AbstractCellNameType.cellFromByteBuffer(AbstractCellNameType.java:170)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> org.apache.cassandra.thrift.CassandraServer.deleteColumnOrSuperColumn(CassandraServer.java:936)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> org.apache.cassandra.thrift.CassandraServer.createMutationList(CassandraServer.java:860)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> org.apache.cassandra.thrift.CassandraServer.batch_mutate(CassandraServer.java:971)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> org.apache.cassandra.thrift.Cassandra$Processor$batch_mutate.getResult(Cassandr

[jira] [Commented] (CASSANDRA-7939) checkForEndpointCollision should ignore joining nodes

2014-09-23 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14144850#comment-14144850
 ] 

Brandon Williams commented on CASSANDRA-7939:
-

bq. Based on the description I'm not clear why the patch is to check if it's 
not a fatClient?

Because you can't retry a failed bootstrap until the fat client expires from 
gossip.

bq. nit: in the 2.1 patch you should use RangeStreamer.useStrictConsistency vs 
re-parsing the property

Fair enough, can fix on commit.

> checkForEndpointCollision should ignore joining nodes
> -
>
> Key: CASSANDRA-7939
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7939
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Brandon Williams
>Assignee: Brandon Williams
>Priority: Minor
> Fix For: 2.0.11, 2.1.1
>
> Attachments: 7939-2.1.txt, 7939.txt
>
>
> If you fail a bootstrap, then immediately retry it, cFEC erroneously tells 
> you to replace it:
> {noformat}
> ERROR 00:04:50 Exception encountered during startup
> java.lang.RuntimeException: A node with address bw-3/10.208.8.63 already 
> exists, cancelling join. Use cassandra.replace_address if you want to replace 
> this node.
> at 
> org.apache.cassandra.service.StorageService.checkForEndpointCollision(StorageService.java:453)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.StorageService.prepareToJoin(StorageService.java:666)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:614)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:507)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:338) 
> [main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:457)
>  [main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:546) 
> [main/:na]
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7939) checkForEndpointCollision should ignore joining nodes

2014-09-23 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14144880#comment-14144880
 ] 

T Jake Luciani commented on CASSANDRA-7939:
---

+1

> checkForEndpointCollision should ignore joining nodes
> -
>
> Key: CASSANDRA-7939
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7939
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Brandon Williams
>Assignee: Brandon Williams
>Priority: Minor
> Fix For: 2.0.11, 2.1.1
>
> Attachments: 7939-2.1.txt, 7939.txt
>
>
> If you fail a bootstrap, then immediately retry it, cFEC erroneously tells 
> you to replace it:
> {noformat}
> ERROR 00:04:50 Exception encountered during startup
> java.lang.RuntimeException: A node with address bw-3/10.208.8.63 already 
> exists, cancelling join. Use cassandra.replace_address if you want to replace 
> this node.
> at 
> org.apache.cassandra.service.StorageService.checkForEndpointCollision(StorageService.java:453)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.StorageService.prepareToJoin(StorageService.java:666)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:614)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:507)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:338) 
> [main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:457)
>  [main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:546) 
> [main/:na]
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7969) Properly track min/max timestamps and maxLocalDeletionTimes for range and row tombstones

2014-09-23 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14144875#comment-14144875
 ] 

T Jake Luciani commented on CASSANDRA-7969:
---

Overall looks good.

It might be clearer if we created descriptive names for the default min/max 
tombstone times instead of using MAX_VALUE/MIN_VALUE directly (like we do with 
partitioners).
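A rough sketch of what I mean, with hypothetical constant names (not from the patch):
{code}
// Hypothetical names, for illustration only: sentinel defaults chosen so that any
// real timestamp moves the tracked min down and the tracked max up.
public final class TimestampTrackingDefaults
{
    public static final long UNTRACKED_MIN_TIMESTAMP = Long.MAX_VALUE;
    public static final long UNTRACKED_MAX_TIMESTAMP = Long.MIN_VALUE;

    private TimestampTrackingDefaults() {}
}
{code}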

Also, it would be good to add a test for partition-only tables (row markers).

> Properly track min/max timestamps and maxLocalDeletionTimes for range and row 
> tombstones
> 
>
> Key: CASSANDRA-7969
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7969
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
> Fix For: 2.0.11
>
> Attachments: 
> 0001-track-min-max-timestamps-and-maxLocalDeletionTime-co.patch, 
> 0001-track-min-max-timestamps-and-maxLocalDeletionTime-v2.patch
>
>
> First problem is that when we have only row or range tombstones in an sstable 
> we dont update the maxLocalDeletionTime for the sstable
> Second problem is that if we have a range tombstone in an sstable, 
> minTimestamp will always be Long.MIN_VALUE for flushed sstables due to how we 
> set the default values for the variables



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-7939) checkForEndpointCollision should ignore joining nodes

2014-09-23 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14144880#comment-14144880
 ] 

T Jake Luciani edited comment on CASSANDRA-7939 at 9/23/14 3:20 PM:


+1. nit: the state-string split looks a bit fragile; can you assert that the split 
worked as expected?


was (Author: tjake):
+1

> checkForEndpointCollision should ignore joining nodes
> -
>
> Key: CASSANDRA-7939
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7939
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Brandon Williams
>Assignee: Brandon Williams
>Priority: Minor
> Fix For: 2.0.11, 2.1.1
>
> Attachments: 7939-2.1.txt, 7939.txt
>
>
> If you fail a bootstrap, then immediately retry it, cFEC erroneously tells 
> you to replace it:
> {noformat}
> ERROR 00:04:50 Exception encountered during startup
> java.lang.RuntimeException: A node with address bw-3/10.208.8.63 already 
> exists, cancelling join. Use cassandra.replace_address if you want to replace 
> this node.
> at 
> org.apache.cassandra.service.StorageService.checkForEndpointCollision(StorageService.java:453)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.StorageService.prepareToJoin(StorageService.java:666)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:614)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:507)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:338) 
> [main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:457)
>  [main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:546) 
> [main/:na]
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7409) Allow multiple overlapping sstables in L1

2014-09-23 Thread Carl Yeksigian (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14144912#comment-14144912
 ] 

Carl Yeksigian commented on CASSANDRA-7409:
---

This was the original purpose of same-level compaction: it was supposed to place the 
results of the compaction into the same level. It seems that I broke it at some point, 
but with that corrected, it should behave correctly for the last point.

For the earlier point, it makes sense to try to include the sstables that overlap the 
least in an up-level compaction. This keeps the keys that are written to heavily in a 
level that allows overlapping.

I'm happy to change the overlap estimator, assuming we can agree on which one we want 
going forward. If it's still going to be experimental, I'd rather leave it as a really 
rough estimate of the overlap and change it afterwards.

There are two scenarios that we want to measure against vanilla LCS.

- L0 Compaction
  No reads, no writes. Take sstables and dump into L0.
  Metric: Time to 0 compactions remaining.

- Heavy write
  Heavy writes (such that LCS is overwhelmed), some reads
  Metric: read 0.99

I expect that L0 compaction times should be similar between LCS w/ STCS and 
OCS, with OCS being slightly slower. Under the heavy write scenario, however, 
there should be a large benefit to using OCS.


> Allow multiple overlapping sstables in L1
> -
>
> Key: CASSANDRA-7409
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7409
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Carl Yeksigian
>Assignee: Carl Yeksigian
>  Labels: compaction
> Fix For: 3.0
>
>
> Currently, when a normal L0 compaction takes place (not STCS), we take up to 
> MAX_COMPACTING_L0 L0 sstables and all of the overlapping L1 sstables and 
> compact them together. If we didn't have to deal with the overlapping L1 
> tables, we could compact a higher number of L0 sstables together into a set 
> of non-overlapping L1 sstables.
> This could be done by delaying the invariant that L1 has no overlapping 
> sstables. Going from L1 to L2, we would be compacting fewer sstables together 
> which overlap.
> When reading, we will not have the same one sstable per level (except L0) 
> guarantee, but this can be bounded (once we have too many sets of sstables, 
> either compact them back into the same level, or compact them up to the next 
> level).
> This could be generalized to allow any level to be the maximum for this 
> overlapping strategy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-7991) RowIndexEntryTest and SelectWithTokenFunctionTest fail in trunk

2014-09-23 Thread Carl Yeksigian (JIRA)
Carl Yeksigian created CASSANDRA-7991:
-

 Summary: RowIndexEntryTest and SelectWithTokenFunctionTest fail in 
trunk
 Key: CASSANDRA-7991
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7991
 Project: Cassandra
  Issue Type: Bug
Reporter: Carl Yeksigian
Assignee: Carl Yeksigian
 Fix For: 3.0
 Attachments: 7991-trunk.txt

org.apache.cassandra.db.RowIndexEntryTest and 
org.apache.cassandra.cql3.SelectWithTokenFunctionTest fail consistently on 
trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7983) nodetool repair triggers OOM

2014-09-23 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-7983:
--
  Component/s: Tools  (was: Core)
     Reviewer: Jimmy Mårdell
Fix Version/s: 2.0.11
     Assignee: Yuki Morishita

Good detective work, [~yarin].

> nodetool repair triggers OOM
> 
>
> Key: CASSANDRA-7983
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7983
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment:  
> {noformat}
>  INFO [main] 2014-09-16 14:23:14,621 DseDaemon.java (line 368) DSE version: 
> 4.5.0
>  INFO [main] 2014-09-16 14:23:14,622 DseDaemon.java (line 369) Hadoop 
> version: 1.0.4.13
>  INFO [main] 2014-09-16 14:23:14,627 DseDaemon.java (line 370) Hive version: 
> 0.12.0.3
>  INFO [main] 2014-09-16 14:23:14,628 DseDaemon.java (line 371) Pig version: 
> 0.10.1
>  INFO [main] 2014-09-16 14:23:14,629 DseDaemon.java (line 372) Solr version: 
> 4.6.0.2.4
>  INFO [main] 2014-09-16 14:23:14,630 DseDaemon.java (line 373) Sqoop version: 
> 1.4.4.14.1
>  INFO [main] 2014-09-16 14:23:14,630 DseDaemon.java (line 374) Mahout 
> version: 0.8
>  INFO [main] 2014-09-16 14:23:14,631 DseDaemon.java (line 375) Appender 
> version: 3.0.2
>  INFO [main] 2014-09-16 14:23:14,632 DseDaemon.java (line 376) Spark version: 
> 0.9.1
>  INFO [main] 2014-09-16 14:23:14,632 DseDaemon.java (line 377) Shark version: 
> 0.9.1.1
>  INFO [main] 2014-09-16 14:23:20,270 CassandraDaemon.java (line 160) JVM 
> vendor/version: Java HotSpot(TM) 64-Bit Server VM/1.7.0_51
>  INFO [main] 2014-09-16 14:23:20,270 CassandraDaemon.java (line 188) Heap 
> size: 6316621824/6316621824
> {noformat}
>Reporter: Jose Martinez Poblete
>Assignee: Yuki Morishita
> Fix For: 2.0.11
>
> Attachments: gc.log.0, nbcqa-chc-a01_systemlog.tar.Z, 
> nbcqa-chc-a03_systemlog.tar.Z, system.log
>
>
> Customer has a 3-node cluster with 500 MB of data on each node
> {noformat}
> [cassandra@nbcqa-chc-a02 ~]$ nodetool status
> Note: Ownership information does not include topology; for complete 
> information, specify a keyspace
> Datacenter: CH2
> ===
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  Address        Load       Tokens  Owns   Host ID  
>  Rack
> UN  162.150.4.234  255.26 MB  256 33.2%  
> 4ad1b6a8-8759-4920-b54a-f059126900df  RAC1
> UN  162.150.4.235  318.37 MB  256 32.6%  
> 3eb0ec58-4b81-442e-bee5-4c91da447f38  RAC1
> UN  162.150.4.167  243.7 MB   256 34.2%  
> 5b2c1900-bf03-41c1-bb4e-82df1655b8d8  RAC1
> [cassandra@nbcqa-chc-a02 ~]$
> {noformat}
> After we run the repair command, the system runs into an OOM after some 45 minutes.
> Nothing else is running.
> {noformat}
> [cassandra@nbcqa-chc-a02 ~]$ date
> Fri Sep 19 15:55:33 UTC 2014
> [cassandra@nbcqa-chc-a02 ~]$ nodetool repair -st -9220354588320251877 -et 
> -9220354588320251873
> Sep 19, 2014 4:06:08 PM ClientCommunicatorAdmin Checker-run
> WARNING: Failed to check the connection: java.net.SocketTimeoutException: 
> Read timed out
> {noformat}
> Here is where we hit the OOM:
> {noformat}
> ERROR [ReadStage:28914] 2014-09-19 16:34:50,381 CassandraDaemon.java (line 
> 199) Exception in thread Thread[ReadStage:28914,5,main]
> java.lang.OutOfMemoryError: Java heap space
> at 
> org.apache.cassandra.io.util.RandomAccessReader.<init>(RandomAccessReader.java:69)
> at 
> org.apache.cassandra.io.compress.CompressedRandomAccessReader.<init>(CompressedRandomAccessReader.java:76)
> at 
> org.apache.cassandra.io.compress.CompressedRandomAccessReader.open(CompressedRandomAccessReader.java:43)
> at 
> org.apache.cassandra.io.util.CompressedPoolingSegmentedFile.createReader(CompressedPoolingSegmentedFile.java:48)
> at 
> org.apache.cassandra.io.util.PoolingSegmentedFile.getSegment(PoolingSegmentedFile.java:39)
> at 
> org.apache.cassandra.io.sstable.SSTableReader.getFileDataInput(SSTableReader.java:1195)
> at 
> org.apache.cassandra.db.columniterator.SimpleSliceReader.<init>(SimpleSliceReader.java:57)
> at 
> org.apache.cassandra.db.columniterator.SSTableSliceIterator.createReader(SSTableSliceIterator.java:65)
> at 
> org.apache.cassandra.db.columniterator.SSTableSliceIterator.<init>(SSTableSliceIterator.java:42)
> at 
> org.apache.cassandra.db.filter.SliceQueryFilter.getSSTableColumnIterator(SliceQueryFilter.java:167)
> at 
> org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:62)
> at 
> org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:250)
> at 
> org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:53)
> at 
> org.apache.cassandra.db.ColumnFamilyStore.getTopLevel

[jira] [Commented] (CASSANDRA-7986) The Pig tests cannot run on Cygwin on Windows

2014-09-23 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14144934#comment-14144934
 ] 

Joshua McKenzie commented on CASSANDRA-7986:


Note: this relies on applying part 2 from CASSANDRA-6075 to behave correctly.

A couple of points on the patch:
# Use FBUtilities.isUnix() instead of re-rolling your own isWindows() check.
# nit: the comment in WindowsLocalFileSystem should read:
{noformat}  // Just swallow the Exception as logging it produces too 
much output. {noformat}

It might be worth keeping in mind that the setPermission method isn't doing 
anything useful and, as mentioned in the referenced Hadoop ticket:
{quote}or, if you're feeling ambitious, does something more appropriate when 
trying to set them.{quote}
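For reference, the swallow-the-exception workaround under discussion amounts to 
something like the sketch below (class name and structure are assumptions, not the 
actual patch):
{code}
import java.io.IOException;

import org.apache.hadoop.fs.LocalFileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

// Sketch of the HADOOP-7682 workaround: a local file system that ignores
// setPermission failures instead of letting them fail the Pig test run.
public class WindowsLocalFileSystemSketch extends LocalFileSystem
{
    @Override
    public void setPermission(Path path, FsPermission permission)
    {
        try
        {
            super.setPermission(path, permission);
        }
        catch (IOException e)
        {
            // Just swallow the Exception as logging it produces too much output.
        }
    }
}
{code}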

But for now, this seems sufficient to me to get the tests to run.  
Incidentally, this patch fixes running on Windows in general, not just under 
Cygwin.

+1 with those minor changes.

> The Pig tests cannot run on Cygwin on Windows
> -
>
> Key: CASSANDRA-7986
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7986
> Project: Cassandra
>  Issue Type: Bug
> Environment: Windows 8.1, Cygwin 1.7.32 
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
>Priority: Minor
> Fix For: 3.0
>
> Attachments: CASSANDRA-7986.txt
>
>
> When running the Pig tests on Cygwin on Windows I run into 
> https://issues.apache.org/jira/browse/HADOOP-7682. 
> Ideally this issue should be properly fixed in HADOOP, but as that issue has been 
> open since September 2011 it would be good to implement the workaround mentioned 
> by Joshua Caplan for the Pig tests 
> (https://issues.apache.org/jira/browse/HADOOP-7682?focusedCommentId=13440120&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13440120)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


git commit: CrcCheckChance should adjust based on live CFMetadata not sstable metadata

2014-09-23 Thread jake
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 d96485ff1 -> 62db20a77


CrcCheckChance should adjust based on live CFMetadata not sstable metadata

patch by tjake; reviewed by Jason Brown for CASSANDRA-7978


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/62db20a7
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/62db20a7
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/62db20a7

Branch: refs/heads/cassandra-2.0
Commit: 62db20a779fac3235c0e4dbade8c3d340d3c310b
Parents: d96485f
Author: Jake Luciani 
Authored: Tue Sep 23 12:30:25 2014 -0400
Committer: Jake Luciani 
Committed: Tue Sep 23 12:34:56 2014 -0400

--
 CHANGES.txt|  2 ++
 .../cassandra/io/compress/CompressionParameters.java   | 13 -
 .../org/apache/cassandra/io/sstable/SSTableReader.java |  6 +-
 3 files changed, 19 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/62db20a7/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index fd49b09..00603f3 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 2.0.11:
+ * CrcCheckChance should adjust based on live CFMetadata not 
+   sstable metadata (CASSANDRA-7978)
  * token() should only accept columns in the partitioning
key order (CASSANDRA-6075)
  * Add method to invalidate permission cache via JMX (CASSANDRA-7977)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/62db20a7/src/java/org/apache/cassandra/io/compress/CompressionParameters.java
--
diff --git 
a/src/java/org/apache/cassandra/io/compress/CompressionParameters.java 
b/src/java/org/apache/cassandra/io/compress/CompressionParameters.java
index 7baaedd..2df64b4 100644
--- a/src/java/org/apache/cassandra/io/compress/CompressionParameters.java
+++ b/src/java/org/apache/cassandra/io/compress/CompressionParameters.java
@@ -30,6 +30,7 @@ import java.util.Set;
 
 import com.google.common.collect.ImmutableSet;
 import com.google.common.collect.Sets;
+import org.apache.cassandra.config.CFMetaData;
 import org.apache.commons.lang3.builder.EqualsBuilder;
 import org.apache.commons.lang3.builder.HashCodeBuilder;
 
@@ -53,6 +54,7 @@ public class CompressionParameters
 private final Integer chunkLength;
 private volatile double crcCheckChance;
 public final Map otherOptions; // Unrecognized options, 
can be use by the compressor
+private CFMetaData liveMetadata;
 
 public static CompressionParameters create(Map opts) throws ConfigurationException
 {
@@ -101,15 +103,24 @@ public class CompressionParameters
 }
 }
 
+public void setLiveMetadata(final CFMetaData liveMetadata)
+{
+assert this.liveMetadata == null || this.liveMetadata == liveMetadata;
+this.liveMetadata = liveMetadata;
+}
+
 public void setCrcCheckChance(double crcCheckChance) throws 
ConfigurationException
 {
 validateCrcCheckChance(crcCheckChance);
 this.crcCheckChance = crcCheckChance;
+
+if (liveMetadata != null)
+
liveMetadata.compressionParameters.setCrcCheckChance(crcCheckChance);
 }
 
 public double getCrcCheckChance()
 {
-return this.crcCheckChance;
+return liveMetadata == null ? this.crcCheckChance : 
liveMetadata.compressionParameters.crcCheckChance;
 }
 
 private static double parseCrcCheckChance(String crcCheckChance) throws 
ConfigurationException

http://git-wip-us.apache.org/repos/asf/cassandra/blob/62db20a7/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/SSTableReader.java 
b/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
index f632c87..92dee99 100644
--- a/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
+++ b/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
@@ -624,7 +624,11 @@ public class SSTableReader extends SSTable implements 
Closeable
 if (!compression)
 throw new IllegalStateException(this + " is not compressed");
 
-return ((ICompressedFile) dfile).getMetadata();
+CompressionMetadata cmd = ((ICompressedFile) dfile).getMetadata();
+
+
cmd.parameters.setLiveMetadata(Schema.instance.getCFMetaData(descriptor));
+
+return cmd;
 }
 
 /**



[2/3] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-09-23 Thread jake
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3fd90ae6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3fd90ae6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3fd90ae6

Branch: refs/heads/cassandra-2.1
Commit: 3fd90ae6614a5619e44b2f615f49da2bdcee26d7
Parents: 192468f 62db20a
Author: Jake Luciani 
Authored: Tue Sep 23 12:39:34 2014 -0400
Committer: Jake Luciani 
Committed: Tue Sep 23 12:39:34 2014 -0400

--
 CHANGES.txt|  2 ++
 .../cassandra/io/compress/CompressionParameters.java   | 13 -
 .../org/apache/cassandra/io/sstable/SSTableReader.java |  6 +-
 3 files changed, 19 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3fd90ae6/CHANGES.txt
--
diff --cc CHANGES.txt
index 2f8a95b,00603f3..ce0e76a
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,52 -1,6 +1,54 @@@
 -2.0.11:
 +2.1.1
 + * (cqlsh) Tab completeion for indexes on map keys (CASSANDRA-7972)
 + * (cqlsh) Fix UDT field selection in select clause (CASSANDRA-7891)
 + * Fix resource leak in event of corrupt sstable
 + * (cqlsh) Add command line option for cqlshrc file path (CASSANDRA-7131)
 + * Provide visibility into prepared statements churn (CASSANDRA-7921, 
CASSANDRA-7930)
 + * Invalidate prepared statements when their keyspace or table is
 +   dropped (CASSANDRA-7566)
 + * cassandra-stress: fix support for NetworkTopologyStrategy (CASSANDRA-7945)
 + * Fix saving caches when a table is dropped (CASSANDRA-7784)
 + * Add better error checking of new stress profile (CASSANDRA-7716)
 + * Use ThreadLocalRandom and remove FBUtilities.threadLocalRandom 
(CASSANDRA-7934)
 + * Prevent operator mistakes due to simultaneous bootstrap (CASSANDRA-7069)
 + * cassandra-stress supports whitelist mode for node config (CASSANDRA-7658)
 + * GCInspector more closely tracks GC; cassandra-stress and nodetool report 
it (CASSANDRA-7916)
 + * nodetool won't output bogus ownership info without a keyspace 
(CASSANDRA-7173)
 + * Add human readable option to nodetool commands (CASSANDRA-5433)
 + * Don't try to set repairedAt on old sstables (CASSANDRA-7913)
 + * Add metrics for tracking PreparedStatement use (CASSANDRA-7719)
 + * (cqlsh) tab-completion for triggers (CASSANDRA-7824)
 + * (cqlsh) Support for query paging (CASSANDRA-7514)
 + * (cqlsh) Show progress of COPY operations (CASSANDRA-7789)
 + * Add syntax to remove multiple elements from a map (CASSANDRA-6599)
 + * Support non-equals conditions in lightweight transactions (CASSANDRA-6839)
 + * Add IF [NOT] EXISTS to create/drop triggers (CASSANDRA-7606)
 + * (cqlsh) Display the current logged-in user (CASSANDRA-7785)
 + * (cqlsh) Don't ignore CTRL-C during COPY FROM execution (CASSANDRA-7815)
 + * (cqlsh) Order UDTs according to cross-type dependencies in DESCRIBE
 +   output (CASSANDRA-7659)
 + * (cqlsh) Fix handling of CAS statement results (CASSANDRA-7671)
 + * (cqlsh) COPY TO/FROM improvements (CASSANDRA-7405)
 + * Support list index operations with conditions (CASSANDRA-7499)
 + * Add max live/tombstoned cells to nodetool cfstats output (CASSANDRA-7731)
 + * Validate IPv6 wildcard addresses properly (CASSANDRA-7680)
 + * (cqlsh) Error when tracing query (CASSANDRA-7613)
 + * Avoid IOOBE when building SyntaxError message snippet (CASSANDRA-7569)
 + * SSTableExport uses correct validator to create string representation of 
partition
 +   keys (CASSANDRA-7498)
 + * Avoid NPEs when receiving type changes for an unknown keyspace 
(CASSANDRA-7689)
 + * Add support for custom 2i validation (CASSANDRA-7575)
 + * Pig support for hadoop CqlInputFormat (CASSANDRA-6454)
 + * Add listen_interface and rpc_interface options (CASSANDRA-7417)
 + * Improve schema merge performance (CASSANDRA-7444)
 + * Adjust MT depth based on # of partition validating (CASSANDRA-5263)
 + * Optimise NativeCell comparisons (CASSANDRA-6755)
 + * Configurable client timeout for cqlsh (CASSANDRA-7516)
 + * Include snippet of CQL query near syntax error in messages (CASSANDRA-7111)
 + * Make repair -pr work with -local (CASSANDRA-7450)
 +Merged from 2.0:
+  * CrcCheckChance should adjust based on live CFMetadata not 
+sstable metadata (CASSANDRA-7978)
   * token() should only accept columns in the partitioning
 key order (CASSANDRA-6075)
   * Add method to invalidate permission cache via JMX (CASSANDRA-7977)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3fd90ae6/src/java/org/apache/cassandra/io/compress/CompressionParameters.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3fd90ae6

[1/3] git commit: CrcCheckChance should adjust based on live CFMetadata not sstable metadata

2014-09-23 Thread jake
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 192468f7a -> c2c9835f7


CrcCheckChance should adjust based on live CFMetadata not sstable metadata

patch by tjake; reviewed by Jason Brown for CASSANDRA-7978


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/62db20a7
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/62db20a7
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/62db20a7

Branch: refs/heads/cassandra-2.1
Commit: 62db20a779fac3235c0e4dbade8c3d340d3c310b
Parents: d96485f
Author: Jake Luciani 
Authored: Tue Sep 23 12:30:25 2014 -0400
Committer: Jake Luciani 
Committed: Tue Sep 23 12:34:56 2014 -0400

--
 CHANGES.txt|  2 ++
 .../cassandra/io/compress/CompressionParameters.java   | 13 -
 .../org/apache/cassandra/io/sstable/SSTableReader.java |  6 +-
 3 files changed, 19 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/62db20a7/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index fd49b09..00603f3 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 2.0.11:
+ * CrcCheckChance should adjust based on live CFMetadata not 
+   sstable metadata (CASSANDRA-7978)
  * token() should only accept columns in the partitioning
key order (CASSANDRA-6075)
  * Add method to invalidate permission cache via JMX (CASSANDRA-7977)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/62db20a7/src/java/org/apache/cassandra/io/compress/CompressionParameters.java
--
diff --git 
a/src/java/org/apache/cassandra/io/compress/CompressionParameters.java 
b/src/java/org/apache/cassandra/io/compress/CompressionParameters.java
index 7baaedd..2df64b4 100644
--- a/src/java/org/apache/cassandra/io/compress/CompressionParameters.java
+++ b/src/java/org/apache/cassandra/io/compress/CompressionParameters.java
@@ -30,6 +30,7 @@ import java.util.Set;
 
 import com.google.common.collect.ImmutableSet;
 import com.google.common.collect.Sets;
+import org.apache.cassandra.config.CFMetaData;
 import org.apache.commons.lang3.builder.EqualsBuilder;
 import org.apache.commons.lang3.builder.HashCodeBuilder;
 
@@ -53,6 +54,7 @@ public class CompressionParameters
 private final Integer chunkLength;
 private volatile double crcCheckChance;
 public final Map otherOptions; // Unrecognized options, 
can be use by the compressor
+private CFMetaData liveMetadata;
 
 public static CompressionParameters create(Map opts) throws ConfigurationException
 {
@@ -101,15 +103,24 @@ public class CompressionParameters
 }
 }
 
+public void setLiveMetadata(final CFMetaData liveMetadata)
+{
+assert this.liveMetadata == null || this.liveMetadata == liveMetadata;
+this.liveMetadata = liveMetadata;
+}
+
 public void setCrcCheckChance(double crcCheckChance) throws 
ConfigurationException
 {
 validateCrcCheckChance(crcCheckChance);
 this.crcCheckChance = crcCheckChance;
+
+if (liveMetadata != null)
+
liveMetadata.compressionParameters.setCrcCheckChance(crcCheckChance);
 }
 
 public double getCrcCheckChance()
 {
-return this.crcCheckChance;
+return liveMetadata == null ? this.crcCheckChance : 
liveMetadata.compressionParameters.crcCheckChance;
 }
 
 private static double parseCrcCheckChance(String crcCheckChance) throws 
ConfigurationException

http://git-wip-us.apache.org/repos/asf/cassandra/blob/62db20a7/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/SSTableReader.java 
b/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
index f632c87..92dee99 100644
--- a/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
+++ b/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
@@ -624,7 +624,11 @@ public class SSTableReader extends SSTable implements 
Closeable
 if (!compression)
 throw new IllegalStateException(this + " is not compressed");
 
-return ((ICompressedFile) dfile).getMetadata();
+CompressionMetadata cmd = ((ICompressedFile) dfile).getMetadata();
+
+
cmd.parameters.setLiveMetadata(Schema.instance.getCFMetaData(descriptor));
+
+return cmd;
 }
 
 /**



[3/3] git commit: Adds test for CASSANDRA-7978

2014-09-23 Thread jake
Adds test for CASSANDRA-7978


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c2c9835f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c2c9835f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c2c9835f

Branch: refs/heads/cassandra-2.1
Commit: c2c9835f7a431d4d267e501f6b31fdc7a7e0b5fc
Parents: 3fd90ae
Author: Jake Luciani 
Authored: Tue Sep 23 12:42:11 2014 -0400
Committer: Jake Luciani 
Committed: Tue Sep 23 12:42:11 2014 -0400

--
 .../org/apache/cassandra/cql3/CQLTester.java|  2 +-
 .../cassandra/cql3/CrcCheckChanceTest.java  | 70 
 2 files changed, 71 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c2c9835f/test/unit/org/apache/cassandra/cql3/CQLTester.java
--
diff --git a/test/unit/org/apache/cassandra/cql3/CQLTester.java 
b/test/unit/org/apache/cassandra/cql3/CQLTester.java
index e776fc7..236a9ff 100644
--- a/test/unit/org/apache/cassandra/cql3/CQLTester.java
+++ b/test/unit/org/apache/cassandra/cql3/CQLTester.java
@@ -54,7 +54,7 @@ public abstract class CQLTester
 {
 protected static final Logger logger = 
LoggerFactory.getLogger(CQLTester.class);
 
-private static final String KEYSPACE = "cql_test_keyspace";
+public static final String KEYSPACE = "cql_test_keyspace";
 private static final boolean USE_PREPARED_VALUES = 
Boolean.valueOf(System.getProperty("cassandra.test.use_prepared", "true"));
 private static final AtomicInteger seqNumber = new AtomicInteger();
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c2c9835f/test/unit/org/apache/cassandra/cql3/CrcCheckChanceTest.java
--
diff --git a/test/unit/org/apache/cassandra/cql3/CrcCheckChanceTest.java 
b/test/unit/org/apache/cassandra/cql3/CrcCheckChanceTest.java
new file mode 100644
index 000..0cd9202
--- /dev/null
+++ b/test/unit/org/apache/cassandra/cql3/CrcCheckChanceTest.java
@@ -0,0 +1,70 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.cassandra.cql3;
+
+import junit.framework.Assert;
+import org.apache.cassandra.db.ColumnFamilyStore;
+import org.apache.cassandra.db.Keyspace;
+import org.junit.Test;
+
+
+public class CrcCheckChanceTest extends CQLTester
+{
+@Test
+public void testChangingCrcCheckChance() throws Throwable
+{
+//Start with crc_check_chance of 99%
+createTable("CREATE TABLE %s (p text, c text, v text, s text static, 
PRIMARY KEY (p, c)) WITH compression = {'sstable_compression': 'LZ4Compressor', 
'crc_check_chance' : 0.99}");
+
+execute("INSERT INTO %s(p, c, v, s) values (?, ?, ?, ?)", "p1", "k1", 
"v1", "sv1");
+execute("INSERT INTO %s(p, c, v) values (?, ?, ?)", "p1", "k2", "v2");
+execute("INSERT INTO %s(p, s) values (?, ?)", "p2", "sv2");
+
+
+ColumnFamilyStore cfs = 
Keyspace.open(CQLTester.KEYSPACE).getColumnFamilyStore(currentTable());
+cfs.forceBlockingFlush();
+
+Assert.assertEquals(0.99, 
cfs.metadata.compressionParameters.getCrcCheckChance());
+Assert.assertEquals(0.99, 
cfs.getSSTables().iterator().next().getCompressionMetadata().parameters.getCrcCheckChance());
+
+assertRows(execute("SELECT * FROM %s WHERE p=?", "p1"),
+row("p1", "k1", "sv1", "v1"),
+row("p1", "k2", "sv1", "v2")
+);
+
+
+//Verify when we alter the value the live sstable readers hold the new 
one
+alterTable("ALTER TABLE %s WITH compression = {'sstable_compression': 
'LZ4Compressor', 'crc_check_chance': 0.01}");
+
+Assert.assertEquals( 0.01, 
cfs.metadata.compressionParameters.getCrcCheckChance());
+Assert.assertEquals( 0.01, 
cfs.getSSTables().iterator().next().getCompressionMetadata().parameters.getCrcCheckChance());
+
+assertRows(execute("SELECT * FROM %s WHERE p=?", "p1"),
+row("p1", "k1", "sv1", "v1"),
+  

[1/4] git commit: CrcCheckChance should adjust based on live CFMetadata not sstable metadata

2014-09-23 Thread jake
Repository: cassandra
Updated Branches:
  refs/heads/trunk 3f181f063 -> 1ecb70165


CrcCheckChance should adjust based on live CFMetadata not sstable metadata

patch by tjake; reviewed by Jason Brown for CASSANDRA-7978


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/62db20a7
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/62db20a7
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/62db20a7

Branch: refs/heads/trunk
Commit: 62db20a779fac3235c0e4dbade8c3d340d3c310b
Parents: d96485f
Author: Jake Luciani 
Authored: Tue Sep 23 12:30:25 2014 -0400
Committer: Jake Luciani 
Committed: Tue Sep 23 12:34:56 2014 -0400

--
 CHANGES.txt|  2 ++
 .../cassandra/io/compress/CompressionParameters.java   | 13 -
 .../org/apache/cassandra/io/sstable/SSTableReader.java |  6 +-
 3 files changed, 19 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/62db20a7/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index fd49b09..00603f3 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 2.0.11:
+ * CrcCheckChance should adjust based on live CFMetadata not 
+   sstable metadata (CASSANDRA-7978)
  * token() should only accept columns in the partitioning
key order (CASSANDRA-6075)
  * Add method to invalidate permission cache via JMX (CASSANDRA-7977)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/62db20a7/src/java/org/apache/cassandra/io/compress/CompressionParameters.java
--
diff --git 
a/src/java/org/apache/cassandra/io/compress/CompressionParameters.java 
b/src/java/org/apache/cassandra/io/compress/CompressionParameters.java
index 7baaedd..2df64b4 100644
--- a/src/java/org/apache/cassandra/io/compress/CompressionParameters.java
+++ b/src/java/org/apache/cassandra/io/compress/CompressionParameters.java
@@ -30,6 +30,7 @@ import java.util.Set;
 
 import com.google.common.collect.ImmutableSet;
 import com.google.common.collect.Sets;
+import org.apache.cassandra.config.CFMetaData;
 import org.apache.commons.lang3.builder.EqualsBuilder;
 import org.apache.commons.lang3.builder.HashCodeBuilder;
 
@@ -53,6 +54,7 @@ public class CompressionParameters
 private final Integer chunkLength;
 private volatile double crcCheckChance;
 public final Map otherOptions; // Unrecognized options, 
can be use by the compressor
+private CFMetaData liveMetadata;
 
 public static CompressionParameters create(Map opts) throws ConfigurationException
 {
@@ -101,15 +103,24 @@ public class CompressionParameters
 }
 }
 
+public void setLiveMetadata(final CFMetaData liveMetadata)
+{
+assert this.liveMetadata == null || this.liveMetadata == liveMetadata;
+this.liveMetadata = liveMetadata;
+}
+
 public void setCrcCheckChance(double crcCheckChance) throws 
ConfigurationException
 {
 validateCrcCheckChance(crcCheckChance);
 this.crcCheckChance = crcCheckChance;
+
+if (liveMetadata != null)
+
liveMetadata.compressionParameters.setCrcCheckChance(crcCheckChance);
 }
 
 public double getCrcCheckChance()
 {
-return this.crcCheckChance;
+return liveMetadata == null ? this.crcCheckChance : 
liveMetadata.compressionParameters.crcCheckChance;
 }
 
 private static double parseCrcCheckChance(String crcCheckChance) throws 
ConfigurationException

http://git-wip-us.apache.org/repos/asf/cassandra/blob/62db20a7/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/SSTableReader.java 
b/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
index f632c87..92dee99 100644
--- a/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
+++ b/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
@@ -624,7 +624,11 @@ public class SSTableReader extends SSTable implements 
Closeable
 if (!compression)
 throw new IllegalStateException(this + " is not compressed");
 
-return ((ICompressedFile) dfile).getMetadata();
+CompressionMetadata cmd = ((ICompressedFile) dfile).getMetadata();
+
+
cmd.parameters.setLiveMetadata(Schema.instance.getCFMetaData(descriptor));
+
+return cmd;
 }
 
 /**



[2/4] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-09-23 Thread jake
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3fd90ae6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3fd90ae6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3fd90ae6

Branch: refs/heads/trunk
Commit: 3fd90ae6614a5619e44b2f615f49da2bdcee26d7
Parents: 192468f 62db20a
Author: Jake Luciani 
Authored: Tue Sep 23 12:39:34 2014 -0400
Committer: Jake Luciani 
Committed: Tue Sep 23 12:39:34 2014 -0400

--
 CHANGES.txt|  2 ++
 .../cassandra/io/compress/CompressionParameters.java   | 13 -
 .../org/apache/cassandra/io/sstable/SSTableReader.java |  6 +-
 3 files changed, 19 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3fd90ae6/CHANGES.txt
--
diff --cc CHANGES.txt
index 2f8a95b,00603f3..ce0e76a
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,52 -1,6 +1,54 @@@
 -2.0.11:
 +2.1.1
 + * (cqlsh) Tab completeion for indexes on map keys (CASSANDRA-7972)
 + * (cqlsh) Fix UDT field selection in select clause (CASSANDRA-7891)
 + * Fix resource leak in event of corrupt sstable
 + * (cqlsh) Add command line option for cqlshrc file path (CASSANDRA-7131)
 + * Provide visibility into prepared statements churn (CASSANDRA-7921, 
CASSANDRA-7930)
 + * Invalidate prepared statements when their keyspace or table is
 +   dropped (CASSANDRA-7566)
 + * cassandra-stress: fix support for NetworkTopologyStrategy (CASSANDRA-7945)
 + * Fix saving caches when a table is dropped (CASSANDRA-7784)
 + * Add better error checking of new stress profile (CASSANDRA-7716)
 + * Use ThreadLocalRandom and remove FBUtilities.threadLocalRandom 
(CASSANDRA-7934)
 + * Prevent operator mistakes due to simultaneous bootstrap (CASSANDRA-7069)
 + * cassandra-stress supports whitelist mode for node config (CASSANDRA-7658)
 + * GCInspector more closely tracks GC; cassandra-stress and nodetool report 
it (CASSANDRA-7916)
 + * nodetool won't output bogus ownership info without a keyspace 
(CASSANDRA-7173)
 + * Add human readable option to nodetool commands (CASSANDRA-5433)
 + * Don't try to set repairedAt on old sstables (CASSANDRA-7913)
 + * Add metrics for tracking PreparedStatement use (CASSANDRA-7719)
 + * (cqlsh) tab-completion for triggers (CASSANDRA-7824)
 + * (cqlsh) Support for query paging (CASSANDRA-7514)
 + * (cqlsh) Show progress of COPY operations (CASSANDRA-7789)
 + * Add syntax to remove multiple elements from a map (CASSANDRA-6599)
 + * Support non-equals conditions in lightweight transactions (CASSANDRA-6839)
 + * Add IF [NOT] EXISTS to create/drop triggers (CASSANDRA-7606)
 + * (cqlsh) Display the current logged-in user (CASSANDRA-7785)
 + * (cqlsh) Don't ignore CTRL-C during COPY FROM execution (CASSANDRA-7815)
 + * (cqlsh) Order UDTs according to cross-type dependencies in DESCRIBE
 +   output (CASSANDRA-7659)
 + * (cqlsh) Fix handling of CAS statement results (CASSANDRA-7671)
 + * (cqlsh) COPY TO/FROM improvements (CASSANDRA-7405)
 + * Support list index operations with conditions (CASSANDRA-7499)
 + * Add max live/tombstoned cells to nodetool cfstats output (CASSANDRA-7731)
 + * Validate IPv6 wildcard addresses properly (CASSANDRA-7680)
 + * (cqlsh) Error when tracing query (CASSANDRA-7613)
 + * Avoid IOOBE when building SyntaxError message snippet (CASSANDRA-7569)
 + * SSTableExport uses correct validator to create string representation of 
partition
 +   keys (CASSANDRA-7498)
 + * Avoid NPEs when receiving type changes for an unknown keyspace 
(CASSANDRA-7689)
 + * Add support for custom 2i validation (CASSANDRA-7575)
 + * Pig support for hadoop CqlInputFormat (CASSANDRA-6454)
 + * Add listen_interface and rpc_interface options (CASSANDRA-7417)
 + * Improve schema merge performance (CASSANDRA-7444)
 + * Adjust MT depth based on # of partition validating (CASSANDRA-5263)
 + * Optimise NativeCell comparisons (CASSANDRA-6755)
 + * Configurable client timeout for cqlsh (CASSANDRA-7516)
 + * Include snippet of CQL query near syntax error in messages (CASSANDRA-7111)
 + * Make repair -pr work with -local (CASSANDRA-7450)
 +Merged from 2.0:
+  * CrcCheckChance should adjust based on live CFMetadata not 
+sstable metadata (CASSANDRA-7978)
   * token() should only accept columns in the partitioning
 key order (CASSANDRA-6075)
   * Add method to invalidate permission cache via JMX (CASSANDRA-7977)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3fd90ae6/src/java/org/apache/cassandra/io/compress/CompressionParameters.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3fd90ae6/src/jav

[4/4] git commit: Merge branch 'cassandra-2.1' into trunk

2014-09-23 Thread jake
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1ecb7016
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1ecb7016
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1ecb7016

Branch: refs/heads/trunk
Commit: 1ecb7016598d98a54da44845568a65707df3ca9b
Parents: 3f181f0 c2c9835
Author: Jake Luciani 
Authored: Tue Sep 23 12:50:15 2014 -0400
Committer: Jake Luciani 
Committed: Tue Sep 23 12:50:15 2014 -0400

--
 CHANGES.txt |  2 +
 .../io/compress/CompressionParameters.java  | 13 +++-
 .../cassandra/io/sstable/SSTableReader.java |  6 +-
 .../org/apache/cassandra/cql3/CQLTester.java|  2 +-
 .../cassandra/cql3/CrcCheckChanceTest.java  | 70 
 5 files changed, 90 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1ecb7016/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1ecb7016/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1ecb7016/test/unit/org/apache/cassandra/cql3/CQLTester.java
--



[3/4] git commit: Adds test for CASSANDRA-7978

2014-09-23 Thread jake
Adds test for CASSANDRA-7978


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c2c9835f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c2c9835f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c2c9835f

Branch: refs/heads/trunk
Commit: c2c9835f7a431d4d267e501f6b31fdc7a7e0b5fc
Parents: 3fd90ae
Author: Jake Luciani 
Authored: Tue Sep 23 12:42:11 2014 -0400
Committer: Jake Luciani 
Committed: Tue Sep 23 12:42:11 2014 -0400

--
 .../org/apache/cassandra/cql3/CQLTester.java|  2 +-
 .../cassandra/cql3/CrcCheckChanceTest.java  | 70 
 2 files changed, 71 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c2c9835f/test/unit/org/apache/cassandra/cql3/CQLTester.java
--
diff --git a/test/unit/org/apache/cassandra/cql3/CQLTester.java 
b/test/unit/org/apache/cassandra/cql3/CQLTester.java
index e776fc7..236a9ff 100644
--- a/test/unit/org/apache/cassandra/cql3/CQLTester.java
+++ b/test/unit/org/apache/cassandra/cql3/CQLTester.java
@@ -54,7 +54,7 @@ public abstract class CQLTester
 {
 protected static final Logger logger = 
LoggerFactory.getLogger(CQLTester.class);
 
-private static final String KEYSPACE = "cql_test_keyspace";
+public static final String KEYSPACE = "cql_test_keyspace";
 private static final boolean USE_PREPARED_VALUES = 
Boolean.valueOf(System.getProperty("cassandra.test.use_prepared", "true"));
 private static final AtomicInteger seqNumber = new AtomicInteger();
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c2c9835f/test/unit/org/apache/cassandra/cql3/CrcCheckChanceTest.java
--
diff --git a/test/unit/org/apache/cassandra/cql3/CrcCheckChanceTest.java 
b/test/unit/org/apache/cassandra/cql3/CrcCheckChanceTest.java
new file mode 100644
index 000..0cd9202
--- /dev/null
+++ b/test/unit/org/apache/cassandra/cql3/CrcCheckChanceTest.java
@@ -0,0 +1,70 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.cassandra.cql3;
+
+import junit.framework.Assert;
+import org.apache.cassandra.db.ColumnFamilyStore;
+import org.apache.cassandra.db.Keyspace;
+import org.junit.Test;
+
+
+public class CrcCheckChanceTest extends CQLTester
+{
+@Test
+public void testChangingCrcCheckChance() throws Throwable
+{
+//Start with crc_check_chance of 99%
+createTable("CREATE TABLE %s (p text, c text, v text, s text static, 
PRIMARY KEY (p, c)) WITH compression = {'sstable_compression': 'LZ4Compressor', 
'crc_check_chance' : 0.99}");
+
+execute("INSERT INTO %s(p, c, v, s) values (?, ?, ?, ?)", "p1", "k1", 
"v1", "sv1");
+execute("INSERT INTO %s(p, c, v) values (?, ?, ?)", "p1", "k2", "v2");
+execute("INSERT INTO %s(p, s) values (?, ?)", "p2", "sv2");
+
+
+ColumnFamilyStore cfs = 
Keyspace.open(CQLTester.KEYSPACE).getColumnFamilyStore(currentTable());
+cfs.forceBlockingFlush();
+
+Assert.assertEquals(0.99, 
cfs.metadata.compressionParameters.getCrcCheckChance());
+Assert.assertEquals(0.99, 
cfs.getSSTables().iterator().next().getCompressionMetadata().parameters.getCrcCheckChance());
+
+assertRows(execute("SELECT * FROM %s WHERE p=?", "p1"),
+row("p1", "k1", "sv1", "v1"),
+row("p1", "k2", "sv1", "v2")
+);
+
+
+//Verify when we alter the value the live sstable readers hold the new 
one
+alterTable("ALTER TABLE %s WITH compression = {'sstable_compression': 
'LZ4Compressor', 'crc_check_chance': 0.01}");
+
+Assert.assertEquals( 0.01, 
cfs.metadata.compressionParameters.getCrcCheckChance());
+Assert.assertEquals( 0.01, 
cfs.getSSTables().iterator().next().getCompressionMetadata().parameters.getCrcCheckChance());
+
+assertRows(execute("SELECT * FROM %s WHERE p=?", "p1"),
+row("p1", "k1", "sv1", "v1"),
+ro

[jira] [Resolved] (CASSANDRA-7928) Digest queries do not require alder32 checks

2014-09-23 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani resolved CASSANDRA-7928.
---
Resolution: Won't Fix

Closing in favor of CASSANDRA-7130, which is now targeted for 2.1.

> Digest queries do not require alder32 checks
> 
>
> Key: CASSANDRA-7928
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7928
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: T Jake Luciani
>Priority: Minor
>  Labels: performance
> Fix For: 2.1.1
>
>
>  While reading data from sstables, C* does Alder32 checks for any data being 
> read. We have seen that this causes higher CPU usage while doing kernel 
> profiling. These checks might not be useful for digest queries as they will 
> have a different digest in case of corruption. 
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7928) Digest queries do not require alder32 checks

2014-09-23 Thread sankalp kohli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14145112#comment-14145112
 ] 

sankalp kohli commented on CASSANDRA-7928:
--

[~tjake]  I agree it is hard to do, but it is important. Even with better hash 
methods there will still be some CPU involved, and this would cut that cost 
several times over for higher RF and CL queries. 
So closing it in favor of CASSANDRA-7130 is not correct!!

> Digest queries do not require alder32 checks
> 
>
> Key: CASSANDRA-7928
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7928
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: T Jake Luciani
>Priority: Minor
>  Labels: performance
> Fix For: 2.1.1
>
>
>  While reading data from sstables, C* does Alder32 checks for any data being 
> read. We have seen that this causes higher CPU usage while doing kernel 
> profiling. These checks might not be useful for digest queries as they will 
> have a different digest in case of corruption. 
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-7992) Arithmetic overflow sorting commit log segments on replay

2014-09-23 Thread Oleg Anastasyev (JIRA)
Oleg Anastasyev created CASSANDRA-7992:
--

 Summary: Arithmetic overflow sorting commit log segments on replay
 Key: CASSANDRA-7992
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7992
 Project: Cassandra
  Issue Type: Bug
Reporter: Oleg Anastasyev


When replaying a lot of commit logs aged several days, commit log segments are 
sorted incorrectly due to an arithmetic overflow in CommitLogSegmentFileComparator.
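
A minimal, self-contained sketch of this class of bug (my own illustration, not 
the actual CommitLogSegmentFileComparator and not the attached patch): a 
comparator that returns {{(int) (a - b)}} silently wraps once the difference 
between two segment ids no longer fits in an int, so far-apart segments can 
compare as "equal" or in the wrong order, while {{Long.compare}} cannot overflow.

{code}
public class OverflowCompareSketch
{
    // Broken: the cast truncates to 32 bits, so large differences wrap around.
    static int brokenCompare(long a, long b)
    {
        return (int) (a - b);
    }

    // Safe: Long.compare never overflows.
    static int safeCompare(long a, long b)
    {
        return Long.compare(a, b);
    }

    public static void main(String[] args)
    {
        long older = 1L;
        long newer = older + (1L << 40); // large enough that the difference no longer fits in an int
        System.out.println(brokenCompare(older, newer)); // prints 0 ("equal") - wrong
        System.out.println(safeCompare(older, newer));   // prints -1 - correct
    }
}
{code}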




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7992) Arithmetic overflow sorting commit log segments on replay

2014-09-23 Thread Oleg Anastasyev (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleg Anastasyev updated CASSANDRA-7992:
---
Attachment: ArithOverflowSortingCommitLogSegments.txt

> Arithmetic overflow sorting commit log segments on replay
> -
>
> Key: CASSANDRA-7992
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7992
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Oleg Anastasyev
> Attachments: ArithOverflowSortingCommitLogSegments.txt
>
>
> When replaying a lot of commit logs aged several days commit log segments are 
> sorted incorrectly due to arith overflow in CommitLogSegmentFileComparator



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7130) Make sstable checksum type configurable and optional

2014-09-23 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-7130:
--
Fix Version/s: (was: 3.0)
   2.1.1

> Make sstable checksum type configurable and optional
> 
>
> Key: CASSANDRA-7130
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7130
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Benedict
>Assignee: Jason Brown
>Priority: Minor
>  Labels: performance
> Fix For: 2.1.1
>
>
> A lot of our users are becoming bottlenecked on CPU rather than IO, and 
> whilst Adler32 is faster than CRC, it isn't anything like as fast as xxhash 
> (used by LZ4), which can push Gb/s. I propose making the checksum type 
> configurable so that users who want speed can shift to xxhash, and those who 
> want security can use Adler or CRC.
> It's worth noting that at some point in the future (JDK8?) optimised 
> implementations using latest intel crc instructions will be added, though 
> it's not clear from the mailing list discussion if/when that will materialise:
> http://mail.openjdk.java.net/pipermail/hotspot-compiler-dev/2013-May/010775.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-7993) Fat client nodes dont schedule schema pull on connect

2014-09-23 Thread Oleg Anastasyev (JIRA)
Oleg Anastasyev created CASSANDRA-7993:
--

 Summary: Fat client nodes dont schedule schema pull on connect
 Key: CASSANDRA-7993
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7993
 Project: Cassandra
  Issue Type: Bug
Reporter: Oleg Anastasyev


So they cannot connect for a long time



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7993) Fat client nodes dont schedule schema pull on connect

2014-09-23 Thread Oleg Anastasyev (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleg Anastasyev updated CASSANDRA-7993:
---
Attachment: ScheduleSchemaPullInClientMode.txt

> Fat client nodes dont schedule schema pull on connect
> -
>
> Key: CASSANDRA-7993
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7993
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Oleg Anastasyev
> Attachments: ScheduleSchemaPullInClientMode.txt
>
>
> So they cannot connect for a long time



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7928) Digest queries do not require alder32 checks

2014-09-23 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14145175#comment-14145175
 ] 

Jonathan Ellis commented on CASSANDRA-7928:
---

As much as I love wontfixing things, I don't think I see that 7130 obsoletes 
this entirely, either.

> Digest queries do not require alder32 checks
> 
>
> Key: CASSANDRA-7928
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7928
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: T Jake Luciani
>Priority: Minor
>  Labels: performance
> Fix For: 2.1.1
>
>
>  While reading data from sstables, C* does Alder32 checks for any data being 
> read. We have seen that this causes higher CPU usage while doing kernel 
> profiling. These checks might not be useful for digest queries as they will 
> have a different digest in case of corruption. 
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (CASSANDRA-7928) Digest queries do not require alder32 checks

2014-09-23 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani reopened CASSANDRA-7928:
---

> Digest queries do not require alder32 checks
> 
>
> Key: CASSANDRA-7928
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7928
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: T Jake Luciani
>Priority: Minor
>  Labels: performance
> Fix For: 2.1.1
>
>
>  While reading data from sstables, C* does Alder32 checks for any data being 
> read. We have seen that this causes higher CPU usage while doing kernel 
> profiling. These checks might not be useful for digest queries as they will 
> have a different digest in case of corruption. 
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7928) Digest queries do not require alder32 checks

2014-09-23 Thread sankalp kohli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14145185#comment-14145185
 ] 

sankalp kohli commented on CASSANDRA-7928:
--

+1 on Jonathan's comment. The only way I see of doing this is using thread locals, 
which is sad.
We should try to do this with the rewrite of the storage layer in 3.0.

> Digest queries do not require alder32 checks
> 
>
> Key: CASSANDRA-7928
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7928
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: T Jake Luciani
>Priority: Minor
>  Labels: performance
> Fix For: 2.1.1
>
>
>  While reading data from sstables, C* does Alder32 checks for any data being 
> read. We have seen that this causes higher CPU usage while doing kernel 
> profiling. These checks might not be useful for digest queries as they will 
> have a different digest in case of corruption. 
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7928) Digest queries do not require alder32 checks

2014-09-23 Thread sankalp kohli (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sankalp kohli updated CASSANDRA-7928:
-
Fix Version/s: (was: 2.1.1)
   3.0

> Digest queries do not require alder32 checks
> 
>
> Key: CASSANDRA-7928
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7928
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: T Jake Luciani
>Priority: Minor
>  Labels: performance
> Fix For: 3.0
>
>
>  While reading data from sstables, C* does Alder32 checks for any data being 
> read. We have seen that this causes higher CPU usage while doing kernel 
> profiling. These checks might not be useful for digest queries as they will 
> have a different digest in case of corruption. 
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[Cassandra Wiki] Update of "DebianPackaging" by JakeLuciani

2014-09-23 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Cassandra Wiki" for 
change notification.

The "DebianPackaging" page has been changed by JakeLuciani:
https://wiki.apache.org/cassandra/DebianPackaging?action=diff&rev1=34&rev2=35

  gpg --export --armor 0353B12C | sudo apt-key add -
  }}}
  
- (The list of Apache contributors public keys is available at 
[[http://www.apache.org/dist/cassandra/KEYS]]). 
+ (The list of Apache contributors public keys is available at 
[[https://www.apache.org/dist/cassandra/KEYS]]). 
  
  Then you may install Cassandra by doing:
  


[jira] [Commented] (CASSANDRA-7928) Digest queries do not require alder32 checks

2014-09-23 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14145191#comment-14145191
 ] 

T Jake Luciani commented on CASSANDRA-7928:
---

Makes sense to incorporate this once we have 7130 in and a better handle on the 
storage engine changes.

> Digest queries do not require alder32 checks
> 
>
> Key: CASSANDRA-7928
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7928
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: T Jake Luciani
>Priority: Minor
>  Labels: performance
> Fix For: 3.0
>
>
>  While reading data from sstables, C* does Alder32 checks for any data being 
> read. We have seen that this causes higher CPU usage while doing kernel 
> profiling. These checks might not be useful for digest queries as they will 
> have a different digest in case of corruption. 
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7990) CompoundDenseCellNameType AssertionError and BoundedComposite to CellName ClasCastException

2014-09-23 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-7990:
---
Attachment: 7990-partial-fix.txt

I was able to figure out the second stack trace, and the attached patch 
7990-partial-fix.txt (and 
[branch|https://github.com/thobbs/cassandra/tree/CASSANDRA-7990]) fixes that.  
There's a dtest branch to reproduce that issue here: 
https://github.com/thobbs/cassandra-dtest/tree/CASSANDRA-7990.  The bug should 
only affect slice deletions from Thrift that set a non-0x00 EOC on the slice 
bounds. Feel free to test this patch out and see if it resolves the second 
stack trace.

However, I'm not sure about the first stack trace.  It's coming from the native 
protocol, and it's a batch statement that contains an insert or update 
statement.  It's updating a compact storage table with at least two clustering 
keys (e.g. {{PRIMARY KEY (a, b, c)}}).  Do you think you can narrow that down at 
all, [~christianmovi]?

> CompoundDenseCellNameType AssertionError and BoundedComposite to CellName 
> ClasCastException
> ---
>
> Key: CASSANDRA-7990
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7990
> Project: Cassandra
>  Issue Type: Bug
> Environment: Ubuntu, Java 1.7.0_67, Cassandra 2.1.0,  
> cassandra-driver-core:jar:2.0.6
>Reporter: Christian Spriegel
>Assignee: Tyler Hobbs
>Priority: Minor
> Attachments: 7990-partial-fix.txt
>
>
> I just updated my laptop to Cassandra 2.1 and created a fresh data folder.
> When trying to run my automated tests i get a lot these exceptions in the 
> Cassandra log:
> {code}
> ERROR [SharedPool-Worker-1] 2014-09-23 12:59:17,812 ErrorMessage.java:218 - 
> Unexpected exception during request
> java.lang.AssertionError: null
>   at 
> org.apache.cassandra.db.composites.CompoundDenseCellNameType.create(CompoundDenseCellNameType.java:57)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> org.apache.cassandra.cql3.Constants$Setter.execute(Constants.java:313) 
> ~[apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> org.apache.cassandra.cql3.statements.UpdateStatement.addUpdateForKey(UpdateStatement.java:91)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> org.apache.cassandra.cql3.statements.BatchStatement.addStatementMutations(BatchStatement.java:235)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> org.apache.cassandra.cql3.statements.BatchStatement.getMutations(BatchStatement.java:181)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> org.apache.cassandra.cql3.statements.BatchStatement.execute(BatchStatement.java:283)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> org.apache.cassandra.cql3.statements.BatchStatement.execute(BatchStatement.java:269)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> org.apache.cassandra.cql3.statements.BatchStatement.execute(BatchStatement.java:264)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:187)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:206) 
> ~[apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:118)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:422)
>  [apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:318)
>  [apache-cassandra-2.1.0.jar:2.1.0]
>   at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:103)
>  [netty-all-4.0.20.Final.jar:4.0.20.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:332)
>  [netty-all-4.0.20.Final.jar:4.0.20.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:31)
>  [netty-all-4.0.20.Final.jar:4.0.20.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:323)
>  [netty-all-4.0.20.Final.jar:4.0.20.Final]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> [na:1.7.0_67]
>   at 
> org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:163)
>  [apache-cassandra-2.1.0.jar:2.1.0]
>   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:103) 
> [apache-cassandra-2.1.0.jar:2.1.0]
>   at java.lang.Thread.run(Thread.java:745) [na:1.7.0_67]
> ERROR [Thrift:9] 2014-09-23 12:59:17,823 CustomTThreadPoolServer.java:219 - 
> Error occurred d

[jira] [Created] (CASSANDRA-7994) Commit logs on the fly compression

2014-09-23 Thread Oleg Anastasyev (JIRA)
Oleg Anastasyev created CASSANDRA-7994:
--

 Summary: Commit logs on the fly compression 
 Key: CASSANDRA-7994
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7994
 Project: Cassandra
  Issue Type: New Feature
Reporter: Oleg Anastasyev


This patch employs the LZ4 algorithm to compress commit logs. This can be useful 
to conserve disk space when archiving commit logs for a long time, or to conserve 
IOPS for use cases with frequent, large mutations updating the same record.

The compression is performed on 64k blocks, for better cross-mutation 
compression. The CRC is computed on each 64k block, unlike the original code, 
which computes it on each individual mutation.

On one of our real production clusters this saved 2/3 of the space consumed by 
commit logs. Replay is 20-30% slower for the same number of mutations.

While doing this, I also refactored the commit log reading code into a 
CommitLogReader class, which I believe makes the code cleaner.
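
To make the block layout concrete, here is a rough, hedged sketch of the idea 
(my assumptions, not code from the attached patch; the framing details and 
whether the CRC covers the raw or the compressed bytes are guesses) using the 
lz4 library Cassandra already bundles:

{code}
import java.util.zip.CRC32;

import net.jpountz.lz4.LZ4Compressor;
import net.jpountz.lz4.LZ4Factory;

public class CompressedBlockSketch
{
    static final int BLOCK_SIZE = 64 * 1024;

    public static void main(String[] args)
    {
        // Pretend this 64k block has been filled with serialized mutations.
        byte[] block = new byte[BLOCK_SIZE];

        LZ4Compressor lz4 = LZ4Factory.fastestInstance().fastCompressor();
        byte[] out = new byte[lz4.maxCompressedLength(BLOCK_SIZE)];
        int outLen = lz4.compress(block, 0, BLOCK_SIZE, out, 0);

        // One checksum per 64k block instead of one per mutation.
        CRC32 crc = new CRC32();
        crc.update(out, 0, outLen);

        System.out.printf("block: %d -> %d bytes, crc=%08x%n",
                          BLOCK_SIZE, outLen, crc.getValue());
    }
}
{code}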



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7994) Commit logs on the fly compression

2014-09-23 Thread Oleg Anastasyev (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleg Anastasyev updated CASSANDRA-7994:
---
Attachment: CompressedCommitLogs-7994.txt

Attached a patch rebased on the current 2.0 codebase.

> Commit logs on the fly compression 
> ---
>
> Key: CASSANDRA-7994
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7994
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Oleg Anastasyev
> Attachments: CompressedCommitLogs-7994.txt
>
>
> This patch employs lz4 algo to comress commit logs. This could be useful to 
> conserve disk space either archiving commit logs  for a long time or for 
> conserviing iops for use cases with often and large mutations updating the 
> same record.
> The compression is performed on blocks of 64k, for better cross mutation 
> compression. CRC is computed on each 64k block, unlike original code 
> computing it on each individual mutation.
> On one of our real production cluster this saved 2/3 of the space consumed by 
> commit logs. The replay is 20-30% slower for the same number of mutations.
> While doing this, also refactored commit log reading code to CommitLogReader 
> class, which i believe makes code cleaner.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7994) Commit logs on the fly compression

2014-09-23 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14145259#comment-14145259
 ] 

Jonathan Ellis commented on CASSANDRA-7994:
---

Can you compare your approach to the one in CASSANDRA-6809?

> Commit logs on the fly compression 
> ---
>
> Key: CASSANDRA-7994
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7994
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Oleg Anastasyev
> Attachments: CompressedCommitLogs-7994.txt
>
>
> This patch employs lz4 algo to comress commit logs. This could be useful to 
> conserve disk space either archiving commit logs  for a long time or for 
> conserviing iops for use cases with often and large mutations updating the 
> same record.
> The compression is performed on blocks of 64k, for better cross mutation 
> compression. CRC is computed on each 64k block, unlike original code 
> computing it on each individual mutation.
> On one of our real production cluster this saved 2/3 of the space consumed by 
> commit logs. The replay is 20-30% slower for the same number of mutations.
> While doing this, also refactored commit log reading code to CommitLogReader 
> class, which i believe makes code cleaner.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-7995) sstablerepairedset should take more that one sstable as an argument

2014-09-23 Thread Nick Bailey (JIRA)
Nick Bailey created CASSANDRA-7995:
--

 Summary: sstablerepairedset should take more that one sstable as 
an argument
 Key: CASSANDRA-7995
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7995
 Project: Cassandra
  Issue Type: Bug
Reporter: Nick Bailey
 Fix For: 2.1.1


Given that a C* node can have 10s (100s?) of thousands of sstables on it, 
sstablerepairedset should take a list of sstables to mark as repaired rather 
than a single sstable.

Running any command 10s of thousands of times isn't great, let alone one that 
spins up a JVM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7982) Batch multiple range requests that are going to the same replica

2014-09-23 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14145326#comment-14145326
 ] 

Tyler Hobbs commented on CASSANDRA-7982:


bq. why do token ranges come into picture for secondary index scan?

We need to fetch results in token order to enable things like paging and 
splitting up jobs (e.g. Hadoop).  If we knew that we wouldn't hit the LIMIT for 
the query (or page size), we could absolutely have each node read everything 
and ignore token ranges, but there's no way to guarantee that.

I suggest looking into upgrading to 2.1 at some point if secondary index 
performance is important to you.  CASSANDRA-1337 will help there, and this 
ticket is just a minor optimization on top of that.
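
To make the batching idea this ticket proposes concrete, here is a hedged, 
self-contained sketch (toy data and made-up names, not Cassandra internals) of 
grouping the vnode ranges that land on the same replica so the coordinator can 
send one request per replica instead of one per range:

{code}
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class RangeBatchSketch
{
    public static void main(String[] args)
    {
        // range label -> replica that owns it (toy data)
        Map<String, String> rangeToReplica = new LinkedHashMap<String, String>();
        rangeToReplica.put("(0,100]",   "192.168.51.25");
        rangeToReplica.put("(100,200]", "192.168.51.23");
        rangeToReplica.put("(200,300]", "192.168.51.25");

        // replica -> all of its ranges, so a single request per replica suffices
        Map<String, List<String>> batches = new LinkedHashMap<String, List<String>>();
        for (Map.Entry<String, String> e : rangeToReplica.entrySet())
        {
            List<String> ranges = batches.get(e.getValue());
            if (ranges == null)
                batches.put(e.getValue(), ranges = new ArrayList<String>());
            ranges.add(e.getKey());
        }

        // one entry per replica, each holding every range it should scan
        System.out.println(batches);
    }
}
{code}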

>  Batch multiple range requests that are going to the same replica
> -
>
> Key: CASSANDRA-7982
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7982
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Jay Patel
>Assignee: Jay Patel
> Fix For: 3.0
>
> Attachments: output1.txt
>
>
> In case of VNode and secondary index query, coordinator sends multiple range 
> requests to the same replica. For example, in the attached tracing session 
> (output1.txt), coordinator(192.168.51.22) sends multiple requests to 
> 192.168.51.25. Why can't we batch all the requests to the same replica to 
> avoid multiple round trips? I think this is not the issue with non-vnode 
> cluster where each node has one big range (+ replica ranges), instead of many 
> small ranges.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7995) sstablerepairedset should take more that one sstable as an argument

2014-09-23 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-7995:
---
Labels: lhf  (was: )

> sstablerepairedset should take more that one sstable as an argument
> ---
>
> Key: CASSANDRA-7995
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7995
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Nick Bailey
>  Labels: lhf
> Fix For: 2.1.1
>
>
> Given that a c* node can a number of sstables in the 10s (100s?) of thousands 
> of sstables on it, sstablerepairedset should be taking a list of sstables to 
> mark as repaired rather than a single sstable.
> Running any command 10s of thousands of times isn't really good let alone one 
> that spins up a jvm.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7769) Implement pg-style dollar syntax for string constants

2014-09-23 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14145332#comment-14145332
 ] 

Tyler Hobbs commented on CASSANDRA-7769:


bq. Note: the information in SyntaxException (in case of the invalid markers) 
could be a bit more verbose than "expected '$' but got 'j'". Separate ticket?

I think that message is fine for now.

> Implement pg-style dollar syntax for string constants
> -
>
> Key: CASSANDRA-7769
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7769
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Robert Stupp
>Assignee: Robert Stupp
> Fix For: 3.0
>
> Attachments: 7769.txt, 7769v2.txt, 7769v3.txt, 7769v4.txt, 7769v5.txt
>
>
> Follow-up of CASSANDRA-7740:
> {{$function$...$function$}} in addition to string style variant.
> See also 
> http://www.postgresql.org/docs/9.1/static/sql-syntax-lexical.html#SQL-SYNTAX-DOLLAR-QUOTING
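
For illustration, a minimal hedged sketch in the style of the committed 
PgStringTest (the real test in the patch is more thorough; the table schema, 
class name, and assertions here are made up), showing that a dollar-quoted 
literal does not need embedded single quotes to be doubled:

{code}
package org.apache.cassandra.cql3;

import org.junit.Test;

public class DollarQuotingSketchTest extends CQLTester
{
    @Test
    public void testDollarQuotedLiteral() throws Throwable
    {
        createTable("CREATE TABLE %s (k int PRIMARY KEY, body text)");

        // Single-quoted form: embedded quotes must be doubled.
        execute("INSERT INTO %s (k, body) VALUES (0, 'it''s quoted')");

        // Dollar-quoted form: embedded quotes are taken literally.
        execute("INSERT INTO %s (k, body) VALUES (1, $$it's quoted$$)");

        assertRows(execute("SELECT body FROM %s WHERE k = 1"), row("it's quoted"));
    }
}
{code}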



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7438) Serializing Row cache alternative (Fully off heap)

2014-09-23 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14145338#comment-14145338
 ] 

Robert Stupp commented on CASSANDRA-7438:
-

(note: [~vijay2...@gmail.com], please use the other nick)

Some quick notes:
* Can you add the assertion for {{capacity <= 0}} to 
{{OffheapCacheProvider.create}} - the current error message if 
{{row_cache_size_in_mb}} is not set (or invalid) "capacity should be set" could 
be more fleshy
* Additionally the {{capacity}} check should also check for negative values (it 
starts with a negative value - don't know what happens if it is negative...)
* {{org.apache.cassandra.db.RowCacheTest#testRowCacheCleanup}} fails at the 
last assertion - all other unit tests seem to work
* Documentation in cassandra.yaml for row_cache_provider could be a bit more 
verbose - just some abstract about the characteristics and limitation (e.g. 
Offheap does only work on Linux + OSX) of both implementations
* IMO it would be fine to have a general unit test for 
{{com.lruc.api.LRUCache}} in C* code, too
* Please add an adopted copy of {{RowCacheTest}} for OffheapCacheProvider
* unit tests using OffheapCacheProvider must not start on Windows builds - 
please add an assertion in OffHeapCacheProvider to assert that it runs on Linux 
or OSX

Sorry for the late reply

> Serializing Row cache alternative (Fully off heap)
> --
>
> Key: CASSANDRA-7438
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7438
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
> Environment: Linux
>Reporter: Vijay
>Assignee: Vijay
>  Labels: performance
> Fix For: 3.0
>
> Attachments: 0001-CASSANDRA-7438.patch
>
>
> Currently SerializingCache is partially off heap, keys are still stored in 
> JVM heap as BB, 
> * There is a higher GC costs for a reasonably big cache.
> * Some users have used the row cache efficiently in production for better 
> results, but this requires careful tunning.
> * Overhead in Memory for the cache entries are relatively high.
> So the proposal for this ticket is to move the LRU cache logic completely off 
> heap and use JNI to interact with cache. We might want to ensure that the new 
> implementation match the existing API's (ICache), and the implementation 
> needs to have safe memory access, low overhead in memory and less memcpy's 
> (As much as possible).
> We might also want to make this cache configurable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-7438) Serializing Row cache alternative (Fully off heap)

2014-09-23 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14145338#comment-14145338
 ] 

Robert Stupp edited comment on CASSANDRA-7438 at 9/23/14 8:01 PM:
--

(note: [~vijay2...@yahoo.com], please use the other nick)

Some quick notes:
* Can you add the assertion for {{capacity <= 0}} to 
{{OffheapCacheProvider.create}} - the current error message if 
{{row_cache_size_in_mb}} is not set (or invalid) "capacity should be set" could 
be more fleshy
* Additionally the {{capacity}} check should also check for negative values (it 
starts with a negative value - don't know what happens if it is negative...)
* {{org.apache.cassandra.db.RowCacheTest#testRowCacheCleanup}} fails at the 
last assertion - all other unit tests seem to work
* Documentation in cassandra.yaml for row_cache_provider could be a bit more 
verbose - just some abstract about the characteristics and limitation (e.g. 
Offheap does only work on Linux + OSX) of both implementations
* IMO it would be fine to have a general unit test for 
{{com.lruc.api.LRUCache}} in C* code, too
* Please add an adopted copy of {{RowCacheTest}} for OffheapCacheProvider
* unit tests using OffheapCacheProvider must not start on Windows builds - 
please add an assertion in OffHeapCacheProvider to assert that it runs on Linux 
or OSX

Sorry for the late reply


was (Author: snazy):
(note: [~vijay2...@gmail.com], please use the other nick)

Some quick notes:
* Can you add the assertion for {{capacity <= 0}} to 
{{OffheapCacheProvider.create}} - the current error message if 
{{row_cache_size_in_mb}} is not set (or invalid) "capacity should be set" could 
be more fleshy
* Additionally the {{capacity}} check should also check for negative values (it 
starts with a negative value - don't know what happens if it is negative...)
* {{org.apache.cassandra.db.RowCacheTest#testRowCacheCleanup}} fails at the 
last assertion - all other unit tests seem to work
* Documentation in cassandra.yaml for row_cache_provider could be a bit more 
verbose - just some abstract about the characteristics and limitation (e.g. 
Offheap does only work on Linux + OSX) of both implementations
* IMO it would be fine to have a general unit test for 
{{com.lruc.api.LRUCache}} in C* code, too
* Please add an adopted copy of {{RowCacheTest}} for OffheapCacheProvider
* unit tests using OffheapCacheProvider must not start on Windows builds - 
please add an assertion in OffHeapCacheProvider to assert that it runs on Linux 
or OSX

Sorry for the late reply

> Serializing Row cache alternative (Fully off heap)
> --
>
> Key: CASSANDRA-7438
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7438
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
> Environment: Linux
>Reporter: Vijay
>Assignee: Vijay
>  Labels: performance
> Fix For: 3.0
>
> Attachments: 0001-CASSANDRA-7438.patch
>
>
> Currently SerializingCache is partially off heap, keys are still stored in 
> JVM heap as BB, 
> * There is a higher GC costs for a reasonably big cache.
> * Some users have used the row cache efficiently in production for better 
> results, but this requires careful tunning.
> * Overhead in Memory for the cache entries are relatively high.
> So the proposal for this ticket is to move the LRU cache logic completely off 
> heap and use JNI to interact with cache. We might want to ensure that the new 
> implementation match the existing API's (ICache), and the implementation 
> needs to have safe memory access, low overhead in memory and less memcpy's 
> (As much as possible).
> We might also want to make this cache configurable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


git commit: Accept dollar-quoted strings in CQL

2014-09-23 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/trunk 1ecb70165 -> 6618bd89d


Accept dollar-quoted strings in CQL

Patch by Robert Stupp; reviewed by Tyler Hobbs for CASSANDRA-7769


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6618bd89
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6618bd89
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6618bd89

Branch: refs/heads/trunk
Commit: 6618bd89d3d51b06a6c12ec947cce089807a4190
Parents: 1ecb701
Author: Robert Stupp 
Authored: Tue Sep 23 15:19:13 2014 -0500
Committer: Tyler Hobbs 
Committed: Tue Sep 23 15:19:13 2014 -0500

--
 CHANGES.txt |  1 +
 pylib/cqlshlib/cql3handling.py  |  5 +-
 pylib/cqlshlib/cqlhandling.py   |  2 +-
 src/java/org/apache/cassandra/cql3/Cql.g| 24 +--
 .../apache/cassandra/cql3/ErrorCollector.java   |  6 +-
 .../org/apache/cassandra/cql3/CQLTester.java| 11 +--
 .../org/apache/cassandra/cql3/PgStringTest.java | 76 
 test/unit/org/apache/cassandra/cql3/UFTest.java | 20 ++
 8 files changed, 128 insertions(+), 17 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6618bd89/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index e36ef8f..267c4c2 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0
+ * Accept dollar quoted strings in CQL (CASSANDRA-7769)
  * Make assassinate a first class command (CASSANDRA-7935)
  * Support IN clause on any clustering column (CASSANDRA-4762)
  * Improve compaction logging (CASSANDRA-7818)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6618bd89/pylib/cqlshlib/cql3handling.py
--
diff --git a/pylib/cqlshlib/cql3handling.py b/pylib/cqlshlib/cql3handling.py
index 3425ce2..69fc277 100644
--- a/pylib/cqlshlib/cql3handling.py
+++ b/pylib/cqlshlib/cql3handling.py
@@ -149,7 +149,10 @@ syntax_rules = r'''
 
 JUNK ::= /([ \t\r\f\v]+|(--|[/][/])[^\n\r]*([\n\r]|$)|[/][*].*?[*][/])/ ;
 
-<stringLiteral> ::= /'([^']|'')*'/ ;
+<stringLiteral> ::= <quotedStringLiteral>
+                  | <pgStringLiteral> ;
+<quotedStringLiteral> ::= /'([^']|'')*'/ ;
+<pgStringLiteral> ::= /\$\$.*\$\$/;
 <quotedName> ::=    /"([^"]|"")*"/ ;
 <float> ::=         /-?[0-9]+\.[0-9]+/ ;
 <uuid> ::=          /[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}/ ;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6618bd89/pylib/cqlshlib/cqlhandling.py
--
diff --git a/pylib/cqlshlib/cqlhandling.py b/pylib/cqlshlib/cqlhandling.py
index fc3dc20..00ba736 100644
--- a/pylib/cqlshlib/cqlhandling.py
+++ b/pylib/cqlshlib/cqlhandling.py
@@ -302,7 +302,7 @@ class CqlParsingRuleSet(pylexotron.ParsingRuleSet):
 if tok[0] == 'unclosedName':
 # strip one quote
 return tok[1][1:].replace('""', '"')
-if tok[0] == 'stringLiteral':
+if tok[0] == 'quotedStringLiteral':
 # strip quotes
 return tok[1][1:-1].replace("''", "'")
 if tok[0] == 'unclosedString':

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6618bd89/src/java/org/apache/cassandra/cql3/Cql.g
--
diff --git a/src/java/org/apache/cassandra/cql3/Cql.g 
b/src/java/org/apache/cassandra/cql3/Cql.g
index 8c40885..e4bfd32 100644
--- a/src/java/org/apache/cassandra/cql3/Cql.g
+++ b/src/java/org/apache/cassandra/cql3/Cql.g
@@ -517,7 +517,6 @@ createFunctionStatement returns [CreateFunctionStatement 
expr]
   ( body = STRING_LITERAL
 { bodyOrClassName = $body.text; }
   )
-  /* TODO placeholder for pg-style function body */
 )
   )
   )
@@ -1420,9 +1419,26 @@ fragment Y: ('y'|'Y');
 fragment Z: ('z'|'Z');
 
 STRING_LITERAL
-@init{ StringBuilder b = new StringBuilder(); }
-@after{ setText(b.toString()); }
-: '\'' (c=~('\'') { b.appendCodePoint(c);} | '\'' '\'' { 
b.appendCodePoint('\''); })* '\''
+@init{
+StringBuilder txt = new StringBuilder(); // temporary to build 
pg-style-string
+}
+@after{ setText(txt.toString()); }
+:
+  /* pg-style string literal */
+  (
+'\$' '\$'
+( /* collect all input until '$$' is reached again */
+  {  (input.size() - input.index() > 1)
+   && !"$$".equals(input.substring(input.index(), input.index() + 
1)) }?
+ => c=. { txt.appendCodePoint(c); }
+)*
+'\$' '\$'
+  )
+  |
+  /* conventional quoted string literal */
+  (
+'\'' (c=~('\'') { txt.appendCodePoint(c);} | '\'' '\'' { 
txt.appendCodePoint('\''); })* '\''
+  )
 ;
 
 QUOTED_NAME

http://git-wip
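
For illustration (keyspace, table and values are made up, and this is not taken from 
the PgStringTest added by the patch), the two literal forms that the grammar above now 
accepts are equivalent; the $$ form avoids doubling embedded single quotes:

{code}
public class DollarQuotedExample
{
    public static void main(String[] args)
    {
        // Classic literal: embedded single quotes must be doubled.
        String classic = "INSERT INTO ks.t (k, v) VALUES (1, 'O''Reilly''s data')";
        // Dollar-quoted (pg-style) literal: the body is taken verbatim until the closing $$.
        String pgStyle = "INSERT INTO ks.t (k, v) VALUES (1, $$O'Reilly's data$$)";
        System.out.println(classic);
        System.out.println(pgStyle);
    }
}
{code}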

[jira] [Updated] (CASSANDRA-7995) sstablerepairedset should take more that one sstable as an argument

2014-09-23 Thread Nick Bailey (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Bailey updated CASSANDRA-7995:
---
Description: 
Given that a c* node can have a number of sstables in the 10s (100s?) of 
thousands of sstables on it, sstablerepairedset should be taking a list of 
sstables to mark as repaired rather than a single sstable.

Running any command 10s of thousands of times isn't really good let alone one 
that spins up a jvm.

  was:
Given that a c* node can a number of sstables in the 10s (100s?) of thousands 
of sstables on it, sstablerepairedset should be taking a list of sstables to 
mark as repaired rather than a single sstable.

Running any command 10s of thousands of times isn't really good let alone one 
that spins up a jvm.


> sstablerepairedset should take more that one sstable as an argument
> ---
>
> Key: CASSANDRA-7995
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7995
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Nick Bailey
>  Labels: lhf
> Fix For: 2.1.1
>
>
> Given that a c* node can have a number of sstables in the 10s (100s?) of 
> thousands of sstables on it, sstablerepairedset should be taking a list of 
> sstables to mark as repaired rather than a single sstable.
> Running any command 10s of thousands of times isn't really good let alone one 
> that spins up a jvm.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-7458) functional indexes

2014-09-23 Thread Mikhail Stepura (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Stepura reassigned CASSANDRA-7458:
--

Assignee: Mikhail Stepura

> functional indexes
> --
>
> Key: CASSANDRA-7458
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7458
> Project: Cassandra
>  Issue Type: New Feature
>  Components: API, Core
>Reporter: Jonathan Ellis
>Assignee: Mikhail Stepura
> Fix For: 3.0
>
>
> Indexing information derived from the row can be powerful.  For example, 
> using the hypothetical {{extract_date}} function,
> {code}
> create table ticks (
> symbol text,
> ticked_at datetime,
> price int,
> tags set<text>,
> PRIMARY KEY (symbol, ticked_at)
> );
> CREATE INDEX ticks_by_day ON ticks(extract_date(ticked_at));
> {code}
> http://www.postgresql.org/docs/9.3/static/indexes-expressional.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7924) Optimization of Java UDFs

2014-09-23 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-7924:

Attachment: 7924v3.txt

patch v3 with fixes for the review comments - branch updated, too

> Optimization of Java UDFs
> -
>
> Key: CASSANDRA-7924
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7924
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>  Labels: docs, lhf, udf
> Fix For: 3.0
>
> Attachments: 7924.txt, 7924v2.txt, 7924v3.txt
>
>
> Refactor 'java' UDFs to optimize invocation. Goal is to remove reflection 
> code. Implementation uses javassist to generate an instance of {{Function}} 
> that can be directly used.
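
A rough sketch of the javassist technique described above (hedged: the generated class 
here implements java.util.function.Function rather than Cassandra's own Function 
interface, and all names and bodies are invented for illustration; the point is that 
the UDF body is compiled once into a real method, so later calls are plain virtual 
calls instead of reflective ones):

{code}
import java.util.function.Function;

import javassist.ClassPool;
import javassist.CtClass;
import javassist.CtNewMethod;

public class JavassistUdfSketch
{
    @SuppressWarnings("unchecked")
    static Function<Object, Object> compile(String javaBody) throws Exception
    {
        ClassPool pool = ClassPool.getDefault();
        CtClass generated = pool.makeClass("GeneratedUdf" + System.nanoTime());
        generated.addInterface(pool.get("java.util.function.Function"));
        // Compile the body into a concrete apply() method on the generated class.
        generated.addMethod(CtNewMethod.make(
            "public Object apply(Object input) { " + javaBody + " }", generated));
        return (Function<Object, Object>) generated.toClass().newInstance();
    }

    public static void main(String[] args) throws Exception
    {
        Function<Object, Object> doubler =
            compile("return Integer.valueOf(((Integer) input).intValue() * 2);");
        System.out.println(doubler.apply(21)); // prints 42
    }
}
{code}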



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7800) Incomplete results when selecting a particular column in the opposite order of my clustering

2014-09-23 Thread Cody Rank (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14145457#comment-14145457
 ] 

Cody Rank commented on CASSANDRA-7800:
--

Although it doesn't show up in the output of describe table (which may be 
another bug), tenantGuid is static.

> Incomplete results when selecting a particular column in the opposite order 
> of my clustering
> 
>
> Key: CASSANDRA-7800
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7800
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: cqlsh 4.1.1 | Cassandra 2.0.8 | CQL spec 3.1.1 | Thrift 
> protocol 19.39.0
>Reporter: Cody Rank
> Attachments: dcm.txt
>
>
> When I run the following query, I get 96 rows back:
> {code}
> SELECT * FROM "DeviceCounterMonth" WHERE "counterGuid" = 
> 3ae09592-91b3-568f-93d0-d7a77d46d2d7 AND "deviceGuid" = 
> ae2fc84d-b85b-4fcf-9881-71af4cc5419d AND "startOfMonthLocal" = 
> '2014-08-01T00:00:00+00:00' ORDER BY "sampleBucketTimeLocal" DESC;
> {code}
> However, If I change DESC to ASC, I only get back a single row. In that row, 
> {{sampleBucketTimeLocal}} is {{null}}, which is not true of any of the rows 
> returned by the first query, and should be impossible since it's part of the 
> primary key.
> Further, if I select specific columns (instead of *), as long as I leave out 
> tenantGuid, the query returns the same expected 96 rows regardless of whether 
> I use DESC or ASC.
> I haven't been able to create a minimal repro, so I'm attaching a dump of the 
> table. The schema is as follows:
> {code}
> CREATE TABLE "DeviceCounterMonth" (
>   "counterGuid" uuid,
>   "deviceGuid" uuid,
>   "startOfMonthLocal" text,
>   "sampleBucketTimeLocal" text,
>   max float,
>   mean float,
>   min float,
>   samples bigint,
>   "stdDevs" float,
>   "tenantGuid" uuid,
>   PRIMARY KEY (("counterGuid", "deviceGuid", "startOfMonthLocal"), 
> "sampleBucketTimeLocal")
> ) WITH CLUSTERING ORDER BY ("sampleBucketTimeLocal" DESC) AND
>   bloom_filter_fp_chance=0.01 AND
>   caching='KEYS_ONLY' AND
>   comment='' AND
>   dclocal_read_repair_chance=0.00 AND
>   gc_grace_seconds=864000 AND
>   index_interval=128 AND
>   read_repair_chance=0.10 AND
>   populate_io_cache_on_flush='false' AND
>   default_time_to_live=0 AND
>   speculative_retry='99.0PERCENTILE' AND
>   memtable_flush_period_in_ms=0 AND
>   compaction={'class': 'SizeTieredCompactionStrategy'} AND
>   compression={'sstable_compression': 'LZ4Compressor'};
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-7982) Batch multiple range requests that are going to the same replica

2014-09-23 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-7982.
---
Resolution: Won't Fix

>  Batch multiple range requests that are going to the same replica
> -
>
> Key: CASSANDRA-7982
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7982
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Jay Patel
>Assignee: Jay Patel
> Fix For: 3.0
>
> Attachments: output1.txt
>
>
> In case of VNode and secondary index query, coordinator sends multiple range 
> requests to the same replica. For example, in the attached tracing session 
> (output1.txt), coordinator(192.168.51.22) sends multiple requests to 
> 192.168.51.25. Why can't we batch all the requests to the same replica to 
> avoid multiple round trips? I think this is not the issue with non-vnode 
> cluster where each node has one big range (+ replica ranges), instead of many 
> small ranges.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7995) sstablerepairedset should take more that one sstable as an argument

2014-09-23 Thread Carl Yeksigian (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Yeksigian updated CASSANDRA-7995:
--
Attachment: 7995-2.1.txt

Added support for sstablerepairedset to take a list of sstables on the command 
line, or a file listing all of the sstables via a "-f" flag.
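
A hedged sketch of the argument handling described above (class and method names are 
invented; this is not the attached 7995-2.1.txt patch): either every argument is an 
sstable path, or "-f" names a file containing one sstable path per line.

{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class RepairedSetArgsSketch
{
    static List<String> sstablesFrom(String[] args) throws IOException
    {
        // "-f <file>" form: one sstable path per line in the given file.
        if (args.length == 2 && args[0].equals("-f"))
            return Files.readAllLines(Paths.get(args[1]));
        // Otherwise every argument is itself an sstable path.
        return new ArrayList<>(Arrays.asList(args));
    }

    public static void main(String[] args) throws IOException
    {
        System.out.println(sstablesFrom(new String[]{ "ks-tbl-ka-1-Data.db", "ks-tbl-ka-2-Data.db" }));
    }
}
{code}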

> sstablerepairedset should take more that one sstable as an argument
> ---
>
> Key: CASSANDRA-7995
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7995
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Nick Bailey
>  Labels: lhf
> Fix For: 2.1.1
>
> Attachments: 7995-2.1.txt
>
>
> Given that a c* node can have a number of sstables in the 10s (100s?) of 
> thousands of sstables on it, sstablerepairedset should be taking a list of 
> sstables to mark as repaired rather than a single sstable.
> Running any command 10s of thousands of times isn't really good let alone one 
> that spins up a jvm.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-7995) sstablerepairedset should take more that one sstable as an argument

2014-09-23 Thread Carl Yeksigian (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Yeksigian reassigned CASSANDRA-7995:
-

Assignee: Carl Yeksigian

> sstablerepairedset should take more that one sstable as an argument
> ---
>
> Key: CASSANDRA-7995
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7995
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Nick Bailey
>Assignee: Carl Yeksigian
>  Labels: lhf
> Fix For: 2.1.1
>
> Attachments: 7995-2.1.txt
>
>
> Given that a c* node can have a number of sstables in the 10s (100s?) of 
> thousands of sstables on it, sstablerepairedset should be taking a list of 
> sstables to mark as repaired rather than a single sstable.
> Running any command 10s of thousands of times isn't really good let alone one 
> that spins up a jvm.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-7996) Cassandra C# driver errors on virtual properties when using CreateIfNotExists method

2014-09-23 Thread Thomas Atwood (JIRA)
Thomas Atwood created CASSANDRA-7996:


 Summary: Cassandra C# driver errors on virtual properties when 
using CreateIfNotExists method
 Key: CASSANDRA-7996
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7996
 Project: Cassandra
  Issue Type: Improvement
  Components: Drivers (now out of tree)
Reporter: Thomas Atwood
 Fix For: 2.1.0


When using the Cassandra C# driver, I receive the error "An unhandled exception 
of type 'Cassandra.SyntxError' occurred in Cassandra.dll" when attempting to 
create a table with an overridden property using the Linq functionality of 
the driver. If the property is not overridden, the driver creates the table 
without any issues.

Example: a concrete object inherits from an abstract object where the Id field is 
virtual on the abstract object. The reason to override this field would be to 
apply certain regex validation of the format depending on the derived concrete 
object (Id will still be unique across all objects that inherit from the 
abstract object).

Abstract object:
using System;
using System.ComponentModel;
using Cassandra.Data.Linq;

namespace TestDatastaxCsDriver.Abstract
{
[AllowFiltering]
[Serializable]
public class AbstractEntity: INotifyPropertyChanged
{
private string _id;
private string _name;
private string _insertuser;
private DateTime _insertimestamp;
private DateTime _modifiedtimestamp;

public event PropertyChangedEventHandler PropertyChanged;
public AbstractEntity(string id, string name, string insertuser)
{
Id = _id;
Name = _name;
InsertUser = _insertuser;
InsertTimestamp = DateTime.Now;
ModifiedTimestamp = DateTime.Now;
}

[PartitionKey]
[Column("id")]
public virtual string Id
{
get { return _id; }
set
{
if (value != _id)
{
_id = value;
NotifyPropertyChanged("Id");
}
}
}

[Column("name")]
public string Name
{
get { return _name; }
set
{
if (value != _name)
{
_name = value;
NotifyPropertyChanged("Name");
}
}
}

[Column("insertuser")]
public string InsertUser
{
get { return _insertuser; }
set
{
if (value != _insertuser)
{
_insertuser = value;
NotifyPropertyChanged("InsertUser");
}
}
}

[Column("inserttimestamp")]
public DateTime InsertTimestamp
{
get { return _insertimestamp; }
set
{
if (value != _insertimestamp)
{
_insertimestamp = value;
NotifyPropertyChanged("InsertTimestamp");
}
}
}

[Column("modifiedtimestamp")]
public DateTime ModifiedTimestamp
{
get { return _modifiedtimestamp; }
set
{
if (value != _modifiedtimestamp)
{
_modifiedtimestamp = value;
NotifyPropertyChanged("ModifiedTimestamp");
}
}
}

private void NotifyPropertyChanged(String propertyName = "")
{
if (PropertyChanged != null)
{
PropertyChanged(this, new 
PropertyChangedEventArgs(propertyName));
ModifiedTimestamp = DateTime.Now;
}
}
}
}

Concrete object:
using System.ComponentModel.DataAnnotations;
using Cassandra.Data.Linq;
using TestDatastaxCsDriver.Abstract;

namespace TestDatastaxCsDriver.Concrete
{
[Table("issuer")]
public class Issuer : AbstractEntity
{
public Issuer(string id, string name, string insertuser) : base(id, 
name, insertuser)
{
}

//Cassandra C# driver chokes on this. No issues if the property is not overridden.
//Please note I also tried adding a column attribute to see if it fixed the
//problem and it did not.

[MaxLength(3,ErrorMessage = "Id cannot be longer than 3 characters.")]
public override string Id
{
get
{
return base.Id;
}
set
{
base.Id = value;
}
}
}
}

Program.cs to test:
using Cassandra;
using Cassandra.Data.Linq;
using TestDatastaxCsDriver.Concrete;

namespace TestDatastaxCsDriver
{
class Program
{
static void Main(string[] args)
{
Cluster c

[jira] [Resolved] (CASSANDRA-7996) Cassandra C# driver errors on virtual properties when using CreateIfNotExists method

2014-09-23 Thread Carl Yeksigian (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Yeksigian resolved CASSANDRA-7996.
---
   Resolution: Invalid
Fix Version/s: (was: 2.1.0)

C# driver issues should be reported to the [DataStax C# 
Jira|https://datastax-oss.atlassian.net/browse/CSHARP].

> Cassandra C# driver errors on virtual properties when using CreateIfNotExists 
> method
> 
>
> Key: CASSANDRA-7996
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7996
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Drivers (now out of tree)
>Reporter: Thomas Atwood
>
> When using the Cassandra C# driver, I receive the error "An unhandled 
> exception of type 'Cassandra.SyntxError' occurred in Cassandra.dll when 
> attempting to create a table with an overridden property using the Linq 
> functionality with the driver.  If the property is not overriden, the driver 
> creates the table without any issues.
> Example:  concrete object inherits from abstract object where the Id field is 
> virtual on the abstract object.  Reason to override this field would be to 
> apply certain regex validation for format depending on the derived concrete 
> object (Id will still be unique across all objects that inherit from the 
> abstract object).
> Abstract object:
> using System;
> using System.ComponentModel;
> using Cassandra.Data.Linq;
> namespace TestDatastaxCsDriver.Abstract
> {
> [AllowFiltering]
> [Serializable]
> public class AbstractEntity: INotifyPropertyChanged
> {
> private string _id;
> private string _name;
> private string _insertuser;
> private DateTime _insertimestamp;
> private DateTime _modifiedtimestamp;
> public event PropertyChangedEventHandler PropertyChanged;
> public AbstractEntity(string id, string name, string insertuser)
> {
> Id = _id;
> Name = _name;
> InsertUser = _insertuser;
> InsertTimestamp = DateTime.Now;
> ModifiedTimestamp = DateTime.Now;
> }
> [PartitionKey]
> [Column("id")]
> public virtual string Id
> {
> get { return _id; }
> set
> {
> if (value != _id)
> {
> _id = value;
> NotifyPropertyChanged("Id");
> }
> }
> }
> [Column("name")]
> public string Name
> {
> get { return _name; }
> set
> {
> if (value != _name)
> {
> _name = value;
> NotifyPropertyChanged("Name");
> }
> }
> }
> [Column("insertuser")]
> public string InsertUser
> {
> get { return _insertuser; }
> set
> {
> if (value != _insertuser)
> {
> _insertuser = value;
> NotifyPropertyChanged("InsertUser");
> }
> }
> }
> [Column("inserttimestamp")]
> public DateTime InsertTimestamp
> {
> get { return _insertimestamp; }
> set
> {
> if (value != _insertimestamp)
> {
> _insertimestamp = value;
> NotifyPropertyChanged("InsertTimestamp");
> }
> }
> }
> [Column("modifiedtimestamp")]
> public DateTime ModifiedTimestamp
> {
> get { return _modifiedtimestamp; }
> set
> {
> if (value != _modifiedtimestamp)
> {
> _modifiedtimestamp = value;
> NotifyPropertyChanged("ModifiedTimestamp");
> }
> }
> }
> private void NotifyPropertyChanged(String propertyName = "")
> {
> if (PropertyChanged != null)
> {
> PropertyChanged(this, new 
> PropertyChangedEventArgs(propertyName));
> ModifiedTimestamp = DateTime.Now;
> }
> }
> }
> }
> Concrete object:
> using System.ComponentModel.DataAnnotations;
> using Cassandra.Data.Linq;
> using TestDatastaxCsDriver.Abstract;
> namespace TestDatastaxCsDriver.Concrete
> {
> [Table("issuer")]
> public class Issuer : AbstractEntity
> {
> public Issuer(string id, string name, string insertuser) : base(id, 
> name, insertuser)
> {
> }
> //Cassandra C# driver chokes on this.  No issues if the property is not 
> overriden.  Please note I also tried ad

[jira] [Updated] (CASSANDRA-6075) The token function should allow column identifiers in the correct order only

2014-09-23 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-6075:
-
Attachment: 6075-fix-v2.txt

> The token function should allow column identifiers in the correct order only
> 
>
> Key: CASSANDRA-6075
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6075
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 1.2.9
>Reporter: Michaël Figuière
>Assignee: Benjamin Lerer
>Priority: Minor
>  Labels: cql
> Fix For: 2.0.11, 2.1.1
>
> Attachments: 6075-fix-v2.txt, CASSANDRA-2.0-6075-PART2.txt, 
> CASSANDRA-2.1-6075-PART2.txt, CASSANDRA-2.1-6075.txt, CASSANDRA-6075.txt
>
>
> Given the following table:
> {code}
> CREATE TABLE t1 (a int, b text, PRIMARY KEY ((a, b)));
> {code}
> The following request returns an error in cqlsh as literal arguments order is 
> incorrect:
> {code}
> SELECT * FROM t1 WHERE token(a, b) > token('s', 1);
> Bad Request: Type error: 's' cannot be passed as argument 0 of function token 
> of type int
> {code}
> But surprisingly if we provide the column identifier arguments in the wrong 
> order no error is returned:
> {code}
> SELECT * FROM t1 WHERE token(a, b) > token(1, 'a'); // correct order is valid
> SELECT * FROM t1 WHERE token(b, a) > token(1, 'a'); // incorrect order is 
> valid as well
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


git commit: Fix CASSANDRA-6075

2014-09-23 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 62db20a77 -> b1166c099


Fix CASSANDRA-6075

patch by Aleksey Yeschenko and Benjamin Lerer


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b1166c09
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b1166c09
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b1166c09

Branch: refs/heads/cassandra-2.0
Commit: b1166c09983b1678cbc4b241f1da860930c571a5
Parents: 62db20a
Author: Aleksey Yeschenko 
Authored: Tue Sep 23 18:51:58 2014 -0700
Committer: Aleksey Yeschenko 
Committed: Tue Sep 23 18:53:27 2014 -0700

--
 .../cql3/statements/SelectStatement.java|  5 ++--
 .../cql3/SelectWithTokenFunctionTest.java   | 30 
 2 files changed, 33 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b1166c09/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
index 363e3d3..aadd0bd 100644
--- a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
@@ -24,6 +24,7 @@ import com.google.common.base.Joiner;
 import com.google.common.base.Objects;
 import com.google.common.base.Predicate;
 import com.google.common.collect.Iterables;
+import com.google.common.collect.Iterators;
 
 import org.github.jamm.MemoryMeter;
 
@@ -1815,7 +1816,7 @@ public class SelectStatement implements CQLStatement, 
MeasurableForPreparedCache
 previous = cname;
 }
 
-if (stmt.onToken && cfDef.partitionKeyCount() > 0)
+if (stmt.onToken)
 checkTokenFunctionArgumentsOrder(cfDef);
 }
 
@@ -1827,7 +1828,7 @@ public class SelectStatement implements CQLStatement, 
MeasurableForPreparedCache
  */
 private void checkTokenFunctionArgumentsOrder(CFDefinition cfDef) 
throws InvalidRequestException
 {
-Iterator<CFDefinition.Name> iter = cfDef.partitionKeys().iterator();
+Iterator<CFDefinition.Name> iter = Iterators.cycle(cfDef.partitionKeys());
 for (Relation relation : whereClause)
 {
 SingleColumnRelation singleColumnRelation = 
(SingleColumnRelation) relation;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b1166c09/test/unit/org/apache/cassandra/cql3/SelectWithTokenFunctionTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/cql3/SelectWithTokenFunctionTest.java 
b/test/unit/org/apache/cassandra/cql3/SelectWithTokenFunctionTest.java
index f089a5b..9199862 100644
--- a/test/unit/org/apache/cassandra/cql3/SelectWithTokenFunctionTest.java
+++ b/test/unit/org/apache/cassandra/cql3/SelectWithTokenFunctionTest.java
@@ -20,6 +20,7 @@ package org.apache.cassandra.cql3;
 import org.apache.cassandra.SchemaLoader;
 import org.apache.cassandra.db.ConsistencyLevel;
 import org.apache.cassandra.exceptions.InvalidRequestException;
+import org.apache.cassandra.exceptions.SyntaxException;
 import org.apache.cassandra.gms.Gossiper;
 import org.apache.cassandra.service.ClientState;
 import org.junit.AfterClass;
@@ -90,6 +91,8 @@ public class SelectWithTokenFunctionTest
 {
 UntypedResultSet results = execute("SELECT * FROM 
%s.single_partition WHERE token(a) >= token(0)");
 assertEquals(1, results.size());
+results = execute("SELECT * FROM %s.single_partition WHERE 
token(a) >= token(0) and token(a) < token(1)");
+assertEquals(1, results.size());
 }
 finally
 {
@@ -104,6 +107,24 @@ public class SelectWithTokenFunctionTest
 }
 
 @Test(expected = InvalidRequestException.class)
+public void testTokenFunctionWithTwoGreaterThan() throws Throwable
+{
+execute("SELECT * FROM %s.single_clustering WHERE token(a) >= token(0) 
and token(a) >= token(1)");
+}
+
+@Test(expected = InvalidRequestException.class)
+public void testTokenFunctionWithGreaterThanAndEquals() throws Throwable
+{
+execute("SELECT * FROM %s.single_clustering WHERE token(a) >= token(0) 
and token(a) = token(1)");
+}
+
+@Test(expected = SyntaxException.class)
+public void testTokenFunctionWithGreaterThanAndIn() throws Throwable
+{
+execute("SELECT * FROM %s.single_clustering WHERE token(a) >= token(0) 
and token(a) in (token(1))");
+}
+
+@Test(expected = InvalidRequestException.class)
 public void testTokenFunctionWithPartitionKeyAndClusteringKeyArguments() 
throws Throwable
 {
 

[3/3] git commit: Merge branch 'cassandra-2.1' into trunk

2014-09-23 Thread aleksey
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/dffdae0c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/dffdae0c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/dffdae0c

Branch: refs/heads/trunk
Commit: dffdae0c94352d99ebc147dae6282670315e8fd2
Parents: 6618bd8 c65ef9a
Author: Aleksey Yeschenko 
Authored: Tue Sep 23 18:59:18 2014 -0700
Committer: Aleksey Yeschenko 
Committed: Tue Sep 23 18:59:18 2014 -0700

--
 .../cassandra/cql3/statements/SelectStatement.java   |  4 ++--
 .../cassandra/cql3/SelectWithTokenFunctionTest.java  | 11 +++
 2 files changed, 13 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/dffdae0c/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
--



[1/2] git commit: Fix CASSANDRA-6075

2014-09-23 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 c2c9835f7 -> c65ef9af6


Fix CASSANDRA-6075

patch by Aleksey Yeschenko and Benjamin Lerer


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b1166c09
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b1166c09
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b1166c09

Branch: refs/heads/cassandra-2.1
Commit: b1166c09983b1678cbc4b241f1da860930c571a5
Parents: 62db20a
Author: Aleksey Yeschenko 
Authored: Tue Sep 23 18:51:58 2014 -0700
Committer: Aleksey Yeschenko 
Committed: Tue Sep 23 18:53:27 2014 -0700

--
 .../cql3/statements/SelectStatement.java|  5 ++--
 .../cql3/SelectWithTokenFunctionTest.java   | 30 
 2 files changed, 33 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b1166c09/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
index 363e3d3..aadd0bd 100644
--- a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
@@ -24,6 +24,7 @@ import com.google.common.base.Joiner;
 import com.google.common.base.Objects;
 import com.google.common.base.Predicate;
 import com.google.common.collect.Iterables;
+import com.google.common.collect.Iterators;
 
 import org.github.jamm.MemoryMeter;
 
@@ -1815,7 +1816,7 @@ public class SelectStatement implements CQLStatement, 
MeasurableForPreparedCache
 previous = cname;
 }
 
-if (stmt.onToken && cfDef.partitionKeyCount() > 0)
+if (stmt.onToken)
 checkTokenFunctionArgumentsOrder(cfDef);
 }
 
@@ -1827,7 +1828,7 @@ public class SelectStatement implements CQLStatement, 
MeasurableForPreparedCache
  */
 private void checkTokenFunctionArgumentsOrder(CFDefinition cfDef) 
throws InvalidRequestException
 {
-Iterator<CFDefinition.Name> iter = cfDef.partitionKeys().iterator();
+Iterator<CFDefinition.Name> iter = Iterators.cycle(cfDef.partitionKeys());
 for (Relation relation : whereClause)
 {
 SingleColumnRelation singleColumnRelation = 
(SingleColumnRelation) relation;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b1166c09/test/unit/org/apache/cassandra/cql3/SelectWithTokenFunctionTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/cql3/SelectWithTokenFunctionTest.java 
b/test/unit/org/apache/cassandra/cql3/SelectWithTokenFunctionTest.java
index f089a5b..9199862 100644
--- a/test/unit/org/apache/cassandra/cql3/SelectWithTokenFunctionTest.java
+++ b/test/unit/org/apache/cassandra/cql3/SelectWithTokenFunctionTest.java
@@ -20,6 +20,7 @@ package org.apache.cassandra.cql3;
 import org.apache.cassandra.SchemaLoader;
 import org.apache.cassandra.db.ConsistencyLevel;
 import org.apache.cassandra.exceptions.InvalidRequestException;
+import org.apache.cassandra.exceptions.SyntaxException;
 import org.apache.cassandra.gms.Gossiper;
 import org.apache.cassandra.service.ClientState;
 import org.junit.AfterClass;
@@ -90,6 +91,8 @@ public class SelectWithTokenFunctionTest
 {
 UntypedResultSet results = execute("SELECT * FROM 
%s.single_partition WHERE token(a) >= token(0)");
 assertEquals(1, results.size());
+results = execute("SELECT * FROM %s.single_partition WHERE 
token(a) >= token(0) and token(a) < token(1)");
+assertEquals(1, results.size());
 }
 finally
 {
@@ -104,6 +107,24 @@ public class SelectWithTokenFunctionTest
 }
 
 @Test(expected = InvalidRequestException.class)
+public void testTokenFunctionWithTwoGreaterThan() throws Throwable
+{
+execute("SELECT * FROM %s.single_clustering WHERE token(a) >= token(0) 
and token(a) >= token(1)");
+}
+
+@Test(expected = InvalidRequestException.class)
+public void testTokenFunctionWithGreaterThanAndEquals() throws Throwable
+{
+execute("SELECT * FROM %s.single_clustering WHERE token(a) >= token(0) 
and token(a) = token(1)");
+}
+
+@Test(expected = SyntaxException.class)
+public void testTokenFunctionWithGreaterThanAndIn() throws Throwable
+{
+execute("SELECT * FROM %s.single_clustering WHERE token(a) >= token(0) 
and token(a) in (token(1))");
+}
+
+@Test(expected = InvalidRequestException.class)
 public void testTokenFunctionWithPartitionKeyAndClusteringKeyArguments() 
throws Throwable
 {
 

[1/3] git commit: Fix CASSANDRA-6075

2014-09-23 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/trunk 6618bd89d -> dffdae0c9


Fix CASSANDRA-6075

patch by Aleksey Yeschenko and Benjamin Lerer


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b1166c09
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b1166c09
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b1166c09

Branch: refs/heads/trunk
Commit: b1166c09983b1678cbc4b241f1da860930c571a5
Parents: 62db20a
Author: Aleksey Yeschenko 
Authored: Tue Sep 23 18:51:58 2014 -0700
Committer: Aleksey Yeschenko 
Committed: Tue Sep 23 18:53:27 2014 -0700

--
 .../cql3/statements/SelectStatement.java|  5 ++--
 .../cql3/SelectWithTokenFunctionTest.java   | 30 
 2 files changed, 33 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b1166c09/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
index 363e3d3..aadd0bd 100644
--- a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
@@ -24,6 +24,7 @@ import com.google.common.base.Joiner;
 import com.google.common.base.Objects;
 import com.google.common.base.Predicate;
 import com.google.common.collect.Iterables;
+import com.google.common.collect.Iterators;
 
 import org.github.jamm.MemoryMeter;
 
@@ -1815,7 +1816,7 @@ public class SelectStatement implements CQLStatement, 
MeasurableForPreparedCache
 previous = cname;
 }
 
-if (stmt.onToken && cfDef.partitionKeyCount() > 0)
+if (stmt.onToken)
 checkTokenFunctionArgumentsOrder(cfDef);
 }
 
@@ -1827,7 +1828,7 @@ public class SelectStatement implements CQLStatement, 
MeasurableForPreparedCache
  */
 private void checkTokenFunctionArgumentsOrder(CFDefinition cfDef) 
throws InvalidRequestException
 {
-Iterator<CFDefinition.Name> iter = cfDef.partitionKeys().iterator();
+Iterator<CFDefinition.Name> iter = Iterators.cycle(cfDef.partitionKeys());
 for (Relation relation : whereClause)
 {
 SingleColumnRelation singleColumnRelation = 
(SingleColumnRelation) relation;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b1166c09/test/unit/org/apache/cassandra/cql3/SelectWithTokenFunctionTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/cql3/SelectWithTokenFunctionTest.java 
b/test/unit/org/apache/cassandra/cql3/SelectWithTokenFunctionTest.java
index f089a5b..9199862 100644
--- a/test/unit/org/apache/cassandra/cql3/SelectWithTokenFunctionTest.java
+++ b/test/unit/org/apache/cassandra/cql3/SelectWithTokenFunctionTest.java
@@ -20,6 +20,7 @@ package org.apache.cassandra.cql3;
 import org.apache.cassandra.SchemaLoader;
 import org.apache.cassandra.db.ConsistencyLevel;
 import org.apache.cassandra.exceptions.InvalidRequestException;
+import org.apache.cassandra.exceptions.SyntaxException;
 import org.apache.cassandra.gms.Gossiper;
 import org.apache.cassandra.service.ClientState;
 import org.junit.AfterClass;
@@ -90,6 +91,8 @@ public class SelectWithTokenFunctionTest
 {
 UntypedResultSet results = execute("SELECT * FROM 
%s.single_partition WHERE token(a) >= token(0)");
 assertEquals(1, results.size());
+results = execute("SELECT * FROM %s.single_partition WHERE 
token(a) >= token(0) and token(a) < token(1)");
+assertEquals(1, results.size());
 }
 finally
 {
@@ -104,6 +107,24 @@ public class SelectWithTokenFunctionTest
 }
 
 @Test(expected = InvalidRequestException.class)
+public void testTokenFunctionWithTwoGreaterThan() throws Throwable
+{
+execute("SELECT * FROM %s.single_clustering WHERE token(a) >= token(0) 
and token(a) >= token(1)");
+}
+
+@Test(expected = InvalidRequestException.class)
+public void testTokenFunctionWithGreaterThanAndEquals() throws Throwable
+{
+execute("SELECT * FROM %s.single_clustering WHERE token(a) >= token(0) 
and token(a) = token(1)");
+}
+
+@Test(expected = SyntaxException.class)
+public void testTokenFunctionWithGreaterThanAndIn() throws Throwable
+{
+execute("SELECT * FROM %s.single_clustering WHERE token(a) >= token(0) 
and token(a) in (token(1))");
+}
+
+@Test(expected = InvalidRequestException.class)
 public void testTokenFunctionWithPartitionKeyAndClusteringKeyArguments() 
throws Throwable
 {
 execute("SEL

[2/3] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-09-23 Thread aleksey
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
test/unit/org/apache/cassandra/cql3/SelectWithTokenFunctionTest.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c65ef9af
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c65ef9af
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c65ef9af

Branch: refs/heads/trunk
Commit: c65ef9af6529086ef1793d24b3f387012bf59091
Parents: c2c9835 b1166c0
Author: Aleksey Yeschenko 
Authored: Tue Sep 23 18:59:00 2014 -0700
Committer: Aleksey Yeschenko 
Committed: Tue Sep 23 18:59:00 2014 -0700

--
 .../cassandra/cql3/statements/SelectStatement.java   |  4 ++--
 .../cassandra/cql3/SelectWithTokenFunctionTest.java  | 11 +++
 2 files changed, 13 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c65ef9af/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
--
diff --cc src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
index 22c8468,aadd0bd..ccda356
--- a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
@@@ -1821,22 -1813,22 +1821,22 @@@ public class SelectStatement implement
  // index with filtering, we'll need to handle it though.
  throw new InvalidRequestException("Only EQ and IN 
relation are supported on the partition key (unless you use the token() 
function)");
  }
 -previous = cname;
 +previous = cdef;
  }
  
- if (stmt.onToken && cfm.partitionKeyColumns().size() > 0)
+ if (stmt.onToken)
 -checkTokenFunctionArgumentsOrder(cfDef);
 +checkTokenFunctionArgumentsOrder(cfm);
  }
  
  /**
   * Checks that the column identifiers used as argument for the token 
function have been specified in the
   * partition key order.
 - * @param cfDef the Column Family Definition
 + * @param cfm the Column Family MetaData
   * @throws InvalidRequestException if the arguments have not been 
provided in the proper order.
   */
 -private void checkTokenFunctionArgumentsOrder(CFDefinition cfDef) 
throws InvalidRequestException
 +private void checkTokenFunctionArgumentsOrder(CFMetaData cfm) throws 
InvalidRequestException
  {
- Iterator<ColumnDefinition> iter = cfm.partitionKeyColumns().iterator();
 -Iterator<CFDefinition.Name> iter = Iterators.cycle(cfDef.partitionKeys());
++Iterator<ColumnDefinition> iter = Iterators.cycle(cfm.partitionKeyColumns());
  for (Relation relation : whereClause)
  {
  SingleColumnRelation singleColumnRelation = 
(SingleColumnRelation) relation;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c65ef9af/test/unit/org/apache/cassandra/cql3/SelectWithTokenFunctionTest.java
--
diff --cc test/unit/org/apache/cassandra/cql3/SelectWithTokenFunctionTest.java
index 73a7209,9199862..6f9f5e2
--- a/test/unit/org/apache/cassandra/cql3/SelectWithTokenFunctionTest.java
+++ b/test/unit/org/apache/cassandra/cql3/SelectWithTokenFunctionTest.java
@@@ -17,39 -17,155 +17,50 @@@
   */
  package org.apache.cassandra.cql3;
  
 -import org.apache.cassandra.SchemaLoader;
 -import org.apache.cassandra.db.ConsistencyLevel;
 -import org.apache.cassandra.exceptions.InvalidRequestException;
 -import org.apache.cassandra.exceptions.SyntaxException;
 -import org.apache.cassandra.gms.Gossiper;
 -import org.apache.cassandra.service.ClientState;
 -import org.junit.AfterClass;
 -import org.junit.BeforeClass;
  import org.junit.Test;
 -import org.slf4j.Logger;
 -import org.slf4j.LoggerFactory;
  
 -import static org.apache.cassandra.cql3.QueryProcessor.process;
 -import static org.apache.cassandra.cql3.QueryProcessor.processInternal;
 -import static org.junit.Assert.assertEquals;
 -
 -public class SelectWithTokenFunctionTest
 +public class SelectWithTokenFunctionTest extends CQLTester
  {
 -private static final Logger logger = 
LoggerFactory.getLogger(SelectWithTokenFunctionTest.class);
 -static ClientState clientState;
 -static String keyspace = "token_function_test";
 -
 -@BeforeClass
 -public static void setUpClass() throws Throwable
 -{
 -SchemaLoader.loadSchema();
 -executeSchemaChange("CREATE KEYSPACE IF NOT EXISTS %s WITH 
replication = {'class': 'SimpleStrategy', 'replication_factor': '1'}");
 -executeSchemaChange("CREATE TABLE IF NOT EXISTS %s.si

[2/2] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-09-23 Thread aleksey
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
test/unit/org/apache/cassandra/cql3/SelectWithTokenFunctionTest.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c65ef9af
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c65ef9af
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c65ef9af

Branch: refs/heads/cassandra-2.1
Commit: c65ef9af6529086ef1793d24b3f387012bf59091
Parents: c2c9835 b1166c0
Author: Aleksey Yeschenko 
Authored: Tue Sep 23 18:59:00 2014 -0700
Committer: Aleksey Yeschenko 
Committed: Tue Sep 23 18:59:00 2014 -0700

--
 .../cassandra/cql3/statements/SelectStatement.java   |  4 ++--
 .../cassandra/cql3/SelectWithTokenFunctionTest.java  | 11 +++
 2 files changed, 13 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c65ef9af/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
--
diff --cc src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
index 22c8468,aadd0bd..ccda356
--- a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
@@@ -1821,22 -1813,22 +1821,22 @@@ public class SelectStatement implement
  // index with filtering, we'll need to handle it though.
  throw new InvalidRequestException("Only EQ and IN 
relation are supported on the partition key (unless you use the token() 
function)");
  }
 -previous = cname;
 +previous = cdef;
  }
  
- if (stmt.onToken && cfm.partitionKeyColumns().size() > 0)
+ if (stmt.onToken)
 -checkTokenFunctionArgumentsOrder(cfDef);
 +checkTokenFunctionArgumentsOrder(cfm);
  }
  
  /**
   * Checks that the column identifiers used as argument for the token 
function have been specified in the
   * partition key order.
 - * @param cfDef the Column Family Definition
 + * @param cfm the Column Family MetaData
   * @throws InvalidRequestException if the arguments have not been 
provided in the proper order.
   */
 -private void checkTokenFunctionArgumentsOrder(CFDefinition cfDef) 
throws InvalidRequestException
 +private void checkTokenFunctionArgumentsOrder(CFMetaData cfm) throws 
InvalidRequestException
  {
- Iterator<ColumnDefinition> iter = cfm.partitionKeyColumns().iterator();
 -Iterator<CFDefinition.Name> iter = Iterators.cycle(cfDef.partitionKeys());
++Iterator<ColumnDefinition> iter = Iterators.cycle(cfm.partitionKeyColumns());
  for (Relation relation : whereClause)
  {
  SingleColumnRelation singleColumnRelation = 
(SingleColumnRelation) relation;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c65ef9af/test/unit/org/apache/cassandra/cql3/SelectWithTokenFunctionTest.java
--
diff --cc test/unit/org/apache/cassandra/cql3/SelectWithTokenFunctionTest.java
index 73a7209,9199862..6f9f5e2
--- a/test/unit/org/apache/cassandra/cql3/SelectWithTokenFunctionTest.java
+++ b/test/unit/org/apache/cassandra/cql3/SelectWithTokenFunctionTest.java
@@@ -17,39 -17,155 +17,50 @@@
   */
  package org.apache.cassandra.cql3;
  
 -import org.apache.cassandra.SchemaLoader;
 -import org.apache.cassandra.db.ConsistencyLevel;
 -import org.apache.cassandra.exceptions.InvalidRequestException;
 -import org.apache.cassandra.exceptions.SyntaxException;
 -import org.apache.cassandra.gms.Gossiper;
 -import org.apache.cassandra.service.ClientState;
 -import org.junit.AfterClass;
 -import org.junit.BeforeClass;
  import org.junit.Test;
 -import org.slf4j.Logger;
 -import org.slf4j.LoggerFactory;
  
 -import static org.apache.cassandra.cql3.QueryProcessor.process;
 -import static org.apache.cassandra.cql3.QueryProcessor.processInternal;
 -import static org.junit.Assert.assertEquals;
 -
 -public class SelectWithTokenFunctionTest
 +public class SelectWithTokenFunctionTest extends CQLTester
  {
 -private static final Logger logger = 
LoggerFactory.getLogger(SelectWithTokenFunctionTest.class);
 -static ClientState clientState;
 -static String keyspace = "token_function_test";
 -
 -@BeforeClass
 -public static void setUpClass() throws Throwable
 -{
 -SchemaLoader.loadSchema();
 -executeSchemaChange("CREATE KEYSPACE IF NOT EXISTS %s WITH 
replication = {'class': 'SimpleStrategy', 'replication_factor': '1'}");
 -executeSchemaChange("CREATE TABLE IF NOT EXIS

[jira] [Comment Edited] (CASSANDRA-6075) The token function should allow column identifiers in the correct order only

2014-09-23 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14145777#comment-14145777
 ] 

Aleksey Yeschenko edited comment on CASSANDRA-6075 at 9/24/14 2:10 AM:
---

Fixed in 
https://github.com/apache/cassandra/commit/b1166c09983b1678cbc4b241f1da860930c571a5

Used Guava's Iterators#cycle() instead of the null/hasnext check, and removed 
the (if cfDef.partitionKeyCount() > 0) condition, which is always true.

Thanks.
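
To illustrate why cycling works (a standalone sketch with invented names, not the 
actual SelectStatement code): every token() relation must list the partition key 
columns in their declared order, and Iterators#cycle() simply restarts the expected 
sequence for each additional token() relation in the WHERE clause.

{code}
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

import com.google.common.collect.Iterators;

public class TokenArgumentOrderSketch
{
    static void checkOrder(List<String> partitionKeys, List<List<String>> tokenRelations)
    {
        // Cycling means the expected column sequence wraps around for the next relation,
        // with no null/hasNext bookkeeping.
        Iterator<String> expected = Iterators.cycle(partitionKeys);
        for (List<String> relationArgs : tokenRelations)
            for (String arg : relationArgs)
                if (!arg.equals(expected.next()))
                    throw new IllegalArgumentException("token() arguments must follow partition key order " + partitionKeys);
    }

    public static void main(String[] args)
    {
        List<String> pk = Arrays.asList("a", "b");
        checkOrder(pk, Arrays.asList(Arrays.asList("a", "b"), Arrays.asList("a", "b"))); // accepted
        try
        {
            checkOrder(pk, Arrays.asList(Arrays.asList("b", "a"))); // wrong order
        }
        catch (IllegalArgumentException expectedFailure)
        {
            System.out.println("rejected: " + expectedFailure.getMessage());
        }
    }
}
{code}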


was (Author: iamaleksey):
Fixed in 
https://github.com/apache/cassandra/commit/b1166c09983b1678cbc4b241f1da860930c571a5

Used Guava's Itrator#cycle() instead of the null/hasnext check, and removed the 
(if cfDef.partitionKeyCount() > 0) condition, which is always true.

Thanks.

> The token function should allow column identifiers in the correct order only
> 
>
> Key: CASSANDRA-6075
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6075
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 1.2.9
>Reporter: Michaël Figuière
>Assignee: Benjamin Lerer
>Priority: Minor
>  Labels: cql
> Fix For: 2.0.11, 2.1.1
>
> Attachments: 6075-fix-v2.txt, CASSANDRA-2.0-6075-PART2.txt, 
> CASSANDRA-2.1-6075-PART2.txt, CASSANDRA-2.1-6075.txt, CASSANDRA-6075.txt
>
>
> Given the following table:
> {code}
> CREATE TABLE t1 (a int, b text, PRIMARY KEY ((a, b)));
> {code}
> The following request returns an error in cqlsh as literal arguments order is 
> incorrect:
> {code}
> SELECT * FROM t1 WHERE token(a, b) > token('s', 1);
> Bad Request: Type error: 's' cannot be passed as argument 0 of function token 
> of type int
> {code}
> But surprisingly if we provide the column identifier arguments in the wrong 
> order no error is returned:
> {code}
> SELECT * FROM t1 WHERE token(a, b) > token(1, 'a'); // correct order is valid
> SELECT * FROM t1 WHERE token(b, a) > token(1, 'a'); // incorrect order is 
> valid as well
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)