[jira] [Commented] (CASSANDRA-12864) "commitlog_sync_batch_window_in_ms" parameter is not working correctly in 2.1, 2.2 and 3.9

2016-10-31 Thread Hiroyuki Yamada (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15624234#comment-15624234
 ] 

Hiroyuki Yamada commented on CASSANDRA-12864:
-

OK, thank you for pointing that out, Benjamin.
In that case, the documentation should be more accurate: it currently says the 
commit log waits commitlog_sync_batch_window_in_ms between syncs.

> To Apache Cassandra Community
http://cassandra.apache.org/doc/latest/configuration/cassandra_config_file.html?highlight=sync#commitlog-sync

> To Datastax
http://docs.datastax.com/en/cassandra/2.1/cassandra/configuration/configCassandra_yaml_r.html#reference_ds_qfg_n1r_1k__commitlog_sync
http://docs.datastax.com/en/cassandra/3.0/cassandra/configuration/configCassandra_yaml.html#configCassandra_yaml__commitlog_sync

Also, if this is the expected behavior, I think it rather misses the point of 
group commit, because the window size can't really be controlled and almost all 
mutations are committed right after they are issued.
So there is no way to trade latency against throughput.
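For context, the kind of group commit being described — writers blocking until a single sync covers the whole window's worth of mutations — can be sketched as follows. This is a toy model, not Cassandra's actual commit log code; all class and method names here are invented:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.LinkedBlockingQueue;

// Toy group-commit loop: writer threads enqueue mutations and block on a future;
// a single sync thread fsyncs at most once per window, acking all waiters at once.
public class GroupCommitSketch {
    private final BlockingQueue<CompletableFuture<Void>> waiters = new LinkedBlockingQueue<>();
    private final long windowMillis;

    public GroupCommitSketch(long windowMillis) {
        this.windowMillis = windowMillis;
    }

    // Called by writer threads: the returned future completes after the next group sync.
    public CompletableFuture<Void> submit() {
        CompletableFuture<Void> f = new CompletableFuture<>();
        waiters.add(f);
        return f;
    }

    // One iteration of the sync loop: wait out the window, then ack everything queued.
    // Returns how many writes were covered by this single (simulated) sync.
    public int syncOnce() {
        try {
            Thread.sleep(windowMillis); // the batch window trades per-write latency...
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        int acked = 0;
        CompletableFuture<Void> f;
        while ((f = waiters.poll()) != null) { // ...for fewer syncs covering more writes
            f.complete(null);
            acked++;
        }
        return acked;
    }
}
```

With this shape, a larger window means each sync covers more queued writes (higher throughput) at the cost of each write waiting longer for its ack.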



> "commitlog_sync_batch_window_in_ms" parameter is not working correctly in 
> 2.1, 2.2 and 3.9
> --
>
> Key: CASSANDRA-12864
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12864
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Hiroyuki Yamada
>
> "commitlog_sync_batch_window_in_ms" doesn't seem to be working, at least in 
> the latest releases of each line: 2.1.16, 2.2.8 and 3.9.
> Here is the way to reproduce the bug:
> 1. set the following parameters in cassandra.yaml
> * commitlog_sync: batch
> * commitlog_sync_batch_window_in_ms: 10000 (10s)
> 2. issue an insert from cqlsh
> 3. it returns immediately instead of waiting for up to 10 seconds.
> Please refer to the communication in the mailing list.
> http://www.mail-archive.com/user@cassandra.apache.org/msg49642.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12791) MessageIn logic to determine if the message is cross-node is wrong

2016-10-31 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15624015#comment-15624015
 ] 

Stefania commented on CASSANDRA-12791:
--

I was trying to preserve the behavior of CASSANDRA-9793. However, it is true 
that knowing if cross-node-timeout is enabled can be easily derived from yaml, 
and I hadn't noticed that CASSANDRA-10580 added the latency to the same log 
message. So I agree that it is better to have the number of dropped messages 
and latency match.

I've amended the log message in [this 
commit|https://github.com/stef1927/cassandra/commit/39168a3eb8e43815e4001521d2793d59c227f9ee].

CI still pending.

> MessageIn logic to determine if the message is cross-node is wrong
> --
>
> Key: CASSANDRA-12791
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12791
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sylvain Lebresne
>Assignee: Sylvain Lebresne
>Priority: Minor
>
> {{MessageIn}} has the following code to read the 'creation time' of the 
> message on the receiving side:
> {noformat}
> public static ConstructionTime readTimestamp(InetAddress from, DataInputPlus input, long timestamp) throws IOException
> {
>     // make sure to readInt, even if cross_node_to is not enabled
>     int partial = input.readInt();
>     long crossNodeTimestamp = (timestamp & 0xFFFFFFFF00000000L) | (((partial & 0xFFFFFFFFL) << 2) >> 2);
>     if (timestamp > crossNodeTimestamp)
>     {
>         MessagingService.instance().metrics.addTimeTaken(from, timestamp - crossNodeTimestamp);
>     }
>     if (DatabaseDescriptor.hasCrossNodeTimeout())
>     {
>         return new ConstructionTime(crossNodeTimestamp, timestamp != crossNodeTimestamp);
>     }
>     else
>     {
>         return new ConstructionTime();
>     }
> }
> {noformat}
> where {{timestamp}} is really the local time on the receiving node when 
> calling that method.
> The incorrect part, I believe, is the {{timestamp != crossNodeTimestamp}} 
> used to set the {{isCrossNode}} field of {{ConstructionTime}}. A first 
> problem is that this will basically always be {{true}}: for it to be 
> {{false}}, we'd need the low-bytes of the timestamp taken on the sending node 
> to coincide exactly with the ones taken on the receiving side, which is 
> _very_ unlikely. It is also a relatively meaningless test: having that test 
> be {{false}} basically means the lack of clock sync between the 2 nodes is 
> exactly the time the 2 calls to {{System.currentTimeMillis()}} (on sender and 
> receiver), which is definitively not what we care about.
> What the result of this test is used for is to determine whether the message 
> was cross-node or local. It's used, for instance, to increment different 
> metrics (we separate local versus cross-node dropped messages) in 
> {{MessagingService}}. And that's where this is kind of a bug: not only is the 
> {{timestamp != crossNodeTimestamp}} test meaningless, but if 
> {{DatabaseDescriptor.hasCrossNodeTimeout()}} is false, {{isCrossNode}} is 
> *always* false, which means we'll never increment the "cross-node dropped 
> messages" metric, which is imo unexpected.
> That is, it is true that if {{DatabaseDescriptor.hasCrossNodeTimeout() == 
> false}}, then we end up using the receiver-side timestamp to time out 
> messages, and so you end up only dropping messages that time out locally. And 
> _in that sense_, always incrementing the "locally" dropped messages metric is 
> not completely illogical. But I doubt most users are aware of that pretty 
> specific nuance when looking at the related metrics, and I'm relatively sure 
> users expect a metric named {{droppedCrossNodeTimeout}} to actually count 
> cross-node messages by default (keep in mind that 
> {{DatabaseDescriptor.hasCrossNodeTimeout()}} is actually false by default).
> Anyway, to sum it up I suggest that the following change should be done:
> # the {{timestamp != crossNodeTimestamp}} test is definitively not what we 
> want. We should at a minimum just replace it with {{true}}, as that's 
> basically what it ends up being except in very rare and arguably random cases.
> # given how {{ConstructionTime.isCrossNode}} is used, I suggest that we 
> really want it to mean that the message shipped cross-node, not just be a 
> synonym for {{DatabaseDescriptor.hasCrossNodeTimeout()}}. It should be 
> whether the message shipped cross-node, i.e. whether {{from == 
> BroadcastAddress()}} or not.
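To see why {{timestamp != crossNodeTimestamp}} is almost always true, the splicing arithmetic can be re-implemented in isolation. This is an illustration, not Cassandra code; only the bit manipulation is taken from the snippet above:

```java
// The sender ships only the low 32 bits of its clock; the receiver splices them
// onto the high 32 bits of its own clock. Equality with the receiver's full
// timestamp therefore requires the two clocks to agree on the low 32 bits
// exactly (to the millisecond), which essentially never happens.
public class CrossNodeTimestampDemo {
    public static long reconstruct(long receiverMillis, int partial) {
        // high 4 bytes from the receiver's clock, low 4 bytes from the sender's
        return (receiverMillis & 0xFFFFFFFF00000000L) | (((partial & 0xFFFFFFFFL) << 2) >> 2);
    }
}
```

For example, with a sender clock 5 ms behind the receiver, the reconstructed value equals the sender's timestamp (the high 32 bits rarely differ across a few milliseconds), not the receiver's — so the inequality test fires even though the clocks are essentially in sync.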





[jira] [Commented] (CASSANDRA-12835) Tracing payload not passed from QueryMessage to tracing session

2016-10-31 Thread mck (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15623987#comment-15623987
 ] 

mck commented on CASSANDRA-12835:
-

I'm still struggling to find an acceptable way to assert that the custom 
payload makes it through to the tracing implementation. The default 
implementation doesn't capture any custom payload, and using a custom class 
(e.g. from the test classpath) won't be very versatile, as dtests execute 
either off a cassandra git clone or off a specified version…

In lieu of that, I'll add some unit tests with a few assertions, but they 
won't catch this specific failure.

> Tracing payload not passed from QueryMessage to tracing session
> ---
>
> Key: CASSANDRA-12835
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12835
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Hannu Kröger
>Assignee: mck
>Priority: Critical
>  Labels: tracing
>
> Caused by CASSANDRA-10392.
> Related to CASSANDRA-11706.
> When querying using CQL statements (not prepared) the message type is 
> QueryMessage and the code in 
> https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/transport/messages/QueryMessage.java#L101
>  is as follows:
> {code:java}
> if (state.traceNextQuery())
> {
> state.createTracingSession();
> ImmutableMap.Builder<String, ByteBuffer> builder = ImmutableMap.builder();
> {code}
> {{state.createTracingSession();}} should probably be 
> {{state.createTracingSession(getCustomPayload());}}. At least that fixes the 
> problem for me.
> This also raises the question whether some other parts of the code should 
> pass the custom payload as well (I'm not the right person to analyze this):
> {code}
> $ ag createTracingSession
> src/java/org/apache/cassandra/service/QueryState.java
> 80:public void createTracingSession()
> 82:createTracingSession(Collections.EMPTY_MAP);
> 85:public void createTracingSession(Map<String, ByteBuffer> customPayload)
> src/java/org/apache/cassandra/thrift/CassandraServer.java
> 2528:state().getQueryState().createTracingSession();
> src/java/org/apache/cassandra/transport/messages/BatchMessage.java
> 163:state.createTracingSession();
> src/java/org/apache/cassandra/transport/messages/ExecuteMessage.java
> 114:state.createTracingSession(getCustomPayload());
> src/java/org/apache/cassandra/transport/messages/QueryMessage.java
> 101:state.createTracingSession();
> src/java/org/apache/cassandra/transport/messages/PrepareMessage.java
> 74:state.createTracingSession();
> {code}
> This is not marked `minor`, as CASSANDRA-11706 was, because this cannot be 
> fixed by the tracing plugin.
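The overload chain in question can be modeled in isolation to show where the payload is dropped. These are toy classes, not Cassandra's ({{byte[]}} stands in for {{ByteBuffer}} to keep the sketch self-contained):

```java
import java.util.Collections;
import java.util.Map;

// Toy model of the two createTracingSession overloads: the no-arg form delegates
// with an empty map, so any custom payload the caller had is silently discarded
// unless the caller explicitly uses the one-arg overload.
public class QueryStateModel {
    private Map<String, byte[]> sessionPayload;

    public void createTracingSession() {
        createTracingSession(Collections.<String, byte[]>emptyMap()); // payload lost here
    }

    public void createTracingSession(Map<String, byte[]> customPayload) {
        this.sessionPayload = customPayload;
    }

    public Map<String, byte[]> sessionPayload() {
        return sessionPayload;
    }
}
```

The suggested fix corresponds to each message handler calling the one-arg overload with its own payload instead of the no-arg form.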





[jira] [Commented] (CASSANDRA-12867) Batch with multiple conditional updates for the same partition causes AssertionError

2016-10-31 Thread Kurt Greaves (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15623735#comment-15623735
 ] 

Kurt Greaves commented on CASSANDRA-12867:
--

[~mshuler] this was discovered by a test added by [~alwyn] in 
CASSANDRA-12649. Once we get some confirmation on the "correctness", I'm happy 
to write a test for this. The test in 12649 would cover this, but it's probably 
not in the most appropriate location.

> Batch with multiple conditional updates for the same partition causes 
> AssertionError
> 
>
> Key: CASSANDRA-12867
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12867
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Kurt Greaves
>Assignee: Kurt Greaves
> Attachments: 12867-3.0.patch
>
>
> Reproduced in 3.0.10 and 3.10. Used to work in 3.0.9 and earlier. Bug was 
> introduced in CASSANDRA-12060.
> The following causes an AssertionError:
> {code}
> CREATE KEYSPACE test WITH replication = { 'class' : 'SimpleStrategy', 
> 'replication_factor' : 1 };
> create table test.test (id int PRIMARY KEY, val text);
> BEGIN BATCH INSERT INTO test.test (id, val) VALUES (999, 'aaa') IF NOT 
> EXISTS; INSERT INTO test.test (id, val) VALUES (999, 'ccc') IF NOT EXISTS; 
> APPLY BATCH ;
> {code}
> Stack trace is as follows:
> {code}
> ERROR [Native-Transport-Requests-2] 2016-10-31 04:16:44,231 Message.java:622 
> - Unexpected exception during request; channel = [id: 0x176e1c04, 
> L:/127.0.0.1:9042 - R:/127.0.0.1:59743]
> java.lang.AssertionError: null
> at 
> org.apache.cassandra.cql3.statements.CQL3CasRequest.setConditionsForRow(CQL3CasRequest.java:138)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.statements.CQL3CasRequest.addExistsCondition(CQL3CasRequest.java:104)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.statements.CQL3CasRequest.addNotExist(CQL3CasRequest.java:84)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.IfNotExistsCondition.addConditionsTo(IfNotExistsCondition.java:28)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.statements.ModificationStatement.addConditions(ModificationStatement.java:482)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.statements.BatchStatement.makeCasRequest(BatchStatement.java:434)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.statements.BatchStatement.executeWithConditions(BatchStatement.java:379)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.statements.BatchStatement.execute(BatchStatement.java:358)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.statements.BatchStatement.execute(BatchStatement.java:346)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.statements.BatchStatement.execute(BatchStatement.java:341)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:218)
>  ~[main/:na]
> at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:249) 
> ~[main/:na]
> at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:234) 
> ~[main/:na]
> at 
> org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:115)
>  ~[main/:na]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:516)
>  [main/:na]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:409)
>  [main/:na]
> at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.39.Final.jar:4.0.39.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:366)
>  [netty-all-4.0.39.Final.jar:4.0.39.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:35)
>  [netty-all-4.0.39.Final.jar:4.0.39.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:357)
>  [netty-all-4.0.39.Final.jar:4.0.39.Final]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_102]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:162)
>  [main/:na]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) 
> [main/:na]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_102]
> {code}
> The problem is that {{previous}} will receive a value after the first statement 
> in the batch is evaluated in BatchStatement.makeCasRequest. I can't see any 
> reason why we have this assertion; it seems to me that it's unnecessary.
> Removing it fixes the problem (obviously) but I'm not sure if it breaks 
> something else, or if this is an intended failure case (in which case it 
> should be caught earlier on).
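For illustration, the conflicting per-row condition bookkeeping can be modeled in isolation. These are toy classes, not Cassandra's code, and the real {{assert}} is rendered here as an {{IllegalStateException}} so the behavior doesn't depend on assertions being enabled:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the per-row condition bookkeeping: the real code asserts that a
// row has no conditions registered yet before setting them, so a batch whose two
// IF NOT EXISTS statements target the same row fails on the second statement.
public class CasConditionsModel {
    private final Map<String, String> conditionsByRow = new HashMap<>();

    public void setConditionsForRow(String partitionAndClustering, String condition) {
        if (conditionsByRow.containsKey(partitionAndClustering))
            throw new IllegalStateException(
                "conditions already set for row " + partitionAndClustering);
        conditionsByRow.put(partitionAndClustering, condition);
    }
}
```

In the CQL repro above, both inserts target id 999, so the second IF NOT EXISTS lands on a row that already has a condition registered, mirroring the second call below.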

[jira] [Updated] (CASSANDRA-12649) Add BATCH metrics

2016-10-31 Thread Kurt Greaves (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Greaves updated CASSANDRA-12649:
-
Attachment: 12649-3.x.patch

> Add BATCH metrics
> -
>
> Key: CASSANDRA-12649
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12649
> Project: Cassandra
>  Issue Type: Wish
>Reporter: Alwyn Davis
>Priority: Minor
> Fix For: 3.x
>
> Attachments: 12649-3.x.patch, trunk-12649.txt
>
>
> To identify causes of load on a cluster, it would be useful to have some 
> additional metrics:
> * *Mutation size distribution:* I believe this would be relevant when 
> tracking the performance of unlogged batches.
> * *Logged / Unlogged Partitions per batch distribution:* This would also give 
> a count of batch types processed. Multiple distinct tables in batch would 
> just be considered as separate partitions.
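The requested metrics could be sketched roughly as follows. This is a hypothetical outline, not Cassandra's metrics code; all names are invented, and plain lists stand in for real histogram reservoirs:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Hypothetical sketch of the requested metrics: record, per executed batch, how
// many distinct partitions it touched, split by batch type. Distinct tables in a
// batch count as separate partitions, per the ticket.
public class BatchMetricsSketch {
    public final List<Integer> loggedPartitionsPerBatch = new ArrayList<>();
    public final List<Integer> unloggedPartitionsPerBatch = new ArrayList<>();

    public void recordBatch(boolean logged, Set<String> tableAndPartitionKeys) {
        List<Integer> histogram = logged ? loggedPartitionsPerBatch : unloggedPartitionsPerBatch;
        histogram.add(tableAndPartitionKeys.size());
    }

    // A count of batches processed, by type, falls out of the same data.
    public int loggedBatchCount()   { return loggedPartitionsPerBatch.size(); }
    public int unloggedBatchCount() { return unloggedPartitionsPerBatch.size(); }
}
```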





[jira] [Commented] (CASSANDRA-12649) Add BATCH metrics

2016-10-31 Thread Kurt Greaves (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15623715#comment-15623715
 ] 

Kurt Greaves commented on CASSANDRA-12649:
--

So I rebased the patch on 3.X, and one of the added unit tests actually exposed 
a bug that was just introduced in CASSANDRA-12060. Attached the new, rebased 
patch here; however, it's doubtful it will make it into 3.10.

Also raised CASSANDRA-12867 to cover the bug.

> Add BATCH metrics
> -
>
> Key: CASSANDRA-12649
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12649
> Project: Cassandra
>  Issue Type: Wish
>Reporter: Alwyn Davis
>Priority: Minor
> Fix For: 3.x
>
> Attachments: trunk-12649.txt
>
>





[jira] [Commented] (CASSANDRA-12867) Batch with multiple conditional updates for the same partition causes AssertionError

2016-10-31 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15623662#comment-15623662
 ] 

Michael Shuler commented on CASSANDRA-12867:


I do not have any feedback on "correctness", but if this BATCH INSERT is 
correct and expected to function as stated, this absolutely needs a test 
included, so we don't hit the same error again in the future.

> Batch with multiple conditional updates for the same partition causes 
> AssertionError
> 
>
> Key: CASSANDRA-12867
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12867
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Kurt Greaves
>Assignee: Kurt Greaves
> Attachments: 12867-3.0.patch
>
>

[jira] [Updated] (CASSANDRA-12867) Batch with multiple conditional updates for the same partition causes AssertionError

2016-10-31 Thread Kurt Greaves (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Greaves updated CASSANDRA-12867:
-
Status: Awaiting Feedback  (was: Open)

> Batch with multiple conditional updates for the same partition causes 
> AssertionError
> 
>
> Key: CASSANDRA-12867
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12867
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Kurt Greaves
>Assignee: Kurt Greaves
> Attachments: 12867-3.0.patch
>
>

[jira] [Updated] (CASSANDRA-12867) Batch with multiple conditional updates for the same partition causes AssertionError

2016-10-31 Thread Kurt Greaves (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Greaves updated CASSANDRA-12867:
-
Attachment: 12867-3.0.patch

> Batch with multiple conditional updates for the same partition causes 
> AssertionError
> 
>
> Key: CASSANDRA-12867
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12867
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Kurt Greaves
>Assignee: Kurt Greaves
> Attachments: 12867-3.0.patch
>
>

[jira] [Updated] (CASSANDRA-12867) Batch with multiple conditional updates for the same partition causes AssertionError

2016-10-31 Thread Kurt Greaves (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Greaves updated CASSANDRA-12867:
-
Description: 
Reproduced in 3.0.10 and 3.10. Used to work in 3.0.9 and earlier. Bug was 
introduced in CASSANDRA-12060.

at 
io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:357)
 [netty-all-4.0.39.Final.jar:4.0.39.Final]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
[na:1.8.0_102]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:162)
 [main/:na]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) 
[main/:na]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_102]
{code}

The problem is that {{previous}} will receive a value after the first statement 
in the batch is evaluated in {{BatchStatement.makeCasRequest}}. I can't see any 
reason why we have this assertion; it seems to me that it's unnecessary.
Removing it fixes the problem (obviously), but I'm not sure if it breaks 
something else, or if this is an intended failure case (in which case it should 
be caught earlier on).

Relevant code is as follows:

{code:title=CQL3CasRequest.java}
private void setConditionsForRow(Clustering clustering, RowCondition condition)
{
    if (clustering == Clustering.STATIC_CLUSTERING)
    {
        assert staticConditions == null;
        staticConditions = condition;
    }
    else
    {
        RowCondition previous = conditions.put(clustering, condition);
        assert previous == null;
    }
}
{code}
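The double registration that trips this assert can be sketched outside Cassandra in a few lines. The class and method names below are a simplified stand-in for {{CQL3CasRequest}}, not the real code, and an explicit throw stands in for the {{assert}} (which only fires with {{-ea}}):

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for CQL3CasRequest's per-clustering condition map.
// Each row may carry at most one condition; registering a second one for
// the same clustering reproduces the AssertionError from the stack trace.
public class CasConditions {
    private final Map<String, String> conditions = new HashMap<>();

    public void setConditionsForRow(String clustering, String condition) {
        String previous = conditions.put(clustering, condition);
        if (previous != null)
            throw new AssertionError(); // stands in for `assert previous == null`
    }

    public static boolean triggersAssertion() {
        CasConditions req = new CasConditions();
        try {
            // Two IF NOT EXISTS statements in one batch, both targeting
            // the same row (id = 999), as in the reproduction above.
            req.setConditionsForRow("999", "IF NOT EXISTS");
            req.setConditionsForRow("999", "IF NOT EXISTS");
            return false;
        } catch (AssertionError e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println("assertion triggered: " + triggersAssertion());
    }
}
```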

I've attached a patch that fixes the issue by removing the assert.

  was:
Reproduced in 3.0.10 and 3.10. Used to work in 3.0.9 and earlier. Bug was 
introduced in CASSANDRA-12060.

The following causes an AssertionError:
{code}
CREATE KEYSPACE test WITH replication = { 'class' : 'SimpleStrategy', 
'replication_factor' : 1 };
create table test

[jira] [Created] (CASSANDRA-12867) Batch with multiple conditional updates for the same partition causes AssertionError

2016-10-31 Thread Kurt Greaves (JIRA)
Kurt Greaves created CASSANDRA-12867:


 Summary: Batch with multiple conditional updates for the same 
partition causes AssertionError
 Key: CASSANDRA-12867
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12867
 Project: Cassandra
  Issue Type: Bug
  Components: CQL
Reporter: Kurt Greaves
Assignee: Kurt Greaves


Reproduced in 3.0.10 and 3.10. Used to work in 3.0.9 and earlier. Bug was 
introduced in CASSANDRA-12060.

The following causes an AssertionError:
{code}
CREATE KEYSPACE test WITH replication = { 'class' : 'SimpleStrategy', 
'replication_factor' : 1 };
create table test.test (id int PRIMARY KEY, val text);
BEGIN BATCH INSERT INTO test.test (id, val) VALUES (999, 'aaa') IF NOT EXISTS; 
INSERT INTO test.test (id, val) VALUES (999, 'ccc') IF NOT EXISTS; APPLY BATCH ;
{code}

Stack trace is as follows:
{code}
ERROR [Native-Transport-Requests-2] 2016-10-31 04:16:44,231 Message.java:622 - 
Unexpected exception during request; channel = [id: 0x176e1c04, 
L:/127.0.0.1:9042 - R:/127.0.0.1:59743]
java.lang.AssertionError: null
at 
org.apache.cassandra.cql3.statements.CQL3CasRequest.setConditionsForRow(CQL3CasRequest.java:138)
 ~[main/:na]
at 
org.apache.cassandra.cql3.statements.CQL3CasRequest.addExistsCondition(CQL3CasRequest.java:104)
 ~[main/:na]
at 
org.apache.cassandra.cql3.statements.CQL3CasRequest.addNotExist(CQL3CasRequest.java:84)
 ~[main/:na]
at 
org.apache.cassandra.cql3.IfNotExistsCondition.addConditionsTo(IfNotExistsCondition.java:28)
 ~[main/:na]
at 
org.apache.cassandra.cql3.statements.ModificationStatement.addConditions(ModificationStatement.java:482)
 ~[main/:na]
at 
org.apache.cassandra.cql3.statements.BatchStatement.makeCasRequest(BatchStatement.java:434)
 ~[main/:na]
at 
org.apache.cassandra.cql3.statements.BatchStatement.executeWithConditions(BatchStatement.java:379)
 ~[main/:na]
at 
org.apache.cassandra.cql3.statements.BatchStatement.execute(BatchStatement.java:358)
 ~[main/:na]
at 
org.apache.cassandra.cql3.statements.BatchStatement.execute(BatchStatement.java:346)
 ~[main/:na]
at 
org.apache.cassandra.cql3.statements.BatchStatement.execute(BatchStatement.java:341)
 ~[main/:na]
at 
org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:218)
 ~[main/:na]
at 
org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:249) 
~[main/:na]
at 
org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:234) 
~[main/:na]
at 
org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:115)
 ~[main/:na]
at 
org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:516)
 [main/:na]
at 
org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:409)
 [main/:na]
at 
io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
 [netty-all-4.0.39.Final.jar:4.0.39.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:366)
 [netty-all-4.0.39.Final.jar:4.0.39.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:35)
 [netty-all-4.0.39.Final.jar:4.0.39.Final]
at 
io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:357)
 [netty-all-4.0.39.Final.jar:4.0.39.Final]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
[na:1.8.0_102]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:162)
 [main/:na]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) 
[main/:na]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_102]
{code}

The problem is that {{previous}} will receive a value after the first statement 
in the batch is evaluated in {{BatchStatement.makeCasRequest}}. I can't see any 
reason why we have this assertion; it seems to me that it's unnecessary.
Removing it fixes the problem (obviously), but I'm not sure if it breaks 
something else, or if this is an intended failure case (in which case it should 
be caught earlier on).

Relevant code is as follows:

{code:title=CQL3CasRequest.java}
private void setConditionsForRow(Clustering clustering, RowCondition condition)
{
    if (clustering == Clustering.STATIC_CLUSTERING)
    {
        assert staticConditions == null;
        staticConditions = condition;
    }
    else
    {
        RowCondition previous = conditions.put(clustering, condition);
        assert previous == null;
    }
}
{code}

I've attached a patch that fixes the issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12862) LWT leaves corrupted state

2016-10-31 Thread Artur Siekielski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15623195#comment-15623195
 ] 

Artur Siekielski commented on CASSANDRA-12862:
--

{quote}
But I see that sometimes "INSERT ... IF NOT EXISTS" returns applied=False even 
when the PK doesn't exist. I assume it's normal and I should retry the insert?
{quote}

Correction: I don't actually see this. If the LWT query returns an "applied" 
row, then it corresponds to the existence of the given PK. However, a 
WriteTimeout can be raised even when only a single thread does inserts for the 
given PK.

> LWT leaves corrupted state
> --
>
> Key: CASSANDRA-12862
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12862
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Cassandra 2.1.16, 3-node cluster with RF=3, 
> NetworkTopology with 1 DC
>Reporter: Artur Siekielski
>
> When executing "INSERT ... IF NOT EXISTS" (with consistency LOCAL_QUORUM) 
> while the concurrency level is high (about 50 simultaneous threads doing 
> inserts, for the same partition key but different clustering keys) sometimes 
> the INSERT returns applied=False, but the subsequent SELECTs return no data. 
> The corrupted state is permanent - neither the INSERT nor the SELECTs 
> succeed, making the PK "locked".
> I can easily reproduce this - for 100 simultaneous threads doing a single 
> insert I get 1-2 corruptions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-12862) LWT leaves corrupted state

2016-10-31 Thread Artur Siekielski (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Artur Siekielski resolved CASSANDRA-12862.
--
Resolution: Invalid

> LWT leaves corrupted state
> --
>
> Key: CASSANDRA-12862
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12862
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Cassandra 2.1.16, 3-node cluster with RF=3, 
> NetworkTopology with 1 DC
>Reporter: Artur Siekielski
>
> When executing "INSERT ... IF NOT EXISTS" (with consistency LOCAL_QUORUM) 
> while the concurrency level is high (about 50 simultaneous threads doing 
> inserts, for the same partition key but different clustering keys) sometimes 
> the INSERT returns applied=False, but the subsequent SELECTs return no data. 
> The corrupted state is permanent - neither the INSERT nor the SELECTs 
> succeed, making the PK "locked".
> I can easily reproduce this - for 100 simultaneous threads doing a single 
> insert I get 1-2 corruptions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12862) LWT leaves corrupted state

2016-10-31 Thread Artur Siekielski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15623163#comment-15623163
 ] 

Artur Siekielski commented on CASSANDRA-12862:
--

Sorry for raising a false alarm. I tried to reproduce the issue using a 
standalone script, but ended up discovering an issue in application code 
(handling WriteTimeouts).

But I see that sometimes "INSERT ... IF NOT EXISTS" returns applied=False even 
when the PK doesn't exist. I assume it's normal and I should retry the insert?
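For what it's worth, the retry loop implied here can be sketched as follows. All names are hypothetical: {{TimeoutException}} stands in for the driver's WriteTimeoutException, and the IF NOT EXISTS condition is what makes the insert safe to repeat:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.TimeoutException;

// Sketch of the retry pattern discussed above. An LWT insert guarded by
// IF NOT EXISTS is safe to repeat, because the Paxos condition (not the
// caller) decides whether the write applies.
public class LwtRetry {
    public static <T> T retryOnTimeout(Callable<T> op, int maxAttempts) throws Exception {
        Exception last = null;
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            try {
                return op.call();
            } catch (TimeoutException e) {
                last = e; // timed out: retry the idempotent LWT insert
            }
        }
        throw last;
    }

    // Simulated LWT insert that times out twice, then succeeds on the
    // third attempt; returns the attempt number that succeeded.
    public static int demo() {
        int[] calls = {0};
        try {
            return retryOnTimeout(() -> {
                if (++calls[0] < 3)
                    throw new TimeoutException();
                return calls[0];
            }, 5);
        } catch (Exception e) {
            return -1;
        }
    }

    public static void main(String[] args) {
        System.out.println("succeeded on attempt " + demo());
    }
}
```

One caveat, matching the correction above: if the timed-out attempt actually committed, a retry will come back with applied=False, so the caller should treat that as "already exists" rather than as a failure.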

I think that the documentation at 
https://docs.datastax.com/en/cql/3.1/cql/cql_reference/insert_r.html#reference_ds_gp2_1jp_xj__if-not-exists
 should mention that (and WriteTimeouts).


> LWT leaves corrupted state
> --
>
> Key: CASSANDRA-12862
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12862
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Cassandra 2.1.16, 3-node cluster with RF=3, 
> NetworkTopology with 1 DC
>Reporter: Artur Siekielski
>
> When executing "INSERT ... IF NOT EXISTS" (with consistency LOCAL_QUORUM) 
> while the concurrency level is high (about 50 simultaneous threads doing 
> inserts, for the same partition key but different clustering keys) sometimes 
> the INSERT returns applied=False, but the subsequent SELECTs return no data. 
> The corrupted state is permanent - neither the INSERT nor the SELECTs 
> succeed, making the PK "locked".
> I can easily reproduce this - for 100 simultaneous threads doing a single 
> insert I get 1-2 corruptions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12859) Column-level permissions

2016-10-31 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15622658#comment-15622658
 ] 

Sam Tunnicliffe commented on CASSANDRA-12859:
-

bq. MODIFY as also including DELETE and TRUNCATE

Actually, I think I wasn't very clear about what I meant. For inserts & 
updates, you'll be inspecting the set of modified columns and checking the 
permissions for those individually. For column-level deletes I think you could 
apply the same logic, e.g. (thinking only about regular columns at the moment, 
so leaving aside primary key columns):
{code}
CREATE TABLE ks.t1 (p int, c int, v int, PRIMARY KEY (p, c));
INSERT INTO ks.t1 (p, c, v) VALUES (0, 0, 0); // requires MODIFY on ks.t1(v)
UPDATE ks.t1 SET v = 1 WHERE p = 0 AND c = 0; // requires MODIFY on ks.t1(v)
DELETE v FROM ks.t1 WHERE p = 0 AND c = 0;    // requires MODIFY on ks.t1(v)
{code}

Of course, row/partition level deletes don't specify the columns, but this is 
generally the same as a SELECT \*, where you'll need to inspect the table 
metadata to check that the user has SELECT permissions on all columns which 
*may* be returned. So you could take that same approach to DELETE (& TRUNCATE).
{code}
SELECT v FROM ks.t1 WHERE p = 0 AND c = 0;   // requires SELECT on ks.t1(v)
SELECT * FROM ks.t1 WHERE p = 0 AND c = 0;   // requires SELECT on ks.t1(v)
DELETE FROM ks.t1 WHERE p = 0 AND c = 0; // requires MODIFY on ks.t1(v)
DELETE FROM ks.t1 WHERE p = 0;   // requires MODIFY on ks.t1(v)
{code}
This way you've limited the ability of the user to modify the data exactly in 
accordance with their granted permissions.
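The per-column check described above amounts to a set-containment test. A minimal sketch, with all names hypothetical, modeling a full-table grant as covering every column (including future ones):

```java
import java.util.Set;

// Sketch of the per-column authorization test described above. A
// statement is allowed only when the role's granted column set for the
// table covers every column the statement touches; a full-table grant
// covers all columns.
public class ColumnPermissionCheck {
    public static boolean authorized(boolean wholeTableGrant,
                                     Set<String> grantedColumns,
                                     Set<String> touchedColumns) {
        if (wholeTableGrant)
            return true;
        return grantedColumns.containsAll(touchedColumns);
    }

    public static void main(String[] args) {
        Set<String> granted = Set.of("v"); // MODIFY on ks.t1(v) only
        // DELETE v FROM ks.t1 ... touches only v -> allowed
        System.out.println(authorized(false, granted, Set.of("v")));
        // A row-level DELETE on a table with another regular column w
        // touches both v and w -> denied
        System.out.println(authorized(false, granted, Set.of("v", "w")));
    }
}
```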


> Column-level permissions
> 
>
> Key: CASSANDRA-12859
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12859
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Core, CQL
>Reporter: Boris Melamed
> Attachments: Cassandra Proposal - Column-level permissions.docx
>
>   Original Estimate: 504h
>  Remaining Estimate: 504h
>
> h4. Here is a draft of: 
> Cassandra Proposal - Column-level permissions.docx (attached)
> h4. Quoting the 'Overview' section:
> The purpose of this proposal is to add column-level (field-level) permissions 
> to Cassandra. It is my intent to soon start implementing this feature in a 
> fork, and to submit a pull request once it’s ready.
> h4. Motivation
> Cassandra already supports permissions on keyspace and table (column family) 
> level. Sources:
> * http://www.datastax.com/dev/blog/role-based-access-control-in-cassandra
> * https://cassandra.apache.org/doc/latest/cql/security.html#data-control
> At IBM, we have use cases in the area of big data analytics where 
> column-level access permissions are also a requirement. All industry RDBMS 
> products are supporting this level of permission control, and regulators are 
> expecting it from all data-based systems.
> h4. Main day-one requirements
> # Extend CQL (Cassandra Query Language) to be able to optionally specify a 
> list of individual columns, in the {{GRANT}} statement. The relevant 
> permission types are: {{MODIFY}} (for {{UPDATE}} and {{INSERT}}) and 
> {{SELECT}}.
> # Persist the optional information in the appropriate system table 
> ‘system_auth.role_permissions’.
> # Enforce the column access restrictions during execution. Details:
> #* Should fit with the existing permission propagation down a role chain.
> #* Proposed message format when a user’s roles give access to the queried 
> table but not to all of the selected, inserted, or updated columns:
>   "User %s has no %s permission on column %s of table %s"
> #* Error will report only the first checked column. 
> Nice to have: list all inaccessible columns.
> #* Error code is the same as for table access denial: 2100.
> h4. Additional day-one requirements
> # Reflect the column-level permissions in statements of type 
> {{LIST ALL PERMISSIONS OF someuser;}}
> # Performance should not degrade in any significant way.
> # Backwards compatibility
> #* Permission enforcement for DBs created before the upgrade should continue 
> to work with the same behavior after upgrading to a version that allows 
> column-level permissions.
> #* Previous CQL syntax will remain valid, and have the same effect as before.
> h4. Documentation
> * 
> https://cassandra.apache.org/doc/latest/cql/security.html#grammar-token-permission
> * Feedback request: any others?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-12859) Column-level permissions

2016-10-31 Thread Boris Melamed (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15622591#comment-15622591
 ] 

Boris Melamed edited comment on CASSANDRA-12859 at 10/31/16 4:13 PM:
-

Thank you for giving this doc your attention, and for your feedback.
I was taking cues from what the big RDBMS products have to offer, mainly DB2 
and Oracle.

Now to your points:

h4. MODIFY as also including DELETE and TRUNCATE.
I remember seeing a previous request to divide it up. The reason given not to 
divide was that once you have UPDATE rights, you can also effectively remove 
whole rows (DELETE, TRUNCATE). However, with column permissions, this is no 
longer the case. I agree that it's scary (or outright wrong) to allow someone 
with a mere column permission to remove whole rows. On the other hand, if 
DELETE requires a MODIFY permission without any column restrictions, then there 
is no way (even if needed) of allowing anyone to delete rows unless they have 
UPDATE permissions for the whole table. Possibly, that's a valid stipulation - 
TBD.

{quote}
Why not just process deletes/truncates the same as inserts?
{quote}
For inserts, I intended to reject statements that set columns for which the 
user has no access permissions. Are you saying that INSERTs should not be 
restricted by columns?
(Note that primary key columns must be allowed, or else no INSERT/UPDATE is 
possible. I shall add this to the doc...)

Should we add a new permission type, such as UPDATE or UPSERT, after all?

h4. GRANT - additive or replacing?
In Oracle and DB2, it's actually required to REVOKE table permissions before 
changing the list of included columns in any way.
I've intended allowing 'replacing' GRANTs as syntactic sugar.
But now, it seems to me that the "spartan" way is the most unambiguous one. 
If one wants to add or remove one or more columns from the list of included 
columns, then, e.g.:

{code}
GRANT SELECT (col_a, col_b) ON ks.t1 TO foo; // role foo has access to col_a 
and col_b
REVOKE SELECT ON ks.t1 FROM foo; // removing the previous access to table t1, 
thus clearing column perms there as well
GRANT SELECT (col_a, col_b, col_c) ON ks.t1 TO foo; // now, foo has permissions 
on all of col_a, col_b, col_c
{code}

Having said this, there are several DB products that do allow revoking of 
permissions on certain columns. There, it would make sense to have the additive 
column GRANTing paradigm, as your intuition suggests.
However, possible problems with that approach, as user-friendly as it appears, 
are:
# More complexity: grammar addition for REVOKE statements as well.
# Possible confusion: users may erroneously think that the following allows 
access to all columns (including future ones) except col_a:

{code}
GRANT SELECT ON ks.t1 TO foo;
REVOKE SELECT (col_a) ON ks.t1 FROM foo;
{code}

Of course, this will not work unless we implement black lists, which we have 
not thought of doing.

As a remedy, we could return an error when REVOKE refers to a column that does 
not exist.
If there are strong feelings for having this more elaborate paradigm, then we 
can do that.
Otherwise, at least in the first step, I'd go for the 'spartan' approach, where 
any column-list change requires a previous REVOKE on the whole table for that 
role, table, and permission type. The nice thing is that there will be no issue 
with backwards compatibility going forward, since we are not deciding whether 
GRANTed columns are additive or replacing; it's simply forbidden to GRANT 
again, without first REVOKEing.
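The 'spartan' rule can be modeled as a map keyed by (role, table, permission) where GRANT refuses to overwrite an existing entry. All names here are hypothetical, a sketch of the proposed semantics rather than Cassandra's actual auth code:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Model of the 'spartan' rule: a (role, table, permission) key holds at
// most one column list, GRANT refuses to overwrite it, and REVOKE on the
// whole table clears the column permissions along with it.
public class SpartanGrants {
    private final Map<String, Set<String>> grants = new HashMap<>();

    public void grant(String roleTablePerm, Set<String> columns) {
        if (grants.containsKey(roleTablePerm))
            throw new IllegalStateException("REVOKE the existing grant first");
        grants.put(roleTablePerm, columns);
    }

    public void revoke(String roleTablePerm) {
        grants.remove(roleTablePerm); // clears column perms as well
    }

    public Set<String> granted(String roleTablePerm) {
        return grants.getOrDefault(roleTablePerm, Set.of());
    }

    public static void main(String[] args) {
        SpartanGrants g = new SpartanGrants();
        g.grant("foo/ks.t1/SELECT", Set.of("col_a", "col_b"));
        g.revoke("foo/ks.t1/SELECT"); // required before changing the list
        g.grant("foo/ks.t1/SELECT", Set.of("col_a", "col_b", "col_c"));
        System.out.println(g.granted("foo/ks.t1/SELECT"));
    }
}
```

Because a second GRANT without an intervening REVOKE is rejected outright, the additive-vs-replacing question never arises.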


h4. Misc
* Thank you for unit test pointers.
* Absolutely, dropped columns must trigger cleanup of permissions, thanks for 
pointing this out.
* Grammar: indeed. It would be simpler to have the non-standard syntax:
{code}
GRANT SELECT ON ks.t1 (col_a, ...) TO foo;
{code}
If there are no objections, I may go for that. Or else, the code could check 
and throw an exception if the resource is not a table.
* I shall look deeper into the code and come back about the IResource aspect.



was (Author: bmel):
Thank you for giving this doc your attention, and for your feedback.
I was taking cues from what the big RDBMS products have to offer, mainly DB2 
and Oracle.

Now to your points:

h4. MODIFY as also including DELETE and TRUNCATE.
I remember seeing a previous request to divide it up. The reason given not to 
divide was that once you have UPDATE rights, you can also effectively remove 
whole rows (DELETE, TRUNCATE). However, with column permissions, this is no 
longer the case. I understand that it's scary (or outright wrong) to allow 
someone with a mere column permission to remove whole rows. On the other hand, 
if DELETE requires MODIFY permission without column restrictions, then there is 
no way (even if needed) of allowing anyone to delete rows unless they have 
UPDATE permissions for every column. Possibly, that's

[jira] [Commented] (CASSANDRA-12859) Column-level permissions

2016-10-31 Thread Boris Melamed (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15622591#comment-15622591
 ] 

Boris Melamed commented on CASSANDRA-12859:
---

Thank you for giving this doc your attention, and for your feedback.
I was taking cues from what the big RDBMS products have to offer, mainly DB2 
and Oracle.

Now to your points:

h4. MODIFY as also including DELETE and TRUNCATE.
I remember seeing a previous request to divide it up. The reason given not to 
divide was that once you have UPDATE rights, you can also effectively remove 
whole rows (DELETE, TRUNCATE). However, with column permissions, this is no 
longer the case. I understand that it's scary (or outright wrong) to allow 
someone with a mere column permission to remove whole rows. On the other hand, 
if DELETE requires MODIFY permission without column restrictions, then there is 
no way (even if needed) of allowing anyone to delete rows unless they have 
UPDATE permissions for every column. Possibly, that's a valid stipulation - TBD.

{quote}
Why not just process deletes/truncates the same as inserts?
{quote}
For inserts, I intended to reject statements that set columns for which the 
user has no access permissions. Are you saying that INSERTs should not be 
restricted by columns?
(Note that primary key columns must be allowed, or else no INSERT/UPDATE is 
possible. I shall add this to the doc...)

Should we add a new permission type, such as UPDATE or UPSERT, after all?

h4. GRANT - additive or replacing?
In Oracle and DB2, it's actually required to REVOKE table permissions before 
changing the list of included columns in any way.
I've intended allowing 'replacing' GRANTs as syntactic sugar.
But now, it seems to me that the "spartan" way is the most unambiguous one. 
If one wants to add or remove one or more columns from the list of included 
columns, then, e.g.:

{code}
GRANT SELECT (col_a, col_b) ON ks.t1 TO foo; // role foo has access to col_a 
and col_b
REVOKE SELECT ON ks.t1 FROM foo; // removing the previous access to table t1, 
thus clearing column perms there as well
GRANT SELECT (col_a, col_b, col_c) ON ks.t1 TO foo; // now, foo has permissions 
on all of col_a, col_b, col_c
{code}

Having said this, there are several DB products that do allow revoking of 
permissions on certain columns. 
There, it would make sense to have the additive column GRANTing paradigm, as 
your intuition suggests.
However, possible problems with that approach, as user-friendly as it appears, 
are:
# More complexity: grammar addition for REVOKE statements as well.
# Possible confusion: users may erroneously think that the following allows 
access to all columns (including future ones) except col_a:

{code}
GRANT SELECT ON ks.t1 TO foo;
REVOKE SELECT (col_a) ON ks.t1 FROM foo;
{code}

Of course, this will not work unless we implement black lists, which we have 
not thought of doing.

As a remedy, we could return an error when REVOKE refers to a column that does 
not exist.
If there are strong feelings for having this more elaborate paradigm, then we 
can do that.
Otherwise, at least in the first step, I'd go for the 'spartan' approach, where 
any column-list change requires a previous REVOKE on the whole table for that 
role, table, and permission type. The nice thing is that there will be no issue 
with backwards compatibility going forward, since we are not deciding whether 
GRANTed columns are additive or replacing; it's simply forbidden to GRANT 
again, without first REVOKEing.


h4. Misc
* Thank you for unit test pointers.
* Absolutely, dropped columns must trigger cleanup of permissions, thanks for 
pointing this out.
* Grammar: indeed. It would be simpler to have the non-standard syntax:
{code}
GRANT SELECT ON ks.t1 (col_a, ...) TO foo;
{code}
If there are no objections, I may go for that. Or else, the code could check 
and throw an exception if the resource is not a table.
* I shall look deeper into the code and come back about the IResource aspect.


> Column-level permissions
> 
>
> Key: CASSANDRA-12859
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12859
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Core, CQL
>Reporter: Boris Melamed
> Attachments: Cassandra Proposal - Column-level permissions.docx
>
>   Original Estimate: 504h
>  Remaining Estimate: 504h
>
> h4. Here is a draft of: 
> Cassandra Proposal - Column-level permissions.docx (attached)
> h4. Quoting the 'Overview' section:
> The purpose of this proposal is to add column-level (field-level) permissions 
> to Cassandra. It is my intent to soon start implementing this feature in a 
> fork, and to submit a pull request once it’s ready.
> h4. Motivation
> Cassandra already supports permissions on keyspace and table (column family) 
> level. Sources:
> * h

[cassandra] Git Push Summary

2016-10-31 Thread mshuler
Repository: cassandra
Updated Tags:  refs/tags/3.0.10-tentative [created] 817ba0387


[2/6] cassandra git commit: Release 3.0.10

2016-10-31 Thread mshuler
Release 3.0.10


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/817ba038
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/817ba038
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/817ba038

Branch: refs/heads/cassandra-3.X
Commit: 817ba038783212b716f6981b26c8348ffdc92f59
Parents: d38a732
Author: Michael Shuler 
Authored: Mon Oct 31 10:34:24 2016 -0500
Committer: Michael Shuler 
Committed: Mon Oct 31 10:34:24 2016 -0500

--
 debian/changelog | 6 ++
 1 file changed, 6 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/817ba038/debian/changelog
--
diff --git a/debian/changelog b/debian/changelog
index d54d59a..96ebd42 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,9 @@
+cassandra (3.0.10) unstable; urgency=medium
+
+  * New release
+
+ -- Michael Shuler   Mon, 31 Oct 2016 10:33:44 -0500
+
 cassandra (3.0.9) unstable; urgency=medium
 
   * New release



[3/6] cassandra git commit: Release 3.0.10

2016-10-31 Thread mshuler
Release 3.0.10


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/817ba038
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/817ba038
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/817ba038

Branch: refs/heads/trunk
Commit: 817ba038783212b716f6981b26c8348ffdc92f59
Parents: d38a732
Author: Michael Shuler 
Authored: Mon Oct 31 10:34:24 2016 -0500
Committer: Michael Shuler 
Committed: Mon Oct 31 10:34:24 2016 -0500

--
 debian/changelog | 6 ++
 1 file changed, 6 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/817ba038/debian/changelog
--
diff --git a/debian/changelog b/debian/changelog
index d54d59a..96ebd42 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,9 @@
+cassandra (3.0.10) unstable; urgency=medium
+
+  * New release
+
+ -- Michael Shuler   Mon, 31 Oct 2016 10:33:44 -0500
+
 cassandra (3.0.9) unstable; urgency=medium
 
   * New release



[5/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.X

2016-10-31 Thread mshuler
Merge branch 'cassandra-3.0' into cassandra-3.X


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ccac7efe
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ccac7efe
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ccac7efe

Branch: refs/heads/cassandra-3.X
Commit: ccac7efe99f31d286a489844ba72e6f56f139f9d
Parents: a3828ca 817ba03
Author: Michael Shuler 
Authored: Mon Oct 31 10:34:56 2016 -0500
Committer: Michael Shuler 
Committed: Mon Oct 31 10:34:56 2016 -0500

--

--




[4/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.X

2016-10-31 Thread mshuler
Merge branch 'cassandra-3.0' into cassandra-3.X


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ccac7efe
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ccac7efe
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ccac7efe

Branch: refs/heads/trunk
Commit: ccac7efe99f31d286a489844ba72e6f56f139f9d
Parents: a3828ca 817ba03
Author: Michael Shuler 
Authored: Mon Oct 31 10:34:56 2016 -0500
Committer: Michael Shuler 
Committed: Mon Oct 31 10:34:56 2016 -0500

--

--




[1/6] cassandra git commit: Release 3.0.10

2016-10-31 Thread mshuler
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 d38a732ce -> 817ba0387
  refs/heads/cassandra-3.X a3828ca8b -> ccac7efe9
  refs/heads/trunk e84a8f391 -> 8abb632c4


Release 3.0.10


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/817ba038
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/817ba038
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/817ba038

Branch: refs/heads/cassandra-3.0
Commit: 817ba038783212b716f6981b26c8348ffdc92f59
Parents: d38a732
Author: Michael Shuler 
Authored: Mon Oct 31 10:34:24 2016 -0500
Committer: Michael Shuler 
Committed: Mon Oct 31 10:34:24 2016 -0500

--
 debian/changelog | 6 ++
 1 file changed, 6 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/817ba038/debian/changelog
--
diff --git a/debian/changelog b/debian/changelog
index d54d59a..96ebd42 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,9 @@
+cassandra (3.0.10) unstable; urgency=medium
+
+  * New release
+
+ -- Michael Shuler   Mon, 31 Oct 2016 10:33:44 -0500
+
 cassandra (3.0.9) unstable; urgency=medium
 
   * New release



[6/6] cassandra git commit: Merge branch 'cassandra-3.X' into trunk

2016-10-31 Thread mshuler
Merge branch 'cassandra-3.X' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8abb632c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8abb632c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8abb632c

Branch: refs/heads/trunk
Commit: 8abb632c4676af5042b259341b89d1048e38724d
Parents: e84a8f3 ccac7ef
Author: Michael Shuler 
Authored: Mon Oct 31 10:35:33 2016 -0500
Committer: Michael Shuler 
Committed: Mon Oct 31 10:35:33 2016 -0500

--

--




[jira] [Updated] (CASSANDRA-12744) Randomness of stress distributions is not good

2016-10-31 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-12744:
---
Fix Version/s: (was: 3.0.10)
   3.0.x

> Randomness of stress distributions is not good
> --
>
> Key: CASSANDRA-12744
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12744
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: T Jake Luciani
>Assignee: T Jake Luciani
>Priority: Minor
>  Labels: stress
> Fix For: 3.0.x
>
>
> The randomness of our distributions is pretty bad.  We are using 
> JDKRandomGenerator(), but in testing uniform(1..3) we see that for 100 
> iterations it only outputs 3.  If you bump it to 10k it hits all 3 
> values. 
> I made a change to just use the default commons math random generator and now 
> see all 3 values for n=10
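The failure mode described above can be reproduced in miniature. The sketch below is hypothetical (it uses Python's stdlib RNG, not commons-math or JDKRandomGenerator, and the real root cause in cassandra-stress may differ): it shows one common way a uniform(1..3) sampler degenerates to a single value, namely constructing a freshly seeded generator per sample instead of reusing one.

```python
import random

def bad_uniform(lo, hi):
    # Anti-pattern: a fresh generator with a constant seed on every call,
    # so every "sample" is the identical first draw of the same sequence.
    rng = random.Random(12345)
    return rng.randint(lo, hi)

shared_rng = random.Random(42)  # correct: one generator, reused

bad = {bad_uniform(1, 3) for _ in range(100)}
good = {shared_rng.randint(1, 3) for _ in range(100)}

assert len(bad) == 1       # degenerate: only one value ever observed
assert good == {1, 2, 3}   # healthy: all three values appear in 100 draws
```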



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12443) Remove alter type support

2016-10-31 Thread Carl Yeksigian (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15622395#comment-15622395
 ] 

Carl Yeksigian commented on CASSANDRA-12443:


[~iamaleksey]: Yup, good catch; I've removed that as well.

[~blerer]: I updated the 3.x/trunk branches and pushed again. Just kicked off 
new CI tests, will update once they are done.

> Remove alter type support
> -
>
> Key: CASSANDRA-12443
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12443
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Carl Yeksigian
>Assignee: Carl Yeksigian
> Fix For: 3.0.x
>
>
> Currently, we allow altering of types. However, because we no longer store 
> the length for all types, switching from a fixed-width to a 
> variable-width type causes issues: commitlog playback breaking startup, 
> queries currently in flight getting back bad results, and special casing 
> required to handle the changes. In addition, this would solve 
> CASSANDRA-10309, as there is no possibility of the types changing while an 
> SSTableReader is open.
> For fixed-length, compatible types, the alter also doesn't add much over a 
> cast, so users could use a cast in order to retrieve the altered type.
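The fixed-width vs variable-width hazard above can be sketched abstractly. This is an illustrative toy, not Cassandra's actual serialization code: bytes written under a fixed-width 4-byte encoding are silently mis-parsed when reinterpreted under an assumed length-prefixed variable-width convention.

```python
import struct

def write_fixed_int(v):
    # Old on-disk format: fixed-width big-endian 4-byte int.
    return struct.pack(">i", v)

def read_varwidth(buf):
    # Hypothetical new format after the type change: 1-byte length prefix,
    # then that many payload bytes.
    n = buf[0]
    return int.from_bytes(buf[1:1 + n], "big")

blob = write_fixed_int(7)          # b'\x00\x00\x00\x07' on disk
# The leading 0x00 is read as "length 0", so the value decodes as 0, not 7:
assert read_varwidth(blob) == 0
# Data actually written in the new format decodes fine:
assert read_varwidth(bytes([2, 0x01, 0x02])) == 0x0102
```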





[1/3] cassandra git commit: Release 3.10

2016-10-31 Thread mshuler
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.X e0adc166a -> a3828ca8b
  refs/heads/trunk b4068ef00 -> e84a8f391


Release 3.10


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a3828ca8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a3828ca8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a3828ca8

Branch: refs/heads/cassandra-3.X
Commit: a3828ca8b755fc98799867baf07039f7ff53be05
Parents: e0adc16
Author: Michael Shuler 
Authored: Mon Oct 31 08:55:51 2016 -0500
Committer: Michael Shuler 
Committed: Mon Oct 31 08:55:51 2016 -0500

--
 debian/changelog | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a3828ca8/debian/changelog
--
diff --git a/debian/changelog b/debian/changelog
index 19bf308..5756188 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,8 +1,8 @@
-cassandra (3.10) UNRELEASED; urgency=medium
+cassandra (3.10) unstable; urgency=medium
 
   * New release
 
- -- Michael Shuler   Mon, 26 Sep 2016 09:07:34 -0500
+ -- Michael Shuler   Mon, 31 Oct 2016 08:54:45 -0500
 
 cassandra (3.8) unstable; urgency=medium
 



[jira] [Updated] (CASSANDRA-12531) dtest failure in read_failures_test.TestReadFailures.test_tombstone_failure_v3

2016-10-31 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-12531:

Status: Patch Available  (was: In Progress)

Since they were added for CASSANDRA-12311, these tests have never passed on 
2.2; the first run on Jenkins after they were committed is 
[here|http://cassci.datastax.com/view/cassandra-2.2/job/cassandra-2.2_dtest/682/].

The reason for the failure is that whilst in 3.0+ the tombstone count is 
per-query, in earlier versions the count is per-partition. The data model in 
the test is the classic skinny row model, with no clustering columns, so the 
tombstone-per-partition count never gets above 1 and the warning threshold is 
never breached. I've opened [a PR to fix the 
test|https://github.com/riptano/cassandra-dtest/pull/1377]. 
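The per-partition vs per-query distinction can be sketched as follows. Function and parameter names here are invented for illustration; this is not dtest or Cassandra code, just the counting behaviour described above:

```python
def tombstone_threshold_breached(num_partitions, tombstones_per_partition,
                                 threshold, per_query):
    """Model of the two counting modes described in the comment."""
    if per_query:
        # 3.0+ behaviour: one counter accumulated across the whole query.
        return num_partitions * tombstones_per_partition > threshold
    # Pre-3.0 behaviour: the counter resets for every partition scanned.
    return tombstones_per_partition > threshold

# Skinny-row model: 1 tombstone in each of 1000 partitions, threshold 100.
assert tombstone_threshold_breached(1000, 1, 100, per_query=True)       # 3.0+ trips
assert not tombstone_threshold_breached(1000, 1, 100, per_query=False)  # 2.2 never trips
```

This is why the 2.2 test must use a data model that puts many tombstones into a single partition before the threshold can be breached.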


> dtest failure in read_failures_test.TestReadFailures.test_tombstone_failure_v3
> --
>
> Key: CASSANDRA-12531
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12531
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Craig Kodman
>Assignee: Sam Tunnicliffe
>  Labels: dtest
> Fix For: 2.2.x
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_dtest/682/testReport/read_failures_test/TestReadFailures/test_tombstone_failure_v3
> http://cassci.datastax.com/job/cassandra-2.2_dtest/682/testReport/read_failures_test/TestReadFailures/test_tombstone_failure_v4
> {code}
> Error Message
> ReadTimeout not raised
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /tmp/dtest-swJYMH
> dtest: DEBUG: Done setting configuration options:
> {   'initial_token': None,
> 'num_tokens': '32',
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> - >> end captured logging << -
> {code}
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/tools.py", line 290, in wrapped
> f(obj)
>   File "/home/automaton/cassandra-dtest/read_failures_test.py", line 90, in 
> test_tombstone_failure_v3
> self._perform_cql_statement(session, "SELECT value FROM tombstonefailure")
>   File "/home/automaton/cassandra-dtest/read_failures_test.py", line 63, in 
> _perform_cql_statement
> session.execute(statement)
>   File "/usr/lib/python2.7/unittest/case.py", line 116, in __exit__
> "{0} not raised".format(exc_name))
> "ReadTimeout not raised\n >> begin captured logging << 
> \ndtest: DEBUG: cluster ccm directory: 
> /tmp/dtest-swJYMH\ndtest: DEBUG: Done setting configuration options:\n{   
> 'initial_token': None,\n'num_tokens': '32',\n'phi_convict_threshold': 
> 5,\n'range_request_timeout_in_ms': 1,\n
> 'read_request_timeout_in_ms': 1,\n'request_timeout_in_ms': 1,\n   
>  'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
> 1}\n- >> end captured logging << 
> -"
> {code}





[jira] [Updated] (CASSANDRA-12866) dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_x.bug_5732_test

2016-10-31 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-12866:

Assignee: (was: DS Test Eng)

> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_x.bug_5732_test
> --
>
> Key: CASSANDRA-12866
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12866
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>  Labels: dtest, test-failure
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.X_dtest_upgrade/17/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_x/bug_5732_test
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 358, in run
> self.tearDown()
>   File "/home/automaton/cassandra-dtest/upgrade_tests/upgrade_base.py", line 
> 214, in tearDown
> super(UpgradeTester, self).tearDown()
>   File "/home/automaton/cassandra-dtest/dtest.py", line 581, in tearDown
> raise AssertionError('Unexpected error in log, see stdout')
> {code}{code}
> Standard Output
> http://git-wip-us.apache.org/repos/asf/cassandra.git 
> git:0a1f1c81e641039ca9fd573d5217b6b6f2ad8fb8
> Unexpected error in node1 log, error: 
> ERROR [Reference-Reaper:1] 2016-10-30 15:47:36,749 Ref.java:199 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@4f5697fa) to class 
> org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Cleanup@1100050528:/tmp/dtest-A15hEO/test/node1/data2/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-1-Data.db
>  was not released before the reference was garbage collected
> Unexpected error in node1 log, error: 
> ERROR [Reference-Reaper:1] 2016-10-30 15:47:36,750 Ref.java:199 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@45aefc8a) to class 
> org.apache.cassandra.utils.concurrent.WrappedSharedCloseable$1@11303515:[[OffHeapBitSet]]
>  was not released before the reference was garbage collected
> Unexpected error in node1 log, error: 
> ERROR [Reference-Reaper:1] 2016-10-30 15:47:36,750 Ref.java:199 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@7b3ed4f3) to class 
> org.apache.cassandra.io.util.MmappedSegmentedFile$Cleanup@837204356:/tmp/dtest-A15hEO/test/node1/data2/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-1-Index.db
>  was not released before the reference was garbage collected
> Unexpected error in node1 log, error: 
> ERROR [Reference-Reaper:1] 2016-10-30 15:47:36,752 Ref.java:199 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@39e499e) to class 
> org.apache.cassandra.io.sstable.SSTableReader$DescriptorTypeTidy@1619232020:/tmp/dtest-A15hEO/test/node1/data2/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-1
>  was not released before the reference was garbage collected
> Unexpected error in node1 log, error: 
> ERROR [Reference-Reaper:1] 2016-10-30 15:47:36,752 Ref.java:199 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@6d974cbb) to class 
> org.apache.cassandra.utils.concurrent.WrappedSharedCloseable$1@1765405204:[Memory@[0..4),
>  Memory@[0..e)] was not released before the reference was garbage collected
> {code}





[cassandra] Git Push Summary

2016-10-31 Thread mshuler
Repository: cassandra
Updated Tags:  refs/tags/3.10-tentative [created] a3828ca8b


[3/3] cassandra git commit: Merge branch 'cassandra-3.X' into trunk

2016-10-31 Thread mshuler
Merge branch 'cassandra-3.X' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e84a8f39
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e84a8f39
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e84a8f39

Branch: refs/heads/trunk
Commit: e84a8f3918c53b9f469b73ebff35de9092212341
Parents: b4068ef a3828ca
Author: Michael Shuler 
Authored: Mon Oct 31 08:58:07 2016 -0500
Committer: Michael Shuler 
Committed: Mon Oct 31 08:58:07 2016 -0500

--

--




[jira] [Updated] (CASSANDRA-12838) Extend native protocol flags and add supported versions to the SUPPORTED response

2016-10-31 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania updated CASSANDRA-12838:
-
   Resolution: Fixed
Fix Version/s: (was: 3.x)
   3.10
   Status: Resolved  (was: Ready to Commit)

Committed to 3.X as e0adc166a33033c9d2668547803a1e034c2c2494 and merged into 
trunk.

> Extend native protocol flags and add supported versions to the SUPPORTED 
> response
> -
>
> Key: CASSANDRA-12838
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12838
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: CQL
>Reporter: Stefania
>Assignee: Stefania
>  Labels: client-impacting
> Fix For: 3.10
>
>
> We already use 7 bits for the flags of the QUERY message, and since they are 
> encoded with a fixed size byte, we may be forced to change the structure of 
> the message soon, and I'd like to do this in version 5 but without wasting 
> bytes on the wire. Therefore, I propose to convert fixed flag's bytes to 
> unsigned vints, as defined in CASSANDRA-9499. The only exception would be the 
> flags in the frame, which should stay as fixed size.
> Up to 7 bits, vints are encoded the same as bytes are, so no immediate change 
> would be required in the drivers, although they should plan to support vint 
> flags if supporting version 5. Moving forward, when a new flag is required 
> for the QUERY message, and eventually when other flags reach 8 bits in other 
> messages too, the flag's bitmaps would be automatically encoded with a size 
> that is big enough to accommodate all flags, but no bigger than required. We 
> can currently support up to 8 bytes with unsigned vints.
> The downside is that drivers need to implement unsigned vint encoding for 
> version 5, but this is already required by CASSANDRA-11873, and will most 
> likely be required by CASSANDRA-11622 as well.
> I would also like to add the list of versions to the SUPPORTED message, in 
> order to simplify the handshake for drivers that prefer to send an OPTION 
> message, rather than rely on receiving an error for an unsupported version in 
> the STARTUP message. Said error should also contain the full list of 
> supported versions, not just the min and max, for clarity, and because the 
> latest version is now a beta version.
> Finally, we currently store versions as integer constants in {{Server.java}}, 
> and we still have a fair bit of hard-coded numbers in the code, especially in 
> tests. I plan to clean this up by introducing a {{ProtocolVersion}} enum.
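The key compatibility property claimed above (values of up to 7 bits encode identically to a plain fixed byte) can be demonstrated with a generic vint encoder. The sketch below uses a LEB128-style scheme as an assumption; Cassandra's actual CASSANDRA-9499 encoding differs for multi-byte values, but the single-byte case (values below 0x80) is byte-identical in both:

```python
def encode_unsigned_vint(value):
    """LEB128-style unsigned vint: 7 payload bits per byte, high bit set
    on every byte except the last (continuation flag)."""
    out = bytearray()
    while True:
        b = value & 0x7F
        value >>= 7
        if value:
            out.append(b | 0x80)   # more bytes follow
        else:
            out.append(b)          # final byte
            return bytes(out)

# With 7 or fewer flag bits, the vint is indistinguishable from a fixed byte,
# so existing drivers keep working unchanged:
assert encode_unsigned_vint(0x5A) == bytes([0x5A])
# An 8th flag bit is the point at which the encoding grows to two bytes:
assert len(encode_unsigned_vint(0xFF)) == 2
```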





[2/3] cassandra git commit: Release 3.10

2016-10-31 Thread mshuler
Release 3.10


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a3828ca8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a3828ca8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a3828ca8

Branch: refs/heads/trunk
Commit: a3828ca8b755fc98799867baf07039f7ff53be05
Parents: e0adc16
Author: Michael Shuler 
Authored: Mon Oct 31 08:55:51 2016 -0500
Committer: Michael Shuler 
Committed: Mon Oct 31 08:55:51 2016 -0500

--
 debian/changelog | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a3828ca8/debian/changelog
--
diff --git a/debian/changelog b/debian/changelog
index 19bf308..5756188 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,8 +1,8 @@
-cassandra (3.10) UNRELEASED; urgency=medium
+cassandra (3.10) unstable; urgency=medium
 
   * New release
 
- -- Michael Shuler   Mon, 26 Sep 2016 09:07:34 -0500
+ -- Michael Shuler   Mon, 31 Oct 2016 08:54:45 -0500
 
 cassandra (3.8) unstable; urgency=medium
 



[jira] [Updated] (CASSANDRA-12866) dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_x.bug_5732_test

2016-10-31 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-12866:

Issue Type: Bug  (was: Test)

> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_x.bug_5732_test
> --
>
> Key: CASSANDRA-12866
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12866
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>  Labels: dtest, test-failure
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.X_dtest_upgrade/17/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_x/bug_5732_test
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 358, in run
> self.tearDown()
>   File "/home/automaton/cassandra-dtest/upgrade_tests/upgrade_base.py", line 
> 214, in tearDown
> super(UpgradeTester, self).tearDown()
>   File "/home/automaton/cassandra-dtest/dtest.py", line 581, in tearDown
> raise AssertionError('Unexpected error in log, see stdout')
> {code}{code}
> Standard Output
> http://git-wip-us.apache.org/repos/asf/cassandra.git 
> git:0a1f1c81e641039ca9fd573d5217b6b6f2ad8fb8
> Unexpected error in node1 log, error: 
> ERROR [Reference-Reaper:1] 2016-10-30 15:47:36,749 Ref.java:199 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@4f5697fa) to class 
> org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Cleanup@1100050528:/tmp/dtest-A15hEO/test/node1/data2/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-1-Data.db
>  was not released before the reference was garbage collected
> Unexpected error in node1 log, error: 
> ERROR [Reference-Reaper:1] 2016-10-30 15:47:36,750 Ref.java:199 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@45aefc8a) to class 
> org.apache.cassandra.utils.concurrent.WrappedSharedCloseable$1@11303515:[[OffHeapBitSet]]
>  was not released before the reference was garbage collected
> Unexpected error in node1 log, error: 
> ERROR [Reference-Reaper:1] 2016-10-30 15:47:36,750 Ref.java:199 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@7b3ed4f3) to class 
> org.apache.cassandra.io.util.MmappedSegmentedFile$Cleanup@837204356:/tmp/dtest-A15hEO/test/node1/data2/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-1-Index.db
>  was not released before the reference was garbage collected
> Unexpected error in node1 log, error: 
> ERROR [Reference-Reaper:1] 2016-10-30 15:47:36,752 Ref.java:199 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@39e499e) to class 
> org.apache.cassandra.io.sstable.SSTableReader$DescriptorTypeTidy@1619232020:/tmp/dtest-A15hEO/test/node1/data2/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-1
>  was not released before the reference was garbage collected
> Unexpected error in node1 log, error: 
> ERROR [Reference-Reaper:1] 2016-10-30 15:47:36,752 Ref.java:199 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@6d974cbb) to class 
> org.apache.cassandra.utils.concurrent.WrappedSharedCloseable$1@1765405204:[Memory@[0..4),
>  Memory@[0..e)] was not released before the reference was garbage collected
> {code}





[jira] [Commented] (CASSANDRA-12866) dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_x.bug_5732_test

2016-10-31 Thread Sean McCarthy (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15622197#comment-15622197
 ] 

Sean McCarthy commented on CASSANDRA-12866:
---

Seems like this is still failing in 3.x

> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_x.bug_5732_test
> --
>
> Key: CASSANDRA-12866
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12866
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: DS Test Eng
>  Labels: dtest, test-failure
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.X_dtest_upgrade/17/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_x/bug_5732_test
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 358, in run
> self.tearDown()
>   File "/home/automaton/cassandra-dtest/upgrade_tests/upgrade_base.py", line 
> 214, in tearDown
> super(UpgradeTester, self).tearDown()
>   File "/home/automaton/cassandra-dtest/dtest.py", line 581, in tearDown
> raise AssertionError('Unexpected error in log, see stdout')
> {code}{code}
> Standard Output
> http://git-wip-us.apache.org/repos/asf/cassandra.git 
> git:0a1f1c81e641039ca9fd573d5217b6b6f2ad8fb8
> Unexpected error in node1 log, error: 
> ERROR [Reference-Reaper:1] 2016-10-30 15:47:36,749 Ref.java:199 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@4f5697fa) to class 
> org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Cleanup@1100050528:/tmp/dtest-A15hEO/test/node1/data2/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-1-Data.db
>  was not released before the reference was garbage collected
> Unexpected error in node1 log, error: 
> ERROR [Reference-Reaper:1] 2016-10-30 15:47:36,750 Ref.java:199 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@45aefc8a) to class 
> org.apache.cassandra.utils.concurrent.WrappedSharedCloseable$1@11303515:[[OffHeapBitSet]]
>  was not released before the reference was garbage collected
> Unexpected error in node1 log, error: 
> ERROR [Reference-Reaper:1] 2016-10-30 15:47:36,750 Ref.java:199 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@7b3ed4f3) to class 
> org.apache.cassandra.io.util.MmappedSegmentedFile$Cleanup@837204356:/tmp/dtest-A15hEO/test/node1/data2/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-1-Index.db
>  was not released before the reference was garbage collected
> Unexpected error in node1 log, error: 
> ERROR [Reference-Reaper:1] 2016-10-30 15:47:36,752 Ref.java:199 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@39e499e) to class 
> org.apache.cassandra.io.sstable.SSTableReader$DescriptorTypeTidy@1619232020:/tmp/dtest-A15hEO/test/node1/data2/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-1
>  was not released before the reference was garbage collected
> Unexpected error in node1 log, error: 
> ERROR [Reference-Reaper:1] 2016-10-30 15:47:36,752 Ref.java:199 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@6d974cbb) to class 
> org.apache.cassandra.utils.concurrent.WrappedSharedCloseable$1@1765405204:[Memory@[0..4),
>  Memory@[0..e)] was not released before the reference was garbage collected
> {code}





[jira] [Commented] (CASSANDRA-12281) Gossip blocks on startup when another node is bootstrapping

2016-10-31 Thread Andy Peckys (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15622189#comment-15622189
 ] 

Andy Peckys commented on CASSANDRA-12281:
-

It's the same for me, only 1 keyspace

> Gossip blocks on startup when another node is bootstrapping
> ---
>
> Key: CASSANDRA-12281
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12281
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Eric Evans
>Assignee: Stefan Podkowinski
> Attachments: restbase1015-a_jstack.txt
>
>
> In our cluster, normal node startup times (after a drain on shutdown) are 
> less than 1 minute.  However, when another node in the cluster is 
> bootstrapping, the same node startup takes nearly 30 minutes to complete, the 
> apparent result of gossip blocking on pending range calculations.
> {noformat}
> $ nodetool-a tpstats
> Pool Name               Active   Pending  Completed   Blocked  All time blocked
> MutationStage                0         0       1840         0                 0
> ReadStage                    0         0       2350         0                 0
> RequestResponseStage         0         0         53         0                 0
> ReadRepairStage              0         0          1         0                 0
> CounterMutationStage         0         0          0         0                 0
> HintedHandoff                0         0         44         0                 0
> MiscStage                    0         0          0         0                 0
> CompactionExecutor           3              3395         0                 0
> MemtableReclaimMemory        0         0         30         0                 0
> PendingRangeCalculator       1         2         29         0                 0
> GossipStage                  1           5602164         0                 0
> MigrationStage               0         0          0         0                 0
> MemtablePostFlush            0           0111              0                 0
> ValidationExecutor           0         0          0         0                 0
> Sampler                      0         0          0         0                 0
> MemtableFlushWriter          0         0         30         0                 0
> InternalResponseStage        0         0          0         0                 0
> AntiEntropyStage             0         0          0         0                 0
> CacheCleanupExecutor         0         0          0         0                 0
> Message type   Dropped
> READ 0
> RANGE_SLICE  0
> _TRACE   0
> MUTATION 0
> COUNTER_MUTATION 0
> REQUEST_RESPONSE 0
> PAGED_RANGE  0
> READ_REPAIR  0
> {noformat}
> A full thread dump is attached, but the relevant bit seems to be here:
> {noformat}
> [ ... ]
> "GossipStage:1" #1801 daemon prio=5 os_prio=0 tid=0x7fe4cd54b000 
> nid=0xea9 waiting on condition [0x7fddcf883000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x0004c1e922c0> (a 
> java.util.concurrent.locks.ReentrantReadWriteLock$FairSync)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199)
>   at 
> java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock.lock(ReentrantReadWriteLock.java:943)
>   at 
> org.apache.cassandra.locator.TokenMetadata.updateNormalTokens(TokenMetadata.java:174)
>   at 
> org.apache.cassandra.locator.TokenMetadata.updateNormalTokens(TokenMetadata.java:160)
>   at 
> org.apache.cassandra.service.StorageService.handleStateNormal(StorageService.java:2023)
>   at 
> org.apache.cassandra.service.StorageService.onChange(StorageService.java:1682)
>   at 
> org.apache.cassandra.gms.Gossiper.doOnChangeNotifications(Gossiper.java:1182)
>   at org.apache.cassandra.gms.Gossiper.applyNewStates(Gossiper.java:1165)
>   at 
> org.apache.cassandra.gms.Gossiper.applyStateLocally(Gossiper.java:1128)
>   at 
> org.apache.cassandra.gms.GossipDigestAckVerbHandler.doVerb(Gossip
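The parked GossipStage thread in the dump above is waiting on TokenMetadata's fair ReentrantReadWriteLock while the pending-range calculation holds it. A minimal sketch of that shape of contention, with the RW lock modelled as a plain `threading.Lock` and all names invented to mirror the stack frames (this is not Cassandra code):

```python
import threading
import time

token_metadata_lock = threading.Lock()  # stand-in for TokenMetadata's write lock
events = []

def pending_range_calculation():
    with token_metadata_lock:
        events.append("calc start")
        time.sleep(0.2)                 # long-running work while holding the lock
        events.append("calc done")

def gossip_handle_state_normal():
    time.sleep(0.05)                    # gossip arrives mid-calculation
    with token_metadata_lock:           # parks here, like the GossipStage thread
        events.append("gossip updateNormalTokens")

calc = threading.Thread(target=pending_range_calculation)
gossip = threading.Thread(target=gossip_handle_state_normal)
calc.start(); gossip.start()
calc.join(); gossip.join()

# Gossip processing is serialized behind the calculation:
assert events == ["calc start", "calc done", "gossip updateNormalTokens"]
```

Scaled up to a 30-minute pending-range calculation, this serialization is consistent with the slow startups reported above.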

[jira] [Created] (CASSANDRA-12866) dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_x.bug_5732_test

2016-10-31 Thread Sean McCarthy (JIRA)
Sean McCarthy created CASSANDRA-12866:
-

 Summary: dtest failure in 
upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_x.bug_5732_test
 Key: CASSANDRA-12866
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12866
 Project: Cassandra
  Issue Type: Test
Reporter: Sean McCarthy
Assignee: DS Test Eng


example failure:

http://cassci.datastax.com/job/cassandra-3.X_dtest_upgrade/17/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_x/bug_5732_test

{code}
Stacktrace

  File "/usr/lib/python2.7/unittest/case.py", line 358, in run
self.tearDown()
  File "/home/automaton/cassandra-dtest/upgrade_tests/upgrade_base.py", line 
214, in tearDown
super(UpgradeTester, self).tearDown()
  File "/home/automaton/cassandra-dtest/dtest.py", line 581, in tearDown
raise AssertionError('Unexpected error in log, see stdout')
{code}{code}
Standard Output

http://git-wip-us.apache.org/repos/asf/cassandra.git 
git:0a1f1c81e641039ca9fd573d5217b6b6f2ad8fb8
Unexpected error in node1 log, error: 
ERROR [Reference-Reaper:1] 2016-10-30 15:47:36,749 Ref.java:199 - LEAK 
DETECTED: a reference 
(org.apache.cassandra.utils.concurrent.Ref$State@4f5697fa) to class 
org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Cleanup@1100050528:/tmp/dtest-A15hEO/test/node1/data2/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-1-Data.db
 was not released before the reference was garbage collected
Unexpected error in node1 log, error: 
ERROR [Reference-Reaper:1] 2016-10-30 15:47:36,750 Ref.java:199 - LEAK 
DETECTED: a reference 
(org.apache.cassandra.utils.concurrent.Ref$State@45aefc8a) to class 
org.apache.cassandra.utils.concurrent.WrappedSharedCloseable$1@11303515:[[OffHeapBitSet]]
 was not released before the reference was garbage collected
Unexpected error in node1 log, error: 
ERROR [Reference-Reaper:1] 2016-10-30 15:47:36,750 Ref.java:199 - LEAK 
DETECTED: a reference 
(org.apache.cassandra.utils.concurrent.Ref$State@7b3ed4f3) to class 
org.apache.cassandra.io.util.MmappedSegmentedFile$Cleanup@837204356:/tmp/dtest-A15hEO/test/node1/data2/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-1-Index.db
 was not released before the reference was garbage collected
Unexpected error in node1 log, error: 
ERROR [Reference-Reaper:1] 2016-10-30 15:47:36,752 Ref.java:199 - LEAK 
DETECTED: a reference (org.apache.cassandra.utils.concurrent.Ref$State@39e499e) 
to class 
org.apache.cassandra.io.sstable.SSTableReader$DescriptorTypeTidy@1619232020:/tmp/dtest-A15hEO/test/node1/data2/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-1
 was not released before the reference was garbage collected
Unexpected error in node1 log, error: 
ERROR [Reference-Reaper:1] 2016-10-30 15:47:36,752 Ref.java:199 - LEAK 
DETECTED: a reference 
(org.apache.cassandra.utils.concurrent.Ref$State@6d974cbb) to class 
org.apache.cassandra.utils.concurrent.WrappedSharedCloseable$1@1765405204:[Memory@[0..4),
 Memory@[0..e)] was not released before the reference was garbage collected
{code}





[05/11] cassandra git commit: Extend native protocol request flags, add versions to SUPPORTED, and introduce ProtocolVersion enum

2016-10-31 Thread stefania
Extend native protocol request flags, add versions to SUPPORTED, and introduce 
ProtocolVersion enum

patch by Stefania Alborghetti; reviewed by Benjamin Lerer for CASSANDRA-12838


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e0adc166
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e0adc166
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e0adc166

Branch: refs/heads/cassandra-3.X
Commit: e0adc166a33033c9d2668547803a1e034c2c2494
Parents: 0a1f1c8
Author: Stefania Alborghetti 
Authored: Tue Oct 25 16:01:40 2016 +0800
Committer: Stefania Alborghetti 
Committed: Mon Oct 31 21:14:42 2016 +0800

--
 CHANGES.txt |   1 +
 doc/native_protocol_v5.spec |  13 +-
 ...driver-internal-only-3.7.0.post0-2481531.zip | Bin 0 -> 252057 bytes
 ...driver-internal-only-3.7.0.post0-70f41b5.zip | Bin 252036 -> 0 bytes
 .../org/apache/cassandra/cql3/CQL3Type.java |  20 +--
 .../apache/cassandra/cql3/ColumnCondition.java  |  14 +-
 .../org/apache/cassandra/cql3/Constants.java|   3 +-
 src/java/org/apache/cassandra/cql3/Lists.java   |   8 +-
 src/java/org/apache/cassandra/cql3/Maps.java|   8 +-
 .../org/apache/cassandra/cql3/QueryOptions.java |  46 +++---
 .../apache/cassandra/cql3/QueryProcessor.java   |   5 +-
 .../org/apache/cassandra/cql3/ResultSet.java|  61 ++--
 src/java/org/apache/cassandra/cql3/Sets.java|   8 +-
 src/java/org/apache/cassandra/cql3/Term.java|   3 +-
 src/java/org/apache/cassandra/cql3/Tuples.java  |   5 +-
 .../apache/cassandra/cql3/UntypedResultSet.java |   4 +-
 .../org/apache/cassandra/cql3/UserTypes.java|   3 +-
 .../cassandra/cql3/functions/AggregateFcts.java |  81 +-
 .../cql3/functions/AggregateFunction.java   |   8 +-
 .../cql3/functions/BytesConversionFcts.java |   9 +-
 .../cassandra/cql3/functions/CastFcts.java  |   8 +-
 .../cassandra/cql3/functions/FromJsonFct.java   |   3 +-
 .../cassandra/cql3/functions/FunctionCall.java  |   5 +-
 .../cql3/functions/JavaBasedUDFunction.java |   5 +-
 .../cassandra/cql3/functions/JavaUDF.java   |  23 +--
 .../cql3/functions/ScalarFunction.java  |   3 +-
 .../cql3/functions/ScriptBasedUDFunction.java   |   7 +-
 .../cassandra/cql3/functions/TimeFcts.java  |  25 +--
 .../cassandra/cql3/functions/ToJsonFct.java |   3 +-
 .../cassandra/cql3/functions/TokenFct.java  |   3 +-
 .../cassandra/cql3/functions/UDAggregate.java   |   5 +-
 .../cql3/functions/UDFByteCodeVerifier.java |   8 +-
 .../cassandra/cql3/functions/UDFunction.java|  28 ++--
 .../cassandra/cql3/functions/UDHelper.java  |  15 +-
 .../cassandra/cql3/functions/UuidFcts.java  |   3 +-
 .../selection/AggregateFunctionSelector.java|   5 +-
 .../cassandra/cql3/selection/FieldSelector.java |   5 +-
 .../cql3/selection/ScalarFunctionSelector.java  |   5 +-
 .../cassandra/cql3/selection/Selection.java |  18 ++-
 .../cassandra/cql3/selection/Selector.java  |   5 +-
 .../cql3/selection/SimpleSelector.java  |   5 +-
 .../cassandra/cql3/selection/TermSelector.java  |   5 +-
 .../cql3/selection/WritetimeOrTTLSelector.java  |   5 +-
 .../statements/CreateAggregateStatement.java|   4 +-
 .../cql3/statements/SelectStatement.java|   5 +-
 .../cassandra/db/PartitionRangeReadCommand.java |   3 +-
 src/java/org/apache/cassandra/db/ReadQuery.java |   5 +-
 .../db/SinglePartitionReadCommand.java  |   7 +-
 .../org/apache/cassandra/db/SystemKeyspace.java |   4 +-
 .../db/marshal/AbstractCompositeType.java   |   3 +-
 .../cassandra/db/marshal/AbstractType.java  |   3 +-
 .../apache/cassandra/db/marshal/AsciiType.java  |   3 +-
 .../cassandra/db/marshal/BooleanType.java   |   3 +-
 .../apache/cassandra/db/marshal/ByteType.java   |   3 +-
 .../apache/cassandra/db/marshal/BytesType.java  |   3 +-
 .../cassandra/db/marshal/CollectionType.java|   3 +-
 .../db/marshal/ColumnToCollectionType.java  |   3 +-
 .../cassandra/db/marshal/CounterColumnType.java |   3 +-
 .../apache/cassandra/db/marshal/DateType.java   |   3 +-
 .../cassandra/db/marshal/DecimalType.java   |   3 +-
 .../apache/cassandra/db/marshal/DoubleType.java |   3 +-
 .../cassandra/db/marshal/DurationType.java  |   3 +-
 .../db/marshal/DynamicCompositeType.java|   3 +-
 .../apache/cassandra/db/marshal/FloatType.java  |   3 +-
 .../apache/cassandra/db/marshal/FrozenType.java |   3 +-
 .../cassandra/db/marshal/InetAddressType.java   |   3 +-
 .../apache/cassandra/db/marshal/Int32Type.java  |   3 +-
 .../cassandra/db/marshal/IntegerType.java   |   3 +-
 .../apache/cassandra/db/marshal/ListType.java   |  13 +-
 .../apache/cassandra/db/marshal/LongType.java   |   3 +-
 .../apache/cassandra/db/marshal/MapType.java|   6 +-
 .../db/marshal/PartitionerDefinedOrder.java |   3

[08/11] cassandra git commit: Extend native protocol request flags, add versions to SUPPORTED, and introduce ProtocolVersion enum

2016-10-31 Thread stefania
http://git-wip-us.apache.org/repos/asf/cassandra/blob/e0adc166/src/java/org/apache/cassandra/db/marshal/UserType.java
--
diff --git a/src/java/org/apache/cassandra/db/marshal/UserType.java 
b/src/java/org/apache/cassandra/db/marshal/UserType.java
index cd181cc..176ab84 100644
--- a/src/java/org/apache/cassandra/db/marshal/UserType.java
+++ b/src/java/org/apache/cassandra/db/marshal/UserType.java
@@ -29,6 +29,7 @@ import org.apache.cassandra.db.rows.CellPath;
 import org.apache.cassandra.exceptions.ConfigurationException;
 import org.apache.cassandra.exceptions.SyntaxException;
 import org.apache.cassandra.serializers.MarshalException;
+import org.apache.cassandra.transport.ProtocolVersion;
 import org.apache.cassandra.utils.ByteBufferUtil;
 import org.apache.cassandra.utils.Pair;
 import org.slf4j.Logger;
@@ -143,7 +144,7 @@ public class UserType extends TupleType
 return ShortType.instance;
 }
 
-    public ByteBuffer serializeForNativeProtocol(Iterator<Cell> cells, int protocolVersion)
+    public ByteBuffer serializeForNativeProtocol(Iterator<Cell> cells, ProtocolVersion protocolVersion)
 {
 assert isMultiCell;
 
@@ -249,7 +250,7 @@ public class UserType extends TupleType
 }
 
 @Override
-    public String toJSONString(ByteBuffer buffer, int protocolVersion)
+    public String toJSONString(ByteBuffer buffer, ProtocolVersion protocolVersion)
 {
 ByteBuffer[] buffers = split(buffer);
 StringBuilder sb = new StringBuilder("{");

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e0adc166/src/java/org/apache/cassandra/schema/SchemaKeyspace.java
--
diff --git a/src/java/org/apache/cassandra/schema/SchemaKeyspace.java 
b/src/java/org/apache/cassandra/schema/SchemaKeyspace.java
index f1ee3c1..2dffe58 100644
--- a/src/java/org/apache/cassandra/schema/SchemaKeyspace.java
+++ b/src/java/org/apache/cassandra/schema/SchemaKeyspace.java
@@ -44,7 +44,7 @@ import org.apache.cassandra.db.filter.ColumnFilter;
 import org.apache.cassandra.db.view.View;
 import org.apache.cassandra.exceptions.ConfigurationException;
 import org.apache.cassandra.exceptions.InvalidRequestException;
-import org.apache.cassandra.transport.Server;
+import org.apache.cassandra.transport.ProtocolVersion;
 import org.apache.cassandra.utils.ByteBufferUtil;
 import org.apache.cassandra.utils.FBUtilities;
 import org.apache.cassandra.utils.Pair;
@@ -856,7 +856,7 @@ public final class SchemaKeyspace
.add("final_func", aggregate.finalFunction() != null ? 
aggregate.finalFunction().name().name : null)
.add("initcond", aggregate.initialCondition() != null
 // must use the frozen state type here, as 
'null' for unfrozen collections may mean 'empty'
-? 
aggregate.stateType().freeze().asCQL3Type().toCQLLiteral(aggregate.initialCondition(),
 Server.CURRENT_VERSION)
+? 
aggregate.stateType().freeze().asCQL3Type().toCQLLiteral(aggregate.initialCondition(),
 ProtocolVersion.CURRENT)
 : null);
 }
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e0adc166/src/java/org/apache/cassandra/serializers/CollectionSerializer.java
--
diff --git 
a/src/java/org/apache/cassandra/serializers/CollectionSerializer.java 
b/src/java/org/apache/cassandra/serializers/CollectionSerializer.java
index 3d6be67..95a0388 100644
--- a/src/java/org/apache/cassandra/serializers/CollectionSerializer.java
+++ b/src/java/org/apache/cassandra/serializers/CollectionSerializer.java
@@ -22,7 +22,7 @@ import java.nio.ByteBuffer;
 import java.util.Collection;
 import java.util.List;
 
-import org.apache.cassandra.transport.Server;
+import org.apache.cassandra.transport.ProtocolVersion;
 import org.apache.cassandra.utils.ByteBufferUtil;
 
 public abstract class CollectionSerializer implements TypeSerializer
@@ -30,14 +30,14 @@ public abstract class CollectionSerializer implements 
TypeSerializer
 protected abstract List serializeValues(T value);
 protected abstract int getElementCount(T value);
 
-    public abstract T deserializeForNativeProtocol(ByteBuffer buffer, int version);
-    public abstract void validateForNativeProtocol(ByteBuffer buffer, int version);
+    public abstract T deserializeForNativeProtocol(ByteBuffer buffer, ProtocolVersion version);
+    public abstract void validateForNativeProtocol(ByteBuffer buffer, ProtocolVersion version);
 
 public ByteBuffer serialize(T value)
 {
 List values = serializeValues(value);
 // See deserialize() for why using the protocol v3 variant is the 
right thing to do.
-return pack(values, getElementCount(value), Server.VERSION_3);
+return pack(values, getElementCount(value), Protocol

[02/11] cassandra git commit: Extend native protocol request flags, add versions to SUPPORTED, and introduce ProtocolVersion enum

2016-10-31 Thread stefania
http://git-wip-us.apache.org/repos/asf/cassandra/blob/e0adc166/src/java/org/apache/cassandra/transport/messages/CredentialsMessage.java
--
diff --git 
a/src/java/org/apache/cassandra/transport/messages/CredentialsMessage.java 
b/src/java/org/apache/cassandra/transport/messages/CredentialsMessage.java
index 4ecaffd..764d992 100644
--- a/src/java/org/apache/cassandra/transport/messages/CredentialsMessage.java
+++ b/src/java/org/apache/cassandra/transport/messages/CredentialsMessage.java
@@ -29,6 +29,7 @@ import org.apache.cassandra.service.QueryState;
 import org.apache.cassandra.transport.CBUtil;
 import org.apache.cassandra.transport.Message;
 import org.apache.cassandra.transport.ProtocolException;
+import org.apache.cassandra.transport.ProtocolVersion;
 
 /**
  * Message to indicate that the server is ready to receive requests.
@@ -37,9 +38,9 @@ public class CredentialsMessage extends Message.Request
 {
    public static final Message.Codec<CredentialsMessage> codec = new Message.Codec<CredentialsMessage>()
 {
-public CredentialsMessage decode(ByteBuf body, int version)
+public CredentialsMessage decode(ByteBuf body, ProtocolVersion version)
 {
-if (version > 1)
+if (version.isGreaterThan(ProtocolVersion.V1))
 throw new ProtocolException("Legacy credentials authentication 
is not supported in " +
 "protocol versions > 1. Please use SASL authentication 
via a SaslResponse message");
 
@@ -47,12 +48,12 @@ public class CredentialsMessage extends Message.Request
 return new CredentialsMessage(credentials);
 }
 
-public void encode(CredentialsMessage msg, ByteBuf dest, int version)
+public void encode(CredentialsMessage msg, ByteBuf dest, 
ProtocolVersion version)
 {
 CBUtil.writeStringMap(msg.credentials, dest);
 }
 
-public int encodedSize(CredentialsMessage msg, int version)
+public int encodedSize(CredentialsMessage msg, ProtocolVersion version)
 {
 return CBUtil.sizeOfStringMap(msg.credentials);
 }
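The `CredentialsMessage` hunk above shows the mechanical shape of the migration: integer comparisons such as `version > 1` become enum comparisons. A minimal standalone sketch of the same guard follows; `ProtocolException`, the enum, and the method name are illustrative stand-ins, not the real transport classes.

```java
import java.util.Map;

// Standalone sketch of the version-gated decode guard from the
// CredentialsMessage hunk; all names here are illustrative stand-ins.
public class CredentialsGuardSketch
{
    enum ProtocolVersion
    {
        V1, V2, V3, V4, V5;

        boolean isGreaterThan(ProtocolVersion other)
        {
            return ordinal() > other.ordinal();
        }
    }

    static class ProtocolException extends RuntimeException
    {
        ProtocolException(String message)
        {
            super(message);
        }
    }

    // Mirrors the decode() guard: the legacy CREDENTIALS message is v1-only.
    static Map<String, String> decodeCredentials(ProtocolVersion version, Map<String, String> credentials)
    {
        if (version.isGreaterThan(ProtocolVersion.V1))
            throw new ProtocolException("Legacy credentials authentication is not supported in " +
                                        "protocol versions > 1. Please use SASL authentication via a SaslResponse message");
        return credentials;
    }
}
```

The enum comparison reads the same as the old integer check but cannot be confused with an unrelated `int` parameter.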

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e0adc166/src/java/org/apache/cassandra/transport/messages/ErrorMessage.java
--
diff --git a/src/java/org/apache/cassandra/transport/messages/ErrorMessage.java 
b/src/java/org/apache/cassandra/transport/messages/ErrorMessage.java
index 5ce248f..ac4b3dc 100644
--- a/src/java/org/apache/cassandra/transport/messages/ErrorMessage.java
+++ b/src/java/org/apache/cassandra/transport/messages/ErrorMessage.java
@@ -45,7 +45,7 @@ public class ErrorMessage extends Message.Response
 
    public static final Message.Codec<ErrorMessage> codec = new Message.Codec<ErrorMessage>()
 {
-public ErrorMessage decode(ByteBuf body, int version)
+public ErrorMessage decode(ByteBuf body, ProtocolVersion version)
 {
 ExceptionCode code = ExceptionCode.fromValue(body.readInt());
 String msg = CBUtil.readString(body);
@@ -89,7 +89,7 @@ public class ErrorMessage extends Message.Response
 int failure = body.readInt();
 
 Map 
failureReasonByEndpoint = new ConcurrentHashMap<>();
-if (version >= Server.VERSION_5)
+if (version.isGreaterOrEqualTo(ProtocolVersion.V5))
 {
 for (int i = 0; i < failure; i++)
 {
@@ -163,7 +163,7 @@ public class ErrorMessage extends Message.Response
 return new ErrorMessage(te);
 }
 
-public void encode(ErrorMessage msg, ByteBuf dest, int version)
+public void encode(ErrorMessage msg, ByteBuf dest, ProtocolVersion 
version)
 {
 final TransportException err = 
getBackwardsCompatibleException(msg, version);
 dest.writeInt(err.code().value);
@@ -190,7 +190,7 @@ public class ErrorMessage extends Message.Response
 // The number of failures is also present in protocol 
v5, but used instead to specify the size of the failure map
 dest.writeInt(rfe.failureReasonByEndpoint.size());
 
-if (version >= Server.VERSION_5)
+if (version.isGreaterOrEqualTo(ProtocolVersion.V5))
 {
 for (Map.Entry 
entry : rfe.failureReasonByEndpoint.entrySet())
 {
@@ -236,7 +236,7 @@ public class ErrorMessage extends Message.Response
 }
 }
 
-public int encodedSize(ErrorMessage msg, int version)
+public int encodedSize(ErrorMessage msg, ProtocolVersion version)
 {
 final TransportException err = 
getBackwardsCompatibleException(msg, version);
 String errorString = err.getMessage() == null ? "" : 
err.g

[07/11] cassandra git commit: Extend native protocol request flags, add versions to SUPPORTED, and introduce ProtocolVersion enum

2016-10-31 Thread stefania
http://git-wip-us.apache.org/repos/asf/cassandra/blob/e0adc166/src/java/org/apache/cassandra/transport/messages/CredentialsMessage.java
--
diff --git 
a/src/java/org/apache/cassandra/transport/messages/CredentialsMessage.java 
b/src/java/org/apache/cassandra/transport/messages/CredentialsMessage.java
index 4ecaffd..764d992 100644
--- a/src/java/org/apache/cassandra/transport/messages/CredentialsMessage.java
+++ b/src/java/org/apache/cassandra/transport/messages/CredentialsMessage.java
@@ -29,6 +29,7 @@ import org.apache.cassandra.service.QueryState;
 import org.apache.cassandra.transport.CBUtil;
 import org.apache.cassandra.transport.Message;
 import org.apache.cassandra.transport.ProtocolException;
+import org.apache.cassandra.transport.ProtocolVersion;
 
 /**
  * Message to indicate that the server is ready to receive requests.
@@ -37,9 +38,9 @@ public class CredentialsMessage extends Message.Request
 {
    public static final Message.Codec<CredentialsMessage> codec = new Message.Codec<CredentialsMessage>()
 {
-public CredentialsMessage decode(ByteBuf body, int version)
+public CredentialsMessage decode(ByteBuf body, ProtocolVersion version)
 {
-if (version > 1)
+if (version.isGreaterThan(ProtocolVersion.V1))
 throw new ProtocolException("Legacy credentials authentication 
is not supported in " +
 "protocol versions > 1. Please use SASL authentication 
via a SaslResponse message");
 
@@ -47,12 +48,12 @@ public class CredentialsMessage extends Message.Request
 return new CredentialsMessage(credentials);
 }
 
-public void encode(CredentialsMessage msg, ByteBuf dest, int version)
+public void encode(CredentialsMessage msg, ByteBuf dest, 
ProtocolVersion version)
 {
 CBUtil.writeStringMap(msg.credentials, dest);
 }
 
-public int encodedSize(CredentialsMessage msg, int version)
+public int encodedSize(CredentialsMessage msg, ProtocolVersion version)
 {
 return CBUtil.sizeOfStringMap(msg.credentials);
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e0adc166/src/java/org/apache/cassandra/transport/messages/ErrorMessage.java
--
diff --git a/src/java/org/apache/cassandra/transport/messages/ErrorMessage.java 
b/src/java/org/apache/cassandra/transport/messages/ErrorMessage.java
index 5ce248f..ac4b3dc 100644
--- a/src/java/org/apache/cassandra/transport/messages/ErrorMessage.java
+++ b/src/java/org/apache/cassandra/transport/messages/ErrorMessage.java
@@ -45,7 +45,7 @@ public class ErrorMessage extends Message.Response
 
    public static final Message.Codec<ErrorMessage> codec = new Message.Codec<ErrorMessage>()
 {
-public ErrorMessage decode(ByteBuf body, int version)
+public ErrorMessage decode(ByteBuf body, ProtocolVersion version)
 {
 ExceptionCode code = ExceptionCode.fromValue(body.readInt());
 String msg = CBUtil.readString(body);
@@ -89,7 +89,7 @@ public class ErrorMessage extends Message.Response
 int failure = body.readInt();
 
 Map 
failureReasonByEndpoint = new ConcurrentHashMap<>();
-if (version >= Server.VERSION_5)
+if (version.isGreaterOrEqualTo(ProtocolVersion.V5))
 {
 for (int i = 0; i < failure; i++)
 {
@@ -163,7 +163,7 @@ public class ErrorMessage extends Message.Response
 return new ErrorMessage(te);
 }
 
-public void encode(ErrorMessage msg, ByteBuf dest, int version)
+public void encode(ErrorMessage msg, ByteBuf dest, ProtocolVersion 
version)
 {
 final TransportException err = 
getBackwardsCompatibleException(msg, version);
 dest.writeInt(err.code().value);
@@ -190,7 +190,7 @@ public class ErrorMessage extends Message.Response
 // The number of failures is also present in protocol 
v5, but used instead to specify the size of the failure map
 dest.writeInt(rfe.failureReasonByEndpoint.size());
 
-if (version >= Server.VERSION_5)
+if (version.isGreaterOrEqualTo(ProtocolVersion.V5))
 {
 for (Map.Entry 
entry : rfe.failureReasonByEndpoint.entrySet())
 {
@@ -236,7 +236,7 @@ public class ErrorMessage extends Message.Response
 }
 }
 
-public int encodedSize(ErrorMessage msg, int version)
+public int encodedSize(ErrorMessage msg, ProtocolVersion version)
 {
 final TransportException err = 
getBackwardsCompatibleException(msg, version);
 String errorString = err.getMessage() == null ? "" : 
err.g

[01/11] cassandra git commit: Extend native protocol request flags, add versions to SUPPORTED, and introduce ProtocolVersion enum

2016-10-31 Thread stefania
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.X 0a1f1c81e -> e0adc166a
  refs/heads/trunk 6f1ce6823 -> b4068ef00


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e0adc166/test/unit/org/apache/cassandra/cql3/validation/entities/udfverify/GoodClass.java
--
diff --git 
a/test/unit/org/apache/cassandra/cql3/validation/entities/udfverify/GoodClass.java
 
b/test/unit/org/apache/cassandra/cql3/validation/entities/udfverify/GoodClass.java
index 54821b9..7275ef5 100644
--- 
a/test/unit/org/apache/cassandra/cql3/validation/entities/udfverify/GoodClass.java
+++ 
b/test/unit/org/apache/cassandra/cql3/validation/entities/udfverify/GoodClass.java
@@ -24,6 +24,7 @@ import java.util.List;
 import com.datastax.driver.core.TypeCodec;
 import org.apache.cassandra.cql3.functions.JavaUDF;
 import org.apache.cassandra.cql3.functions.UDFContext;
+import org.apache.cassandra.transport.ProtocolVersion;
 
 /**
  * Used by {@link 
org.apache.cassandra.cql3.validation.entities.UFVerifierTest}.
@@ -35,12 +36,12 @@ public final class GoodClass extends JavaUDF
 super(returnDataType, argDataTypes, udfContext);
 }
 
-    protected Object executeAggregateImpl(int protocolVersion, Object firstParam, List<Object> params)
+    protected Object executeAggregateImpl(ProtocolVersion protocolVersion, Object firstParam, List<Object> params)
     {
         throw new UnsupportedOperationException();
     }
 
-    protected ByteBuffer executeImpl(int protocolVersion, List<ByteBuffer> params)
+    protected ByteBuffer executeImpl(ProtocolVersion protocolVersion, List<ByteBuffer> params)
 {
 return null;
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e0adc166/test/unit/org/apache/cassandra/cql3/validation/entities/udfverify/UseOfSynchronized.java
--
diff --git 
a/test/unit/org/apache/cassandra/cql3/validation/entities/udfverify/UseOfSynchronized.java
 
b/test/unit/org/apache/cassandra/cql3/validation/entities/udfverify/UseOfSynchronized.java
index dba846d..c036f63 100644
--- 
a/test/unit/org/apache/cassandra/cql3/validation/entities/udfverify/UseOfSynchronized.java
+++ 
b/test/unit/org/apache/cassandra/cql3/validation/entities/udfverify/UseOfSynchronized.java
@@ -24,6 +24,7 @@ import java.util.List;
 import com.datastax.driver.core.TypeCodec;
 import org.apache.cassandra.cql3.functions.JavaUDF;
 import org.apache.cassandra.cql3.functions.UDFContext;
+import org.apache.cassandra.transport.ProtocolVersion;
 
 /**
  * Used by {@link 
org.apache.cassandra.cql3.validation.entities.UFVerifierTest}.
@@ -35,12 +36,12 @@ public final class UseOfSynchronized extends JavaUDF
 super(returnDataType, argDataTypes, udfContext);
 }
 
-protected Object executeAggregateImpl(int protocolVersion, Object 
firstParam, List params)
+protected Object executeAggregateImpl(ProtocolVersion protocolVersion, 
Object firstParam, List params)
 {
 throw new UnsupportedOperationException();
 }
 
-protected ByteBuffer executeImpl(int protocolVersion, List 
params)
+protected ByteBuffer executeImpl(ProtocolVersion protocolVersion, 
List params)
 {
 synchronized (this)
 {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e0adc166/test/unit/org/apache/cassandra/cql3/validation/entities/udfverify/UseOfSynchronizedWithNotify.java
--
diff --git 
a/test/unit/org/apache/cassandra/cql3/validation/entities/udfverify/UseOfSynchronizedWithNotify.java
 
b/test/unit/org/apache/cassandra/cql3/validation/entities/udfverify/UseOfSynchronizedWithNotify.java
index 63c319c..3eb673a 100644
--- 
a/test/unit/org/apache/cassandra/cql3/validation/entities/udfverify/UseOfSynchronizedWithNotify.java
+++ 
b/test/unit/org/apache/cassandra/cql3/validation/entities/udfverify/UseOfSynchronizedWithNotify.java
@@ -24,6 +24,7 @@ import java.util.List;
 import com.datastax.driver.core.TypeCodec;
 import org.apache.cassandra.cql3.functions.JavaUDF;
 import org.apache.cassandra.cql3.functions.UDFContext;
+import org.apache.cassandra.transport.ProtocolVersion;
 
 /**
  * Used by {@link 
org.apache.cassandra.cql3.validation.entities.UFVerifierTest}.
@@ -35,12 +36,12 @@ public final class UseOfSynchronizedWithNotify extends 
JavaUDF
 super(returnDataType, argDataTypes, udfContext);
 }
 
-protected Object executeAggregateImpl(int protocolVersion, Object 
firstParam, List params)
+protected Object executeAggregateImpl(ProtocolVersion protocolVersion, 
Object firstParam, List params)
 {
 throw new UnsupportedOperationException();
 }
 
-protected ByteBuffer executeImpl(int protocolVersion, List 
params)
+protected ByteBuffer executeImpl(ProtocolVersion protocolVersion, 
List params)
 {
 synchronized (this)
 {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e0adc1

[04/11] cassandra git commit: Extend native protocol request flags, add versions to SUPPORTED, and introduce ProtocolVersion enum

2016-10-31 Thread stefania
http://git-wip-us.apache.org/repos/asf/cassandra/blob/e0adc166/src/java/org/apache/cassandra/cql3/functions/TimeFcts.java
--
diff --git a/src/java/org/apache/cassandra/cql3/functions/TimeFcts.java 
b/src/java/org/apache/cassandra/cql3/functions/TimeFcts.java
index 79ebfaf..e682dcd 100644
--- a/src/java/org/apache/cassandra/cql3/functions/TimeFcts.java
+++ b/src/java/org/apache/cassandra/cql3/functions/TimeFcts.java
@@ -27,6 +27,7 @@ import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import org.apache.cassandra.db.marshal.*;
+import org.apache.cassandra.transport.ProtocolVersion;
 import org.apache.cassandra.utils.ByteBufferUtil;
 import org.apache.cassandra.utils.UUIDGen;
 
@@ -52,7 +53,7 @@ public abstract class TimeFcts
 
 public static final Function nowFct = new NativeScalarFunction("now", 
TimeUUIDType.instance)
 {
-public ByteBuffer execute(int protocolVersion, List 
parameters)
+public ByteBuffer execute(ProtocolVersion protocolVersion, 
List parameters)
 {
 return ByteBuffer.wrap(UUIDGen.getTimeUUIDBytes());
 }
@@ -60,7 +61,7 @@ public abstract class TimeFcts
 
 public static final Function minTimeuuidFct = new 
NativeScalarFunction("mintimeuuid", TimeUUIDType.instance, 
TimestampType.instance)
 {
-public ByteBuffer execute(int protocolVersion, List 
parameters)
+public ByteBuffer execute(ProtocolVersion protocolVersion, 
List parameters)
 {
 ByteBuffer bb = parameters.get(0);
 if (bb == null)
@@ -72,7 +73,7 @@ public abstract class TimeFcts
 
 public static final Function maxTimeuuidFct = new 
NativeScalarFunction("maxtimeuuid", TimeUUIDType.instance, 
TimestampType.instance)
 {
-public ByteBuffer execute(int protocolVersion, List 
parameters)
+public ByteBuffer execute(ProtocolVersion protocolVersion, 
List parameters)
 {
 ByteBuffer bb = parameters.get(0);
 if (bb == null)
@@ -90,7 +91,7 @@ public abstract class TimeFcts
 {
 private volatile boolean hasLoggedDeprecationWarning;
 
-public ByteBuffer execute(int protocolVersion, List 
parameters)
+public ByteBuffer execute(ProtocolVersion protocolVersion, 
List parameters)
 {
 if (!hasLoggedDeprecationWarning)
 {
@@ -116,7 +117,7 @@ public abstract class TimeFcts
 {
 private volatile boolean hasLoggedDeprecationWarning;
 
-public ByteBuffer execute(int protocolVersion, List 
parameters)
+public ByteBuffer execute(ProtocolVersion protocolVersion, 
List parameters)
 {
 if (!hasLoggedDeprecationWarning)
 {
@@ -138,7 +139,7 @@ public abstract class TimeFcts
  */
 public static final NativeScalarFunction timeUuidtoDate = new 
NativeScalarFunction("todate", SimpleDateType.instance, TimeUUIDType.instance)
 {
-public ByteBuffer execute(int protocolVersion, List 
parameters)
+public ByteBuffer execute(ProtocolVersion protocolVersion, 
List parameters)
 {
 ByteBuffer bb = parameters.get(0);
 if (bb == null)
@@ -154,7 +155,7 @@ public abstract class TimeFcts
  */
 public static final NativeScalarFunction timeUuidToTimestamp = new 
NativeScalarFunction("totimestamp", TimestampType.instance, 
TimeUUIDType.instance)
 {
-public ByteBuffer execute(int protocolVersion, List 
parameters)
+public ByteBuffer execute(ProtocolVersion protocolVersion, 
List parameters)
 {
 ByteBuffer bb = parameters.get(0);
 if (bb == null)
@@ -170,7 +171,7 @@ public abstract class TimeFcts
  */
 public static final NativeScalarFunction timeUuidToUnixTimestamp = new 
NativeScalarFunction("tounixtimestamp", LongType.instance, 
TimeUUIDType.instance)
 {
-public ByteBuffer execute(int protocolVersion, List 
parameters)
+public ByteBuffer execute(ProtocolVersion protocolVersion, 
List parameters)
 {
 ByteBuffer bb = parameters.get(0);
 if (bb == null)
@@ -185,7 +186,7 @@ public abstract class TimeFcts
  */
 public static final NativeScalarFunction timestampToUnixTimestamp = new 
NativeScalarFunction("tounixtimestamp", LongType.instance, 
TimestampType.instance)
 {
-public ByteBuffer execute(int protocolVersion, List 
parameters)
+public ByteBuffer execute(ProtocolVersion protocolVersion, 
List parameters)
 {
 ByteBuffer bb = parameters.get(0);
 if (bb == null)
@@ -201,7 +202,7 @@ public abstract class TimeFcts
 */
public static final NativeScalarFunction timestampToDate = new 
NativeScalarFunction("todate", SimpleDateType.instance, TimestampType.instance)
{
-   public ByteBuffer execute(int protocolVersion, List 
parameters)
+   public ByteBuffer execute(ProtocolVersion protocolVe
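The `TimeFcts` hunks above apply the same signature migration to every native scalar function: `execute(int protocolVersion, ...)` becomes `execute(ProtocolVersion protocolVersion, ...)`. A self-contained sketch of that shape follows; the interface and names are stand-ins, not the real cql3 function classes.

```java
import java.nio.ByteBuffer;
import java.util.List;

// Illustrative sketch of the signature change applied throughout TimeFcts:
// execute() receives a typed ProtocolVersion instead of a raw int.
// All names here are stand-ins for the real cql3 function classes.
public class ScalarFunctionSketch
{
    enum ProtocolVersion { V1, V2, V3, V4, V5 }

    interface ScalarFunction
    {
        ByteBuffer execute(ProtocolVersion protocolVersion, List<ByteBuffer> parameters);
    }

    // Analogous in shape to minTimeuuidFct: a null input yields a null output.
    static final ScalarFunction echoFirst = (protocolVersion, parameters) ->
    {
        ByteBuffer bb = parameters.get(0);
        return bb == null ? null : bb.duplicate();
    };
}
```

Because the version is threaded through as an enum, a function that serializes differently per version can branch on it without magic integers.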

[11/11] cassandra git commit: Merge branch 'cassandra-3.X' into trunk

2016-10-31 Thread stefania
Merge branch 'cassandra-3.X' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b4068ef0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b4068ef0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b4068ef0

Branch: refs/heads/trunk
Commit: b4068ef00e9566ac5ecba9c67ddcf419fcd673a2
Parents: 6f1ce68 e0adc16
Author: Stefania Alborghetti 
Authored: Mon Oct 31 21:16:40 2016 +0800
Committer: Stefania Alborghetti 
Committed: Mon Oct 31 21:16:40 2016 +0800

--
 CHANGES.txt |   1 +
 doc/native_protocol_v5.spec |  13 +-
 ...driver-internal-only-3.7.0.post0-2481531.zip | Bin 0 -> 252057 bytes
 ...driver-internal-only-3.7.0.post0-70f41b5.zip | Bin 252036 -> 0 bytes
 .../org/apache/cassandra/cql3/CQL3Type.java |  20 +--
 .../apache/cassandra/cql3/ColumnCondition.java  |  14 +-
 .../org/apache/cassandra/cql3/Constants.java|   3 +-
 src/java/org/apache/cassandra/cql3/Lists.java   |   8 +-
 src/java/org/apache/cassandra/cql3/Maps.java|   8 +-
 .../org/apache/cassandra/cql3/QueryOptions.java |  46 +++---
 .../apache/cassandra/cql3/QueryProcessor.java   |   5 +-
 .../org/apache/cassandra/cql3/ResultSet.java|  61 ++--
 src/java/org/apache/cassandra/cql3/Sets.java|   8 +-
 src/java/org/apache/cassandra/cql3/Term.java|   3 +-
 src/java/org/apache/cassandra/cql3/Tuples.java  |   5 +-
 .../apache/cassandra/cql3/UntypedResultSet.java |   4 +-
 .../org/apache/cassandra/cql3/UserTypes.java|   3 +-
 .../cassandra/cql3/functions/AggregateFcts.java |  81 +-
 .../cql3/functions/AggregateFunction.java   |   8 +-
 .../cql3/functions/BytesConversionFcts.java |   9 +-
 .../cassandra/cql3/functions/CastFcts.java  |   8 +-
 .../cassandra/cql3/functions/FromJsonFct.java   |   3 +-
 .../cassandra/cql3/functions/FunctionCall.java  |   5 +-
 .../cql3/functions/JavaBasedUDFunction.java |   5 +-
 .../cassandra/cql3/functions/JavaUDF.java   |  23 +--
 .../cql3/functions/ScalarFunction.java  |   3 +-
 .../cql3/functions/ScriptBasedUDFunction.java   |   7 +-
 .../cassandra/cql3/functions/TimeFcts.java  |  25 +--
 .../cassandra/cql3/functions/ToJsonFct.java |   3 +-
 .../cassandra/cql3/functions/TokenFct.java  |   3 +-
 .../cassandra/cql3/functions/UDAggregate.java   |   5 +-
 .../cql3/functions/UDFByteCodeVerifier.java |   8 +-
 .../cassandra/cql3/functions/UDFunction.java|  28 ++--
 .../cassandra/cql3/functions/UDHelper.java  |  15 +-
 .../cassandra/cql3/functions/UuidFcts.java  |   3 +-
 .../selection/AggregateFunctionSelector.java|   5 +-
 .../cassandra/cql3/selection/FieldSelector.java |   5 +-
 .../cql3/selection/ScalarFunctionSelector.java  |   5 +-
 .../cassandra/cql3/selection/Selection.java |  18 ++-
 .../cassandra/cql3/selection/Selector.java  |   5 +-
 .../cql3/selection/SimpleSelector.java  |   5 +-
 .../cassandra/cql3/selection/TermSelector.java  |   5 +-
 .../cql3/selection/WritetimeOrTTLSelector.java  |   5 +-
 .../statements/CreateAggregateStatement.java|   4 +-
 .../cql3/statements/SelectStatement.java|   5 +-
 .../cassandra/db/PartitionRangeReadCommand.java |   3 +-
 src/java/org/apache/cassandra/db/ReadQuery.java |   5 +-
 .../db/SinglePartitionReadCommand.java  |   7 +-
 .../org/apache/cassandra/db/SystemKeyspace.java |   4 +-
 .../db/marshal/AbstractCompositeType.java   |   3 +-
 .../cassandra/db/marshal/AbstractType.java  |   3 +-
 .../apache/cassandra/db/marshal/AsciiType.java  |   3 +-
 .../cassandra/db/marshal/BooleanType.java   |   3 +-
 .../apache/cassandra/db/marshal/ByteType.java   |   3 +-
 .../apache/cassandra/db/marshal/BytesType.java  |   3 +-
 .../cassandra/db/marshal/CollectionType.java|   3 +-
 .../db/marshal/ColumnToCollectionType.java  |   3 +-
 .../cassandra/db/marshal/CounterColumnType.java |   3 +-
 .../apache/cassandra/db/marshal/DateType.java   |   3 +-
 .../cassandra/db/marshal/DecimalType.java   |   3 +-
 .../apache/cassandra/db/marshal/DoubleType.java |   3 +-
 .../cassandra/db/marshal/DurationType.java  |   3 +-
 .../db/marshal/DynamicCompositeType.java|   3 +-
 .../apache/cassandra/db/marshal/FloatType.java  |   3 +-
 .../apache/cassandra/db/marshal/FrozenType.java |   3 +-
 .../cassandra/db/marshal/InetAddressType.java   |   3 +-
 .../apache/cassandra/db/marshal/Int32Type.java  |   3 +-
 .../cassandra/db/marshal/IntegerType.java   |   3 +-
 .../apache/cassandra/db/marshal/ListType.java   |  13 +-
 .../apache/cassandra/db/marshal/LongType.java   |   3 +-
 .../apache/cassandra/db/marshal/MapType.java|   6 +-
 .../db/marshal/PartitionerDefinedOrder.java |   3 +-
 .../cassandra/db/marshal/ReversedType.java  |   3 +-
 .../apache/cassandra/db/marshal/SetType.java|   3 +-
 .../apache/cassandr

[03/11] cassandra git commit: Extend native protocol request flags, add versions to SUPPORTED, and introduce ProtocolVersion enum

2016-10-31 Thread stefania
http://git-wip-us.apache.org/repos/asf/cassandra/blob/e0adc166/src/java/org/apache/cassandra/db/marshal/UserType.java
--
diff --git a/src/java/org/apache/cassandra/db/marshal/UserType.java 
b/src/java/org/apache/cassandra/db/marshal/UserType.java
index cd181cc..176ab84 100644
--- a/src/java/org/apache/cassandra/db/marshal/UserType.java
+++ b/src/java/org/apache/cassandra/db/marshal/UserType.java
@@ -29,6 +29,7 @@ import org.apache.cassandra.db.rows.CellPath;
 import org.apache.cassandra.exceptions.ConfigurationException;
 import org.apache.cassandra.exceptions.SyntaxException;
 import org.apache.cassandra.serializers.MarshalException;
+import org.apache.cassandra.transport.ProtocolVersion;
 import org.apache.cassandra.utils.ByteBufferUtil;
 import org.apache.cassandra.utils.Pair;
 import org.slf4j.Logger;
@@ -143,7 +144,7 @@ public class UserType extends TupleType
 return ShortType.instance;
 }
 
-    public ByteBuffer serializeForNativeProtocol(Iterator<Cell> cells, int protocolVersion)
+    public ByteBuffer serializeForNativeProtocol(Iterator<Cell> cells, ProtocolVersion protocolVersion)
 {
 assert isMultiCell;
 
@@ -249,7 +250,7 @@ public class UserType extends TupleType
 }
 
 @Override
-public String toJSONString(ByteBuffer buffer, int protocolVersion)
+public String toJSONString(ByteBuffer buffer, ProtocolVersion 
protocolVersion)
 {
 ByteBuffer[] buffers = split(buffer);
 StringBuilder sb = new StringBuilder("{");

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e0adc166/src/java/org/apache/cassandra/schema/SchemaKeyspace.java
--
diff --git a/src/java/org/apache/cassandra/schema/SchemaKeyspace.java 
b/src/java/org/apache/cassandra/schema/SchemaKeyspace.java
index f1ee3c1..2dffe58 100644
--- a/src/java/org/apache/cassandra/schema/SchemaKeyspace.java
+++ b/src/java/org/apache/cassandra/schema/SchemaKeyspace.java
@@ -44,7 +44,7 @@ import org.apache.cassandra.db.filter.ColumnFilter;
 import org.apache.cassandra.db.view.View;
 import org.apache.cassandra.exceptions.ConfigurationException;
 import org.apache.cassandra.exceptions.InvalidRequestException;
-import org.apache.cassandra.transport.Server;
+import org.apache.cassandra.transport.ProtocolVersion;
 import org.apache.cassandra.utils.ByteBufferUtil;
 import org.apache.cassandra.utils.FBUtilities;
 import org.apache.cassandra.utils.Pair;
@@ -856,7 +856,7 @@ public final class SchemaKeyspace
.add("final_func", aggregate.finalFunction() != null ? 
aggregate.finalFunction().name().name : null)
.add("initcond", aggregate.initialCondition() != null
 // must use the frozen state type here, as 
'null' for unfrozen collections may mean 'empty'
-? 
aggregate.stateType().freeze().asCQL3Type().toCQLLiteral(aggregate.initialCondition(),
 Server.CURRENT_VERSION)
+? 
aggregate.stateType().freeze().asCQL3Type().toCQLLiteral(aggregate.initialCondition(),
 ProtocolVersion.CURRENT)
 : null);
 }
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e0adc166/src/java/org/apache/cassandra/serializers/CollectionSerializer.java
--
diff --git 
a/src/java/org/apache/cassandra/serializers/CollectionSerializer.java 
b/src/java/org/apache/cassandra/serializers/CollectionSerializer.java
index 3d6be67..95a0388 100644
--- a/src/java/org/apache/cassandra/serializers/CollectionSerializer.java
+++ b/src/java/org/apache/cassandra/serializers/CollectionSerializer.java
@@ -22,7 +22,7 @@ import java.nio.ByteBuffer;
 import java.util.Collection;
 import java.util.List;
 
-import org.apache.cassandra.transport.Server;
+import org.apache.cassandra.transport.ProtocolVersion;
 import org.apache.cassandra.utils.ByteBufferUtil;
 
 public abstract class CollectionSerializer<T> implements TypeSerializer<T>
@@ -30,14 +30,14 @@ public abstract class CollectionSerializer implements 
TypeSerializer
 protected abstract List<ByteBuffer> serializeValues(T value);
 protected abstract int getElementCount(T value);
 
-public abstract T deserializeForNativeProtocol(ByteBuffer buffer, int 
version);
-public abstract void validateForNativeProtocol(ByteBuffer buffer, int 
version);
+public abstract T deserializeForNativeProtocol(ByteBuffer buffer, 
ProtocolVersion version);
+public abstract void validateForNativeProtocol(ByteBuffer buffer, 
ProtocolVersion version);
 
 public ByteBuffer serialize(T value)
 {
 List<ByteBuffer> values = serializeValues(value);
 // See deserialize() for why using the protocol v3 variant is the 
right thing to do.
-return pack(values, getElementCount(value), Server.VERSION_3);
+return pack(values, getElementCount(value), Protocol

[06/11] cassandra git commit: Extend native protocol request flags, add versions to SUPPORTED, and introduce ProtocolVersion enum

2016-10-31 Thread stefania
http://git-wip-us.apache.org/repos/asf/cassandra/blob/e0adc166/test/unit/org/apache/cassandra/cql3/validation/entities/udfverify/GoodClass.java
--
diff --git 
a/test/unit/org/apache/cassandra/cql3/validation/entities/udfverify/GoodClass.java
 
b/test/unit/org/apache/cassandra/cql3/validation/entities/udfverify/GoodClass.java
index 54821b9..7275ef5 100644
--- 
a/test/unit/org/apache/cassandra/cql3/validation/entities/udfverify/GoodClass.java
+++ 
b/test/unit/org/apache/cassandra/cql3/validation/entities/udfverify/GoodClass.java
@@ -24,6 +24,7 @@ import java.util.List;
 import com.datastax.driver.core.TypeCodec;
 import org.apache.cassandra.cql3.functions.JavaUDF;
 import org.apache.cassandra.cql3.functions.UDFContext;
+import org.apache.cassandra.transport.ProtocolVersion;
 
 /**
  * Used by {@link 
org.apache.cassandra.cql3.validation.entities.UFVerifierTest}.
@@ -35,12 +36,12 @@ public final class GoodClass extends JavaUDF
 super(returnDataType, argDataTypes, udfContext);
 }
 
-protected Object executeAggregateImpl(int protocolVersion, Object 
firstParam, List<ByteBuffer> params)
+protected Object executeAggregateImpl(ProtocolVersion protocolVersion, 
Object firstParam, List<ByteBuffer> params)
 {
 throw new UnsupportedOperationException();
 }
 
-protected ByteBuffer executeImpl(int protocolVersion, List<ByteBuffer> 
params)
+protected ByteBuffer executeImpl(ProtocolVersion protocolVersion, 
List<ByteBuffer> params)
 {
 return null;
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e0adc166/test/unit/org/apache/cassandra/cql3/validation/entities/udfverify/UseOfSynchronized.java
--
diff --git 
a/test/unit/org/apache/cassandra/cql3/validation/entities/udfverify/UseOfSynchronized.java
 
b/test/unit/org/apache/cassandra/cql3/validation/entities/udfverify/UseOfSynchronized.java
index dba846d..c036f63 100644
--- 
a/test/unit/org/apache/cassandra/cql3/validation/entities/udfverify/UseOfSynchronized.java
+++ 
b/test/unit/org/apache/cassandra/cql3/validation/entities/udfverify/UseOfSynchronized.java
@@ -24,6 +24,7 @@ import java.util.List;
 import com.datastax.driver.core.TypeCodec;
 import org.apache.cassandra.cql3.functions.JavaUDF;
 import org.apache.cassandra.cql3.functions.UDFContext;
+import org.apache.cassandra.transport.ProtocolVersion;
 
 /**
  * Used by {@link 
org.apache.cassandra.cql3.validation.entities.UFVerifierTest}.
@@ -35,12 +36,12 @@ public final class UseOfSynchronized extends JavaUDF
 super(returnDataType, argDataTypes, udfContext);
 }
 
-protected Object executeAggregateImpl(int protocolVersion, Object 
firstParam, List<ByteBuffer> params)
+protected Object executeAggregateImpl(ProtocolVersion protocolVersion, 
Object firstParam, List<ByteBuffer> params)
 {
 throw new UnsupportedOperationException();
 }
 
-protected ByteBuffer executeImpl(int protocolVersion, List<ByteBuffer> 
params)
+protected ByteBuffer executeImpl(ProtocolVersion protocolVersion, 
List<ByteBuffer> params)
 {
 synchronized (this)
 {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e0adc166/test/unit/org/apache/cassandra/cql3/validation/entities/udfverify/UseOfSynchronizedWithNotify.java
--
diff --git 
a/test/unit/org/apache/cassandra/cql3/validation/entities/udfverify/UseOfSynchronizedWithNotify.java
 
b/test/unit/org/apache/cassandra/cql3/validation/entities/udfverify/UseOfSynchronizedWithNotify.java
index 63c319c..3eb673a 100644
--- 
a/test/unit/org/apache/cassandra/cql3/validation/entities/udfverify/UseOfSynchronizedWithNotify.java
+++ 
b/test/unit/org/apache/cassandra/cql3/validation/entities/udfverify/UseOfSynchronizedWithNotify.java
@@ -24,6 +24,7 @@ import java.util.List;
 import com.datastax.driver.core.TypeCodec;
 import org.apache.cassandra.cql3.functions.JavaUDF;
 import org.apache.cassandra.cql3.functions.UDFContext;
+import org.apache.cassandra.transport.ProtocolVersion;
 
 /**
  * Used by {@link 
org.apache.cassandra.cql3.validation.entities.UFVerifierTest}.
@@ -35,12 +36,12 @@ public final class UseOfSynchronizedWithNotify extends 
JavaUDF
 super(returnDataType, argDataTypes, udfContext);
 }
 
-protected Object executeAggregateImpl(int protocolVersion, Object 
firstParam, List<ByteBuffer> params)
+protected Object executeAggregateImpl(ProtocolVersion protocolVersion, 
Object firstParam, List<ByteBuffer> params)
 {
 throw new UnsupportedOperationException();
 }
 
-protected ByteBuffer executeImpl(int protocolVersion, List<ByteBuffer> 
params)
+protected ByteBuffer executeImpl(ProtocolVersion protocolVersion, 
List<ByteBuffer> params)
 {
 synchronized (this)
 {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e0adc166/test/unit/org/apache/cassandra/cql3/validation/entities/udfverify/UseOfSynchronizedWithNotifyAll.java
-

[09/11] cassandra git commit: Extend native protocol request flags, add versions to SUPPORTED, and introduce ProtocolVersion enum

2016-10-31 Thread stefania
http://git-wip-us.apache.org/repos/asf/cassandra/blob/e0adc166/src/java/org/apache/cassandra/cql3/functions/TimeFcts.java
--
diff --git a/src/java/org/apache/cassandra/cql3/functions/TimeFcts.java 
b/src/java/org/apache/cassandra/cql3/functions/TimeFcts.java
index 79ebfaf..e682dcd 100644
--- a/src/java/org/apache/cassandra/cql3/functions/TimeFcts.java
+++ b/src/java/org/apache/cassandra/cql3/functions/TimeFcts.java
@@ -27,6 +27,7 @@ import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import org.apache.cassandra.db.marshal.*;
+import org.apache.cassandra.transport.ProtocolVersion;
 import org.apache.cassandra.utils.ByteBufferUtil;
 import org.apache.cassandra.utils.UUIDGen;
 
@@ -52,7 +53,7 @@ public abstract class TimeFcts
 
 public static final Function nowFct = new NativeScalarFunction("now", 
TimeUUIDType.instance)
 {
-public ByteBuffer execute(int protocolVersion, List<ByteBuffer> 
parameters)
+public ByteBuffer execute(ProtocolVersion protocolVersion, 
List<ByteBuffer> parameters)
 {
 return ByteBuffer.wrap(UUIDGen.getTimeUUIDBytes());
 }
@@ -60,7 +61,7 @@ public abstract class TimeFcts
 
 public static final Function minTimeuuidFct = new 
NativeScalarFunction("mintimeuuid", TimeUUIDType.instance, 
TimestampType.instance)
 {
-public ByteBuffer execute(int protocolVersion, List<ByteBuffer> 
parameters)
+public ByteBuffer execute(ProtocolVersion protocolVersion, 
List<ByteBuffer> parameters)
 {
 ByteBuffer bb = parameters.get(0);
 if (bb == null)
@@ -72,7 +73,7 @@ public abstract class TimeFcts
 
 public static final Function maxTimeuuidFct = new 
NativeScalarFunction("maxtimeuuid", TimeUUIDType.instance, 
TimestampType.instance)
 {
-public ByteBuffer execute(int protocolVersion, List<ByteBuffer> 
parameters)
+public ByteBuffer execute(ProtocolVersion protocolVersion, 
List<ByteBuffer> parameters)
 {
 ByteBuffer bb = parameters.get(0);
 if (bb == null)
@@ -90,7 +91,7 @@ public abstract class TimeFcts
 {
 private volatile boolean hasLoggedDeprecationWarning;
 
-public ByteBuffer execute(int protocolVersion, List<ByteBuffer> 
parameters)
+public ByteBuffer execute(ProtocolVersion protocolVersion, 
List<ByteBuffer> parameters)
 {
 if (!hasLoggedDeprecationWarning)
 {
@@ -116,7 +117,7 @@ public abstract class TimeFcts
 {
 private volatile boolean hasLoggedDeprecationWarning;
 
-public ByteBuffer execute(int protocolVersion, List<ByteBuffer> 
parameters)
+public ByteBuffer execute(ProtocolVersion protocolVersion, 
List<ByteBuffer> parameters)
 {
 if (!hasLoggedDeprecationWarning)
 {
@@ -138,7 +139,7 @@ public abstract class TimeFcts
  */
 public static final NativeScalarFunction timeUuidtoDate = new 
NativeScalarFunction("todate", SimpleDateType.instance, TimeUUIDType.instance)
 {
-public ByteBuffer execute(int protocolVersion, List<ByteBuffer> 
parameters)
+public ByteBuffer execute(ProtocolVersion protocolVersion, 
List<ByteBuffer> parameters)
 {
 ByteBuffer bb = parameters.get(0);
 if (bb == null)
@@ -154,7 +155,7 @@ public abstract class TimeFcts
  */
 public static final NativeScalarFunction timeUuidToTimestamp = new 
NativeScalarFunction("totimestamp", TimestampType.instance, 
TimeUUIDType.instance)
 {
-public ByteBuffer execute(int protocolVersion, List<ByteBuffer> 
parameters)
+public ByteBuffer execute(ProtocolVersion protocolVersion, 
List<ByteBuffer> parameters)
 {
 ByteBuffer bb = parameters.get(0);
 if (bb == null)
@@ -170,7 +171,7 @@ public abstract class TimeFcts
  */
 public static final NativeScalarFunction timeUuidToUnixTimestamp = new 
NativeScalarFunction("tounixtimestamp", LongType.instance, 
TimeUUIDType.instance)
 {
-public ByteBuffer execute(int protocolVersion, List<ByteBuffer> 
parameters)
+public ByteBuffer execute(ProtocolVersion protocolVersion, 
List<ByteBuffer> parameters)
 {
 ByteBuffer bb = parameters.get(0);
 if (bb == null)
@@ -185,7 +186,7 @@ public abstract class TimeFcts
  */
 public static final NativeScalarFunction timestampToUnixTimestamp = new 
NativeScalarFunction("tounixtimestamp", LongType.instance, 
TimestampType.instance)
 {
-public ByteBuffer execute(int protocolVersion, List<ByteBuffer> 
parameters)
+public ByteBuffer execute(ProtocolVersion protocolVersion, 
List<ByteBuffer> parameters)
 {
 ByteBuffer bb = parameters.get(0);
 if (bb == null)
@@ -201,7 +202,7 @@ public abstract class TimeFcts
 */
public static final NativeScalarFunction timestampToDate = new 
NativeScalarFunction("todate", SimpleDateType.instance, TimestampType.instance)
{
-   public ByteBuffer execute(int protocolVersion, List<ByteBuffer> 
parameters)
+   public ByteBuffer execute(ProtocolVersion protocolVe

[10/11] cassandra git commit: Extend native protocol request flags, add versions to SUPPORTED, and introduce ProtocolVersion enum

2016-10-31 Thread stefania
Extend native protocol request flags, add versions to SUPPORTED, and introduce 
ProtocolVersion enum

patch by Stefania Alborghetti; reviewed by Benjamin Lerer for CASSANDRA-12838
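The core idea of the change, replacing a raw int protocol version with a typed enum, can be sketched roughly as follows. This is a Python illustration of the design idea only, not the actual Cassandra Java enum; the method names and the set of version values are assumptions:

```python
from enum import Enum

class ProtocolVersion(Enum):
    # Illustrative values only; the real enum lives in
    # org.apache.cassandra.transport.ProtocolVersion.
    V3 = 3
    V4 = 4
    V5 = 5

    @classmethod
    def decode(cls, raw):
        # A typed decode rejects unknown versions up front, instead of
        # letting an arbitrary int flow through every serialization path.
        for v in cls:
            if v.value == raw:
                return v
        raise ValueError("unsupported protocol version: %d" % raw)

    def is_greater_or_equal_to(self, other):
        # Version comparisons become explicit, named operations.
        return self.value >= other.value
```

Threading a type like this through methods such as serializeForNativeProtocol, as the diff below does, turns "wrong version" mistakes into compile-time errors in Java rather than silent misbehavior.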


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e0adc166
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e0adc166
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e0adc166

Branch: refs/heads/trunk
Commit: e0adc166a33033c9d2668547803a1e034c2c2494
Parents: 0a1f1c8
Author: Stefania Alborghetti 
Authored: Tue Oct 25 16:01:40 2016 +0800
Committer: Stefania Alborghetti 
Committed: Mon Oct 31 21:14:42 2016 +0800

--
 CHANGES.txt |   1 +
 doc/native_protocol_v5.spec |  13 +-
 ...driver-internal-only-3.7.0.post0-2481531.zip | Bin 0 -> 252057 bytes
 ...driver-internal-only-3.7.0.post0-70f41b5.zip | Bin 252036 -> 0 bytes
 .../org/apache/cassandra/cql3/CQL3Type.java |  20 +--
 .../apache/cassandra/cql3/ColumnCondition.java  |  14 +-
 .../org/apache/cassandra/cql3/Constants.java|   3 +-
 src/java/org/apache/cassandra/cql3/Lists.java   |   8 +-
 src/java/org/apache/cassandra/cql3/Maps.java|   8 +-
 .../org/apache/cassandra/cql3/QueryOptions.java |  46 +++---
 .../apache/cassandra/cql3/QueryProcessor.java   |   5 +-
 .../org/apache/cassandra/cql3/ResultSet.java|  61 ++--
 src/java/org/apache/cassandra/cql3/Sets.java|   8 +-
 src/java/org/apache/cassandra/cql3/Term.java|   3 +-
 src/java/org/apache/cassandra/cql3/Tuples.java  |   5 +-
 .../apache/cassandra/cql3/UntypedResultSet.java |   4 +-
 .../org/apache/cassandra/cql3/UserTypes.java|   3 +-
 .../cassandra/cql3/functions/AggregateFcts.java |  81 +-
 .../cql3/functions/AggregateFunction.java   |   8 +-
 .../cql3/functions/BytesConversionFcts.java |   9 +-
 .../cassandra/cql3/functions/CastFcts.java  |   8 +-
 .../cassandra/cql3/functions/FromJsonFct.java   |   3 +-
 .../cassandra/cql3/functions/FunctionCall.java  |   5 +-
 .../cql3/functions/JavaBasedUDFunction.java |   5 +-
 .../cassandra/cql3/functions/JavaUDF.java   |  23 +--
 .../cql3/functions/ScalarFunction.java  |   3 +-
 .../cql3/functions/ScriptBasedUDFunction.java   |   7 +-
 .../cassandra/cql3/functions/TimeFcts.java  |  25 +--
 .../cassandra/cql3/functions/ToJsonFct.java |   3 +-
 .../cassandra/cql3/functions/TokenFct.java  |   3 +-
 .../cassandra/cql3/functions/UDAggregate.java   |   5 +-
 .../cql3/functions/UDFByteCodeVerifier.java |   8 +-
 .../cassandra/cql3/functions/UDFunction.java|  28 ++--
 .../cassandra/cql3/functions/UDHelper.java  |  15 +-
 .../cassandra/cql3/functions/UuidFcts.java  |   3 +-
 .../selection/AggregateFunctionSelector.java|   5 +-
 .../cassandra/cql3/selection/FieldSelector.java |   5 +-
 .../cql3/selection/ScalarFunctionSelector.java  |   5 +-
 .../cassandra/cql3/selection/Selection.java |  18 ++-
 .../cassandra/cql3/selection/Selector.java  |   5 +-
 .../cql3/selection/SimpleSelector.java  |   5 +-
 .../cassandra/cql3/selection/TermSelector.java  |   5 +-
 .../cql3/selection/WritetimeOrTTLSelector.java  |   5 +-
 .../statements/CreateAggregateStatement.java|   4 +-
 .../cql3/statements/SelectStatement.java|   5 +-
 .../cassandra/db/PartitionRangeReadCommand.java |   3 +-
 src/java/org/apache/cassandra/db/ReadQuery.java |   5 +-
 .../db/SinglePartitionReadCommand.java  |   7 +-
 .../org/apache/cassandra/db/SystemKeyspace.java |   4 +-
 .../db/marshal/AbstractCompositeType.java   |   3 +-
 .../cassandra/db/marshal/AbstractType.java  |   3 +-
 .../apache/cassandra/db/marshal/AsciiType.java  |   3 +-
 .../cassandra/db/marshal/BooleanType.java   |   3 +-
 .../apache/cassandra/db/marshal/ByteType.java   |   3 +-
 .../apache/cassandra/db/marshal/BytesType.java  |   3 +-
 .../cassandra/db/marshal/CollectionType.java|   3 +-
 .../db/marshal/ColumnToCollectionType.java  |   3 +-
 .../cassandra/db/marshal/CounterColumnType.java |   3 +-
 .../apache/cassandra/db/marshal/DateType.java   |   3 +-
 .../cassandra/db/marshal/DecimalType.java   |   3 +-
 .../apache/cassandra/db/marshal/DoubleType.java |   3 +-
 .../cassandra/db/marshal/DurationType.java  |   3 +-
 .../db/marshal/DynamicCompositeType.java|   3 +-
 .../apache/cassandra/db/marshal/FloatType.java  |   3 +-
 .../apache/cassandra/db/marshal/FrozenType.java |   3 +-
 .../cassandra/db/marshal/InetAddressType.java   |   3 +-
 .../apache/cassandra/db/marshal/Int32Type.java  |   3 +-
 .../cassandra/db/marshal/IntegerType.java   |   3 +-
 .../apache/cassandra/db/marshal/ListType.java   |  13 +-
 .../apache/cassandra/db/marshal/LongType.java   |   3 +-
 .../apache/cassandra/db/marshal/MapType.java|   6 +-
 .../db/marshal/PartitionerDefinedOrder.java |   3 +-
 ...

[jira] [Created] (CASSANDRA-12865) dtest failure in materialized_views_test.TestMaterializedViews.view_tombstone_test

2016-10-31 Thread Sean McCarthy (JIRA)
Sean McCarthy created CASSANDRA-12865:
-

 Summary: dtest failure in 
materialized_views_test.TestMaterializedViews.view_tombstone_test
 Key: CASSANDRA-12865
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12865
 Project: Cassandra
  Issue Type: Test
Reporter: Sean McCarthy
Assignee: DS Test Eng
 Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log

example failure:

http://cassci.datastax.com/job/cassandra-3.0_dtest/844/testReport/materialized_views_test/TestMaterializedViews/view_tombstone_test

{code}
Error Message

Encountered digest mismatch when we shouldn't
{code}
{code}
Stacktrace

  File "/usr/lib/python2.7/unittest/case.py", line 329, in run
testMethod()
  File "/home/automaton/cassandra-dtest/materialized_views_test.py", line 971, 
in view_tombstone_test
self.check_trace_events(result.get_query_trace(), False)
  File "/home/automaton/cassandra-dtest/materialized_views_test.py", line 995, 
in check_trace_events
self.fail("Encountered digest mismatch when we shouldn't")
  File "/usr/lib/python2.7/unittest/case.py", line 410, in fail
raise self.failureException(msg)
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12808) testall failure inorg.apache.cassandra.io.sstable.IndexSummaryManagerTest.testCancelIndex

2016-10-31 Thread Sean McCarthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean McCarthy updated CASSANDRA-12808:
--
Description: 
example failure:
http://cassci.datastax.com/job/cassandra-2.2_testall/594/testReport/org.apache.cassandra.io.sstable/IndexSummaryManagerTest/testCancelIndex/

{code}
Error Message

Expected compaction interrupted exception
{code}
{code}
Stacktrace

junit.framework.AssertionFailedError: Expected compaction interrupted exception
at 
org.apache.cassandra.io.sstable.IndexSummaryManagerTest.testCancelIndex(IndexSummaryManagerTest.java:641)
{code}

Related failure:
http://cassci.datastax.com/job/cassandra-2.2_testall/600/testReport/org.apache.cassandra.io.sstable/IndexSummaryManagerTest/testCancelIndex_compression/

  was:
example failure:
http://cassci.datastax.com/job/cassandra-2.2_testall/594/testReport/org.apache.cassandra.io.sstable/IndexSummaryManagerTest/testCancelIndex/

{code}
Error Message

Expected compaction interrupted exception
{code}
{code}
Stacktrace

junit.framework.AssertionFailedError: Expected compaction interrupted exception
at 
org.apache.cassandra.io.sstable.IndexSummaryManagerTest.testCancelIndex(IndexSummaryManagerTest.java:641)
{code}


> testall failure 
> inorg.apache.cassandra.io.sstable.IndexSummaryManagerTest.testCancelIndex
> -
>
> Key: CASSANDRA-12808
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12808
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>  Labels: test-failure
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_testall/594/testReport/org.apache.cassandra.io.sstable/IndexSummaryManagerTest/testCancelIndex/
> {code}
> Error Message
> Expected compaction interrupted exception
> {code}
> {code}
> Stacktrace
> junit.framework.AssertionFailedError: Expected compaction interrupted 
> exception
>   at 
> org.apache.cassandra.io.sstable.IndexSummaryManagerTest.testCancelIndex(IndexSummaryManagerTest.java:641)
> {code}
> Related failure:
> http://cassci.datastax.com/job/cassandra-2.2_testall/600/testReport/org.apache.cassandra.io.sstable/IndexSummaryManagerTest/testCancelIndex_compression/





[jira] [Commented] (CASSANDRA-12859) Column-level permissions

2016-10-31 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15622003#comment-15622003
 ] 

Sam Tunnicliffe commented on CASSANDRA-12859:
-

Thanks for opening this and for the detailed proposal doc. I've heard this 
feature requested a few times now, so it would be good to get it in. 
Regarding the specifics of the proposal, I have a few questions/points of 
feedback:

bq. In the interest of an unobtrusive and non-breaking implementation, I 
propose to not break up MODIFY into its conceptual parts. Rather, optional 
column lists will be allowed on MODIFY. Such column lists, if any, will be 
simply ignored in permission enforcement of DELETE and TRUNCATE statements.

When you say column lists will be ignored, do you mean that the specific 
columns in the list will be ignored, making the presence of *any* list 
equivalent to a table-level grant? Or is the suggestion that a permission with 
a qualifying list of columns will be equivalent to *no* permission being 
granted? The former is definitely dangerous as it would allow a user who has 
MODIFY permission only on a single column to delete an entire row, or even to 
delete all partitions with TRUNCATE. The latter is also somewhat problematic as 
it requires special handling at authz time, i.e. you need to specifically check 
whether the user has non-column-restricted permission on the table (which I 
think is subtly different to the checking required on the read & upsert paths). 
Why not just process deletes/truncates the same as inserts?

bq. Dropping previously included columns from the new list has the effect of 
revoking the permission on those columns.
If I understood this correctly, this means that every GRANT statement 
containing a column list completely replaces any existing column list. e.g.
{code}
GRANT SELECT on ks.t1 (col_a, col_b) TO foo;  // role foo has access to col_a 
and col_b
GRANT SELECT on ks.t1 (col_c) TO foo;  // now foo only has permissions on col_c
{code}
The special case is when the column list is empty, in which case it becomes a 
GRANT on *all* columns. I get that this special case is required for backwards 
compatibility, but I'm not keen on the regular case as it seems a little 
counter-intuitive to me. After executing the two statements above for example, 
it would appear more natural to me for foo to have SELECT permissions on all 
three columns. 
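The replace-vs-union distinction discussed above can be made concrete with a small sketch. This is a hypothetical model of the proposed semantics, not Cassandra code; the function and data structure are invented for illustration:

```python
def grant(perms, role, resource, columns=None):
    # Proposed "replace" semantics: a new column list overwrites any
    # previous one; an empty/absent list means the whole table.
    perms[(role, resource)] = set(columns) if columns else None

perms = {}
grant(perms, "foo", "ks.t1", ["col_a", "col_b"])  # foo: col_a, col_b
grant(perms, "foo", "ks.t1", ["col_c"])           # foo: only col_c now

# Union semantics, arguably the more intuitive reading, would instead be:
#   perms[key] = (perms.get(key) or set()) | set(columns)
# leaving foo with col_a, col_b and col_c after the two grants.
```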

bq. Are there unit tests, part of the Cassandra project, that verify 
functionality of managing and enforcing permissions?
There are not any substantial unit tests for authz at the table level, but 
there is a fairly comprehensive set of dtests 
[here|https://github.com/riptano/cassandra-dtest/blob/master/auth_test.py]. The 
main impediment to better unit testing here is that {{CassandraAuthorizer}} 
does all reads and writes using the distributed path, through {{StorageProxy}}. 
I've been considering something like [this 
change|https://gist.github.com/beobal/0bcd592ad7716d0bebd400c53b83ce3e] to make 
it more testable, this might be a good time to do that. (Note: 
{{CassandraRoleManager}} works in exactly the same way, so it will require 
similar changes to be used in unit tests).

How do you propose to handle dropped/altered columns? When a table or keyspace 
is dropped, all permissions on it are revoked. Aside from good housekeeping, 
this prevents accidental leakage of permissions, should a new table be created 
with the same name. {{IAuthorizer}} is currently hooked up to schema change 
events via {{AuthMigrationListener}} to facilitate this. Something similar will 
need to be done to process schema events which alter or drop columns. This 
scenario is missing from the proposed testing plan btw.

Whilst the new EBNF looks fair enough, we need to be sure and enforce the 
restriction that only {{DataResource}} can have a column list applied, and only 
a Table level {{DataResource}} at that. So, although it's not something that 
can be enforced at the grammar level AFAICT, we need to ensure that statements 
like these are illegal:
{code}
GRANT SELECT (col_a, col_b) ON KEYSPACE ks TO foo;
GRANT EXECUTE (col_x) ON FUNCTION ks.fun1(int) TO foo;
{code}

The section describing how the finer-grained checking will impact the code 
stops at the {{ClientState}} & doesn't make any mention of changes to the 
{{IAuthorizer}} interface. So it's slightly unclear how precisely to support 
bq. enriching the class PermissionsCache by managing, in memory, a set of 
included columns (if specified in a GRANT statement), per SELECT / MODIFY. 
Needs to be looked up efficiently.
To be honest, I think this is a trickier problem than it may at first appear. 
The reason being that the concept of qualifying permissions with a column list 
only applies to one specific {{IResource}} implementation, but 
{{IAuthorizer::authorize}} is completely agnostic as to

[jira] [Comment Edited] (CASSANDRA-12864) "commitlog_sync_batch_window_in_ms" parameter is not working correctly in 2.1, 2.2 and 3.9

2016-10-31 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15621821#comment-15621821
 ] 

Benjamin Lerer edited comment on CASSANDRA-12864 at 10/31/16 10:48 AM:
---

It looks like [~benedict] already answered your question in this email:
bq. {{commitlog_sync_batch_window_in_ms}} is the maximum length of time that 
queries may be batched together for, not the minimum.



was (Author: blerer):
It looks like [~benedict] already answered your question in this email:
bq. {{commitlog_sync_batch_window_in_ms}} is the maximum length of time that 
queries may be batched together
for, not the minimum.


> "commitlog_sync_batch_window_in_ms" parameter is not working correctly in 
> 2.1, 2.2 and 3.9
> --
>
> Key: CASSANDRA-12864
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12864
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Hiroyuki Yamada
>
> "commitlog_sync_batch_window_in_ms" doesn't seem to be working at least in 
> the latest versions in 2.1.16, 2.2.8 and 3.9.
> Here is the way to reproduce the bug.
> 1.  set the following parameters in cassandra.yaml
> * commitlog_sync: batch
> * commitlog_sync_batch_window_in_ms: 1 (10s)
> 2. issue an insert from cqlsh
> 3. it immediately returns instead of waiting for 10 seconds.
> Please refer to the communication in the mailing list.
> http://www.mail-archive.com/user@cassandra.apache.org/msg49642.html





[jira] [Commented] (CASSANDRA-12864) "commitlog_sync_batch_window_in_ms" parameter is not working correctly in 2.1, 2.2 and 3.9

2016-10-31 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15621821#comment-15621821
 ] 

Benjamin Lerer commented on CASSANDRA-12864:


It looks like [~benedict] already answered your question in this email:
bq. {{commitlog_sync_batch_window_in_ms}} is the maximum length of time that 
queries may be batched together
for, not the minimum.
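Under this reading, the batch window is an upper bound on how long the sync thread may sleep between flushes, so an isolated write returns almost immediately. A toy model of that behaviour, an illustrative sketch only and not Cassandra's commitlog code, with all names invented:

```python
import threading
import time

class BatchCommitLog:
    """Toy model: the syncer thread flushes as soon as work arrives,
    sleeping at most `window` seconds when idle. The window therefore
    caps, rather than imposes, the wait seen by a single write."""

    def __init__(self, window):
        self.window = window
        self.cond = threading.Condition()
        self.pending = []
        self.synced = 0
        threading.Thread(target=self._syncer, daemon=True).start()

    def add(self, mutation):
        with self.cond:
            self.pending.append(mutation)
            self.cond.notify_all()              # wake the syncer now
            while mutation in self.pending:     # block until "fsynced"
                self.cond.wait()

    def _syncer(self):
        while True:
            with self.cond:
                if not self.pending:
                    self.cond.wait(timeout=self.window)  # sleep <= window
                self.synced += len(self.pending)         # simulate fsync
                self.pending.clear()
                self.cond.notify_all()

log = BatchCommitLog(window=10.0)
start = time.time()
log.add("INSERT ...")
elapsed = time.time() - start   # far below the 10 s window
```

This matches the reproduction in the ticket: with a 10 s window configured, a lone INSERT from cqlsh still returns right away, because nothing forces the syncer to wait out the full window before flushing.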


> "commitlog_sync_batch_window_in_ms" parameter is not working correctly in 
> 2.1, 2.2 and 3.9
> --
>
> Key: CASSANDRA-12864
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12864
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Hiroyuki Yamada
>
> "commitlog_sync_batch_window_in_ms" doesn't seem to be working at least in 
> the latest versions in 2.1.16, 2.2.8 and 3.9.
> Here is the way to reproduce the bug.
> 1.  set the following parameters in cassandra.yaml
> * commitlog_sync: batch
> * commitlog_sync_batch_window_in_ms: 1 (10s)
> 2. issue an insert from cqlsh
> 3. it immediately returns instead of waiting for 10 seconds.
> Please refer to the communication in the mailing list.
> http://www.mail-archive.com/user@cassandra.apache.org/msg49642.html





[jira] [Commented] (CASSANDRA-12791) MessageIn logic to determine if the message is cross-node is wrong

2016-10-31 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15621787#comment-15621787
 ] 

Sylvain Lebresne commented on CASSANDRA-12791:
--

bq. The intent is to help operators work out if messages are dropped because of 
clock skew.
bq. we need to also check DD.hasCrossNodeTimeout(), a message originating cross 
node is not sufficient.

I don't think I agree, tbh. Adding the {{DD.hasCrossNodeTimeout()}} check loses 
information and creates a somewhat confusing metric, and I don't see it 
really adding value. To quote Brandon on the original ticket, knowing if 
messages are dropped because of clock skew "is easily derived from the yaml". 
Namely, if you do see a lot of cross-node dropped messages but no 
local/internal ones, then it's a fair sign this may be due to clock skew, and 
you can then simply check whether {{DD.hasCrossNodeTimeout()}} is set to confirm.

So adding the {{DD.hasCrossNodeTimeout()}} check does not really add any 
information that you can't easily infer otherwise, but adding it does mean that 
when the option is {{false}} (the default as it happens), then the cross-node 
metric will never-ever get incremented. And I can't shake the feeling that it's 
going to be confusing for most users. I mean, they see we have 2 different 
metrics, but only seeing the "local" one ever get incremented might make them 
think only locally delivered messages are dropped for some weird reason.

Anyway, I don't care tremendously about it (I was mostly bugged by the broken 
logic in {{MessageIn}} after all) but I do think it's strictly better *without* 
the check to {{DD.hasCrossNodeTimeout()}} in {{MS.incrementDroppedMessage()}}. 
I'm good with the rest of the changes though.
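For reference, the reconstruction performed by the quoted {{readTimestamp}} in the issue description amounts to splicing the sender's low 32 clock bits onto the receiver's high bits. A simplified sketch, illustrative only and ignoring the sign-extension subtlety of the Java shifts:

```python
def reconstruct(local_ms, partial):
    # Keep the receiver's high 32 bits, splice in the 32 bits the sender
    # shipped on the wire (simplified relative to the Java original).
    return (local_ms & ~0xFFFFFFFF) | (partial & 0xFFFFFFFF)

local = 1477904082123            # receiver clock at read time, ms
sender = local - 7               # sender clock read a few ms earlier
rebuilt = reconstruct(local, sender & 0xFFFFFFFF)

# The sender's timestamp is recovered whenever both clocks fall in the
# same ~49.7-day high-bits epoch...
assert rebuilt == sender
# ...and `local != rebuilt` whenever the two clock reads differ by even
# 1 ms, so `timestamp != crossNodeTimestamp` is almost always true.
assert rebuilt != local
```

This is exactly why the {{isCrossNode}} test described in this ticket fires on essentially every cross-node message regardless of actual clock skew.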


> MessageIn logic to determine if the message is cross-node is wrong
> --
>
> Key: CASSANDRA-12791
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12791
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sylvain Lebresne
>Assignee: Sylvain Lebresne
>Priority: Minor
>
> {{MessageIn}} has the following code to read the 'creation time' of the 
> message on the receiving side:
> {noformat}
> public static ConstructionTime readTimestamp(InetAddress from, DataInputPlus 
> input, long timestamp) throws IOException
> {
> // make sure to readInt, even if cross_node_to is not enabled
> int partial = input.readInt();
> long crossNodeTimestamp = (timestamp & 0xFFFFFFFF00000000L) | (((partial 
> & 0xFFFFFFFFL) << 2) >> 2);
> if (timestamp > crossNodeTimestamp)
> {
> MessagingService.instance().metrics.addTimeTaken(from, timestamp - 
> crossNodeTimestamp);
> }
> if(DatabaseDescriptor.hasCrossNodeTimeout())
> {
> return new ConstructionTime(crossNodeTimestamp, timestamp != 
> crossNodeTimestamp);
> }
> else
> {
> return new ConstructionTime();
> }
> }
> {noformat}
> where {{timestamp}} is really the local time on the receiving node when 
> calling that method.
> The incorrect part, I believe, is the {{timestamp != crossNodeTimestamp}} 
> used to set the {{isCrossNode}} field of {{ConstructionTime}}. A first 
> problem is that this will basically always be {{true}}: for it to be 
> {{false}}, we'd need the low-bytes of the timestamp taken on the sending node 
> to coincide exactly with the ones taken on the receiving side, which is 
> _very_ unlikely. It is also a relatively meaningless test: having that test 
> be {{false}} basically means the lack of clock sync between the 2 nodes is 
> exactly the time between the 2 calls to {{System.currentTimeMillis()}} (on 
> sender and receiver), which is definitely not what we care about.
> What the result of this test is used for is to determine if the message was 
> cross-node or local. It's used to increment different metrics (we separate 
> metrics for local versus cross-node dropped messages) in {{MessagingService}} 
> for instance. And that's where this is kind of a bug: not only is the 
> {{timestamp != crossNodeTimestamp}} test almost always {{true}}, but if 
> {{DatabaseDescriptor.hasCrossNodeTimeout()}} is {{false}}, we *always* have 
> this {{isCrossNode}} false, which means we'll never increment the "cross-node 
> dropped messages" metric, which is imo unexpected.
> That is, it is true that if {{DatabaseDescriptor.hasCrossNodeTimeout() == 
> false}}, then we end up using the receiver-side timestamp to time out 
> messages, and so you end up only dropping messages that time out locally. And 
> _in that sense_, always incrementing the "locally" dropped messages metric is 
> not completely illogical. But I doubt most users are aware of this pretty 
> specific nuance when looking at the related metrics, and I'm relatively sure 
> users expect a metric named {{droppedCrossNodeTimeout}} to actually count 
> cross-node dropped messages.
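The bit splice in the quoted {{readTimestamp}} can be demonstrated in 
isolation. The sketch below mirrors the reconstruction (receiver's high 32 bits 
of {{currentTimeMillis}} combined with the sender's 32 low-order wire bits); 
the class and variable names are illustrative, not Cassandra's.

```java
public class CrossNodeTimestampDemo {
    // Mirrors the reconstruction in the quoted readTimestamp(): keep the
    // receiver's high 32 bits of currentTimeMillis and splice in the 32 low
    // bits the sender put on the wire.
    static long rebuild(long receiverNowMillis, int senderLow32) {
        return (receiverNowMillis & 0xFFFFFFFF00000000L)
             | (((senderLow32 & 0xFFFFFFFFL) << 2) >> 2);
    }

    public static void main(String[] args) {
        long receiverNow = 1_478_000_000_123L;     // receiver clock, ms
        int senderLow = (int) 1_478_000_000_120L;  // sender clock 3 ms behind,
                                                   // truncated to 32 bits
        long crossNode = rebuild(receiverNow, senderLow);
        System.out.println(crossNode);             // 1478000000120
        // "timestamp != crossNodeTimestamp" is only false when both clocks
        // read the exact same millisecond, hence that test is almost always
        // true, as the description argues.
        System.out.println(receiverNow != crossNode); // true
    }
}
```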

[jira] [Commented] (CASSANDRA-12539) Empty CommitLog prevents restart

2016-10-31 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15621732#comment-15621732
 ] 

Benjamin Lerer commented on CASSANDRA-12539:


bq. There seems to be a very easy way to reproduce this condition

Which condition? Having an empty commit log segment?
An empty commit log segment can occur for different reasons; running out of 
file pointers is only one of them. Cassandra has no way to know that it has an 
empty commit log segment because it ran out of file pointers before crashing. 
An empty commit log segment might also have been caused by another problem where 
you actually lost some data. In that case, you will actually want to be notified 
of the problem.

Now, the real problem here is not that Cassandra does not ignore empty commit 
logs on startup. It is that it should not create an empty commit log in this 
corner case. I will look at what I can do about it. 

 

> Empty CommitLog prevents restart
> 
>
> Key: CASSANDRA-12539
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12539
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Stefano Ortolani
>
> A node just crashed (known cause: CASSANDRA-11594) but to my surprise (unlike 
> other time) restarting simply fails.
> Checking the logs showed:
> {noformat}
> ERROR [main] 2016-08-25 17:05:22,611 JVMStabilityInspector.java:82 - Exiting 
> due to error while processing commit log during initialization.
> org.apache.cassandra.db.commitlog.CommitLogReplayer$CommitLogReplayException: 
> Could not read commit log descriptor in file 
> /data/cassandra/commitlog/CommitLog-6-1468235564433.log
>   at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer.handleReplayError(CommitLogReplayer.java:650)
>  [apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer.recover(CommitLogReplayer.java:327)
>  [apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer.recover(CommitLogReplayer.java:148)
>  [apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:181) 
> [apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:161) 
> [apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:289) 
> [apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:557)
>  [apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:685) 
> [apache-cassandra-3.0.8.jar:3.0.8]
> INFO  [main] 2016-08-25 17:08:56,944 YamlConfigurationLoader.java:85 - 
> Configuration location: file:/etc/cassandra/cassandra.yaml
> {noformat}
> Deleting the empty file fixes the problem.
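As an operator-side check (not part of Cassandra), one could scan for 
zero-length segments before restarting, since the replayer refuses to start on 
a segment whose descriptor it cannot read. The helper name and default path 
below are illustrative; the path merely echoes the one in the stack trace.

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

public class EmptyCommitLogCheck {
    // Hypothetical helper: pick out zero-length CommitLog-*.log segments,
    // which CommitLogReplayer refuses to start on ("Could not read commit
    // log descriptor"). Inspect (and possibly delete) these before restart.
    static List<File> emptySegments(File[] candidates) {
        List<File> empty = new ArrayList<>();
        if (candidates == null)
            return empty;
        for (File f : candidates)
            if (f.getName().startsWith("CommitLog-") && f.getName().endsWith(".log")
                && f.length() == 0)
                empty.add(f);
        return empty;
    }

    public static void main(String[] args) {
        // Example directory from the stack trace above; adjust for your install.
        File dir = new File(args.length > 0 ? args[0] : "/data/cassandra/commitlog");
        for (File f : emptySegments(dir.listFiles()))
            System.out.println("Empty segment, inspect before restart: " + f);
    }
}
```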



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12862) LWT leaves corrupted state

2016-10-31 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15621685#comment-15621685
 ] 

Sylvain Lebresne commented on CASSANDRA-12862:
--

It would be greatly appreciated if you shared the code you're using to 
reproduce, especially if it's simple. 

> LWT leaves corrupted state
> --
>
> Key: CASSANDRA-12862
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12862
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Cassandra 2.1.16, 3-node cluster with RF=3, 
> NetworkTopology with 1 DC
>Reporter: Artur Siekielski
>
> When executing "INSERT ... IF NOT EXISTS" (with consistency LOCAL_QUORUM) 
> while the concurrency level is high (about 50 simultaneous threads doing 
> inserts, for the same partition key but different clustering keys) sometimes 
> the INSERT returns applied=False, but the subsequent SELECTs return no data. 
> The corrupted state is permanent - neither the INSERT nor the SELECTs succeed, 
> making the PK "locked".
> I can easily reproduce this - for 100 simultaneous threads doing a single 
> insert I get 1-2 corruptions.
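The reproduction pattern described (many threads racing "IF NOT EXISTS" on one 
partition with distinct clustering keys) has roughly the following shape. This 
sketch substitutes {{ConcurrentHashMap.putIfAbsent}} for the actual LWT so it 
runs without a cluster; all names are illustrative.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class LwtRaceSketch {
    // Stand-in for "INSERT ... IF NOT EXISTS": putIfAbsent returns null iff
    // this caller won, mimicking the driver's [applied] result column.
    static final ConcurrentHashMap<String, Boolean> table = new ConcurrentHashMap<>();

    static boolean insertIfNotExists(String pk, String ck) {
        return table.putIfAbsent(pk + "/" + ck, Boolean.TRUE) == null;
    }

    public static void main(String[] args) throws InterruptedException {
        // 50 threads, same partition key, distinct clustering keys.
        ExecutorService pool = Executors.newFixedThreadPool(50);
        for (int i = 0; i < 100; i++) {
            final String ck = "ck" + i;
            pool.submit(() -> insertIfNotExists("pk", ck));
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        // With distinct clustering keys every insert applies here; the
        // reported bug is that against 2.1.16 some come back applied=false
        // and the row is then unreadable.
        System.out.println(table.size()); // 100
    }
}
```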





[jira] [Commented] (CASSANDRA-12857) Upgrade procedure between 2.1.x and 3.0.x is broken

2016-10-31 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15621666#comment-15621666
 ] 

Sylvain Lebresne commented on CASSANDRA-12857:
--

Something went wrong when upgrading the schema, but it will be hard to track it 
down unless we have your schema (or at least some version of your schema that 
reproduces the problem).

> Upgrade procedure between 2.1.x and 3.0.x is broken
> ---
>
> Key: CASSANDRA-12857
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12857
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Alexander Yasnogor
>Priority: Critical
>
> It is not possible to safely do an in-place Cassandra upgrade from 2.1.14 to 
> 3.0.9.
> Distribution: deb packages from datastax community repo.
> The upgrade was performed according to procedure from this docu: 
> https://docs.datastax.com/en/upgrade/doc/upgrade/cassandra/upgrdCassandraDetails.html
> Potential reason: The upgrade procedure creates a corrupted system_schema, and 
> this keyspace gets populated across the cluster and kills it.
> We started with one datacenter which contains 19 nodes divided to two racks.
> First rack was successfully upgraded and nodetool describecluster reported 
> two schema versions. One for upgraded nodes, another for non-upgraded nodes.
> On starting new version on a first node from the second rack:
> {code:java}
> INFO  [main] 2016-10-25 13:06:12,103 LegacySchemaMigrator.java:87 - Moving 11 
> keyspaces from legacy schema tables to the new schema keyspace (system_schema)
> INFO  [main] 2016-10-25 13:06:12,104 LegacySchemaMigrator.java:148 - 
> Migrating keyspace 
> org.apache.cassandra.schema.LegacySchemaMigrator$Keyspace@7505e6ac
> INFO  [main] 2016-10-25 13:06:12,200 LegacySchemaMigrator.java:148 - 
> Migrating keyspace 
> org.apache.cassandra.schema.LegacySchemaMigrator$Keyspace@64414574
> INFO  [main] 2016-10-25 13:06:12,204 LegacySchemaMigrator.java:148 - 
> Migrating keyspace 
> org.apache.cassandra.schema.LegacySchemaMigrator$Keyspace@3f2c5f45
> INFO  [main] 2016-10-25 13:06:12,207 LegacySchemaMigrator.java:148 - 
> Migrating keyspace 
> org.apache.cassandra.schema.LegacySchemaMigrator$Keyspace@2bc2d64d
> INFO  [main] 2016-10-25 13:06:12,301 LegacySchemaMigrator.java:148 - 
> Migrating keyspace 
> org.apache.cassandra.schema.LegacySchemaMigrator$Keyspace@77343846
> INFO  [main] 2016-10-25 13:06:12,305 LegacySchemaMigrator.java:148 - 
> Migrating keyspace 
> org.apache.cassandra.schema.LegacySchemaMigrator$Keyspace@19b0b931
> INFO  [main] 2016-10-25 13:06:12,308 LegacySchemaMigrator.java:148 - 
> Migrating keyspace 
> org.apache.cassandra.schema.LegacySchemaMigrator$Keyspace@44bb0b35
> INFO  [main] 2016-10-25 13:06:12,311 LegacySchemaMigrator.java:148 - 
> Migrating keyspace 
> org.apache.cassandra.schema.LegacySchemaMigrator$Keyspace@79f6cd51
> INFO  [main] 2016-10-25 13:06:12,319 LegacySchemaMigrator.java:148 - 
> Migrating keyspace 
> org.apache.cassandra.schema.LegacySchemaMigrator$Keyspace@2fcd363b
> INFO  [main] 2016-10-25 13:06:12,356 LegacySchemaMigrator.java:148 - 
> Migrating keyspace 
> org.apache.cassandra.schema.LegacySchemaMigrator$Keyspace@609eead6
> INFO  [main] 2016-10-25 13:06:12,358 LegacySchemaMigrator.java:148 - 
> Migrating keyspace 
> org.apache.cassandra.schema.LegacySchemaMigrator$Keyspace@7eb7f5d0
> INFO  [main] 2016-10-25 13:06:13,958 LegacySchemaMigrator.java:97 - 
> Truncating legacy schema tables
> INFO  [main] 2016-10-25 13:06:26,474 LegacySchemaMigrator.java:103 - 
> Completed migration of legacy schema tables
> INFO  [main] 2016-10-25 13:06:26,474 StorageService.java:521 - Populating 
> token metadata from system tables
> INFO  [main] 2016-10-25 13:06:26,796 StorageService.java:528 - Token 
> metadata: Normal Tokens: [HUGE LIST of tokens]
> INFO  [main] 2016-10-25 13:06:29,066 ColumnFamilyStore.java:389 - 
> Initializing ...
> INFO  [main] 2016-10-25 13:06:29,066 ColumnFamilyStore.java:389 - 
> Initializing ...
> INFO  [main] 2016-10-25 13:06:45,894 AutoSavingCache.java:165 - Completed 
> loading (2 ms; 460 keys) KeyCache cache
> INFO  [main] 2016-10-25 13:06:46,982 StorageService.java:521 - Populating 
> token metadata from system tables
> INFO  [main] 2016-10-25 13:06:47,394 StorageService.java:528 - Token 
> metadata: Normal Tokens:[HUGE LIST of tokens]
> INFO  [main] 2016-10-25 13:06:47,420 LegacyHintsMigrator.java:88 - Migrating 
> legacy hints to new storage
> INFO  [main] 2016-10-25 13:06:47,420 LegacyHintsMigrator.java:91 - Forcing a 
> major compaction of system.hints table
> INFO  [main] 2016-10-25 13:06:50,587 LegacyHintsMigrator.java:95 - Writing 
> legacy hints to the new storage
> INFO  [main] 2016-10-25 13:06:53,927 LegacyHintsMigrator.java:99 - Truncating 
> system.hints table
> 
> INFO  [main] 2016-10-25 13:06:56,572 Migra