[jira] [Created] (CASSANDRA-12088) Upgrade corrupts SSTables

2016-06-24 Thread Chandra Sekar S (JIRA)
Chandra Sekar S created CASSANDRA-12088:
---

 Summary: Upgrade corrupts SSTables
 Key: CASSANDRA-12088
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12088
 Project: Cassandra
  Issue Type: Bug
 Environment: OS: CentOS release 6.7 (Final)
Cassandra version: 2.1, 3.7
Reporter: Chandra Sekar S
Priority: Critical


When upgrading from 2.0 to 3.7, a table was corrupted, and an exception now occurs 
when performing an LWT from the Java Driver. The server was upgraded from 2.0 to 2.1 
and then to 3.7, and "nodetool upgradesstables" was run after each upgrade step.

Schema of affected table:

{code}
CREATE TABLE payment.tbl (
    c1 text,
    c2 timestamp,
    c3 text,
    s1 timestamp static,
    s2 int static,
    c4 text,
    PRIMARY KEY (c1, c2)
) WITH CLUSTERING ORDER BY (c2 ASC)
    AND bloom_filter_fp_chance = 0.01
    AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
    AND comment = ''
    AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'max_threshold': '32', 'min_threshold': '4'}
    AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}
    AND crc_check_chance = 1.0
    AND dclocal_read_repair_chance = 0.1
    AND default_time_to_live = 0
    AND gc_grace_seconds = 864000
    AND max_index_interval = 2048
    AND memtable_flush_period_in_ms = 0
    AND min_index_interval = 128
    AND read_repair_chance = 0.0
    AND speculative_retry = '99PERCENTILE';
{code}

Insertion that fails:

{code:java}
insert into tbl (c1, s2) values ('value', 0) if not exists;
{code}

The stack trace in the Cassandra server's system.log:

{code}
INFO  [HANDSHAKE-maven-repo.corp.zeta.in/10.1.5.13] 2016-06-24 22:23:14,887 OutboundTcpConnection.java:514 - Handshaking version with maven-repo.corp.zeta.in/10.1.5.13
ERROR [MessagingService-Incoming-/10.1.5.13] 2016-06-24 22:23:14,889 CassandraDaemon.java:217 - Exception in thread Thread[MessagingService-Incoming-/10.1.5.13,5,main]
java.io.IOError: java.io.IOException: Corrupt flags value for unfiltered partition (isStatic flag set): 160
    at org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer$1.computeNext(UnfilteredRowIteratorSerializer.java:224) ~[apache-cassandra-3.7.0.jar:3.7.0]
    at org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer$1.computeNext(UnfilteredRowIteratorSerializer.java:212) ~[apache-cassandra-3.7.0.jar:3.7.0]
    at org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) ~[apache-cassandra-3.7.0.jar:3.7.0]
    at org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.deserialize30(PartitionUpdate.java:681) ~[apache-cassandra-3.7.0.jar:3.7.0]
    at org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.deserialize(PartitionUpdate.java:642) ~[apache-cassandra-3.7.0.jar:3.7.0]
    at org.apache.cassandra.service.paxos.Commit$CommitSerializer.deserialize(Commit.java:131) ~[apache-cassandra-3.7.0.jar:3.7.0]
    at org.apache.cassandra.service.paxos.PrepareResponse$PrepareResponseSerializer.deserialize(PrepareResponse.java:97) ~[apache-cassandra-3.7.0.jar:3.7.0]
    at org.apache.cassandra.service.paxos.PrepareResponse$PrepareResponseSerializer.deserialize(PrepareResponse.java:66) ~[apache-cassandra-3.7.0.jar:3.7.0]
    at org.apache.cassandra.net.MessageIn.read(MessageIn.java:114) ~[apache-cassandra-3.7.0.jar:3.7.0]
    at org.apache.cassandra.net.IncomingTcpConnection.receiveMessage(IncomingTcpConnection.java:190) ~[apache-cassandra-3.7.0.jar:3.7.0]
    at org.apache.cassandra.net.IncomingTcpConnection.receiveMessages(IncomingTcpConnection.java:178) ~[apache-cassandra-3.7.0.jar:3.7.0]
    at org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:92) ~[apache-cassandra-3.7.0.jar:3.7.0]
Caused by: java.io.IOException: Corrupt flags value for unfiltered partition (isStatic flag set): 160
    at org.apache.cassandra.db.rows.UnfilteredSerializer.deserialize(UnfilteredSerializer.java:380) ~[apache-cassandra-3.7.0.jar:3.7.0]
    at org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer$1.computeNext(UnfilteredRowIteratorSerializer.java:219) ~[apache-cassandra-3.7.0.jar:3.7.0]
    ... 11 common frames omitted
{code}
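For reference, the offending flags value {{160}} ({{0xA0}}) can be decoded against the per-row flag bits of the 3.x serialization format. The bit names below mirror the constants in {{UnfilteredSerializer}}; this is an illustrative sketch, not the actual deserialization path:

```java
// Sketch: decode the unfiltered-row flags byte from the error above.
// Bit names mirror the constants in
// org.apache.cassandra.db.rows.UnfilteredSerializer (3.x format).
public class FlagsDecode
{
    static final String[] NAMES = {
        "END_OF_PARTITION",     // 0x01
        "IS_MARKER",            // 0x02
        "HAS_TIMESTAMP",        // 0x04
        "HAS_TTL",              // 0x08
        "HAS_DELETION",         // 0x10
        "HAS_ALL_COLUMNS",      // 0x20
        "HAS_COMPLEX_DELETION", // 0x40
        "EXTENSION_FLAG"        // 0x80
    };

    static String decode(int flags)
    {
        StringBuilder sb = new StringBuilder();
        for (int bit = 0; bit < 8; bit++)
            if ((flags & (1 << bit)) != 0)
                sb.append(sb.length() > 0 ? " " : "").append(NAMES[bit]);
        return sb.toString();
    }

    public static void main(String[] args)
    {
        // 160 = 0x80 | 0x20: the extension flag is set, so the deserializer goes on
        // to read an extended flags byte; finding the static-row bit there in the
        // middle of a partition is what raises the IOException quoted above.
        System.out.println(decode(160)); // prints: HAS_ALL_COLUMNS EXTENSION_FLAG
    }
}
```

Decoding the byte this way only shows which bits were set; whether the value is genuine corruption or a 2.x-era artifact mis-read by the 3.x format is what this ticket is about.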



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-11713) Add ability to log thread dump when NTR pool is blocked

2016-06-24 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15348955#comment-15348955
 ] 

Paulo Motta edited comment on CASSANDRA-11713 at 6/25/16 12:45 AM:
---

Attaching a new patch that encapsulates the thread-dumping capability in a 
{{ThreadDumper}} class, which registers itself as an 
{{org.apache.cassandra.utils.ThreadDumper}} MBean and logs a warning if it is 
unable to register the MBean.

Also added a new parameter, {{enableThreadDumping}}, to {{SEPExecutor}}, which is 
only enabled by {{RequestThreadPoolExecutor}}. When this parameter is set, a 
{{ThreadDumper}} is instantiated and {{ThreadDumper.maybeLogThreadDump}} is 
called when there are blocked requests.

Tested the patch with jvisualvm (screenshot 
[attached|https://issues.apache.org/jira/secure/attachment/12813172/ThreadDumper.png])
 and verified that it only logs the thread dump once and unsets the flag.

Patch and CI tests below:
||trunk||
|[branch|https://github.com/apache/cassandra/compare/trunk...pauloricardomg:trunk-11713]|
|[testall|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-trunk-11713-testall/lastCompletedBuild/testReport/]|
|[dtest|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-trunk-11713-dtest/lastCompletedBuild/testReport/]|
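The arm-once/dump-once behaviour described above can be sketched as follows. Class and method names here are illustrative stand-ins, not the attached patch:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.util.concurrent.atomic.AtomicBoolean;

// One-shot thread dumper: armed (e.g. via JMX), it fires at most once per arming.
public class OneShotThreadDumper
{
    private final AtomicBoolean dumpRequested = new AtomicBoolean(false);

    /** Invoked through JMX to arm the next dump. */
    public void logThreadDumpOnNextContention()
    {
        dumpRequested.set(true);
    }

    /** Invoked by the executor when the pool is blocked; dumps at most once. */
    public boolean maybeLogThreadDump()
    {
        // compareAndSet checks and unsets the flag atomically, so only one
        // blocked request triggers the dump and later calls are no-ops.
        if (!dumpRequested.compareAndSet(true, false))
            return false;

        StringBuilder dump = new StringBuilder("Thread dump requested:\n");
        for (ThreadInfo info : ManagementFactory.getThreadMXBean().dumpAllThreads(true, true))
            dump.append(info);
        System.out.print(dump); // the real implementation would log via slf4j
        return true;
    }
}
```

The {{AtomicBoolean}} is what gives the "logs once and unsets the flag" behaviour verified in the jvisualvm test above.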


was (Author: pauloricardomg):
Attaching new patch encapsulating thread dumping capability in {{ThreadDumper}} 
class, which registers itself as a {{org.apache.cassandra.utils.ThreadDumper}} 
mbean and logs a warn in case it's not able to register the MBean. 

Also added a new parameter {{enableThreadDumping}} to {{SEPExecutor}} that is 
only enabled by {{RequestThreadPoolExecutor}}. When this parameter is set, a 
{{ThreadDumper}} is instantiated and {{ThreadDumper.maybeLogThreadDump}} is 
called when there are blocked requests.

Tested patch with jvisualvm (screenshot attached) and checked that it only logs 
thread dump once and unsets the flag.

Patch and CI tests below:
||trunk||
|[branch|https://github.com/apache/cassandra/compare/trunk...pauloricardomg:trunk-11713]|
|[testall|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-trunk-11713-testall/lastCompletedBuild/testReport/]|
|[dtest|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-trunk-11713-dtest/lastCompletedBuild/testReport/]|

> Add ability to log thread dump when NTR pool is blocked
> ---
>
> Key: CASSANDRA-11713
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11713
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Observability
>Reporter: Paulo Motta
>Assignee: Paulo Motta
>Priority: Minor
> Attachments: ThreadDumper.png
>
>
> Thread dumps are very useful for troubleshooting Native-Transport-Requests 
> contention issues like CASSANDRA-11363 and CASSANDRA-11529.
> While they could be generated externally with {{jstack}}, sometimes the 
> conditions are transient and it's hard to catch the exact moment when they 
> happen, so it could be useful to generate and log them upon user request when 
> certain internal condition happens.
> I propose adding a {{logThreadDumpOnNextContention}} flag to {{SEPExecutor}} 
> that when enabled via JMX generates and logs a single thread dump on the 
> system log when the thread pool queue is full.





[jira] [Updated] (CASSANDRA-11713) Add ability to log thread dump when NTR pool is blocked

2016-06-24 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-11713:

Attachment: ThreadDumper.png

> Add ability to log thread dump when NTR pool is blocked
> ---
>
> Key: CASSANDRA-11713
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11713
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Observability
>Reporter: Paulo Motta
>Assignee: Paulo Motta
>Priority: Minor
> Attachments: ThreadDumper.png
>
>
> Thread dumps are very useful for troubleshooting Native-Transport-Requests 
> contention issues like CASSANDRA-11363 and CASSANDRA-11529.
> While they could be generated externally with {{jstack}}, sometimes the 
> conditions are transient and it's hard to catch the exact moment when they 
> happen, so it could be useful to generate and log them upon user request when 
> certain internal condition happens.
> I propose adding a {{logThreadDumpOnNextContention}} flag to {{SEPExecutor}} 
> that when enabled via JMX generates and logs a single thread dump on the 
> system log when the thread pool queue is full.





[jira] [Commented] (CASSANDRA-11713) Add ability to log thread dump when NTR pool is blocked

2016-06-24 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15348955#comment-15348955
 ] 

Paulo Motta commented on CASSANDRA-11713:
-

Attaching a new patch that encapsulates the thread-dumping capability in a 
{{ThreadDumper}} class, which registers itself as an 
{{org.apache.cassandra.utils.ThreadDumper}} MBean and logs a warning if it is 
unable to register the MBean.

Also added a new parameter, {{enableThreadDumping}}, to {{SEPExecutor}}, which is 
only enabled by {{RequestThreadPoolExecutor}}. When this parameter is set, a 
{{ThreadDumper}} is instantiated and {{ThreadDumper.maybeLogThreadDump}} is 
called when there are blocked requests.

Tested the patch with jvisualvm (screenshot attached) and checked that it only 
logs the thread dump once and unsets the flag.

Patch and CI tests below:
||trunk||
|[branch|https://github.com/apache/cassandra/compare/trunk...pauloricardomg:trunk-11713]|
|[testall|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-trunk-11713-testall/lastCompletedBuild/testReport/]|
|[dtest|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-trunk-11713-dtest/lastCompletedBuild/testReport/]|

> Add ability to log thread dump when NTR pool is blocked
> ---
>
> Key: CASSANDRA-11713
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11713
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Observability
>Reporter: Paulo Motta
>Assignee: Paulo Motta
>Priority: Minor
>
> Thread dumps are very useful for troubleshooting Native-Transport-Requests 
> contention issues like CASSANDRA-11363 and CASSANDRA-11529.
> While they could be generated externally with {{jstack}}, sometimes the 
> conditions are transient and it's hard to catch the exact moment when they 
> happen, so it could be useful to generate and log them upon user request when 
> certain internal condition happens.
> I propose adding a {{logThreadDumpOnNextContention}} flag to {{SEPExecutor}} 
> that when enabled via JMX generates and logs a single thread dump on the 
> system log when the thread pool queue is full.





[jira] [Commented] (CASSANDRA-10769) "received out of order wrt DecoratedKey" after scrub

2016-06-24 Thread sankalp kohli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15348855#comment-15348855
 ] 

sankalp kohli commented on CASSANDRA-10769:
---

cc [~krummas] and [~brandon.williams]

> "received out of order wrt DecoratedKey" after scrub
> 
>
> Key: CASSANDRA-10769
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10769
> Project: Cassandra
>  Issue Type: Bug
> Environment: C* 2.1.11, Debian Wheezy
>Reporter: mlowicki
>
> After running scrub and cleanup on all nodes in single data center I'm 
> getting:
> {code}
> ERROR [ValidationExecutor:103] 2015-11-25 06:28:21,530 Validator.java:245 - 
> Failed creating a merkle tree for [repair 
> #89fa2b70-933d-11e5-b036-75bb514ae072 on sync/entity_by_id2, 
> (-5867793819051725444,-5865919628027816979]], /10.210.3.221 (see log for 
> details)
> ERROR [ValidationExecutor:103] 2015-11-25 06:28:21,531 
> CassandraDaemon.java:227 - Exception in thread 
> Thread[ValidationExecutor:103,1,main]
> java.lang.AssertionError: row DecoratedKey(-5867787467868737053, 
> 000932373633313036313204808800) received out of order wrt 
> DecoratedKey(-5865937851627253360, 000933313230313737333204c3c700)
> at org.apache.cassandra.repair.Validator.add(Validator.java:127) 
> ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.db.compaction.CompactionManager.doValidationCompaction(CompactionManager.java:1010)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.db.compaction.CompactionManager.access$600(CompactionManager.java:94)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.db.compaction.CompactionManager$9.call(CompactionManager.java:622)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
> ~[na:1.7.0_80]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  ~[na:1.7.0_80]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_80]
> at java.lang.Thread.run(Thread.java:745) [na:1.7.0_80]
> {code}
> What I did is to run repair on other node:
> {code}
> time nodetool repair --in-local-dc
> {code}
> Corresponding log on the node where repair has been started:
> {code}
> ERROR [AntiEntropySessions:414] 2015-11-25 06:28:21,533 
> RepairSession.java:303 - [repair #89fa2b70-933d-11e5-b036-75bb514ae072] 
> session completed with the following error
> org.apache.cassandra.exceptions.RepairException: [repair 
> #89fa2b70-933d-11e5-b036-75bb514ae072 on sync/entity_by_id2, 
> (-5867793819051725444,-5865919628027816979]] Validation failed in 
> /10.210.3.117
> at 
> org.apache.cassandra.repair.RepairSession.validationComplete(RepairSession.java:166)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.service.ActiveRepairService.handleMessage(ActiveRepairService.java:406)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.repair.RepairMessageVerbHandler.doVerb(RepairMessageVerbHandler.java:134)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:64) 
> ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  [na:1.7.0_80]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_80]
> at java.lang.Thread.run(Thread.java:745) [na:1.7.0_80]
> INFO  [AntiEntropySessions:415] 2015-11-25 06:28:21,533 
> RepairSession.java:260 - [repair #b9458fa0-933d-11e5-b036-75bb514ae072] new 
> session: will sync /10.210.3.221, /10.210.3.118, /10.210.3.117 on range 
> (7119703141488009983,7129744584776466802] for sync.[device_token, entity2, 
> user_stats, user_device, user_quota, user_store, user_device_progress, 
> entity_by_id2]
> ERROR [AntiEntropySessions:414] 2015-11-25 06:28:21,533 
> CassandraDaemon.java:227 - Exception in thread 
> Thread[AntiEntropySessions:414,5,RMI Runtime]
> java.lang.RuntimeException: org.apache.cassandra.exceptions.RepairException: 
> [repair #89fa2b70-933d-11e5-b036-75bb514ae072 on sync/entity_by_id2, 
> (-5867793819051725444,-5865919628027816979]] Validation failed in 
> /10.210.3.117
> at com.google.common.base.Throwables.propagate(Throwables.java:160) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32) 
> ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> ~[na:1.7.0_80]
> at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
> ~[na:1.7.0_80]

[jira] [Commented] (CASSANDRA-10769) "received out of order wrt DecoratedKey" after scrub

2016-06-24 Thread sankalp kohli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15348853#comment-15348853
 ] 

sankalp kohli commented on CASSANDRA-10769:
---

This also occurs on 2.1.14 clusters. Once there are a couple of failures, the 
repair-session thread pool fills up, causing subsequent repairs to fail.
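The assertion quoted in the description below is an ordering invariant: during validation compaction, partition keys must arrive in strictly increasing token order. A minimal sketch of that check (not the real {{Validator}}, which compares {{DecoratedKey}} objects rather than raw longs):

```java
// Minimal sketch of the ordering invariant behind this ticket's error:
// keys fed to the validator must arrive in strictly increasing token order.
public class OrderCheck
{
    private Long lastToken = null; // null until the first key is seen

    public void add(long token)
    {
        if (lastToken != null && token <= lastToken)
            throw new AssertionError("row " + token + " received out of order wrt " + lastToken);
        lastToken = token;
    }
}
```

The violating pair in the log is exactly of this shape: token -5867787467868737053 arrives after the larger -5865937851627253360.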

> "received out of order wrt DecoratedKey" after scrub
> 
>
> Key: CASSANDRA-10769
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10769
> Project: Cassandra
>  Issue Type: Bug
> Environment: C* 2.1.11, Debian Wheezy
>Reporter: mlowicki
>
> After running scrub and cleanup on all nodes in single data center I'm 
> getting:
> {code}
> ERROR [ValidationExecutor:103] 2015-11-25 06:28:21,530 Validator.java:245 - 
> Failed creating a merkle tree for [repair 
> #89fa2b70-933d-11e5-b036-75bb514ae072 on sync/entity_by_id2, 
> (-5867793819051725444,-5865919628027816979]], /10.210.3.221 (see log for 
> details)
> ERROR [ValidationExecutor:103] 2015-11-25 06:28:21,531 
> CassandraDaemon.java:227 - Exception in thread 
> Thread[ValidationExecutor:103,1,main]
> java.lang.AssertionError: row DecoratedKey(-5867787467868737053, 
> 000932373633313036313204808800) received out of order wrt 
> DecoratedKey(-5865937851627253360, 000933313230313737333204c3c700)
> at org.apache.cassandra.repair.Validator.add(Validator.java:127) 
> ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.db.compaction.CompactionManager.doValidationCompaction(CompactionManager.java:1010)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.db.compaction.CompactionManager.access$600(CompactionManager.java:94)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.db.compaction.CompactionManager$9.call(CompactionManager.java:622)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
> ~[na:1.7.0_80]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  ~[na:1.7.0_80]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_80]
> at java.lang.Thread.run(Thread.java:745) [na:1.7.0_80]
> {code}
> What I did is to run repair on other node:
> {code}
> time nodetool repair --in-local-dc
> {code}
> Corresponding log on the node where repair has been started:
> {code}
> ERROR [AntiEntropySessions:414] 2015-11-25 06:28:21,533 
> RepairSession.java:303 - [repair #89fa2b70-933d-11e5-b036-75bb514ae072] 
> session completed with the following error
> org.apache.cassandra.exceptions.RepairException: [repair 
> #89fa2b70-933d-11e5-b036-75bb514ae072 on sync/entity_by_id2, 
> (-5867793819051725444,-5865919628027816979]] Validation failed in 
> /10.210.3.117
> at 
> org.apache.cassandra.repair.RepairSession.validationComplete(RepairSession.java:166)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.service.ActiveRepairService.handleMessage(ActiveRepairService.java:406)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.repair.RepairMessageVerbHandler.doVerb(RepairMessageVerbHandler.java:134)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:64) 
> ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  [na:1.7.0_80]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_80]
> at java.lang.Thread.run(Thread.java:745) [na:1.7.0_80]
> INFO  [AntiEntropySessions:415] 2015-11-25 06:28:21,533 
> RepairSession.java:260 - [repair #b9458fa0-933d-11e5-b036-75bb514ae072] new 
> session: will sync /10.210.3.221, /10.210.3.118, /10.210.3.117 on range 
> (7119703141488009983,7129744584776466802] for sync.[device_token, entity2, 
> user_stats, user_device, user_quota, user_store, user_device_progress, 
> entity_by_id2]
> ERROR [AntiEntropySessions:414] 2015-11-25 06:28:21,533 
> CassandraDaemon.java:227 - Exception in thread 
> Thread[AntiEntropySessions:414,5,RMI Runtime]
> java.lang.RuntimeException: org.apache.cassandra.exceptions.RepairException: 
> [repair #89fa2b70-933d-11e5-b036-75bb514ae072 on sync/entity_by_id2, 
> (-5867793819051725444,-5865919628027816979]] Validation failed in 
> /10.210.3.117
> at com.google.common.base.Throwables.propagate(Throwables.java:160) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32) 
> ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call

[jira] [Updated] (CASSANDRA-12082) CommitLogStressTest failing post-CASSANDRA-8844

2016-06-24 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-12082:

Resolution: Fixed
Status: Resolved  (was: Ready to Commit)

[Committed|https://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=commit;h=a13add64fe586ba16041db71f0a200a52da924be]
 earlier today while JIRA was down.

> CommitLogStressTest failing post-CASSANDRA-8844
> ---
>
> Key: CASSANDRA-12082
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12082
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Joshua McKenzie
>Assignee: Joshua McKenzie
>Priority: Minor
> Fix For: 3.8
>
> Attachments: 0001-Fix-CommitLogStressTest.patch
>
>
> Test timing out after CASSANDRA-8844.





[jira] [Updated] (CASSANDRA-12082) CommitLogStressTest failing post-CASSANDRA-8844

2016-06-24 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-12082:

Status: Ready to Commit  (was: Patch Available)

> CommitLogStressTest failing post-CASSANDRA-8844
> ---
>
> Key: CASSANDRA-12082
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12082
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Joshua McKenzie
>Assignee: Joshua McKenzie
>Priority: Minor
> Fix For: 3.8
>
> Attachments: 0001-Fix-CommitLogStressTest.patch
>
>
> Test timing out after CASSANDRA-8844.





[jira] [Commented] (CASSANDRA-11414) dtest failure in bootstrap_test.TestBootstrap.resumable_bootstrap_test

2016-06-24 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15348789#comment-15348789
 ] 

Paulo Motta commented on CASSANDRA-11414:
-

It seems there could be a few races when a stream session is interrupted; I also 
ran into some of these on CASSANDRA-3486. There are also some potential issues 
with the dtest that can make it non-deterministic, so I addressed those in this 
[PR|https://github.com/riptano/cassandra-dtest/pull/1051] (currently under 
review).

I addressed the most visible of these races in {{StreamSession}} and its 
surroundings, and submitted a new multiplexer run based on [this dtest 
branch|https://github.com/riptano/cassandra-dtest/pull/1051]: 
https://cassci.datastax.com/view/Parameterized/job/parameterized_dtest_multiplexer/148/

The patch and dtests for 3.0 are available below:
||3.0||
|[branch|https://github.com/apache/cassandra/compare/cassandra-3.0...pauloricardomg:3.0-11414]|
|[testall|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-3.0-11414-testall/lastCompletedBuild/testReport/]|
|[dtest|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-3.0-11414-dtest/lastCompletedBuild/testReport/]|

(I will set the ticket to Patch Available and give more details about the 
improvements if the multiplexer results look good.)

> dtest failure in bootstrap_test.TestBootstrap.resumable_bootstrap_test
> --
>
> Key: CASSANDRA-11414
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11414
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Philip Thompson
>Assignee: Paulo Motta
>  Labels: dtest
> Fix For: 3.x
>
>
> Stress is failing to read back all data. We can see this output from the 
> stress read
> {code}
> java.io.IOException: Operation x0 on key(s) [314c384f304f4c325030]: Data 
> returned was not validated
>   at org.apache.cassandra.stress.Operation.error(Operation.java:138)
>   at 
> org.apache.cassandra.stress.Operation.timeWithRetry(Operation.java:116)
>   at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:101)
>   at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:109)
>   at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:261)
>   at 
> org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:327)
> java.io.IOException: Operation x0 on key(s) [33383438363931353131]: Data 
> returned was not validated
>   at org.apache.cassandra.stress.Operation.error(Operation.java:138)
>   at 
> org.apache.cassandra.stress.Operation.timeWithRetry(Operation.java:116)
>   at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:101)
>   at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:109)
>   at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:261)
>   at 
> org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:327)
> FAILURE
> {code}
> Started happening with build 1075. Does not appear flaky on CI.
> example failure:
> http://cassci.datastax.com/job/trunk_dtest/1076/testReport/bootstrap_test/TestBootstrap/resumable_bootstrap_test
> Failed on CassCI build trunk_dtest #1076





[jira] [Commented] (CASSANDRA-12078) [SASI] Move skip_stop_words filter BEFORE stemming

2016-06-24 Thread Pavel Yaskevich (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15348780#comment-15348780
 ] 

Pavel Yaskevich commented on CASSANDRA-12078:
-

I'm sorry guys, I had it [run through 
CI|http://cassci.datastax.com/view/Dev/view/xedin/job/xedin-CASSANDRA-12078-testall/lastCompletedBuild/testReport/],
 but somehow it didn't show the problem with the standard analyzer.

Actually, after thinking about this further: I think stop words should be 
specified as a list in the language used by the field, so maybe the problem here 
is not the ordering of the stop-word filter but rather the locale that has been 
set?
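The effect described in the quoted report below can be reproduced with a toy pipeline. The stemmer and stop-word list here are crude stand-ins, not SASI's actual filter classes:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.function.Function;

// Toy demonstration of why filter order matters: stemming French "danse"
// yields "dans", which is a French stop word, so stem-then-stop drops the
// term entirely. Both the stemmer and the stop-word list are stand-ins.
public class PipelineOrder
{
    static final Set<String> FRENCH_STOP_WORDS = new HashSet<>(Arrays.asList("dans", "le", "la"));

    static String stem(String term)
    {
        // stand-in: strip one trailing vowel, as the French stemmer does for "danse"
        return term.replaceAll("[aeiou]$", "");
    }

    static String skipStopWords(String term)
    {
        return FRENCH_STOP_WORDS.contains(term) ? null : term; // null = term dropped
    }

    static String apply(String term, List<Function<String, String>> pipeline)
    {
        for (Function<String, String> filter : pipeline)
        {
            if (term == null)
                return null;
            term = filter.apply(term);
        }
        return term;
    }

    public static void main(String[] args)
    {
        // current order: stem first, then stop words -> the query term vanishes
        String broken = apply("danse", Arrays.asList(PipelineOrder::stem, PipelineOrder::skipStopWords));
        // patched order: stop words first, then stem -> "dans" survives as the term
        String fixed = apply("danse", Arrays.asList(PipelineOrder::skipStopWords, PipelineOrder::stem));
        System.out.println(broken + " / " + fixed); // prints: null / dans
    }
}
```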

> [SASI] Move skip_stop_words filter BEFORE stemming
> --
>
> Key: CASSANDRA-12078
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12078
> Project: Cassandra
>  Issue Type: Bug
>  Components: sasi
> Environment: Cassandra 3.7, Cassandra 3.8
>Reporter: DOAN DuyHai
>Assignee: DOAN DuyHai
> Fix For: 3.8
>
> Attachments: patch.txt
>
>
> Right now, if skip stop words and stemming are enabled, SASI will put 
> stemming in the filter pipeline BEFORE skip_stop_words:
> {code:java}
> private FilterPipelineTask getFilterPipeline()
> {
> FilterPipelineBuilder builder = new FilterPipelineBuilder(new 
> BasicResultFilters.NoOperation());
>  ...
> if (options.shouldStemTerms())
> builder = builder.add("term_stemming", new 
> StemmingFilters.DefaultStemmingFilter(options.getLocale()));
> if (options.shouldIgnoreStopTerms())
> builder = builder.add("skip_stop_words", new 
> StopWordFilters.DefaultStopWordFilter(options.getLocale()));
> return builder.build();
> }
> {code}
> The problem is that stemming before removing stop words can yield wrong 
> results.
> I have an example:
> {code:sql}
> SELECT * FROM music.albums WHERE country='France' AND title LIKE 'danse' 
> ALLOW FILTERING;
> {code}
> Because of stemming *danse* ( *dance* in English) becomes *dans* (the final 
> vowel is removed). Then skip stop words is applied. Unfortunately *dans* 
> (*in* in English) is a stop word in French so it is removed completely.
> In the end the query is equivalent to {{SELECT * FROM music.albums WHERE 
> country='France'}} and of course the results are wrong.
> Attached is a trivial patch to move the skip_stop_words filter BEFORE 
> stemming filter
> /cc [~xedin] [~jrwest] [~beobal]





[jira] [Commented] (CASSANDRA-12041) Add CDC to describe table

2016-06-24 Thread Adam Holmberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15348608#comment-15348608
 ] 

Adam Holmberg commented on CASSANDRA-12041:
---

Thanks for the ping. I created 
https://datastax-oss.atlassian.net/browse/PYTHON-593
This shouldn't be a big deal, but we have some other release activities coming 
up. Where do I look to see when the code freeze is for this (or any) release?

> Add CDC to describe table
> -
>
> Key: CASSANDRA-12041
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12041
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Tools
>Reporter: Carl Yeksigian
>Assignee: Joshua McKenzie
>  Labels: client-impacting
> Fix For: 3.8
>
>
> Currently we do not output CDC with {{DESCRIBE TABLE}}, but should include 
> that for 3.8+ tables.





cassandra git commit: Revert "Allow metrics export for prometheus in its native format"

2016-06-24 Thread snazy
Repository: cassandra
Updated Branches:
  refs/heads/trunk a13add64f -> 98cbd561e


Revert "Allow metrics export for prometheus in its native format"

This reverts commit 33f2f844b6bef7b3e5977f649bb2bfaf2e4db904.


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/98cbd561
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/98cbd561
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/98cbd561

Branch: refs/heads/trunk
Commit: 98cbd561e286ca3191c3d64527ff649256f2e3a6
Parents: a13add6
Author: Robert Stupp 
Authored: Fri Jun 24 21:50:13 2016 +0200
Committer: Robert Stupp 
Committed: Fri Jun 24 21:50:13 2016 +0200

--
 CHANGES.txt |  1 -
 NEWS.txt|  5 ---
 .../cassandra/service/CassandraDaemon.java  | 35 
 3 files changed, 41 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/98cbd561/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 9495c96..d40cab4 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,5 +1,4 @@
 3.8
- * Allow metrics export for prometheus in its native format (CASSANDRA-11967)
  * Move skip_stop_words filter before stemming (CASSANDRA-12078)
  * Support seek() in EncryptedFileSegmentInputStream (CASSANDRA-11957)
  * SSTable tools mishandling LocalPartitioner (CASSANDRA-12002)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/98cbd561/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index 11e3b37..7418f3a 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -19,11 +19,6 @@ using the provided 'sstableupgrade' tool.
 
 New features
 
-   - Support for alternative metrics exporters has been added. To use them, the appropiate
-     libraries need to be placed in the lib directory. Cassandra will load the class given in
-     the system property cassandra.metricsExporter and instantiate it by calling the constructor
-     taking an instance of com.codahale.metrics.MetricRegistry. If the provided class implements
-     java.io.Closeable, its close() method will be called on shutdown.
    - Shared pool threads are now named according to the stage they are executing
      tasks for. Thread names mentioned in traced queries change accordingly.
    - A new option has been added to cassandra-stress "-rate fixed={number}/s"

http://git-wip-us.apache.org/repos/asf/cassandra/blob/98cbd561/src/java/org/apache/cassandra/service/CassandraDaemon.java
--
diff --git a/src/java/org/apache/cassandra/service/CassandraDaemon.java b/src/java/org/apache/cassandra/service/CassandraDaemon.java
index b1c44be..2d21bff 100644
--- a/src/java/org/apache/cassandra/service/CassandraDaemon.java
+++ b/src/java/org/apache/cassandra/service/CassandraDaemon.java
@@ -17,12 +17,10 @@
  */
 package org.apache.cassandra.service;
 
-import java.io.Closeable;
 import java.io.File;
 import java.io.IOException;
 import java.lang.management.ManagementFactory;
 import java.lang.management.MemoryPoolMXBean;
-import java.lang.reflect.Constructor;
 import java.net.InetAddress;
 import java.net.URL;
 import java.net.UnknownHostException;
@@ -42,7 +40,6 @@ import org.slf4j.LoggerFactory;
 
 import com.addthis.metrics3.reporter.config.ReporterConfig;
 import com.codahale.metrics.Meter;
-import com.codahale.metrics.MetricRegistry;
 import com.codahale.metrics.MetricRegistryListener;
 import com.codahale.metrics.SharedMetricRegistries;
 import org.apache.cassandra.batchlog.LegacyBatchlogMigrator;
@@ -368,38 +365,6 @@ public class CassandraDaemon
 }
 }
 
-        // Alternative metrics
-        String metricsExporterClass = System.getProperty("cassandra.metricsExporter");
-        if (metricsExporterClass != null)
-        {
-            logger.info("Trying to initialize metrics-exporter {}", metricsExporterClass);
-            try
-            {
-                Constructor ctor = Class.forName(metricsExporterClass).getConstructor(MetricRegistry.class);
-                Object metricsExporter = ctor.newInstance(CassandraMetricsRegistry.Metrics);
-                if (metricsExporter.getClass().isAssignableFrom(Closeable.class))
-                {
-                    Runtime.getRuntime().addShutdownHook(new Thread() {
-                        public void run()
-                        {
-                            try
-                            {
-                                ((Closeable)metricsExporter).close();
-                            }
-                            catch (IOException e)
-                            {
-                                e.printSt

cassandra git commit: Fix CommitLogStressTest

2016-06-24 Thread jmckenzie
Repository: cassandra
Updated Branches:
  refs/heads/trunk 33f2f844b -> a13add64f


Fix CommitLogStressTest

Patch by jmckenzie; reviewed by blambov for 12082


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a13add64
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a13add64
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a13add64

Branch: refs/heads/trunk
Commit: a13add64fe586ba16041db71f0a200a52da924be
Parents: 33f2f84
Author: Josh McKenzie 
Authored: Thu Jun 23 12:33:13 2016 -0400
Committer: Josh McKenzie 
Committed: Fri Jun 24 12:58:08 2016 -0400

--
 .../apache/cassandra/config/DatabaseDescriptor.java |  2 +-
 .../commitlog/AbstractCommitLogSegmentManager.java  | 14 ++
 .../apache/cassandra/db/commitlog/CommitLog.java|  1 +
 .../cassandra/db/commitlog/CommitLogStressTest.java | 16 ++--
 4 files changed, 18 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a13add64/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--
diff --git a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
index e17a2bc..1375a39 100644
--- a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
+++ b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
@@ -1417,7 +1417,7 @@ public class DatabaseDescriptor
 * (one segment in compression, one written to, one in reserve); delays in compression may cause the log to use
 * more, depending on how soon the sync policy stops all writing threads.
 */
-public static int getCommitLogMaxCompressionBuffersPerPool()
+public static int getCommitLogMaxCompressionBuffersInPool()
 {
 return conf.commitlog_max_compression_buffers_in_pool;
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a13add64/src/java/org/apache/cassandra/db/commitlog/AbstractCommitLogSegmentManager.java
--
diff --git a/src/java/org/apache/cassandra/db/commitlog/AbstractCommitLogSegmentManager.java b/src/java/org/apache/cassandra/db/commitlog/AbstractCommitLogSegmentManager.java
index b8f0a4e..7ea7439 100644
--- a/src/java/org/apache/cassandra/db/commitlog/AbstractCommitLogSegmentManager.java
+++ b/src/java/org/apache/cassandra/db/commitlog/AbstractCommitLogSegmentManager.java
@@ -75,15 +75,12 @@ public abstract class AbstractCommitLogSegmentManager
  */
 volatile boolean createReserveSegments = false;
 
-// Used by tests to determine if segment manager is active or not.
-volatile boolean processingTask = false;
-
 private Thread managerThread;
 protected volatile boolean run = true;
 protected final CommitLog commitLog;
 
     private static final SimpleCachedBufferPool bufferPool =
-        new SimpleCachedBufferPool(DatabaseDescriptor.getCommitLogMaxCompressionBuffersPerPool(), DatabaseDescriptor.getCommitLogSegmentSize());
+        new SimpleCachedBufferPool(DatabaseDescriptor.getCommitLogMaxCompressionBuffersInPool(), DatabaseDescriptor.getCommitLogSegmentSize());
 
    AbstractCommitLogSegmentManager(final CommitLog commitLog, String storageDirectory)
 {
@@ -103,7 +100,6 @@ public abstract class AbstractCommitLogSegmentManager
 try
 {
 Runnable task = segmentManagementTasks.poll();
-processingTask = true;
 if (task == null)
 {
                    // if we have no more work to do, check if we should create a new segment
@@ -139,7 +135,6 @@ public abstract class AbstractCommitLogSegmentManager
                    // queue rather than looping, grabbing another null, and repeating the above work.
 try
 {
-processingTask = false;
 task = segmentManagementTasks.take();
 }
 catch (InterruptedException e)
@@ -148,7 +143,6 @@ public abstract class AbstractCommitLogSegmentManager
 }
 }
 task.run();
-processingTask = false;
 }
 catch (Throwable t)
 {
@@ -507,8 +501,12 @@ public abstract class AbstractCommitLogSegmentManager
 // Used by tests only.
 void awaitManagementTasksCompletion()
 {
-while (segmentManagementTasks.size() > 0 || processingTask)
+while (!segmentManagementTasks.isEmpty())
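The simplification above (the test hook now waits only on the queue itself, dropping the separate `processingTask` flag) can be sketched in isolation. Everything below is a hypothetical stand-in, not the actual Cassandra classes:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class QueueDrainWait {
    // Hypothetical stand-in for segmentManagementTasks in the patch above.
    static final BlockingQueue<Runnable> tasks = new LinkedBlockingQueue<>();

    // The simplified test hook: spin until the queue is drained. Unlike a
    // separate "processing" flag, this observes only queued work, not a task
    // the manager thread has already dequeued and is still running.
    static void awaitTasksCompletion() {
        while (!tasks.isEmpty())
            Thread.yield();
    }

    public static void main(String[] args) throws InterruptedException {
        tasks.add(() -> {});
        // Minimal manager loop: drain and run everything queued.
        Thread manager = new Thread(() -> {
            Runnable t;
            while ((t = tasks.poll()) != null)
                t.run();
        });
        manager.start();
        awaitTasksCompletion();
        manager.join();
        System.out.println(tasks.isEmpty()); // prints "true"
    }
}
```

The upside of polling one data structure instead of a flag plus a size check is that there is a single source of truth, so the wait cannot observe the two out of order.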
 

[jira] [Commented] (CASSANDRA-8700) replace the wiki with docs in the git repo

2016-06-24 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15348556#comment-15348556
 ] 

Sylvain Lebresne commented on CASSANDRA-8700:
-

Friday update: the 
[branch|https://github.com/pcmanus/cassandra/commits/doc_in_tree] has all the 
parts that have been submitted so far. I also (mostly) finished migrating the 
CQL doc (even though there are still some parts I plan to improve) and I 
reorganized the doc slightly. Having the whole CQL doc in a single file (be it 
the source file or the html output) was really unwieldy, and that was somewhat 
true of the other top-level topics too, so I've split things up a bit. The 
result can be (temporarily) seen 
[here|http://www.lebresne.net/~mcmanus/cassandra-doc-test/html/tools/index.html].

One thing I do want to point out again is that the current doc is for trunk. In 
a perfect world, we'd have the same doc adapted to earlier versions, but the 
main difference is going to be the CQL doc, and trying to "rebuild" the CQL 
doc for earlier versions from the migrated version would be really painful and 
time-consuming, and I'm not volunteering. Besides, we can keep the link to the 
existing CQL doc for those old versions. So basically I'm suggesting that we 
start publishing our new doc with 3.8 and, from that point on, update it only 
for tick-tock releases.

> replace the wiki with docs in the git repo
> --
>
> Key: CASSANDRA-8700
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8700
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Documentation and Website
>Reporter: Jon Haddad
>Assignee: Sylvain Lebresne
>Priority: Blocker
> Fix For: 3.8
>
> Attachments: TombstonesAndGcGrace.md, bloom_filters.md, 
> compression.md, contributing.zip, getting_started.zip, hardware.md
>
>
> The wiki as it stands is pretty terrible.  It takes several minutes to apply 
> a single update, and as a result, it's almost never updated.  The information 
> there has very little context as to what version it applies to.  Most people 
> I've talked to that try to use the information they find there find it is 
> more confusing than helpful.
> I'd like to propose that instead of using the wiki, the doc directory in the 
> cassandra repo be used for docs (already used for CQL3 spec) in a format that 
> can be built to a variety of output formats like HTML / epub / etc.  I won't 
> start the bikeshedding on which markup format is preferable - but there are 
> several options that can work perfectly fine.  I've personally use sphinx w/ 
> restructured text, and markdown.  Both can build easily and as an added bonus 
> be pushed to readthedocs (or something similar) automatically.  For an 
> example, see cqlengine's documentation, which I think is already 
> significantly better than the wiki: 
> http://cqlengine.readthedocs.org/en/latest/
> In addition to being overall easier to maintain, putting the documentation in 
> the git repo adds context, since it evolves with the versions of Cassandra.
> If the wiki were kept even remotely up to date, I wouldn't bother with this, 
> but not having at least some basic documentation in the repo, or anywhere 
> associated with the project, is frustrating.
> For reference, the last 3 updates were:
> 1/15/15 - updating committers list
> 1/08/15 - updating contributers and how to contribute
> 12/16/14 - added a link to CQL docs from wiki frontpage (by me)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (CASSANDRA-11967) Export metrics for prometheus in its native format

2016-06-24 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani reopened CASSANDRA-11967:


Why are you doing this vs using the metrics-reporter-config?  That is how we 
support metrics reporters in C*.  Please roll this back 

> Export metrics for prometheus in its native format
> --
>
> Key: CASSANDRA-11967
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11967
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 3.8
>
>
> https://github.com/snazy/prometheus-metrics-exporter allows exporting 
> codahale metrics for prometheus.io. In order to integrate this, a minor 
> change to C* is necessary to load the library.
> This eliminates the need to use the additional graphite-exporter tool and 
> therefore also allows prometheus to track the up/down status of C*.
> (Will provide the patch soon)





[jira] [Updated] (CASSANDRA-11967) Export metrics for prometheus in its native format

2016-06-24 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-11967:
-
   Resolution: Fixed
Fix Version/s: (was: 3.x)
   3.8
   Status: Resolved  (was: Ready to Commit)

Thanks!
Committed as 
[33f2f844b6bef7b3e5977f649bb2bfaf2e4db904|https://github.com/apache/cassandra/commit/33f2f844b6bef7b3e5977f649bb2bfaf2e4db904]
 to [trunk|https://github.com/apache/cassandra/tree/trunk]


> Export metrics for prometheus in its native format
> --
>
> Key: CASSANDRA-11967
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11967
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 3.8
>
>
> https://github.com/snazy/prometheus-metrics-exporter allows exporting 
> codahale metrics for prometheus.io. In order to integrate this, a minor 
> change to C* is necessary to load the library.
> This eliminates the need to use the additional graphite-exporter tool and 
> therefore also allows prometheus to track the up/down status of C*.
> (Will provide the patch soon)





cassandra git commit: Allow metrics export for prometheus in its native format

2016-06-24 Thread snazy
Repository: cassandra
Updated Branches:
  refs/heads/trunk 578c85dc7 -> 33f2f844b


Allow metrics export for prometheus in its native format

patch by Robert Stupp; reviewed by Sam Tunnicliffe for CASSANDRA-11967


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/33f2f844
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/33f2f844
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/33f2f844

Branch: refs/heads/trunk
Commit: 33f2f844b6bef7b3e5977f649bb2bfaf2e4db904
Parents: 578c85d
Author: Robert Stupp 
Authored: Fri Jun 24 18:28:32 2016 +0200
Committer: Robert Stupp 
Committed: Fri Jun 24 18:28:32 2016 +0200

--
 CHANGES.txt |  1 +
 NEWS.txt|  5 +++
 .../cassandra/service/CassandraDaemon.java  | 35 
 3 files changed, 41 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/33f2f844/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index d40cab4..9495c96 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.8
+ * Allow metrics export for prometheus in its native format (CASSANDRA-11967)
  * Move skip_stop_words filter before stemming (CASSANDRA-12078)
  * Support seek() in EncryptedFileSegmentInputStream (CASSANDRA-11957)
  * SSTable tools mishandling LocalPartitioner (CASSANDRA-12002)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/33f2f844/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index 7418f3a..11e3b37 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -19,6 +19,11 @@ using the provided 'sstableupgrade' tool.
 
 New features
 
+   - Support for alternative metrics exporters has been added. To use them, the appropriate
+     libraries need to be placed in the lib directory. Cassandra will load the class given in
+     the system property cassandra.metricsExporter and instantiate it by calling the constructor
+     taking an instance of com.codahale.metrics.MetricRegistry. If the provided class implements
+     java.io.Closeable, its close() method will be called on shutdown.
    - Shared pool threads are now named according to the stage they are executing
      tasks for. Thread names mentioned in traced queries change accordingly.
    - A new option has been added to cassandra-stress "-rate fixed={number}/s"

http://git-wip-us.apache.org/repos/asf/cassandra/blob/33f2f844/src/java/org/apache/cassandra/service/CassandraDaemon.java
--
diff --git a/src/java/org/apache/cassandra/service/CassandraDaemon.java b/src/java/org/apache/cassandra/service/CassandraDaemon.java
index 2d21bff..b1c44be 100644
--- a/src/java/org/apache/cassandra/service/CassandraDaemon.java
+++ b/src/java/org/apache/cassandra/service/CassandraDaemon.java
@@ -17,10 +17,12 @@
  */
 package org.apache.cassandra.service;
 
+import java.io.Closeable;
 import java.io.File;
 import java.io.IOException;
 import java.lang.management.ManagementFactory;
 import java.lang.management.MemoryPoolMXBean;
+import java.lang.reflect.Constructor;
 import java.net.InetAddress;
 import java.net.URL;
 import java.net.UnknownHostException;
@@ -40,6 +42,7 @@ import org.slf4j.LoggerFactory;
 
 import com.addthis.metrics3.reporter.config.ReporterConfig;
 import com.codahale.metrics.Meter;
+import com.codahale.metrics.MetricRegistry;
 import com.codahale.metrics.MetricRegistryListener;
 import com.codahale.metrics.SharedMetricRegistries;
 import org.apache.cassandra.batchlog.LegacyBatchlogMigrator;
@@ -365,6 +368,38 @@ public class CassandraDaemon
 }
 }
 
+        // Alternative metrics
+        String metricsExporterClass = System.getProperty("cassandra.metricsExporter");
+        if (metricsExporterClass != null)
+        {
+            logger.info("Trying to initialize metrics-exporter {}", metricsExporterClass);
+            try
+            {
+                Constructor ctor = Class.forName(metricsExporterClass).getConstructor(MetricRegistry.class);
+                Object metricsExporter = ctor.newInstance(CassandraMetricsRegistry.Metrics);
+                if (metricsExporter.getClass().isAssignableFrom(Closeable.class))
+                {
+                    Runtime.getRuntime().addShutdownHook(new Thread() {
+                        public void run()
+                        {
+                            try
+                            {
+                                ((Closeable)metricsExporter).close();
+                            }
+                            catch (IOException e)
+                            {
+                                e.printS
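The loading contract the patch adds can be sketched self-containedly. `Registry` and `DemoExporter` below are hypothetical stand-ins for `com.codahale.metrics.MetricRegistry` and a real exporter; note also that the sketch checks assignability in the `instanceof` direction (`Closeable.class.isAssignableFrom(obj.getClass())`), whereas the quoted hunk tests `obj.getClass().isAssignableFrom(Closeable.class)`, i.e. the reverse:

```java
import java.io.Closeable;
import java.io.IOException;
import java.lang.reflect.Constructor;

public class ExporterLoader {
    // Hypothetical stand-in for com.codahale.metrics.MetricRegistry.
    public static class Registry {}

    // Hypothetical exporter following the NEWS.txt contract: a public
    // constructor taking the registry; optionally Closeable for shutdown.
    public static class DemoExporter implements Closeable {
        public DemoExporter(Registry registry) {}
        public void close() throws IOException {}
    }

    public static Object load(String className, Registry registry) throws Exception {
        // Load the class named by the system property and call its
        // Registry-taking constructor, as the NEWS.txt entry describes.
        Constructor<?> ctor = Class.forName(className).getConstructor(Registry.class);
        Object exporter = ctor.newInstance(registry);
        // Register a shutdown hook only if the exporter is Closeable;
        // instanceof is equivalent to Closeable.class.isAssignableFrom(...).
        if (exporter instanceof Closeable) {
            Runtime.getRuntime().addShutdownHook(new Thread(() -> {
                try { ((Closeable) exporter).close(); }
                catch (IOException e) { e.printStackTrace(); }
            }));
        }
        return exporter;
    }

    public static void main(String[] args) throws Exception {
        Object e = load(DemoExporter.class.getName(), new Registry());
        System.out.println(e instanceof Closeable); // prints "true"
    }
}
```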

[jira] [Updated] (CASSANDRA-12024) dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_copy_to_with_child_process_crashing

2016-06-24 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-12024:

Issue Type: Test  (was: Bug)

> dtest failure in 
> cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_copy_to_with_child_process_crashing
> 
>
> Key: CASSANDRA-12024
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12024
> Project: Cassandra
>  Issue Type: Test
>  Components: Testing
>Reporter: Sean McCarthy
>Assignee: Stefania
>  Labels: dtest
> Attachments: node1.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.1_offheap_dtest/360/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_copy_to_with_child_process_crashing
> Failed on CassCI build cassandra-2.1_offheap_dtest #360
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/dtest.py", line 889, in wrapped
> f(obj)
>   File "/home/automaton/cassandra-dtest/cqlsh_tests/cqlsh_copy_tests.py", 
> line 2701, in test_copy_to_with_child_process_crashing
> self.assertIn('some records might be missing', err)
>   File "/usr/lib/python2.7/unittest/case.py", line 803, in assertIn
> self.fail(self._formatMessage(msg, standardMsg))
>   File "/usr/lib/python2.7/unittest/case.py", line 410, in fail
> raise self.failureException(msg)
> Error Message
> 'some records might be missing' not found in ''
> {code}
> Logs are attached.





[jira] [Commented] (CASSANDRA-11393) dtest failure in upgrade_tests.upgrade_through_versions_test.ProtoV3Upgrade_2_1_UpTo_3_0_HEAD.rolling_upgrade_test

2016-06-24 Thread Russ Hatch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15348521#comment-15348521
 ] 

Russ Hatch commented on CASSANDRA-11393:


[~blerer] If you'd like, I can build a custom dtest branch to test these 
changes on the upgrade suite. I just need to know which version combinations 
you want tested and can take it from there. Point-to-point upgrades (A->B) give 
us the most tests; tests can also take the form of multiple upgrades 
(A->B->N), though that type of test is much more limited in scope. Both 
varieties are fine too; they aren't mutually exclusive.

> dtest failure in 
> upgrade_tests.upgrade_through_versions_test.ProtoV3Upgrade_2_1_UpTo_3_0_HEAD.rolling_upgrade_test
> --
>
> Key: CASSANDRA-11393
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11393
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination, Streaming and Messaging
>Reporter: Philip Thompson
>Assignee: Benjamin Lerer
>  Labels: dtest
> Fix For: 3.0.x, 3.x
>
> Attachments: 11393-3.0.txt
>
>
> We are seeing a failure in the upgrade tests that go from 2.1 to 3.0
> {code}
> node2: ERROR [SharedPool-Worker-2] 2016-03-10 20:05:17,865 Message.java:611 - 
> Unexpected exception during request; channel = [id: 0xeb79b477, 
> /127.0.0.1:39613 => /127.0.0.2:9042]
> java.lang.AssertionError: null
>   at 
> org.apache.cassandra.db.ReadCommand$LegacyReadCommandSerializer.serializedSize(ReadCommand.java:1208)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadCommand$LegacyReadCommandSerializer.serializedSize(ReadCommand.java:1155)
>  ~[main/:na]
>   at org.apache.cassandra.net.MessageOut.payloadSize(MessageOut.java:166) 
> ~[main/:na]
>   at 
> org.apache.cassandra.net.OutboundTcpConnectionPool.getConnection(OutboundTcpConnectionPool.java:72)
>  ~[main/:na]
>   at 
> org.apache.cassandra.net.MessagingService.getConnection(MessagingService.java:609)
>  ~[main/:na]
>   at 
> org.apache.cassandra.net.MessagingService.sendOneWay(MessagingService.java:758)
>  ~[main/:na]
>   at 
> org.apache.cassandra.net.MessagingService.sendRR(MessagingService.java:701) 
> ~[main/:na]
>   at 
> org.apache.cassandra.net.MessagingService.sendRRWithFailure(MessagingService.java:684)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.AbstractReadExecutor.makeRequests(AbstractReadExecutor.java:110)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.AbstractReadExecutor.makeDataRequests(AbstractReadExecutor.java:85)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.AbstractReadExecutor$AlwaysSpeculatingReadExecutor.executeAsync(AbstractReadExecutor.java:330)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy$SinglePartitionReadLifecycle.doInitialQueries(StorageProxy.java:1699)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:1654) 
> ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy.readRegular(StorageProxy.java:1601) 
> ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:1520) 
> ~[main/:na]
>   at 
> org.apache.cassandra.db.SinglePartitionReadCommand.execute(SinglePartitionReadCommand.java:302)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.pager.AbstractQueryPager.fetchPage(AbstractQueryPager.java:67)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.pager.SinglePartitionPager.fetchPage(SinglePartitionPager.java:34)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement$Pager$NormalPager.fetchPage(SelectStatement.java:297)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:333)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:209)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:76)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:206)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:472)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:449)
>  ~[main/:na]
>   at 
> org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:130)
>  ~[main/:na]
>   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507)
>  [main/:na]
>   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401)
>  [main/:na]
>   at 
> io.netty.channel.Simp

[jira] [Updated] (CASSANDRA-12072) dtest failure in auth_test.TestAuthRoles.udf_permissions_in_selection_test

2016-06-24 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-12072:

Issue Type: Bug  (was: Test)

> dtest failure in auth_test.TestAuthRoles.udf_permissions_in_selection_test
> --
>
> Key: CASSANDRA-12072
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12072
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Joel Knighton
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log
>
>
> Multiple failures:
> http://cassci.datastax.com/job/trunk_offheap_dtest/265/testReport/auth_test/TestAuthRoles/udf_permissions_in_selection_test
> http://cassci.datastax.com/job/trunk_offheap_dtest/265/testReport/auth_test/TestAuthRoles/create_and_grant_roles_with_superuser_status_test/
> http://cassci.datastax.com/job/trunk_offheap_dtest/265/testReport/auth_test/TestAuthRoles/drop_keyspace_cleans_up_function_level_permissions_test/
> http://cassci.datastax.com/job/trunk_offheap_dtest/265/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_reading_max_parse_errors/
> http://cassci.datastax.com/job/trunk_offheap_dtest/265/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_read_wrong_column_names/
> http://cassci.datastax.com/job/trunk_offheap_dtest/264/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_boolstyle_round_trip/
> http://cassci.datastax.com/job/trunk_offheap_dtest/264/testReport/compaction_test/TestCompaction_with_SizeTieredCompactionStrategy/disable_autocompaction_alter_test/
> http://cassci.datastax.com/job/trunk_offheap_dtest/264/testReport/cqlsh_tests.cqlsh_tests/TestCqlsh/test_describe/
> http://cassci.datastax.com/job/trunk_offheap_dtest/264/testReport/cqlsh_tests.cqlsh_tests/TestCqlsh/test_describe_mv/
> Logs are from 
> http://cassci.datastax.com/job/trunk_offheap_dtest/265/testReport/auth_test/TestAuthRoles/udf_permissions_in_selection_test/





[jira] [Updated] (CASSANDRA-12072) dtest failure in auth_test.TestAuthRoles.udf_permissions_in_selection_test

2016-06-24 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-12072:

Assignee: Joel Knighton  (was: Philip Thompson)

> dtest failure in auth_test.TestAuthRoles.udf_permissions_in_selection_test
> --
>
> Key: CASSANDRA-12072
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12072
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: Joel Knighton
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log
>
>
> Multiple failures:
> http://cassci.datastax.com/job/trunk_offheap_dtest/265/testReport/auth_test/TestAuthRoles/udf_permissions_in_selection_test
> http://cassci.datastax.com/job/trunk_offheap_dtest/265/testReport/auth_test/TestAuthRoles/create_and_grant_roles_with_superuser_status_test/
> http://cassci.datastax.com/job/trunk_offheap_dtest/265/testReport/auth_test/TestAuthRoles/drop_keyspace_cleans_up_function_level_permissions_test/
> http://cassci.datastax.com/job/trunk_offheap_dtest/265/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_reading_max_parse_errors/
> http://cassci.datastax.com/job/trunk_offheap_dtest/265/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_read_wrong_column_names/
> http://cassci.datastax.com/job/trunk_offheap_dtest/264/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_boolstyle_round_trip/
> http://cassci.datastax.com/job/trunk_offheap_dtest/264/testReport/compaction_test/TestCompaction_with_SizeTieredCompactionStrategy/disable_autocompaction_alter_test/
> http://cassci.datastax.com/job/trunk_offheap_dtest/264/testReport/cqlsh_tests.cqlsh_tests/TestCqlsh/test_describe/
> http://cassci.datastax.com/job/trunk_offheap_dtest/264/testReport/cqlsh_tests.cqlsh_tests/TestCqlsh/test_describe_mv/
> Logs are from 
> http://cassci.datastax.com/job/trunk_offheap_dtest/265/testReport/auth_test/TestAuthRoles/udf_permissions_in_selection_test/





[jira] [Comment Edited] (CASSANDRA-11993) Cannot read Snappy compressed tables with 3.6

2016-06-24 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15348484#comment-15348484
 ] 

Stefania edited comment on CASSANDRA-11993 at 6/24/16 4:19 PM:
---

+1, agreed on keeping changes minimal.

There were some problems with the tests, so I've rebased and relaunched:

|[patch|https://github.com/stef1927/cassandra/commits/11993]|[testall|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11993-testall/]|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11993-dtest/]|



was (Author: stefania):
+1, agreed on keeping changes minimal.

There was some problems with the tests, I've rebased and relaunched:

|[patch|https://github.com/stef1927/cassandra/commits/11993]|[testall|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11993-testall/]|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11993-dtest/]|


> Cannot read Snappy compressed tables with 3.6
> -
>
> Key: CASSANDRA-11993
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11993
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Nimi Wariboko Jr.
>Assignee: Branimir Lambov
> Fix For: 3.6
>
>
> After upgrading to 3.6, I can no longer read/compact sstables compressed with 
> snappy compression. The memtable_allocation_type makes no difference both 
> offheap_buffers and heap_buffers cause the errors.
> {code}
> WARN  [SharedPool-Worker-5] 2016-06-10 15:45:18,731 
> AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-5,5,main]: {}
> org.xerial.snappy.SnappyError: [NOT_A_DIRECT_BUFFER] destination is not a 
> direct buffer
>   at org.xerial.snappy.Snappy.uncompress(Snappy.java:509) 
> ~[snappy-java-1.1.1.7.jar:na]
>   at 
> org.apache.cassandra.io.compress.SnappyCompressor.uncompress(SnappyCompressor.java:102)
>  ~[apache-cassandra-3.6.jar:3.6]
>   at 
> org.apache.cassandra.io.util.CompressedSegmentedFile$Mmap.readChunk(CompressedSegmentedFile.java:323)
>  ~[apache-cassandra-3.6.jar:3.6]
>   at org.apache.cassandra.cache.ChunkCache.load(ChunkCache.java:137) 
> ~[apache-cassandra-3.6.jar:3.6]
>   at org.apache.cassandra.cache.ChunkCache.load(ChunkCache.java:19) 
> ~[apache-cassandra-3.6.jar:3.6]
>   at 
> com.github.benmanes.caffeine.cache.BoundedLocalCache$BoundedLocalLoadingCache.lambda$new$0(BoundedLocalCache.java:2949)
>  ~[caffeine-2.2.6.jar:na]
>   at 
> com.github.benmanes.caffeine.cache.BoundedLocalCache.lambda$doComputeIfAbsent$15(BoundedLocalCache.java:1807)
>  ~[caffeine-2.2.6.jar:na]
>   at 
> java.util.concurrent.ConcurrentHashMap.compute(ConcurrentHashMap.java:1853) 
> ~[na:1.8.0_66]
>   at 
> com.github.benmanes.caffeine.cache.BoundedLocalCache.doComputeIfAbsent(BoundedLocalCache.java:1805)
>  ~[caffeine-2.2.6.jar:na]
>   at 
> com.github.benmanes.caffeine.cache.BoundedLocalCache.computeIfAbsent(BoundedLocalCache.java:1788)
>  ~[caffeine-2.2.6.jar:na]
>   at 
> com.github.benmanes.caffeine.cache.LocalCache.computeIfAbsent(LocalCache.java:97)
>  ~[caffeine-2.2.6.jar:na]
>   at 
> com.github.benmanes.caffeine.cache.LocalLoadingCache.get(LocalLoadingCache.java:66)
>  ~[caffeine-2.2.6.jar:na]
>   at 
> org.apache.cassandra.cache.ChunkCache$CachingRebufferer.rebuffer(ChunkCache.java:215)
>  ~[apache-cassandra-3.6.jar:3.6]
>   at 
> org.apache.cassandra.cache.ChunkCache$CachingRebufferer.rebuffer(ChunkCache.java:193)
>  ~[apache-cassandra-3.6.jar:3.6]
>   at 
> org.apache.cassandra.io.util.RandomAccessReader.reBufferAt(RandomAccessReader.java:78)
>  ~[apache-cassandra-3.6.jar:3.6]
>   at 
> org.apache.cassandra.io.util.RandomAccessReader.seek(RandomAccessReader.java:220)
>  ~[apache-cassandra-3.6.jar:3.6]
>   at 
> org.apache.cassandra.io.util.SegmentedFile.createReader(SegmentedFile.java:138)
>  ~[apache-cassandra-3.6.jar:3.6]
>   at 
> org.apache.cassandra.io.sstable.format.SSTableReader.getFileDataInput(SSTableReader.java:1779)
>  ~[apache-cassandra-3.6.jar:3.6]
>   at 
> org.apache.cassandra.db.columniterator.AbstractSSTableIterator.(AbstractSSTableIterator.java:103)
>  ~[apache-cassandra-3.6.jar:3.6]
>   at 
> org.apache.cassandra.db.columniterator.SSTableIterator.(SSTableIterator.java:44)
>  ~[apache-cassandra-3.6.jar:3.6]
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableReader.iterator(BigTableReader.java:72)
>  ~[apache-cassandra-3.6.jar:3.6]
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableReader.iterator(BigTableReader.java:65)
>  ~[apache-cassandra-3.6.jar:3.6]
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorWithLowerBound.initializeIterator(UnfilteredRowIteratorWithLowerBound.java:85)
>  ~[apache-cassandra-3.6.jar:3.6]

[jira] [Commented] (CASSANDRA-11967) Export metrics for prometheus in its native format

2016-06-24 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15348490#comment-15348490
 ] 

Robert Stupp commented on CASSANDRA-11967:
--

Yup - that's right. You need a mapping file like [this 
one|https://github.com/snazy/prometheus-metrics-exporter/blob/master/mappings/cassandra-mappings.yaml]
 in the {{conf}} directory and add 
{{-Dorg.caffinitas.prometheus.config=cassandra-mappings.yaml}} to the JVM options. That helps to 
shorten the names that appear in e.g. Grafana, to re-organize the metrics as 
you prefer, and to exclude metrics you don't want (like 
{{org.apache.cassandra.metrics.Table.EstimatedPartitionCount}}, which touches 
all sstables).

> Export metrics for prometheus in its native format
> --
>
> Key: CASSANDRA-11967
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11967
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 3.x
>
>
> https://github.com/snazy/prometheus-metrics-exporter allows to export 
> codahale metrics for prometheus.io. In order to integrate this, a minor 
> change to C* is necessary to load the library.
> This eliminates the need to use the additional graphite-exporter tool and 
> therefore also allows prometheus to track the up/down status of C*.
> (Will provide the patch soon)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11993) Cannot read Snappy compressed tables with 3.6

2016-06-24 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15348484#comment-15348484
 ] 

Stefania commented on CASSANDRA-11993:
--

+1, agreed on keeping changes minimal.

There were some problems with the tests, so I've rebased and relaunched:

|[patch|https://github.com/stef1927/cassandra/commits/11993]|[testall|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11993-testall/]|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11993-dtest/]|


> Cannot read Snappy compressed tables with 3.6
> -
>
> Key: CASSANDRA-11993
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11993
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Nimi Wariboko Jr.
>Assignee: Branimir Lambov
> Fix For: 3.6
>
>
> After upgrading to 3.6, I can no longer read/compact sstables compressed with 
> snappy compression. The memtable_allocation_type makes no difference; both 
> offheap_buffers and heap_buffers cause the errors.
> {code}
> WARN  [SharedPool-Worker-5] 2016-06-10 15:45:18,731 
> AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-5,5,main]: {}
> org.xerial.snappy.SnappyError: [NOT_A_DIRECT_BUFFER] destination is not a 
> direct buffer
>   at org.xerial.snappy.Snappy.uncompress(Snappy.java:509) 
> ~[snappy-java-1.1.1.7.jar:na]
>   at 
> org.apache.cassandra.io.compress.SnappyCompressor.uncompress(SnappyCompressor.java:102)
>  ~[apache-cassandra-3.6.jar:3.6]
>   at 
> org.apache.cassandra.io.util.CompressedSegmentedFile$Mmap.readChunk(CompressedSegmentedFile.java:323)
>  ~[apache-cassandra-3.6.jar:3.6]
>   at org.apache.cassandra.cache.ChunkCache.load(ChunkCache.java:137) 
> ~[apache-cassandra-3.6.jar:3.6]
>   at org.apache.cassandra.cache.ChunkCache.load(ChunkCache.java:19) 
> ~[apache-cassandra-3.6.jar:3.6]
>   at 
> com.github.benmanes.caffeine.cache.BoundedLocalCache$BoundedLocalLoadingCache.lambda$new$0(BoundedLocalCache.java:2949)
>  ~[caffeine-2.2.6.jar:na]
>   at 
> com.github.benmanes.caffeine.cache.BoundedLocalCache.lambda$doComputeIfAbsent$15(BoundedLocalCache.java:1807)
>  ~[caffeine-2.2.6.jar:na]
>   at 
> java.util.concurrent.ConcurrentHashMap.compute(ConcurrentHashMap.java:1853) 
> ~[na:1.8.0_66]
>   at 
> com.github.benmanes.caffeine.cache.BoundedLocalCache.doComputeIfAbsent(BoundedLocalCache.java:1805)
>  ~[caffeine-2.2.6.jar:na]
>   at 
> com.github.benmanes.caffeine.cache.BoundedLocalCache.computeIfAbsent(BoundedLocalCache.java:1788)
>  ~[caffeine-2.2.6.jar:na]
>   at 
> com.github.benmanes.caffeine.cache.LocalCache.computeIfAbsent(LocalCache.java:97)
>  ~[caffeine-2.2.6.jar:na]
>   at 
> com.github.benmanes.caffeine.cache.LocalLoadingCache.get(LocalLoadingCache.java:66)
>  ~[caffeine-2.2.6.jar:na]
>   at 
> org.apache.cassandra.cache.ChunkCache$CachingRebufferer.rebuffer(ChunkCache.java:215)
>  ~[apache-cassandra-3.6.jar:3.6]
>   at 
> org.apache.cassandra.cache.ChunkCache$CachingRebufferer.rebuffer(ChunkCache.java:193)
>  ~[apache-cassandra-3.6.jar:3.6]
>   at 
> org.apache.cassandra.io.util.RandomAccessReader.reBufferAt(RandomAccessReader.java:78)
>  ~[apache-cassandra-3.6.jar:3.6]
>   at 
> org.apache.cassandra.io.util.RandomAccessReader.seek(RandomAccessReader.java:220)
>  ~[apache-cassandra-3.6.jar:3.6]
>   at 
> org.apache.cassandra.io.util.SegmentedFile.createReader(SegmentedFile.java:138)
>  ~[apache-cassandra-3.6.jar:3.6]
>   at 
> org.apache.cassandra.io.sstable.format.SSTableReader.getFileDataInput(SSTableReader.java:1779)
>  ~[apache-cassandra-3.6.jar:3.6]
>   at 
> org.apache.cassandra.db.columniterator.AbstractSSTableIterator.<init>(AbstractSSTableIterator.java:103)
>  ~[apache-cassandra-3.6.jar:3.6]
>   at 
> org.apache.cassandra.db.columniterator.SSTableIterator.<init>(SSTableIterator.java:44)
>  ~[apache-cassandra-3.6.jar:3.6]
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableReader.iterator(BigTableReader.java:72)
>  ~[apache-cassandra-3.6.jar:3.6]
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableReader.iterator(BigTableReader.java:65)
>  ~[apache-cassandra-3.6.jar:3.6]
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorWithLowerBound.initializeIterator(UnfilteredRowIteratorWithLowerBound.java:85)
>  ~[apache-cassandra-3.6.jar:3.6]
>   at 
> org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.maybeInit(LazilyInitializedUnfilteredRowIterator.java:48)
>  ~[apache-cassandra-3.6.jar:3.6]
>   at 
> org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.computeNext(LazilyInitializedUnfilteredRowIterator.java:99)
>  ~[apache-cassandra-3.6.jar:3.6]
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorWith

[jira] [Comment Edited] (CASSANDRA-12072) dtest failure in auth_test.TestAuthRoles.udf_permissions_in_selection_test

2016-06-24 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15348396#comment-15348396
 ] 

Philip Thompson edited comment on CASSANDRA-12072 at 6/24/16 4:05 PM:
--

Working on that suspicion, I am now re-running 11038 many times
http://cassci.datastax.com/view/Dev/view/ptnapoleon/job/ptnapoleon-11038-dtest/

No dice, maybe it isn't just 11038


was (Author: philipthompson):
Working on that suspicion, I am now re-running 11038 many times
http://cassci.datastax.com/view/Dev/view/ptnapoleon/job/ptnapoleon-11038-dtest/

> dtest failure in auth_test.TestAuthRoles.udf_permissions_in_selection_test
> --
>
> Key: CASSANDRA-12072
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12072
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: Philip Thompson
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log
>
>
> Multiple failures:
> http://cassci.datastax.com/job/trunk_offheap_dtest/265/testReport/auth_test/TestAuthRoles/udf_permissions_in_selection_test
> http://cassci.datastax.com/job/trunk_offheap_dtest/265/testReport/auth_test/TestAuthRoles/create_and_grant_roles_with_superuser_status_test/
> http://cassci.datastax.com/job/trunk_offheap_dtest/265/testReport/auth_test/TestAuthRoles/drop_keyspace_cleans_up_function_level_permissions_test/
> http://cassci.datastax.com/job/trunk_offheap_dtest/265/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_reading_max_parse_errors/
> http://cassci.datastax.com/job/trunk_offheap_dtest/265/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_read_wrong_column_names/
> http://cassci.datastax.com/job/trunk_offheap_dtest/264/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_boolstyle_round_trip/
> http://cassci.datastax.com/job/trunk_offheap_dtest/264/testReport/compaction_test/TestCompaction_with_SizeTieredCompactionStrategy/disable_autocompaction_alter_test/
> http://cassci.datastax.com/job/trunk_offheap_dtest/264/testReport/cqlsh_tests.cqlsh_tests/TestCqlsh/test_describe/
> http://cassci.datastax.com/job/trunk_offheap_dtest/264/testReport/cqlsh_tests.cqlsh_tests/TestCqlsh/test_describe_mv/
> Logs are from 
> http://cassci.datastax.com/job/trunk_offheap_dtest/265/testReport/auth_test/TestAuthRoles/udf_permissions_in_selection_test/





[jira] [Commented] (CASSANDRA-11967) Export metrics for prometheus in its native format

2016-06-24 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15348481#comment-15348481
 ] 

Sam Tunnicliffe commented on CASSANDRA-11967:
-

+1 the C* changes look fine to me. 

I should note though that when I dropped in the necessary jars to smoke test 
with the Prometheus exporter, I saw an awful lot of
{noformat}
INFO  [main] 2016-06-24 16:56:27,809 CassandraDaemon.java:375 - Trying to 
initialize metrics-exporter 
org.caffinitas.prometheusmetrics.PrometheusMetricsInitializer
INFO  [main] 2016-06-24 16:56:27,825 PrometheusMetricsExporter.java:82 - 
Setting up Prometheus metrics exporter on 127.0.0.1 port 8088 and SSL disabled
INFO  [main] 2016-06-24 16:56:27,829 PrometheusMetricsExporter.java:450 - No 
matching metric mapping for 
'org.apache.cassandra.metrics.Table.BloomFilterFalsePositives.system.built_views'
INFO  [main] 2016-06-24 16:56:27,830 PrometheusMetricsExporter.java:450 - No 
matching metric mapping for 
'org.apache.cassandra.metrics.Table.BytesFlushed.system.peers'
INFO  [main] 2016-06-24 16:56:27,830 PrometheusMetricsExporter.java:450 - No 
matching metric mapping for 
'org.apache.cassandra.metrics.Table.RowCacheHit.system.available_ranges'
INFO  [main] 2016-06-24 16:56:27,830 PrometheusMetricsExporter.java:450 - No 
matching metric mapping for 
'org.apache.cassandra.metrics.Table.MeanPartitionSize.system.views_builds_in_progress'
INFO  [main] 2016-06-24 16:56:27,831 PrometheusMetricsExporter.java:450 - No 
matching metric mapping for 
'org.apache.cassandra.metrics.Table.CoordinatorReadLatency.system.local'
INFO  [main] 2016-06-24 16:56:27,831 PrometheusMetricsExporter.java:450 - No 
matching metric mapping for 
'org.apache.cassandra.metrics.Table.EstimatedPartitionSizeHistogram.system.batches'
INFO  [main] 2016-06-24 16:56:27,831 PrometheusMetricsExporter.java:450 - No 
matching metric mapping for 
'org.apache.cassandra.metrics.Table.MemtableOffHeapSize.system.views_builds_in_progress'
INFO  [main] 2016-06-24 16:56:27,831 PrometheusMetricsExporter.java:450 - No 
matching metric mapping for 
'org.apache.cassandra.metrics.Table.SSTablesPerReadHistogram.system_traces.sessions'
INFO  [main] 2016-06-24 16:56:27,831 PrometheusMetricsExporter.java:450 - No 
matching metric mapping for 
'org.apache.cassandra.metrics.keyspace.LiveScannedHistogram.system'
INFO  [main] 2016-06-24 16:56:27,831 PrometheusMetricsExporter.java:450 - No 
matching metric mapping for 
'org.apache.cassandra.metrics.Table.CasPrepareLatency.system.schema_aggregates'
{noformat}


> Export metrics for prometheus in its native format
> --
>
> Key: CASSANDRA-11967
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11967
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 3.x
>
>
> https://github.com/snazy/prometheus-metrics-exporter allows to export 
> codahale metrics for prometheus.io. In order to integrate this, a minor 
> change to C* is necessary to load the library.
> This eliminates the need to use the additional graphite-exporter tool and 
> therefore also allows prometheus to track the up/down status of C*.
> (Will provide the patch soon)





[jira] [Commented] (CASSANDRA-12034) Special handling for Netty's direct memory allocation failure

2016-06-24 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15348479#comment-15348479
 ] 

Robert Stupp commented on CASSANDRA-12034:
--

Alright - now the patch just adds two calls to {{JVMStabilityInspector}}.
Also took the liberty of removing the netty settings from {{cassandra-env}} and 
moving them to {{jvm.options}}, and added some text about 
{{-XX:MaxDirectMemorySize}}.

> Special handling for Netty's direct memory allocation failure
> -
>
> Key: CASSANDRA-12034
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12034
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>  Labels: doc-impacting
> Fix For: 3.x
>
>
> With CASSANDRA-12032, Netty throws a 
> {{io.netty.util.internal.OutOfDirectMemoryError}} if there's not enough 
> off-heap memory for the response buffer. We can easily handle this situation 
> and return an error. This is not a condition that destabilizes the system and 
> should therefore not be passed to {{JVMStabilityInspector}}.
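The intended handling can be sketched as follows. This is an illustrative sketch only, not the actual patch; {{DirectMemoryError}} below is a stand-in for Netty's {{io.netty.util.internal.OutOfDirectMemoryError}} so the example stays self-contained:

```java
// Illustrative sketch (not Cassandra's actual code): a recoverable
// direct-memory allocation failure is answered with an error response
// instead of being treated as a JVM-destabilizing condition.
public class StabilitySketch {
    // Netty's OutOfDirectMemoryError extends OutOfMemoryError; mirror that here.
    static class DirectMemoryError extends OutOfMemoryError {
        DirectMemoryError(String msg) { super(msg); }
    }

    // Decide whether an error should be escalated as destabilizing the node.
    static boolean isFatal(Throwable t) {
        if (t instanceof DirectMemoryError)
            return false; // the request fails, but the node keeps serving traffic
        return t instanceof OutOfMemoryError; // other OOMs remain fatal
    }

    public static void main(String[] args) {
        System.out.println(isFatal(new DirectMemoryError("off-heap exhausted"))); // false
        System.out.println(isFatal(new OutOfMemoryError("heap exhausted")));      // true
    }
}
```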





[jira] [Updated] (CASSANDRA-12034) Special handling for Netty's direct memory allocation failure

2016-06-24 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-12034:
-
Labels: doc-impacting  (was: )

> Special handling for Netty's direct memory allocation failure
> -
>
> Key: CASSANDRA-12034
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12034
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>  Labels: doc-impacting
> Fix For: 3.x
>
>
> With CASSANDRA-12032, Netty throws a 
> {{io.netty.util.internal.OutOfDirectMemoryError}} if there's not enough 
> off-heap memory for the response buffer. We can easily handle this situation 
> and return an error. This is not a condition that destabilizes the system and 
> should therefore not be passed to {{JVMStabilityInspector}}.





[jira] [Updated] (CASSANDRA-11967) Export metrics for prometheus in its native format

2016-06-24 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-11967:

Status: Ready to Commit  (was: Patch Available)

> Export metrics for prometheus in its native format
> --
>
> Key: CASSANDRA-11967
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11967
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 3.x
>
>
> https://github.com/snazy/prometheus-metrics-exporter allows to export 
> codahale metrics for prometheus.io. In order to integrate this, a minor 
> change to C* is necessary to load the library.
> This eliminates the need to use the additional graphite-exporter tool and 
> therefore also allows prometheus to track the up/down status of C*.
> (Will provide the patch soon)





[jira] [Commented] (CASSANDRA-12086) dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_writing_with_max_output_size

2016-06-24 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15348469#comment-15348469
 ] 

Stefania commented on CASSANDRA-12086:
--

This may be a duplicate of CASSANDRA-11701.

> dtest failure in 
> cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_writing_with_max_output_size
> -
>
> Key: CASSANDRA-12086
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12086
> Project: Cassandra
>  Issue Type: Test
>Reporter: Craig Kodman
>Assignee: DS Test Eng
>  Labels: dtest, windows
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.0_dtest_win32/260/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_writing_with_max_output_size
> Failed on CassCI build cassandra-3.0_dtest_win32 #260





[jira] [Commented] (CASSANDRA-12041) Add CDC to describe table

2016-06-24 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15348464#comment-15348464
 ] 

Aleksey Yeschenko commented on CASSANDRA-12041:
---

[~aholmber] ping

> Add CDC to describe table
> -
>
> Key: CASSANDRA-12041
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12041
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Tools
>Reporter: Carl Yeksigian
>Assignee: Joshua McKenzie
>  Labels: client-impacting
> Fix For: 3.8
>
>
> Currently we do not output CDC with {{DESCRIBE TABLE}}, but should include 
> that for 3.8+ tables.





[jira] [Updated] (CASSANDRA-12041) Add CDC to describe table

2016-06-24 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-12041:
--
Labels: client-impacting  (was: )

> Add CDC to describe table
> -
>
> Key: CASSANDRA-12041
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12041
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Tools
>Reporter: Carl Yeksigian
>Assignee: Joshua McKenzie
>  Labels: client-impacting
> Fix For: 3.8
>
>
> Currently we do not output CDC with {{DESCRIBE TABLE}}, but should include 
> that for 3.8+ tables.





[jira] [Updated] (CASSANDRA-12039) Add a "post bootstrap task" to the index machinery

2016-06-24 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-12039:
-
Reviewer: Sam Tunnicliffe

> Add a "post bootstrap task" to the index machinery
> --
>
> Key: CASSANDRA-12039
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12039
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Sergio Bossa
>Assignee: Sergio Bossa
>
> Custom index implementations might need to be notified when the node finishes 
> bootstrapping in order to execute some blocking tasks before the node itself 
> goes into NORMAL state.
> This is a proposal to add such functionality, which should roughly require 
> the following:
> 1) Add a {{getPostBootstrapTask}} callback to the {{Index}} interface.
> 2) Add an {{executePostBootstrapBlockingTasks}} method to 
> {{SecondaryIndexManager}} calling into the previously mentioned callback.
> 3) Hook that into {{StorageService#joinTokenRing}}.
> Thoughts?
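The three steps above could look roughly like this. It is a hypothetical sketch of the proposal; the names follow the ticket text and are not an existing Cassandra API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;

// Hypothetical sketch of the proposal; names come from the ticket text.
public class PostBootstrapSketch {
    // Step 1: the Index interface gains a post-bootstrap callback.
    interface Index {
        Callable<Object> getPostBootstrapTask(); // null if nothing to do
    }

    // Step 2: the manager runs every index's task, blocking until all complete.
    static class SecondaryIndexManager {
        private final List<Index> indexes = new ArrayList<>();

        void register(Index index) { indexes.add(index); }

        void executePostBootstrapBlockingTasks() throws Exception {
            for (Index index : indexes) {
                Callable<Object> task = index.getPostBootstrapTask();
                if (task != null)
                    task.call(); // step 3 would invoke this from StorageService#joinTokenRing
            }
        }
    }

    public static void main(String[] args) throws Exception {
        SecondaryIndexManager manager = new SecondaryIndexManager();
        boolean[] ran = { false };
        manager.register(() -> () -> ran[0] = true);
        manager.executePostBootstrapBlockingTasks();
        System.out.println(ran[0]); // true
    }
}
```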





[jira] [Commented] (CASSANDRA-11937) Clean up buffer trimming large buffers in DataOutputBuffer after the Netty upgrade

2016-06-24 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15348447#comment-15348447
 ] 

Alex Petrov commented on CASSANDRA-11937:
-

thank you!

> Clean up buffer trimming large buffers in DataOutputBuffer after the Netty 
> upgrade
> --
>
> Key: CASSANDRA-11937
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11937
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Alex Petrov
>Assignee: Alex Petrov
>  Labels: lhf, netty, reminder
> Fix For: 3.8
>
> Attachments: Screen Shot 2016-06-22 at 15.24.05.png
>
>
> In [11838|https://issues.apache.org/jira/browse/CASSANDRA-11838], we're 
> trimming the large buffers in {{DataOutputBuffer}}. The patch is already 
> submitted and merged in [Netty 
> 4.1|https://github.com/netty/netty/commit/bbed330468b5b82c9e4defa59012d0fcdb70f1aa],
>  so we only need to make sure that we throw large buffers away altogether 
> instead of trimming them.





[jira] [Updated] (CASSANDRA-12001) nodetool stopdaemon doesn't stop cassandra gracefully

2016-06-24 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-12001:
-
Labels: lhf  (was: )

> nodetool stopdaemon  doesn't  stop cassandra gracefully 
> 
>
> Key: CASSANDRA-12001
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12001
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: Ubuntu: Linux  3.11.0-15-generic #25~precise1-Ubuntu SMP 
> Thu Jan 30 17:39:31 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
> Cassandra Version : 
> cassandra -v
> 2.1.2
>Reporter: Anshu Vajpayee
>Priority: Minor
>  Labels: lhf
>
> As per general opinion, nodetool stopdaemon should perform a graceful shutdown 
> rather than abruptly killing the cassandra daemon.
> It doesn't flush the memtables, and it doesn't stop the thrift and CQL 
> connection interfaces before stopping the node. It simply sends SIGTERM to the 
> process, the same as kill -15 / ctrl + c. 
>  
> 1. created a table  like as below:
> cqlsh:test_ks> create table t2(id1 int, id2 text, primary key(id1));
> cqlsh:test_ks> 
> cqlsh:test_ks> insert into t2(id1,id2) values (1,'a');
> cqlsh:test_ks> insert into t2(id1,id2) values (2,'a');
> cqlsh:test_ks> insert into t2(id1,id2) values (3,'a');
> cqlsh:test_ks> select * from t2;
>  id1 | id2
> -+-
>1 |   a
>2 |   a
>3 |   a
> 2.Flush  the memtable manually using nodetool flush
> student@cascor:~/node1/apache-cassandra-2.1.2/bin$ nodetool flush
> student@cascor:~/node1/apache-cassandra-2.1.2/bin$ cd 
> ../data/data/test_ks/t2-a671f6b0319a11e6a91ae3263299699d/
> student@cascor:~/node1/apache-cassandra-2.1.2/data/data/test_ks/t2-a671f6b0319a11e6a91ae3263299699d$
>  ls -ltr 
> total 36
> -rw-rw-r-- 1 student student   16 Jun 13 12:14 test_ks-t2-ka-1-Filter.db
> -rw-rw-r-- 1 student student   54 Jun 13 12:14 test_ks-t2-ka-1-Index.db
> -rw-rw-r-- 1 student student   93 Jun 13 12:14 test_ks-t2-ka-1-Data.db
> -rw-rw-r-- 1 student student   91 Jun 13 12:14 test_ks-t2-ka-1-TOC.txt
> -rw-rw-r-- 1 student student   80 Jun 13 12:14 test_ks-t2-ka-1-Summary.db
> -rw-rw-r-- 1 student student 4442 Jun 13 12:14 test_ks-t2-ka-1-Statistics.db
> -rw-rw-r-- 1 student student   10 Jun 13 12:14 test_ks-t2-ka-1-Digest.sha1
> -rw-rw-r-- 1 student student   43 Jun 13 12:14 
> test_ks-t2-ka-1-CompressionInfo.db
> 3. Make few more changes on table t2
> cqlsh:test_ks> insert into t2(id1,id2) values (5,'a');
> cqlsh:test_ks> insert into t2(id1,id2) values (6,'a');
> cqlsh:test_ks> insert into t2(id1,id2) values (7,'a');
> cqlsh:test_ks> insert into t2(id1,id2) values (8,'a');
> cqlsh:test_ks> select * from t2;
>  id1 | id2
> -+-
>5 |   a
>1 |   a
>8 |   a
>2 |   a
>7 |   a
>6 |   a
>3 |   a
> 4. Stopping the node using nodetool stopdaemon 
> student@cascor:~$ nodetool stopdaemon
> Cassandra has shutdown.
> error: Connection refused
> -- StackTrace --
> java.net.ConnectException: Connection refused
> 5. No new version of SSTables. Reason: stopdaemon doesn't run nodetool 
> flush/drain before actually stopping the daemon.
> student@cascor:~/node1/apache-cassandra-2.1.2/data/data/test_ks/t2-a671f6b0319a11e6a91ae3263299699d$
>  ls -ltr
> total 36
> -rw-rw-r-- 1 student student   16 Jun 13 12:14 test_ks-t2-ka-1-Filter.db
> -rw-rw-r-- 1 student student   54 Jun 13 12:14 test_ks-t2-ka-1-Index.db
> -rw-rw-r-- 1 student student   93 Jun 13 12:14 test_ks-t2-ka-1-Data.db
> -rw-rw-r-- 1 student student   91 Jun 13 12:14 test_ks-t2-ka-1-TOC.txt
> -rw-rw-r-- 1 student student   80 Jun 13 12:14 test_ks-t2-ka-1-Summary.db
> -rw-rw-r-- 1 student student 4442 Jun 13 12:14 test_ks-t2-ka-1-Statistics.db
> -rw-rw-r-- 1 student student   10 Jun 13 12:14 test_ks-t2-ka-1-Digest.sha1
> -rw-rw-r-- 1 student student   43 Jun 13 12:14 
> test_ks-t2-ka-1-CompressionInfo.db
> student@cascor:~/node1/apache-cassandra-2.1.2/data/data/test_ks/t2-a671f6b0319a11e6a91ae3263299699d$
>  





[jira] [Updated] (CASSANDRA-11996) SSTableSet.CANONICAL can miss sstables

2016-06-24 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-11996:
-
Assignee: Marcus Eriksson

> SSTableSet.CANONICAL can miss sstables
> --
>
> Key: CASSANDRA-11996
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11996
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
>Priority: Critical
> Fix For: 3.0.x, 3.x
>
>
> There is a race where we might miss sstables in SSTableSet.CANONICAL when we 
> finish up a compaction.
> Reproducing unit test pushed 
> [here|https://github.com/krummas/cassandra/commit/1292aaa61b89730cff0c022ed1262f45afd493e5]





[jira] [Created] (CASSANDRA-12087) dtest failure in sstable_generation_loading_test.TestLoadKaSStables.sstableloader_compression_none_to_snappy_test

2016-06-24 Thread Craig Kodman (JIRA)
Craig Kodman created CASSANDRA-12087:


 Summary: dtest failure in 
sstable_generation_loading_test.TestLoadKaSStables.sstableloader_compression_none_to_snappy_test
 Key: CASSANDRA-12087
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12087
 Project: Cassandra
  Issue Type: Test
Reporter: Craig Kodman
Assignee: DS Test Eng


example failure:

http://cassci.datastax.com/job/cassandra-3.0_dtest_win32/260/testReport/sstable_generation_loading_test/TestLoadKaSStables/sstableloader_compression_none_to_snappy_test

Failed on CassCI build cassandra-3.0_dtest_win32 #260





[jira] [Created] (CASSANDRA-12086) dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_writing_with_max_output_size

2016-06-24 Thread Craig Kodman (JIRA)
Craig Kodman created CASSANDRA-12086:


 Summary: dtest failure in 
cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_writing_with_max_output_size
 Key: CASSANDRA-12086
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12086
 Project: Cassandra
  Issue Type: Test
Reporter: Craig Kodman
Assignee: DS Test Eng


example failure:

http://cassci.datastax.com/job/cassandra-3.0_dtest_win32/260/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_writing_with_max_output_size

Failed on CassCI build cassandra-3.0_dtest_win32 #260





[jira] [Created] (CASSANDRA-12085) dtest failure in replace_address_test.TestReplaceAddress.resumable_replace_test

2016-06-24 Thread Craig Kodman (JIRA)
Craig Kodman created CASSANDRA-12085:


 Summary: dtest failure in 
replace_address_test.TestReplaceAddress.resumable_replace_test
 Key: CASSANDRA-12085
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12085
 Project: Cassandra
  Issue Type: Test
Reporter: Craig Kodman
Assignee: DS Test Eng


example failure:

http://cassci.datastax.com/job/cassandra-2.2_dtest_win32/260/testReport/replace_address_test/TestReplaceAddress/resumable_replace_test

Failed on CassCI build cassandra-2.2_dtest_win32 #260





[jira] [Updated] (CASSANDRA-11978) StreamReader fails to write sstable if CF directory is symlink

2016-06-24 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-11978:
-
Labels: lhf  (was: )

> StreamReader fails to write sstable if CF directory is symlink
> --
>
> Key: CASSANDRA-11978
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11978
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: Michael Frisch
>  Labels: lhf
>
> I'm using Cassandra v2.2.6.  If the CF is stored as a symlink in the keyspace 
> directory on disk then StreamReader.createWriter fails because 
> Descriptor.fromFilename is passed the actual path on disk instead of path 
> with the symlink.
> Example:
> /path/to/data/dir/Keyspace/CFName -> /path/to/data/dir/AnotherDisk/CFName
> Descriptor.fromFilename is passed "/path/to/data/dir/AnotherDisk/CFName" 
> instead of "/path/to/data/dir/Keyspace/CFName", so it concludes that the 
> keyspace name is "AnotherDisk", which is erroneous. I've temporarily worked 
> around this by using cfs.keyspace.getName() to get the keyspace name and 
> cfs.name to get the CF name as those are correct.
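The failure mode can be reproduced outside Cassandra. The sketch below is illustrative only: it uses a naive "keyspace = parent directory" rule in place of the real {{Descriptor.fromFilename}} logic, and shows how resolving the symlink changes the inferred keyspace name:

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class SymlinkKeyspace {
    // Naive stand-in for the path-based inference described above:
    // the keyspace is assumed to be the parent directory of the CF directory.
    static String keyspaceFromPath(Path cfDir) {
        return cfDir.getParent().getFileName().toString();
    }

    public static void main(String[] args) throws Exception {
        // Mirror the report's layout in a temp directory:
        // <root>/Keyspace/CFName -> <root>/AnotherDisk/CFName
        Path root = Files.createTempDirectory("data");
        Path real = Files.createDirectories(root.resolve("AnotherDisk").resolve("CFName"));
        Path ksDir = Files.createDirectories(root.resolve("Keyspace"));
        Path link = ksDir.resolve("CFName");
        Files.createSymbolicLink(link, real);

        System.out.println(keyspaceFromPath(link));              // Keyspace (correct)
        System.out.println(keyspaceFromPath(link.toRealPath())); // AnotherDisk (wrong)
    }
}
```

Passing the symlink path keeps the keyspace name intact; passing the resolved on-disk path loses it, which matches the reported behavior.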





[jira] [Updated] (CASSANDRA-11973) Is MemoryUtil.getShort() supposed to return a sign-extended or non-sign-extended value?

2016-06-24 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-11973:
-
Reviewer: Stefania

> Is MemoryUtil.getShort() supposed to return a sign-extended or 
> non-sign-extended value?
> ---
>
> Key: CASSANDRA-11973
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11973
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Rei Odaira
>Assignee: Rei Odaira
>Priority: Minor
> Fix For: 2.2.x, 3.0.x, 3.x
>
> Attachments: 11973-2.2.txt
>
>
> In org.apache.cassandra.utils.memory.MemoryUtil.getShort(), the returned 
> value of unsafe.getShort(address) is bit-wise-AND'ed with 0xFFFF, while that 
> of getShortByByte(address) is not. This inconsistency results in different 
> returned values when the short integer is negative. Which is preferred 
> behavior? Looking at NativeClustering and NativeCellTest, it seems like 
> non-sign-extension is assumed.
> By the way, is there any reason MemoryUtil.getShort() and 
> MemoryUtil.getShortByByte() return "int", not "short"?
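The difference between the two behaviors is easy to demonstrate with plain Java widening rules; this standalone example is not Cassandra code, just an illustration of sign-extending vs. masking a 16-bit value:

```java
// A negative short widened to int is sign-extended; masking with 0xFFFF
// instead yields the unsigned (non-sign-extended) value.
public class ShortWidening {
    static int signExtended(short s) { return s; }           // plain widening
    static int zeroExtended(short s) { return s & 0xFFFF; }  // masked, as in getShort()

    public static void main(String[] args) {
        short negative = (short) 0x8000;
        System.out.println(signExtended(negative)); // -32768
        System.out.println(zeroExtended(negative)); // 32768
    }
}
```

For non-negative shorts the two agree, which is why the inconsistency only shows up for negative values.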





[jira] [Commented] (CASSANDRA-12072) dtest failure in auth_test.TestAuthRoles.udf_permissions_in_selection_test

2016-06-24 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15348396#comment-15348396
 ] 

Philip Thompson commented on CASSANDRA-12072:
-

Working on that suspicion, I am now re-running 11038 many times
http://cassci.datastax.com/view/Dev/view/ptnapoleon/job/ptnapoleon-11038-dtest/

> dtest failure in auth_test.TestAuthRoles.udf_permissions_in_selection_test
> --
>
> Key: CASSANDRA-12072
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12072
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: Philip Thompson
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log
>
>
> Multiple failures:
> http://cassci.datastax.com/job/trunk_offheap_dtest/265/testReport/auth_test/TestAuthRoles/udf_permissions_in_selection_test
> http://cassci.datastax.com/job/trunk_offheap_dtest/265/testReport/auth_test/TestAuthRoles/create_and_grant_roles_with_superuser_status_test/
> http://cassci.datastax.com/job/trunk_offheap_dtest/265/testReport/auth_test/TestAuthRoles/drop_keyspace_cleans_up_function_level_permissions_test/
> http://cassci.datastax.com/job/trunk_offheap_dtest/265/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_reading_max_parse_errors/
> http://cassci.datastax.com/job/trunk_offheap_dtest/265/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_read_wrong_column_names/
> http://cassci.datastax.com/job/trunk_offheap_dtest/264/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_boolstyle_round_trip/
> http://cassci.datastax.com/job/trunk_offheap_dtest/264/testReport/compaction_test/TestCompaction_with_SizeTieredCompactionStrategy/disable_autocompaction_alter_test/
> http://cassci.datastax.com/job/trunk_offheap_dtest/264/testReport/cqlsh_tests.cqlsh_tests/TestCqlsh/test_describe/
> http://cassci.datastax.com/job/trunk_offheap_dtest/264/testReport/cqlsh_tests.cqlsh_tests/TestCqlsh/test_describe_mv/
> Logs are from 
> http://cassci.datastax.com/job/trunk_offheap_dtest/265/testReport/auth_test/TestAuthRoles/udf_permissions_in_selection_test/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10992) Hanging streaming sessions

2016-06-24 Thread mlowicki (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15348394#comment-15348394
 ] 

mlowicki commented on CASSANDRA-10992:
--

We've been running C* 2.1.14 for a couple of weeks now and have seen no hanging 
streaming sessions so far.

> Hanging streaming sessions
> --
>
> Key: CASSANDRA-10992
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10992
> Project: Cassandra
>  Issue Type: Bug
> Environment: C* 2.1.12, Debian Wheezy
>Reporter: mlowicki
>Assignee: Paulo Motta
> Fix For: 2.1.12
>
> Attachments: apache-cassandra-2.1.12-SNAPSHOT.jar, db1.ams.jstack, 
> db6.analytics.jstack
>
>
> I've started recently running repair using [Cassandra 
> Reaper|https://github.com/spotify/cassandra-reaper]  (built-in {{nodetool 
> repair}} doesn't work for me - CASSANDRA-9935). It behaves fine but I've 
> noticed hanging streaming sessions:
> {code}
> root@db1:~# date
> Sat Jan  9 16:43:00 UTC 2016
> root@db1:~# nt netstats -H | grep total
> Receiving 5 files, 46.59 MB total. Already received 1 files, 11.32 MB 
> total
> Sending 7 files, 46.28 MB total. Already sent 7 files, 46.28 MB total
> Receiving 6 files, 64.15 MB total. Already received 1 files, 12.14 MB 
> total
> Sending 5 files, 61.15 MB total. Already sent 5 files, 61.15 MB total
> Receiving 4 files, 7.75 MB total. Already received 3 files, 7.58 MB 
> total
> Sending 4 files, 4.29 MB total. Already sent 4 files, 4.29 MB total
> Receiving 12 files, 13.79 MB total. Already received 11 files, 7.66 
> MB total
> Sending 5 files, 15.32 MB total. Already sent 5 files, 15.32 MB total
> Receiving 8 files, 20.35 MB total. Already received 1 files, 13.63 MB 
> total
> Sending 38 files, 125.34 MB total. Already sent 38 files, 125.34 MB 
> total
> root@db1:~# date
> Sat Jan  9 17:45:42 UTC 2016
> root@db1:~# nt netstats -H | grep total
> Receiving 5 files, 46.59 MB total. Already received 1 files, 11.32 MB 
> total
> Sending 7 files, 46.28 MB total. Already sent 7 files, 46.28 MB total
> Receiving 6 files, 64.15 MB total. Already received 1 files, 12.14 MB 
> total
> Sending 5 files, 61.15 MB total. Already sent 5 files, 61.15 MB total
> Receiving 4 files, 7.75 MB total. Already received 3 files, 7.58 MB 
> total
> Sending 4 files, 4.29 MB total. Already sent 4 files, 4.29 MB total
> Receiving 12 files, 13.79 MB total. Already received 11 files, 7.66 
> MB total
> Sending 5 files, 15.32 MB total. Already sent 5 files, 15.32 MB total
> Receiving 8 files, 20.35 MB total. Already received 1 files, 13.63 MB 
> total
> Sending 38 files, 125.34 MB total. Already sent 38 files, 125.34 MB 
> total
> {code}
> Such sessions are left even when the repair job has long been done (confirmed 
> by checking Reaper's and Cassandra's logs). {{streaming_socket_timeout_in_ms}} 
> in cassandra.yaml is set to the default value (360).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12034) Special handling for Netty's direct memory allocation failure

2016-06-24 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15348382#comment-15348382
 ] 

T Jake Luciani commented on CASSANDRA-12034:


I think you should also put a section in docs / NEWS.txt about this new netty 
flag.

> Special handling for Netty's direct memory allocation failure
> -
>
> Key: CASSANDRA-12034
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12034
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Robert Stupp
>Assignee: Robert Stupp
> Fix For: 3.x
>
>
> With CASSANDRA-12032, Netty throws a 
> {{io.netty.util.internal.OutOfDirectMemoryError}} if there's not enough 
> off-heap memory for the response buffer. We can easily handle this situation 
> and return an error. This is not a condition that destabilizes the system and 
> should therefore not be passed to {{JVMStabilityInspector}}.
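A minimal sketch of the intended handling (the error class is redeclared locally so the snippet is self-contained; Netty's real {{io.netty.util.internal.OutOfDirectMemoryError}} likewise extends {{OutOfMemoryError}}, and the method names here are illustrative, not the actual patch):

```java
public class DirectMemoryGuard {
    // Local stand-in for io.netty.util.internal.OutOfDirectMemoryError,
    // redeclared so this sketch compiles without a Netty dependency.
    static class OutOfDirectMemoryError extends OutOfMemoryError {
        OutOfDirectMemoryError(String msg) { super(msg); }
    }

    // An allocation that may fail; here it always fails to exercise the path.
    static byte[] allocateDirect() {
        throw new OutOfDirectMemoryError("failed to allocate response buffer");
    }

    // Catch the failure locally and turn it into an error result for the
    // client, rather than escalating it as a JVM-destabilizing condition.
    static String tryAllocate() {
        try {
            allocateDirect();
            return "ok";
        } catch (OutOfDirectMemoryError e) {
            return "error: " + e.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(tryAllocate());
    }
}
```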



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12034) Special handling for Netty's direct memory allocation failure

2016-06-24 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15348377#comment-15348377
 ] 

T Jake Luciani commented on CASSANDRA-12034:


bq. Shall I change the patch to call JVMStabilityInspector

Yes, please.

> Special handling for Netty's direct memory allocation failure
> -
>
> Key: CASSANDRA-12034
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12034
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Robert Stupp
>Assignee: Robert Stupp
> Fix For: 3.x
>
>
> With CASSANDRA-12032, Netty throws a 
> {{io.netty.util.internal.OutOfDirectMemoryError}} if there's not enough 
> off-heap memory for the response buffer. We can easily handle this situation 
> and return an error. This is not a condition that destabilizes the system and 
> should therefore not be passed to {{JVMStabilityInspector}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11937) Clean up buffer trimming large buffers in DataOutputBuffer after the Netty upgrade

2016-06-24 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-11937:
---
   Resolution: Fixed
 Reviewer: T Jake Luciani
Fix Version/s: 3.8
   Status: Resolved  (was: Patch Available)

Committed {{578c85dc74522668e5c1e89119d25117cba5abf4}}

> Clean up buffer trimming large buffers in DataOutputBuffer after the Netty 
> upgrade
> --
>
> Key: CASSANDRA-11937
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11937
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Alex Petrov
>Assignee: Alex Petrov
>  Labels: lhf, netty, reminder
> Fix For: 3.8
>
> Attachments: Screen Shot 2016-06-22 at 15.24.05.png
>
>
> In [11838|https://issues.apache.org/jira/browse/CASSANDRA-11838], we're 
> trimming the large buffers in {{DataOutputBuffer}}. The patch has already 
> been submitted and merged in [Netty 
> 4.1|https://github.com/netty/netty/commit/bbed330468b5b82c9e4defa59012d0fcdb70f1aa],
>  so we only need to make sure that we throw large buffers away altogether 
> instead of trimming them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-6936) Make all byte representations of types comparable by their unsigned byte representation only

2016-06-24 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15348369#comment-15348369
 ] 

Jonathan Ellis commented on CASSANDRA-6936:
---

My understanding is that compaction is largely cpu-bound on cell comparisons.  
I'd like to see a prototype of what kind of benefits we can get there, e.g. 
using blob types (which are already byte-comparable).
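As a sketch of what byte-comparable ordering buys (illustrative, not Cassandra's actual comparator): blob types already order by unsigned byte value, so comparison is a memcmp-style loop with no type-specific decoding.

```java
public class UnsignedCompare {
    // Lexicographic comparison by unsigned byte value -- the ordering blob
    // types already have. No deserialization or type-specific logic needed.
    static int compareUnsigned(byte[] a, byte[] b) {
        int len = Math.min(a.length, b.length);
        for (int i = 0; i < len; i++) {
            int cmp = Integer.compare(a[i] & 0xFF, b[i] & 0xFF);
            if (cmp != 0)
                return cmp;
        }
        // All shared bytes equal: the shorter array sorts first.
        return Integer.compare(a.length, b.length);
    }

    public static void main(String[] args) {
        // A signed byte comparison would put 0x80 (-128) before 0x01;
        // the unsigned comparison orders it after.
        System.out.println(compareUnsigned(new byte[]{(byte) 0x80},
                                           new byte[]{0x01}) > 0); // true
    }
}
```

The point of the ticket is to make every type's serialized form sort correctly under exactly this loop, so compaction's cell comparisons never need to decode values.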

> Make all byte representations of types comparable by their unsigned byte 
> representation only
> 
>
> Key: CASSANDRA-6936
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6936
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Benedict
>Assignee: Branimir Lambov
>  Labels: compaction, performance
> Fix For: 4.x
>
>
> This could be a painful change, but is necessary for implementing a 
> trie-based index, and settling for less would be suboptimal; it also should 
> make comparisons cheaper all-round, and since comparison operations are 
> pretty much the majority of C*'s business, this should be easily felt (see 
> CASSANDRA-6553 and CASSANDRA-6934 for an example of some minor changes with 
> major performance impacts). No copying/special casing/slicing should mean 
> fewer opportunities to introduce performance regressions as well.
> Since I have slated for 3.0 a lot of non-backwards-compatible sstable 
> changes, hopefully this shouldn't be too much more of a burden.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: Cleanup byte buffer recycling in DataOutputBuffer after Netty upgrade.

2016-06-24 Thread jake
Repository: cassandra
Updated Branches:
  refs/heads/trunk c253f0806 -> 578c85dc7


Cleanup byte buffer recycling in DataOutputBuffer after Netty upgrade.

Patch by Alex Petrov; reviewed by Jake Luciani for CASSANDRA-11937

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/578c85dc
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/578c85dc
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/578c85dc

Branch: refs/heads/trunk
Commit: 578c85dc74522668e5c1e89119d25117cba5abf4
Parents: c253f08
Author: Alex Petrov 
Authored: Wed Jun 22 15:29:24 2016 +0200
Committer: T Jake Luciani 
Committed: Fri Jun 24 10:44:23 2016 -0400

--
 .../apache/cassandra/io/util/DataOutputBuffer.java| 12 +---
 src/java/org/apache/cassandra/utils/btree/BTree.java  | 14 --
 2 files changed, 17 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/578c85dc/src/java/org/apache/cassandra/io/util/DataOutputBuffer.java
--
diff --git a/src/java/org/apache/cassandra/io/util/DataOutputBuffer.java 
b/src/java/org/apache/cassandra/io/util/DataOutputBuffer.java
index 2091ed0..f08b48f 100644
--- a/src/java/org/apache/cassandra/io/util/DataOutputBuffer.java
+++ b/src/java/org/apache/cassandra/io/util/DataOutputBuffer.java
@@ -87,13 +87,11 @@ public class DataOutputBuffer extends 
BufferedDataOutputStreamPlus
 {
 assert handle != null;
 
-// Avoid throwing away instances that are too large, trim large 
buffers to default size instead.
-// See CASSANDRA-11838 for details.
-if (buffer().capacity() > MAX_RECYCLE_BUFFER_SIZE)
-buffer = ByteBuffer.allocate(DEFAULT_INITIAL_BUFFER_SIZE);
-
-buffer.rewind();
-RECYCLER.recycle(this, handle);
+if (buffer().capacity() <= MAX_RECYCLE_BUFFER_SIZE)
+{
+buffer.rewind();
+RECYCLER.recycle(this, handle);
+}
 }
 
 @Override

http://git-wip-us.apache.org/repos/asf/cassandra/blob/578c85dc/src/java/org/apache/cassandra/utils/btree/BTree.java
--
diff --git a/src/java/org/apache/cassandra/utils/btree/BTree.java 
b/src/java/org/apache/cassandra/utils/btree/BTree.java
index 5665869..33f4152 100644
--- a/src/java/org/apache/cassandra/utils/btree/BTree.java
+++ b/src/java/org/apache/cassandra/utils/btree/BTree.java
@@ -831,12 +831,17 @@ public class BTree
 public void recycle()
 {
 if (recycleHandle != null)
+{
+this.cleanup();
 builderRecycler.recycle(this, recycleHandle);
+}
 }
 
-private void reuse(Comparator comparator)
+/**
+ * Cleans up the Builder instance before recycling it.
+ */
+private void cleanup()
 {
-this.comparator = comparator;
 quickResolver = null;
 Arrays.fill(values, 0, count, null);
 count = 0;
@@ -844,6 +849,11 @@ public class BTree
 auto = true;
 }
 
+private void reuse(Comparator comparator)
+{
+this.comparator = comparator;
+}
+
 public Builder auto(boolean auto)
 {
 this.auto = auto;



[jira] [Updated] (CASSANDRA-7075) Add the ability to automatically distribute your commitlogs across all data volumes

2016-06-24 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-7075:
--
Assignee: (was: Branimir Lambov)

It's probably not worth pursuing this with the SEDA architecture.  Lots of 
effort for little gain (now that we have CL compression).

But we might need to do CL-segment-per-thread for TPC.  /cc [~iamaleksey]

> Add the ability to automatically distribute your commitlogs across all data 
> volumes
> ---
>
> Key: CASSANDRA-7075
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7075
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Local Write-Read Paths
>Reporter: Tupshin Harper
>Priority: Minor
>  Labels: performance
> Fix For: 3.x
>
>
> Given the prevalence of SSDs (no need to separate commitlog and data) and 
> improved JBOD support, along with CASSANDRA-3578, it seems like we should 
> have an option to have one commitlog per data volume, to even out the load. 
> I've been seeing more and more cases where there isn't an obvious "extra" 
> volume to put the commitlog on, and sticking it on only one of the JBOD'd 
> SSD volumes leads to IO imbalance.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-10537) CONTAINS and CONTAINS KEY support for Lightweight Transactions

2016-06-24 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15348246#comment-15348246
 ] 

Alex Petrov edited comment on CASSANDRA-10537 at 6/24/16 2:16 PM:
--

I've made a simple proof of concept to check how simple it would be to support 
this.
It's rather easy, as it requires a small change in grammar, although given the 
way the {{ColumnCondition}} is currently implemented, I'd suggest refactoring 
it a bit to make it more flexible to support different operations. 

  * {{ColumnCondition}} supports at least two cases in a single class: 
operators such as {{EQ}}, {{GT}}, etc., and {{IN}}. These cases are mostly 
distinct, and the code might get simpler to read and modify if we had them in 
distinct classes.
  * Another strong distinction is made between the field (for UDT), the 
collection element (for collection), and "simple" values.
  * There's some intersection with {{RowFilter}}, and the logic is quite 
similar in both cases; maybe some code and logic can be reused.

There are enough inner classes that handle all the mentioned distinctions, but 
{{ColumnCondition}} itself may benefit from reducing the amount of conditional 
logic involved in figuring out these distinctions, as the code paths are rather 
separate.


was (Author: ifesdjeen):
I've made a simple proof of concept to check how simple it would be to support 
this.
It's rather easy, as it requires a small change in grammar, although given the 
way the {{ColumnCondition}} is currently implemented, I'd suggest refactoring 
it a bit to make it more flexible to support different operations. 

  * {{ColumnCondition}} supports at least two cases in a single class: 
operators such as {{EQ}}, {{GT}}, etc., and {{IN}}. These cases are mostly 
distinct, and the code might get simpler to read and modify if we had them in 
distinct classes.
  * Another strong distinction is made between the field (for UDT), the 
collection element (for collection), and "simple" values.

There are enough inner classes that handle all the mentioned distinctions, but 
{{ColumnCondition}} itself may benefit from reducing the amount of conditional 
logic involved in figuring out these distinctions, as the code paths are rather 
separate.

> CONTAINS and CONTAINS KEY support for Lightweight Transactions
> --
>
> Key: CASSANDRA-10537
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10537
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Nimi Wariboko Jr.
>Assignee: Alex Petrov
>  Labels: CQL
> Fix For: 3.x
>
>
> Conditional updates currently do not support CONTAINS and CONTAINS KEY 
> conditions. Queries such as 
> {{UPDATE mytable SET somefield = 4 WHERE pk = 'pkv' IF set_column CONTAINS 
> 5;}}
> are not possible.
> Would it also be possible to support the negation of these (ex. testing that 
> a value does not exist inside a set)?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10537) CONTAINS and CONTAINS KEY support for Lightweight Transactions

2016-06-24 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15348246#comment-15348246
 ] 

Alex Petrov commented on CASSANDRA-10537:
-

I've made a simple proof of concept to check how simple it would be to support 
this.
It's rather easy, as it requires a small change in grammar, although given the 
way the {{ColumnCondition}} is currently implemented, I'd suggest refactoring 
it a bit to make it more flexible to support different operations. 

  * {{ColumnCondition}} supports at least two cases in a single class: 
operators such as {{EQ}}, {{GT}}, etc., and {{IN}}. These cases are mostly 
distinct, and the code might get simpler to read and modify if we had them in 
distinct classes.
  * Another strong distinction is made between the field (for UDT), the 
collection element (for collection), and "simple" values.

There are enough inner classes that handle all the mentioned distinctions, but 
{{ColumnCondition}} itself may benefit from reducing the amount of conditional 
logic involved in figuring out these distinctions, as the code paths are rather 
separate.
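One possible shape for that split (an illustrative sketch over {{int}} values, not Cassandra's actual {{ColumnCondition}} API):

```java
import java.util.List;

public class ConditionSketch {
    // Hypothetical refactoring: the single-operator case and the IN case
    // live in distinct classes instead of sharing one class full of
    // conditional logic. Names and types are illustrative only.
    interface Condition { boolean appliesTo(int value); }

    static class SimpleCondition implements Condition {
        final String op; final int operand;
        SimpleCondition(String op, int operand) { this.op = op; this.operand = operand; }
        public boolean appliesTo(int value) {
            switch (op) {
                case "EQ": return value == operand;
                case "GT": return value > operand;
                default:   throw new IllegalArgumentException(op);
            }
        }
    }

    static class InCondition implements Condition {
        final List<Integer> operands;
        InCondition(List<Integer> operands) { this.operands = operands; }
        public boolean appliesTo(int value) { return operands.contains(value); }
    }

    public static void main(String[] args) {
        System.out.println(new SimpleCondition("GT", 3).appliesTo(5)); // true
        System.out.println(new InCondition(List.of(1, 2)).appliesTo(5)); // false
    }
}
```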

> CONTAINS and CONTAINS KEY support for Lightweight Transactions
> --
>
> Key: CASSANDRA-10537
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10537
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Nimi Wariboko Jr.
>Assignee: Alex Petrov
>  Labels: CQL
> Fix For: 3.x
>
>
> Conditional updates currently do not support CONTAINS and CONTAINS KEY 
> conditions. Queries such as 
> {{UPDATE mytable SET somefield = 4 WHERE pk = 'pkv' IF set_column CONTAINS 
> 5;}}
> are not possible.
> Would it also be possible to support the negation of these (ex. testing that 
> a value does not exist inside a set)?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12078) [SASI] Move skip_stop_words filter BEFORE stemming

2016-06-24 Thread DOAN DuyHai (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15348240#comment-15348240
 ] 

DOAN DuyHai commented on CASSANDRA-12078:
-

I'll work on it this weekend and propose a proper patch with updated unit tests.

> [SASI] Move skip_stop_words filter BEFORE stemming
> --
>
> Key: CASSANDRA-12078
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12078
> Project: Cassandra
>  Issue Type: Bug
>  Components: sasi
> Environment: Cassandra 3.7, Cassandra 3.8
>Reporter: DOAN DuyHai
>Assignee: DOAN DuyHai
> Fix For: 3.8
>
> Attachments: patch.txt
>
>
> Right now, if skip stop words and stemming are enabled, SASI will put 
> stemming in the filter pipeline BEFORE skip_stop_words:
> {code:java}
> private FilterPipelineTask getFilterPipeline()
> {
> FilterPipelineBuilder builder = new FilterPipelineBuilder(new 
> BasicResultFilters.NoOperation());
>  ...
> if (options.shouldStemTerms())
> builder = builder.add("term_stemming", new 
> StemmingFilters.DefaultStemmingFilter(options.getLocale()));
> if (options.shouldIgnoreStopTerms())
> builder = builder.add("skip_stop_words", new 
> StopWordFilters.DefaultStopWordFilter(options.getLocale()));
> return builder.build();
> }
> {code}
> The problem is that stemming before removing stop words can yield wrong 
> results.
> I have an example:
> {code:sql}
> SELECT * FROM music.albums WHERE country='France' AND title LIKE 'danse' 
> ALLOW FILTERING;
> {code}
> Because of stemming, *danse* (*dance* in English) becomes *dans* (the final 
> vowel is removed). Then skip stop words is applied. Unfortunately, *dans* 
> (*in* in English) is a stop word in French, so it is removed completely.
> In the end the query is equivalent to {{SELECT * FROM music.albums WHERE 
> country='France'}} and of course the results are wrong.
> Attached is a trivial patch to move the skip_stop_words filter BEFORE the 
> stemming filter.
> /cc [~xedin] [~jrwest] [~beobal]
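The failure mode can be reproduced with a toy pipeline (the stand-in filters below are not SASI's real classes; the one-rule French "stemmer" and the single-entry stop-word set are assumptions for illustration):

```java
import java.util.List;
import java.util.Set;
import java.util.function.Function;

public class PipelineOrderDemo {
    // Toy stand-ins for SASI's filters: a "stemmer" that strips a trailing
    // 'e', and a stop-word filter that drops "dans" (a French stop word).
    static final Set<String> STOP_WORDS = Set.of("dans");
    static String stem(String term) {
        return term.endsWith("e") ? term.substring(0, term.length() - 1) : term;
    }
    static String skipStopWords(String term) {
        return STOP_WORDS.contains(term) ? null : term;
    }

    // Run a term through the filters in order; null means "term dropped".
    static String apply(String term, List<Function<String, String>> pipeline) {
        for (Function<String, String> f : pipeline) {
            if (term == null) return null;
            term = f.apply(term);
        }
        return term;
    }

    public static void main(String[] args) {
        // Stemming first: "danse" -> "dans" -> dropped as a stop word.
        System.out.println(apply("danse",
                List.of(PipelineOrderDemo::stem, PipelineOrderDemo::skipStopWords)));
        // Stop words first: "danse" survives and is stemmed to "dans".
        System.out.println(apply("danse",
                List.of(PipelineOrderDemo::skipStopWords, PipelineOrderDemo::stem)));
    }
}
```

With stemming first the term vanishes entirely, which is exactly how the {{title LIKE 'danse'}} predicate degenerates into an unrestricted query.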



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10715) Allow filtering on NULL

2016-06-24 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15348239#comment-15348239
 ] 

Alex Petrov commented on CASSANDRA-10715:
-

[~blerer] sure, let me check it out.

> Allow filtering on NULL
> ---
>
> Key: CASSANDRA-10715
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10715
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
> Environment: C* 3.0.0 | cqlsh | C# driver 3.0.0beta2 | Windows 2012 R2
>Reporter: Kishan Karunaratne
>Assignee: Benjamin Lerer
>Priority: Minor
>  Labels: protocolv5
> Fix For: 4.0
>
> Attachments: 
> 0001-Allow-null-values-in-filtered-searches-reuse-Operato.patch
>
>
> This is an issue I first noticed through the C# driver, but I was able to 
> repro on cqlsh, leading me to believe this is a Cassandra bug.
> Given the following schema:
> {noformat}
> CREATE TABLE "TestKeySpace_4928dc892922"."coolMovies" (
> unique_movie_title text,
> movie_maker text,
> director text,
> list list,
> "mainGuy" text,
> "yearMade" int,
> PRIMARY KEY ((unique_movie_title, movie_maker), director)
> ) WITH CLUSTERING ORDER BY (director ASC)
> {noformat}
> Executing a SELECT with FILTERING on a non-PK column, using a NULL as the 
> argument:
> {noformat}
> SELECT "mainGuy", "movie_maker", "unique_movie_title", "list", "director", 
> "yearMade" FROM "coolMovies" WHERE "mainGuy" = null ALLOW FILTERING
> {noformat}
> returns a ReadFailure exception:
> {noformat}
> cqlsh:TestKeySpace_4c8f2cf8d5cc> SELECT "mainGuy", "movie_maker", 
> "unique_movie_title", "list", "director", "yearMade" FROM "coolMovies" WHERE 
> "mainGuy" = null ALLOW FILTERING;
> ←[0;1;31mTraceback (most recent call last):
>   File "C:\Users\Kishan\.ccm\repository\3.0.0\bin\\cqlsh.py", line 1216, in 
> perform_simple_statement
> result = future.result()
>   File 
> "C:\Users\Kishan\.ccm\repository\3.0.0\bin\..\lib\cassandra-driver-internal-only-3.0.0a3.post0-3f15725.zip\cassandra-driver-3.0.0a3.post0-3f15725\cassandra\cluster.py",
>  line 3118, in result
> raise self._final_exception
> ReadFailure: code=1300 [Replica(s) failed to execute read] message="Operation 
> failed - received 0 responses and 1 failures" info={'failures': 1, 
> 'received_responses': 0, 'required_responses': 1, 'cons
> istency': 'ONE'}
> ←[0m
> {noformat}
> Cassandra log shows:
> {noformat}
> WARN  [SharedPool-Worker-2] 2015-11-16 13:51:00,259 
> AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-2,10,main]: {}
> java.lang.AssertionError: null
>   at 
> org.apache.cassandra.db.filter.RowFilter$SimpleExpression.isSatisfiedBy(RowFilter.java:581)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToRow(RowFilter.java:243)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.transform.BaseRows.applyOne(BaseRows.java:95) 
> ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at org.apache.cassandra.db.transform.BaseRows.add(BaseRows.java:86) 
> ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.transform.UnfilteredRows.add(UnfilteredRows.java:21) 
> ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.transform.Transformation.add(Transformation.java:136) 
> ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.transform.Transformation.apply(Transformation.java:102)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToPartition(RowFilter.java:233)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToPartition(RowFilter.java:227)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:76)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:293)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:136)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:128)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:123)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:65) 
> ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.j

[jira] [Commented] (CASSANDRA-10715) Allow filtering on NULL

2016-06-24 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15348234#comment-15348234
 ] 

Benjamin Lerer commented on CASSANDRA-10715:


Sorry, did not see that.
[~ifesdjeen] I am pretty busy with other stuff right now. Do you want to handle 
this ticket?

> Allow filtering on NULL
> ---
>
> Key: CASSANDRA-10715
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10715
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
> Environment: C* 3.0.0 | cqlsh | C# driver 3.0.0beta2 | Windows 2012 R2
>Reporter: Kishan Karunaratne
>Assignee: Benjamin Lerer
>Priority: Minor
>  Labels: protocolv5
> Fix For: 4.0
>
> Attachments: 
> 0001-Allow-null-values-in-filtered-searches-reuse-Operato.patch
>
>
> This is an issue I first noticed through the C# driver, but I was able to 
> repro on cqlsh, leading me to believe this is a Cassandra bug.
> Given the following schema:
> {noformat}
> CREATE TABLE "TestKeySpace_4928dc892922"."coolMovies" (
> unique_movie_title text,
> movie_maker text,
> director text,
> list list,
> "mainGuy" text,
> "yearMade" int,
> PRIMARY KEY ((unique_movie_title, movie_maker), director)
> ) WITH CLUSTERING ORDER BY (director ASC)
> {noformat}
> Executing a SELECT with FILTERING on a non-PK column, using a NULL as the 
> argument:
> {noformat}
> SELECT "mainGuy", "movie_maker", "unique_movie_title", "list", "director", 
> "yearMade" FROM "coolMovies" WHERE "mainGuy" = null ALLOW FILTERING
> {noformat}
> returns a ReadFailure exception:
> {noformat}
> cqlsh:TestKeySpace_4c8f2cf8d5cc> SELECT "mainGuy", "movie_maker", 
> "unique_movie_title", "list", "director", "yearMade" FROM "coolMovies" WHERE 
> "mainGuy" = null ALLOW FILTERING;
> ←[0;1;31mTraceback (most recent call last):
>   File "C:\Users\Kishan\.ccm\repository\3.0.0\bin\\cqlsh.py", line 1216, in 
> perform_simple_statement
> result = future.result()
>   File 
> "C:\Users\Kishan\.ccm\repository\3.0.0\bin\..\lib\cassandra-driver-internal-only-3.0.0a3.post0-3f15725.zip\cassandra-driver-3.0.0a3.post0-3f15725\cassandra\cluster.py",
>  line 3118, in result
> raise self._final_exception
> ReadFailure: code=1300 [Replica(s) failed to execute read] message="Operation 
> failed - received 0 responses and 1 failures" info={'failures': 1, 
> 'received_responses': 0, 'required_responses': 1, 'cons
> istency': 'ONE'}
> ←[0m
> {noformat}
> Cassandra log shows:
> {noformat}
> WARN  [SharedPool-Worker-2] 2015-11-16 13:51:00,259 
> AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-2,10,main]: {}
> java.lang.AssertionError: null
>   at 
> org.apache.cassandra.db.filter.RowFilter$SimpleExpression.isSatisfiedBy(RowFilter.java:581)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToRow(RowFilter.java:243)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.transform.BaseRows.applyOne(BaseRows.java:95) 
> ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at org.apache.cassandra.db.transform.BaseRows.add(BaseRows.java:86) 
> ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.transform.UnfilteredRows.add(UnfilteredRows.java:21) 
> ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.transform.Transformation.add(Transformation.java:136) 
> ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.transform.Transformation.apply(Transformation.java:102)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToPartition(RowFilter.java:233)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToPartition(RowFilter.java:227)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:76)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:293)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:136)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:128)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:123)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:65) 
> ~[apache-cassandra-3.0.0.j

[jira] [Updated] (CASSANDRA-12083) Race condition during system.roles column family creation

2016-06-24 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-12083:
--
Reviewer: Aleksey Yeschenko

> Race condition during system.roles column family creation
> -
>
> Key: CASSANDRA-12083
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12083
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Sharvanath Pathak
>Assignee: Sam Tunnicliffe
>
> There is an issue where Cassandra fails with the following exception on 
> startup:
> {noformat}
> DEBUG [InternalResponseStage:2] 2016-06-20 09:43:00,651 Schema.java:465 - 
> Adding 
> org.apache.cassandra.config.CFMetaData@2882b66d[cfId=5bc52802-de25-35ed-aeab-188eecebb090,ksName=system_auth,cfName=roles,flags=[COMPOUND],params=TableParams{comment=role
>  definitions, read_repair_chance=0.0, dclocal_read_repair_chance=0.0, 
> bloom_filter_fp_chance=0.01, crc_check_chance=1.0, gc_grace_seconds=7776000, 
> default_time_to_live=0, memtable_flush_period_in_ms=360, 
> min_index_interval=128, max_index_interval=2048, 
> speculative_retry=99PERCENTILE, caching={'keys' : 'ALL', 'rows_per_partition' 
> : 'NONE'}, 
> compaction=CompactionParams{class=org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy,
>  options={max_threshold=32, min_threshold=4}}, 
> compression=org.apache.cassandra.schema.CompressionParams@e6e0212, 
> extensions={}},comparator=comparator(),partitionColumns=[[] | [can_login 
> is_superuser salted_hash 
> member_of]],partitionKeyColumns=[ColumnDefinition{name=role, 
> type=org.apache.cassandra.db.marshal.UTF8Type, kind=PARTITION_KEY, 
> position=0}],clusteringColumns=[],keyValidator=org.apache.cassandra.db.marshal.UTF8Type,columnMetadata=[ColumnDefinition{name=role,
>  type=org.apache.cassandra.db.marshal.UTF8Type, kind=PARTITION_KEY, 
> position=0}, ColumnDefinition{name=salted_hash, 
> type=org.apache.cassandra.db.marshal.UTF8Type, kind=REGULAR, position=-1}, 
> ColumnDefinition{name=member_of, 
> type=org.apache.cassandra.db.marshal.SetType(org.apache.cassandra.db.marshal.UTF8Type),
>  kind=REGULAR, position=-1}, ColumnDefinition{name=can_login, 
> type=org.apache.cassandra.db.marshal.BooleanType, kind=REGULAR, position=-1}, 
> ColumnDefinition{name=is_superuser, 
> type=org.apache.cassandra.db.marshal.BooleanType, kind=REGULAR, 
> position=-1}],droppedColumns={},triggers=[],indexes=[]] to cfIdMap
> INFO  [InternalResponseStage:2] 2016-06-20 09:43:00,653 
> ColumnFamilyStore.java:381 - Initializing system_auth.roles
> DEBUG [InternalResponseStage:2] 2016-06-20 09:43:00,664 
> MigrationManager.java:556 - Gossiping my schema version 
> c2a2bb4f-7d31-3fb8-a216-00b41a643650
> DEBUG [InternalResponseStage:1] 2016-06-20 09:43:00,669 
> ColumnFamilyStore.java:831 - Enqueuing flush of keyspaces: 1566 (0%) on-heap, 
> 0 (0%) off-heap
> DEBUG [MemtableFlushWriter:2] 2016-06-20 09:43:00,669 Memtable.java:372 - 
> Writing Memtable-keyspaces@650010305(0.437KiB serialized bytes, 3 ops, 0%/0% 
> of on/off-heap limit)
> ERROR [main] 2016-06-20 09:43:00,670 CassandraDaemon.java:692 - Exception 
> encountered during startup
> java.lang.IllegalArgumentException: Unknown CF 
> 5bc52802-de25-35ed-aeab-188eecebb090
> at 
> org.apache.cassandra.db.Keyspace.getColumnFamilyStore(Keyspace.java:206) 
> ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.db.Keyspace.getColumnFamilyStore(Keyspace.java:199) 
> ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.cql3.restrictions.StatementRestrictions.&lt;init&gt;(StatementRestrictions.java:168)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement$RawStatement.prepareRestrictions(SelectStatement.java:874)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement$RawStatement.prepare(SelectStatement.java:821)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement$RawStatement.prepare(SelectStatement.java:809)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.auth.CassandraRoleManager.prepare(CassandraRoleManager.java:446)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.auth.CassandraRoleManager.setup(CassandraRoleManager.java:144)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.service.StorageService.doAuthSetup(StorageService.java:1084)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:1032)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:755)
>  ~[apache-cassandra-3.0.6.ja

[jira] [Updated] (CASSANDRA-12083) Race condition during system.roles column family creation

2016-06-24 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-12083:

Status: Patch Available  (was: Open)

This regression is due to CASSANDRA-11027, in which I chose the wrong 
{{initCf}} call to remove. I've pushed patches for 3.0 and trunk fixing that 
and am waiting on CI.

[~iamaleksey] would you mind reviewing please?

||branch||testall||dtest||
|[12083-3.0|https://github.com/beobal/cassandra/tree/12083-3.0]|[testall|http://cassci.datastax.com/view/Dev/view/beobal/job/beobal-12083-3.0-testall]|[dtest|http://cassci.datastax.com/view/Dev/view/beobal/job/beobal-12083-3.0-dtest]|
|[12083-trunk|https://github.com/beobal/cassandra/tree/12083-trunk]|[testall|http://cassci.datastax.com/view/Dev/view/beobal/job/beobal-12083-trunk-testall]|[dtest|http://cassci.datastax.com/view/Dev/view/beobal/job/beobal-12083-trunk-dtest]|



> Race condition during system.roles column family creation
> -
>
> Key: CASSANDRA-12083
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12083
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Sharvanath Pathak
>Assignee: Sam Tunnicliffe
>

[jira] [Reopened] (CASSANDRA-12078) [SASI] Move skip_stop_words filter BEFORE stemming

2016-06-24 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe reopened CASSANDRA-12078:
-

Reopening as it breaks {{StandardAnalyzerTest}} 
http://cassci.datastax.com/view/trunk/job/trunk_testall/979/

> [SASI] Move skip_stop_words filter BEFORE stemming
> --
>
> Key: CASSANDRA-12078
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12078
> Project: Cassandra
>  Issue Type: Bug
>  Components: sasi
> Environment: Cassandra 3.7, Cassandra 3.8
>Reporter: DOAN DuyHai
>Assignee: DOAN DuyHai
> Fix For: 3.8
>
> Attachments: patch.txt
>
>
> Right now, if skip_stop_words and stemming are both enabled, SASI puts 
> stemming in the filter pipeline BEFORE skip_stop_words:
> {code:java}
> private FilterPipelineTask getFilterPipeline()
> {
>     FilterPipelineBuilder builder = new FilterPipelineBuilder(new BasicResultFilters.NoOperation());
>     ...
>     if (options.shouldStemTerms())
>         builder = builder.add("term_stemming", new StemmingFilters.DefaultStemmingFilter(options.getLocale()));
>     if (options.shouldIgnoreStopTerms())
>         builder = builder.add("skip_stop_words", new StopWordFilters.DefaultStopWordFilter(options.getLocale()));
>     return builder.build();
> }
> {code}
> The problem is that stemming before removing stop words can yield wrong 
> results.
> I have an example:
> {code:sql}
> SELECT * FROM music.albums WHERE country='France' AND title LIKE 'danse' 
> ALLOW FILTERING;
> {code}
> Because of stemming, *danse* (*dance* in English) becomes *dans* (the final 
> vowel is removed). Then skip_stop_words is applied. Unfortunately, *dans* 
> (*in* in English) is a stop word in French, so the term is removed completely.
> In the end the query is equivalent to {{SELECT * FROM music.albums WHERE 
> country='France'}} and of course the results are wrong.
> Attached is a trivial patch that moves the skip_stop_words filter BEFORE the 
> stemming filter.
> /cc [~xedin] [~jrwest] [~beobal]
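To make the ordering bug concrete, here is a self-contained toy sketch. The stemmer, stop-word list, and method names below are invented for illustration only; they are not SASI's real classes, just a minimal model of the two pipeline orders:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class FilterOrderDemo {
    // Toy "French" stemmer: drops a trailing 'e' ("danse" -> "dans").
    static String stem(String term) {
        return term.endsWith("e") ? term.substring(0, term.length() - 1) : term;
    }

    // Tiny illustrative stop-word list; "dans" ("in") is a French stop word.
    static final Set<String> FRENCH_STOP_WORDS =
            new HashSet<>(Arrays.asList("dans", "le", "la"));

    // Returns null when the term is dropped by the stop-word filter.
    static String skipStopWords(String term) {
        return FRENCH_STOP_WORDS.contains(term) ? null : term;
    }

    // Old (buggy) order: stem first, then drop stop words.
    static String stemThenSkip(String term) {
        return skipStopWords(stem(term));
    }

    // Patched order: drop stop words first, then stem the survivors.
    static String skipThenStem(String term) {
        String kept = skipStopWords(term);
        return kept == null ? null : stem(kept);
    }

    public static void main(String[] args) {
        // "danse" is not a stop word, but its stem "dans" is, so the old
        // order silently erases the search term; the new order keeps it.
        System.out.println(stemThenSkip("danse")); // null
        System.out.println(skipThenStem("danse")); // dans
    }
}
```

With the old order the query term vanishes entirely, which is exactly how the `title LIKE 'danse'` predicate degenerates into an unrestricted match.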



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10715) Allow filtering on NULL

2016-06-24 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15348209#comment-15348209
 ] 

Jeremiah Jordan commented on CASSANDRA-10715:
-

[~blerer] see discussions on CASSANDRA-10786 about moving forward with v5 
changes before 4.0

> Allow filtering on NULL
> ---
>
> Key: CASSANDRA-10715
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10715
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
> Environment: C* 3.0.0 | cqlsh | C# driver 3.0.0beta2 | Windows 2012 R2
>Reporter: Kishan Karunaratne
>Assignee: Benjamin Lerer
>Priority: Minor
>  Labels: protocolv5
> Fix For: 4.0
>
> Attachments: 
> 0001-Allow-null-values-in-filtered-searches-reuse-Operato.patch
>
>
> This is an issue I first noticed through the C# driver, but I was able to 
> repro on cqlsh, leading me to believe this is a Cassandra bug.
> Given the following schema:
> {noformat}
> CREATE TABLE "TestKeySpace_4928dc892922"."coolMovies" (
> unique_movie_title text,
> movie_maker text,
> director text,
> list list,
> "mainGuy" text,
> "yearMade" int,
> PRIMARY KEY ((unique_movie_title, movie_maker), director)
> ) WITH CLUSTERING ORDER BY (director ASC)
> {noformat}
> Executing a SELECT with FILTERING on a non-PK column, using a NULL as the 
> argument:
> {noformat}
> SELECT "mainGuy", "movie_maker", "unique_movie_title", "list", "director", 
> "yearMade" FROM "coolMovies" WHERE "mainGuy" = null ALLOW FILTERING
> {noformat}
> returns a ReadFailure exception:
> {noformat}
> cqlsh:TestKeySpace_4c8f2cf8d5cc> SELECT "mainGuy", "movie_maker", 
> "unique_movie_title", "list", "director", "yearMade" FROM "coolMovies" WHERE 
> "mainGuy" = null ALLOW FILTERING;
> ←[0;1;31mTraceback (most recent call last):
>   File "C:\Users\Kishan\.ccm\repository\3.0.0\bin\\cqlsh.py", line 1216, in 
> perform_simple_statement
> result = future.result()
>   File 
> "C:\Users\Kishan\.ccm\repository\3.0.0\bin\..\lib\cassandra-driver-internal-only-3.0.0a3.post0-3f15725.zip\cassandra-driver-3.0.0a3.post0-3f15725\cassandra\cluster.py",
>  line 3118, in result
> raise self._final_exception
> ReadFailure: code=1300 [Replica(s) failed to execute read] message="Operation 
> failed - received 0 responses and 1 failures" info={'failures': 1, 
> 'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}
> ←[0m
> {noformat}
> Cassandra log shows:
> {noformat}
> WARN  [SharedPool-Worker-2] 2015-11-16 13:51:00,259 
> AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-2,10,main]: {}
> java.lang.AssertionError: null
>   at 
> org.apache.cassandra.db.filter.RowFilter$SimpleExpression.isSatisfiedBy(RowFilter.java:581)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToRow(RowFilter.java:243)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.transform.BaseRows.applyOne(BaseRows.java:95) 
> ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at org.apache.cassandra.db.transform.BaseRows.add(BaseRows.java:86) 
> ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.transform.UnfilteredRows.add(UnfilteredRows.java:21) 
> ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.transform.Transformation.add(Transformation.java:136) 
> ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.transform.Transformation.apply(Transformation.java:102)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToPartition(RowFilter.java:233)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToPartition(RowFilter.java:227)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:76)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:293)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:136)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.&lt;init&gt;(ReadResponse.java:128)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.&lt;init&gt;(ReadResponse.java:123)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:65) 
> ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
>

[jira] [Commented] (CASSANDRA-12076) Add username to AuthenticationException messages

2016-06-24 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15348200#comment-15348200
 ] 

Sam Tunnicliffe commented on CASSANDRA-12076:
-

bq. which version should I be restricting the existing relevant tests to?
As this will be targeting trunk, you'll just need to tweak a couple of the 
tests to check for the new error message where version >= 3.8.

> Add username to AuthenticationException messages
> 
>
> Key: CASSANDRA-12076
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12076
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Geoffrey Yu
>Assignee: Geoffrey Yu
>Priority: Trivial
> Attachments: 12076-trunk-v2.txt, 12076-trunk.txt
>
>
> When an {{AuthenticationException}} is thrown, there are a few places where 
> the user that initiated the request is not included in the exception message. 
> It can be useful to have this information included for logging purposes.
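As a minimal sketch of the proposed change (the class and message wording below are illustrative, not Cassandra's actual auth code), the idea is simply to interpolate the initiating username into the exception message so server logs identify which principal failed:

```java
// Illustrative only: a toy AuthenticationException carrying the username,
// mirroring the improvement this ticket proposes for Cassandra's auth paths.
public class AuthDemo {
    static class AuthenticationException extends RuntimeException {
        AuthenticationException(String message) { super(message); }
    }

    // Hypothetical helper: builds the failure with the requesting user
    // included, so logs no longer show an anonymous auth failure.
    static AuthenticationException loginFailure(String username) {
        return new AuthenticationException(
            "Provided username " + username + " and/or password are incorrect");
    }

    public static void main(String[] args) {
        System.out.println(loginFailure("alice").getMessage());
    }
}
```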





[jira] [Updated] (CASSANDRA-11393) dtest failure in upgrade_tests.upgrade_through_versions_test.ProtoV3Upgrade_2_1_UpTo_3_0_HEAD.rolling_upgrade_test

2016-06-24 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-11393:
---
Reviewer: Tyler Hobbs  (was: Sylvain Lebresne)

> dtest failure in 
> upgrade_tests.upgrade_through_versions_test.ProtoV3Upgrade_2_1_UpTo_3_0_HEAD.rolling_upgrade_test
> --
>
> Key: CASSANDRA-11393
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11393
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination, Streaming and Messaging
>Reporter: Philip Thompson
>Assignee: Benjamin Lerer
>  Labels: dtest
> Fix For: 3.0.x, 3.x
>
> Attachments: 11393-3.0.txt
>
>
> We are seeing a failure in the upgrade tests that go from 2.1 to 3.0
> {code}
> node2: ERROR [SharedPool-Worker-2] 2016-03-10 20:05:17,865 Message.java:611 - 
> Unexpected exception during request; channel = [id: 0xeb79b477, 
> /127.0.0.1:39613 => /127.0.0.2:9042]
> java.lang.AssertionError: null
>   at 
> org.apache.cassandra.db.ReadCommand$LegacyReadCommandSerializer.serializedSize(ReadCommand.java:1208)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadCommand$LegacyReadCommandSerializer.serializedSize(ReadCommand.java:1155)
>  ~[main/:na]
>   at org.apache.cassandra.net.MessageOut.payloadSize(MessageOut.java:166) 
> ~[main/:na]
>   at 
> org.apache.cassandra.net.OutboundTcpConnectionPool.getConnection(OutboundTcpConnectionPool.java:72)
>  ~[main/:na]
>   at 
> org.apache.cassandra.net.MessagingService.getConnection(MessagingService.java:609)
>  ~[main/:na]
>   at 
> org.apache.cassandra.net.MessagingService.sendOneWay(MessagingService.java:758)
>  ~[main/:na]
>   at 
> org.apache.cassandra.net.MessagingService.sendRR(MessagingService.java:701) 
> ~[main/:na]
>   at 
> org.apache.cassandra.net.MessagingService.sendRRWithFailure(MessagingService.java:684)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.AbstractReadExecutor.makeRequests(AbstractReadExecutor.java:110)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.AbstractReadExecutor.makeDataRequests(AbstractReadExecutor.java:85)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.AbstractReadExecutor$AlwaysSpeculatingReadExecutor.executeAsync(AbstractReadExecutor.java:330)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy$SinglePartitionReadLifecycle.doInitialQueries(StorageProxy.java:1699)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:1654) 
> ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy.readRegular(StorageProxy.java:1601) 
> ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:1520) 
> ~[main/:na]
>   at 
> org.apache.cassandra.db.SinglePartitionReadCommand.execute(SinglePartitionReadCommand.java:302)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.pager.AbstractQueryPager.fetchPage(AbstractQueryPager.java:67)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.pager.SinglePartitionPager.fetchPage(SinglePartitionPager.java:34)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement$Pager$NormalPager.fetchPage(SelectStatement.java:297)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:333)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:209)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:76)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:206)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:472)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:449)
>  ~[main/:na]
>   at 
> org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:130)
>  ~[main/:na]
>   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507)
>  [main/:na]
>   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401)
>  [main/:na]
>   at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
>  [netty-all-4.0.23.Final.jar:4.0.23.Fi

[jira] [Commented] (CASSANDRA-10715) Allow filtering on NULL

2016-06-24 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15348198#comment-15348198
 ] 

Benjamin Lerer commented on CASSANDRA-10715:


v5 will be Cassandra 4.0 and we do not have a branch for it yet.

> Allow filtering on NULL
> ---
>
> Key: CASSANDRA-10715
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10715
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
> Environment: C* 3.0.0 | cqlsh | C# driver 3.0.0beta2 | Windows 2012 R2
>Reporter: Kishan Karunaratne
>Assignee: Benjamin Lerer
>Priority: Minor
>  Labels: protocolv5
> Fix For: 4.0
>
> Attachments: 
> 0001-Allow-null-values-in-filtered-searches-reuse-Operato.patch
>
>

[jira] [Updated] (CASSANDRA-10715) Allow filtering on NULL

2016-06-24 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-10715:
---
Fix Version/s: (was: 3.0.x)
   (was: 3.x)
   4.0

> Allow filtering on NULL
> ---
>
> Key: CASSANDRA-10715
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10715
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
> Environment: C* 3.0.0 | cqlsh | C# driver 3.0.0beta2 | Windows 2012 R2
>Reporter: Kishan Karunaratne
>Assignee: Benjamin Lerer
>Priority: Minor
>  Labels: protocolv5
> Fix For: 4.0
>
> Attachments: 
> 0001-Allow-null-values-in-filtered-searches-reuse-Operato.patch
>
>

[jira] [Updated] (CASSANDRA-10715) Allow filtering on NULL

2016-06-24 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-10715:
---
Labels: protocolv5  (was: )

> Allow filtering on NULL
> ---
>
> Key: CASSANDRA-10715
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10715
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
> Environment: C* 3.0.0 | cqlsh | C# driver 3.0.0beta2 | Windows 2012 R2
>Reporter: Kishan Karunaratne
>Assignee: Benjamin Lerer
>Priority: Minor
>  Labels: protocolv5
> Fix For: 4.0
>
> Attachments: 
> 0001-Allow-null-values-in-filtered-searches-reuse-Operato.patch
>
>
> This is an issue I first noticed through the C# driver, but I was able to 
> repro on cqlsh, leading me to believe this is a Cassandra bug.
> Given the following schema:
> {noformat}
> CREATE TABLE "TestKeySpace_4928dc892922"."coolMovies" (
> unique_movie_title text,
> movie_maker text,
> director text,
> list list,
> "mainGuy" text,
> "yearMade" int,
> PRIMARY KEY ((unique_movie_title, movie_maker), director)
> ) WITH CLUSTERING ORDER BY (director ASC)
> {noformat}
> Executing a SELECT with FILTERING on a non-PK column, using a NULL as the 
> argument:
> {noformat}
> SELECT "mainGuy", "movie_maker", "unique_movie_title", "list", "director", 
> "yearMade" FROM "coolMovies" WHERE "mainGuy" = null ALLOW FILTERING
> {noformat}
> returns a ReadFailure exception:
> {noformat}
> cqlsh:TestKeySpace_4c8f2cf8d5cc> SELECT "mainGuy", "movie_maker", 
> "unique_movie_title", "list", "director", "yearMade" FROM "coolMovies" WHERE 
> "mainGuy" = null ALLOW FILTERING;
> Traceback (most recent call last):
>   File "C:\Users\Kishan\.ccm\repository\3.0.0\bin\\cqlsh.py", line 1216, in 
> perform_simple_statement
> result = future.result()
>   File 
> "C:\Users\Kishan\.ccm\repository\3.0.0\bin\..\lib\cassandra-driver-internal-only-3.0.0a3.post0-3f15725.zip\cassandra-driver-3.0.0a3.post0-3f15725\cassandra\cluster.py",
>  line 3118, in result
> raise self._final_exception
> ReadFailure: code=1300 [Replica(s) failed to execute read] message="Operation 
> failed - received 0 responses and 1 failures" info={'failures': 1, 
> 'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}
> {noformat}
> Cassandra log shows:
> {noformat}
> WARN  [SharedPool-Worker-2] 2015-11-16 13:51:00,259 
> AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-2,10,main]: {}
> java.lang.AssertionError: null
>   at 
> org.apache.cassandra.db.filter.RowFilter$SimpleExpression.isSatisfiedBy(RowFilter.java:581)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToRow(RowFilter.java:243)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.transform.BaseRows.applyOne(BaseRows.java:95) 
> ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at org.apache.cassandra.db.transform.BaseRows.add(BaseRows.java:86) 
> ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.transform.UnfilteredRows.add(UnfilteredRows.java:21) 
> ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.transform.Transformation.add(Transformation.java:136) 
> ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.transform.Transformation.apply(Transformation.java:102)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToPartition(RowFilter.java:233)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToPartition(RowFilter.java:227)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:76)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:293)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:136)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:128)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:123)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:65) 
> ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:288) 
> ~[apache-cassandra-3.0.0.jar:3.0.0]
>   a

[jira] [Commented] (CASSANDRA-11303) New inbound throughput parameters for streaming

2016-06-24 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15348194#comment-15348194
 ] 

Paulo Motta commented on CASSANDRA-11303:
-

[~skonno] don't worry about the test failures, they're unrelated to this patch 
(unfortunately some tests are a bit flaky, but we're working on fixing them).

> New inbound throughput parameters for streaming
> ---
>
> Key: CASSANDRA-11303
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11303
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Configuration
>Reporter: Satoshi Konno
>Priority: Minor
> Attachments: 11303_inbound_limit_debug_20160419.log, 
> 11303_inbound_nolimit_debug_20160419.log, 
> 11303_inbound_patch_for_trunk_20160419.diff, 
> 11303_inbound_patch_for_trunk_20160525.diff, 
> 200vs40inboundstreamthroughput.png, cassandra_inbound_stream.diff
>
>
> Hi,
> To specify stream throughputs of a node more clearly, I would like to add the 
> following new inbound parameters like existing outbound parameters in the 
> cassandra.yaml.
> - stream_throughput_inbound_megabits_per_sec
> - inter_dc_stream_throughput_outbound_megabits_per_sec  
> We use only the existing outbound parameters now, but it is difficult to 
> control the total throughput of a node. In our production network, critical 
> alerts occur when a node exceeds the specified total throughput, which is 
> the sum of the input and output throughputs.
> In our operation of Cassandra, the alerts occur during bootstrap or repair 
> when a new node is added. In the worst case, we have to stop the offending 
> node.
> I have attached the patch under consideration. I would like to add a new 
> limiter class, StreamInboundRateLimiter, and use it in the 
> StreamDeserializer class. I use Row::dataSize() to get the input throughput 
> in StreamDeserializer::newPartition(), but I am not sure whether dataSize() 
> returns the correct data size.
> Can someone please tell me how to do it?
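The behavior Satoshi asks for — stall the inbound stream reader whenever received bytes would exceed a configured rate — can be modeled as a token bucket. This is an illustrative Python sketch only; the attached patch proposes a Java StreamInboundRateLimiter inside StreamDeserializer, and all names here are hypothetical:

```python
import time

class InboundRateLimiter:
    """Token-bucket model of an inbound throughput cap (illustration only)."""

    def __init__(self, bytes_per_sec):
        self.rate = float(bytes_per_sec)
        self.tokens = float(bytes_per_sec)  # start with a full bucket
        self.last = time.monotonic()

    def acquire(self, nbytes):
        """Block until `nbytes` of inbound data is allowed through."""
        while True:
            now = time.monotonic()
            # Refill tokens for the elapsed interval, capped at one second's
            # worth so idle periods cannot build up an unbounded burst.
            self.tokens = min(self.rate,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= nbytes:
                self.tokens -= nbytes
                return
            time.sleep((nbytes - self.tokens) / self.rate)

# A deserializer loop would call limiter.acquire(<row byte count>) per row.
limiter = InboundRateLimiter(1024 * 1024)  # cap inbound stream at 1 MiB/s
limiter.acquire(4096)                      # a 4 KiB chunk passes immediately
```

Whether Row::dataSize() is the right per-row byte count to feed into such an acquire() call is exactly the open question in the message above.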



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10715) Allow filtering on NULL

2016-06-24 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15348177#comment-15348177
 ] 

Alex Petrov commented on CASSANDRA-10715:
-

We have more issues that would require v5 by now, although it seems that only 
[CASSANDRA-10786] has a patch available so far. Should we consider moving 
forward with v5 and this change?
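For context, the AssertionError in this ticket comes from RowFilter evaluating a comparison whose right-hand argument is null. One possible semantics — sketched here in Python purely for illustration, not Cassandra's implementation — is to treat {{= null}} as "column is unset" instead of asserting:

```python
def satisfied_by(op, column_value, argument):
    """Null-aware predicate check (hypothetical semantics for this ticket)."""
    if argument is None:
        # 'col = null' matches rows where the column is unset,
        # rather than tripping an assertion on the server.
        return op == '=' and column_value is None
    if column_value is None:
        return False  # an unset column matches no non-null argument
    if op == '=':
        return column_value == argument
    raise ValueError('unsupported operator: %s' % op)

# WHERE "mainGuy" = null ALLOW FILTERING over two rows:
rows = [{'mainGuy': None}, {'mainGuy': 'Neo'}]
matches = [r for r in rows if satisfied_by('=', r['mainGuy'], None)]
```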

> Allow filtering on NULL
> ---
>
> Key: CASSANDRA-10715
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10715
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
> Environment: C* 3.0.0 | cqlsh | C# driver 3.0.0beta2 | Windows 2012 R2
>Reporter: Kishan Karunaratne
>Assignee: Benjamin Lerer
>Priority: Minor
> Fix For: 3.0.x, 3.x
>
> Attachments: 
> 0001-Allow-null-values-in-filtered-searches-reuse-Operato.patch
>
>
> This is an issue I first noticed through the C# driver, but I was able to 
> repro on cqlsh, leading me to believe this is a Cassandra bug.
> Given the following schema:
> {noformat}
> CREATE TABLE "TestKeySpace_4928dc892922"."coolMovies" (
> unique_movie_title text,
> movie_maker text,
> director text,
> list list,
> "mainGuy" text,
> "yearMade" int,
> PRIMARY KEY ((unique_movie_title, movie_maker), director)
> ) WITH CLUSTERING ORDER BY (director ASC)
> {noformat}
> Executing a SELECT with FILTERING on a non-PK column, using a NULL as the 
> argument:
> {noformat}
> SELECT "mainGuy", "movie_maker", "unique_movie_title", "list", "director", 
> "yearMade" FROM "coolMovies" WHERE "mainGuy" = null ALLOW FILTERING
> {noformat}
> returns a ReadFailure exception:
> {noformat}
> cqlsh:TestKeySpace_4c8f2cf8d5cc> SELECT "mainGuy", "movie_maker", 
> "unique_movie_title", "list", "director", "yearMade" FROM "coolMovies" WHERE 
> "mainGuy" = null ALLOW FILTERING;
> Traceback (most recent call last):
>   File "C:\Users\Kishan\.ccm\repository\3.0.0\bin\\cqlsh.py", line 1216, in 
> perform_simple_statement
> result = future.result()
>   File 
> "C:\Users\Kishan\.ccm\repository\3.0.0\bin\..\lib\cassandra-driver-internal-only-3.0.0a3.post0-3f15725.zip\cassandra-driver-3.0.0a3.post0-3f15725\cassandra\cluster.py",
>  line 3118, in result
> raise self._final_exception
> ReadFailure: code=1300 [Replica(s) failed to execute read] message="Operation 
> failed - received 0 responses and 1 failures" info={'failures': 1, 
> 'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}
> {noformat}
> Cassandra log shows:
> {noformat}
> WARN  [SharedPool-Worker-2] 2015-11-16 13:51:00,259 
> AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-2,10,main]: {}
> java.lang.AssertionError: null
>   at 
> org.apache.cassandra.db.filter.RowFilter$SimpleExpression.isSatisfiedBy(RowFilter.java:581)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToRow(RowFilter.java:243)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.transform.BaseRows.applyOne(BaseRows.java:95) 
> ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at org.apache.cassandra.db.transform.BaseRows.add(BaseRows.java:86) 
> ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.transform.UnfilteredRows.add(UnfilteredRows.java:21) 
> ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.transform.Transformation.add(Transformation.java:136) 
> ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.transform.Transformation.apply(Transformation.java:102)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToPartition(RowFilter.java:233)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToPartition(RowFilter.java:227)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:76)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:293)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:136)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:128)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:123)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:

[jira] [Assigned] (CASSANDRA-11990) Address rows rather than partitions in SASI

2016-06-24 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov reassigned CASSANDRA-11990:
---

Assignee: Alex Petrov

> Address rows rather than partitions in SASI
> ---
>
> Key: CASSANDRA-11990
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11990
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Alex Petrov
>Assignee: Alex Petrov
>
> Currently, a lookup in a SASI index returns the key position of the 
> partition. After the partition lookup, the rows are iterated and the 
> operators are applied in order to filter out the ones that do not match.
> bq. TokenTree which accepts variable size keys (such would enable different 
> partitioners, collections support, primary key indexing etc.), 





[jira] [Assigned] (CASSANDRA-11182) Enable SASI index for collections

2016-06-24 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov reassigned CASSANDRA-11182:
---

Assignee: (was: Alex Petrov)

> Enable SASI index for collections
> -
>
> Key: CASSANDRA-11182
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11182
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: DOAN DuyHai
>Priority: Minor
>
> This is a follow-up ticket for the post-Cassandra-3.4 SASI integration.
> Right now it is possible with a standard Cassandra secondary index to:
> 1. index list and set elements ( {{WHERE list CONTAINS xxx}})
> 2. index map keys ( {{WHERE map CONTAINS KEY 'abc'}} )
> 3. index map entries ( {{WHERE map\['key'\]=value}})
>  It would be nice to enable these features in SASI too.
>  With regard to tokenizing, we might want to allow wildcards ({{%}}) with the 
> CONTAINS syntax as well as with index map entries. Ex:
> * {{WHERE list CONTAINS 'John%'}}
> * {{WHERE map CONTAINS KEY '%an%'}}
> * {{WHERE map\['key'\] LIKE '%val%'}}
> /cc [~xedin] [~rustyrazorblade] [~jkrupan]
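If {{%}} keeps its usual LIKE meaning of "any run of characters", the queries above reduce to anchored pattern matches over collection elements. A Python sketch of the matching semantics only — SASI's actual matching lives in the Java index code, and this helper is hypothetical:

```python
import re

def like(pattern, value):
    """CQL-style LIKE: '%' matches any run of characters; everything
    else in the pattern is literal. Illustrative helper only."""
    regex = '^' + '.*'.join(re.escape(part) for part in pattern.split('%')) + '$'
    return re.match(regex, value) is not None

# WHERE list CONTAINS 'John%'
names = ['John Doe', 'Jane Doe']
starts_with_john = [n for n in names if like('John%', n)]

# WHERE map['key'] LIKE '%val%'
m = {'key': 'some value'}
value_matches = like('%val%', m['key'])
```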





[jira] [Commented] (CASSANDRA-12025) dtest failure in paging_test.TestPagingData.test_paging_with_filtering_on_counter_columns

2016-06-24 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15348160#comment-15348160
 ] 

Alex Petrov commented on CASSANDRA-12025:
-

I could not reproduce this on different patch levels. In order to distinguish 
counter failures from paging failures, I've added the current page size to 
the error message in 
[dtest|https://github.com/riptano/cassandra-dtest/pull/1061]. 

It seems, though, that the counter was undercounted and the result was 
incorrect.

> dtest failure in 
> paging_test.TestPagingData.test_paging_with_filtering_on_counter_columns
> -
>
> Key: CASSANDRA-12025
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12025
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Alex Petrov
>  Labels: dtest
> Fix For: 3.x
>
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_dtest/1276/testReport/paging_test/TestPagingData/test_paging_with_filtering_on_counter_columns
> Failed on CassCI build trunk_dtest #1276
> {code}
> Error Message
> Lists differ: [[4, 7, 8, 9], [4, 9, 10, 11]] != [[4, 7, 8, 9], [4, 8, 9, 10], 
> ...
> First differing element 1:
> [4, 9, 10, 11]
> [4, 8, 9, 10]
> Second list contains 1 additional elements.
> First extra element 2:
> [4, 9, 10, 11]
> - [[4, 7, 8, 9], [4, 9, 10, 11]]
> + [[4, 7, 8, 9], [4, 8, 9, 10], [4, 9, 10, 11]]
> ?+++  
> {code}
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/tools.py", line 288, in wrapped
> f(obj)
>   File "/home/automaton/cassandra-dtest/paging_test.py", line 1148, in 
> test_paging_with_filtering_on_counter_columns
> self._test_paging_with_filtering_on_counter_columns(session, True)
>   File "/home/automaton/cassandra-dtest/paging_test.py", line 1107, in 
> _test_paging_with_filtering_on_counter_columns
> [4, 9, 10, 11]])
>   File "/usr/lib/python2.7/unittest/case.py", line 513, in assertEqual
> assertion_func(first, second, msg=msg)
>   File "/usr/lib/python2.7/unittest/case.py", line 742, in assertListEqual
> self.assertSequenceEqual(list1, list2, msg, seq_type=list)
>   File "/usr/lib/python2.7/unittest/case.py", line 724, in assertSequenceEqual
> self.fail(msg)
>   File "/usr/lib/python2.7/unittest/case.py", line 410, in fail
> raise self.failureException(msg)
> {code}
> Logs are attached.





[jira] [Commented] (CASSANDRA-12016) Create MessagingService mocking classes

2016-06-24 Thread Stefan Podkowinski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15348135#comment-15348135
 ] 

Stefan Podkowinski commented on CASSANDRA-12016:


I've now implemented two more tests for CASSANDRA-3486. I'm quite happy with 
the result, especially 
[RepairRunnableTest.java|https://github.com/spodkowinski/cassandra/blob/WIP-3486/test/unit/org/apache/cassandra/repair/RepairRunnableTest.java].
 The tested code is not trivial and was not covered by any unit tests before. 
I'd personally prefer this approach to having to write dtests, not to mention 
that it takes less than a second to run the Java test.
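The black-box style described here — drive a component purely through messages in and out — can be modeled with a tiny mock transport. This is a hypothetical Python sketch of the idea; the real classes target Cassandra's Java MessagingService:

```python
class MessagingServiceMock:
    """Records outbound messages and lets a test inject inbound ones."""

    def __init__(self):
        self.sent = []      # (verb, payload) pairs the component sent
        self.handlers = {}  # verb -> handler for inbound messages

    def register(self, verb, handler):
        self.handlers[verb] = handler

    def send(self, verb, payload):
        self.sent.append((verb, payload))

    def deliver(self, verb, payload):
        # Simulate a message arriving from a peer node.
        return self.handlers[verb](payload)

# Component under test: answers every PING with a PONG carrying the payload.
def make_ping_handler(ms):
    def on_ping(payload):
        ms.send('PONG', payload)
    return on_ping

ms = MessagingServiceMock()
ms.register('PING', make_ping_handler(ms))
ms.deliver('PING', 'hello')
```

The test never opens a socket: it injects inbound messages and asserts on `ms.sent`, which is what makes this approach so much faster than a dtest.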

> Create MessagingService mocking classes
> ---
>
> Key: CASSANDRA-12016
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12016
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Testing
>Reporter: Stefan Podkowinski
>Assignee: Stefan Podkowinski
>
> Interactions between clients and nodes in the cluster take place by 
> exchanging messages through the {{MessagingService}}. Black-box testing for 
> message-based systems is usually pretty easy, as we're just dealing with 
> messages in/out. My suggestion would be to add tests that make use of this 
> fact by mocking message exchanges via MessagingService. Given the right use 
> case, this would turn out to be a much simpler and more efficient 
> alternative to dtests.





[jira] [Comment Edited] (CASSANDRA-12016) Create MessagingService mocking classes

2016-06-24 Thread Stefan Podkowinski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15333973#comment-15333973
 ] 

Stefan Podkowinski edited comment on CASSANDRA-12016 at 6/24/16 10:38 AM:
--

Please find the suggested implementation in the linked WIP branch. An example 
of how a unit test using those classes looks can be found 
[here|https://github.com/spodkowinski/cassandra/blob/WIP-11960/test/unit/org/apache/cassandra/hints/HintsServiceTest.java].
 I'm looking forward to any feedback.



was (Author: spo...@gmail.com):
Please find the suggested implementation in the linked WIP branch. An example 
how a unit test using those classes looks like can be found 
[here|https://github.com/spodkowinski/cassandra/blob/3cd4ef203cd147713a6f8c4b1466703436124e0b/test/unit/org/apache/cassandra/hints/HintsServiceTest.java].
 I'm looking forward for any feedback.


> Create MessagingService mocking classes
> ---
>
> Key: CASSANDRA-12016
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12016
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Testing
>Reporter: Stefan Podkowinski
>Assignee: Stefan Podkowinski
>
> Interactions between clients and nodes in the cluster take place by 
> exchanging messages through the {{MessagingService}}. Black-box testing for 
> message-based systems is usually pretty easy, as we're just dealing with 
> messages in/out. My suggestion would be to add tests that make use of this 
> fact by mocking message exchanges via MessagingService. Given the right use 
> case, this would turn out to be a much simpler and more efficient 
> alternative to dtests.





[jira] [Assigned] (CASSANDRA-12083) Race condition during system.roles column family creation

2016-06-24 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe reassigned CASSANDRA-12083:
---

Assignee: Sam Tunnicliffe

> Race condition during system.roles column family creation
> -
>
> Key: CASSANDRA-12083
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12083
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Sharvanath Pathak
>Assignee: Sam Tunnicliffe
>
> There is an issue where Cassandra fails with the following exception on 
> startup:
> {noformat}
> DEBUG [InternalResponseStage:2] 2016-06-20 09:43:00,651 Schema.java:465 - 
> Adding 
> org.apache.cassandra.config.CFMetaData@2882b66d[cfId=5bc52802-de25-35ed-aeab-188eecebb090,ksName=system_auth,cfName=roles,flags=[COMPOUND],params=TableParams{comment=role
>  definitions, read_repair_chance=0.0, dclocal_read_repair_chance=0.0, 
> bloom_filter_fp_chance=0.01, crc_check_chance=1.0, gc_grace_seconds=7776000, 
> default_time_to_live=0, memtable_flush_period_in_ms=360, 
> min_index_interval=128, max_index_interval=2048, 
> speculative_retry=99PERCENTILE, caching={'keys' : 'ALL', 'rows_per_partition' 
> : 'NONE'}, 
> compaction=CompactionParams{class=org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy,
>  options={max_threshold=32, min_threshold=4}}, 
> compression=org.apache.cassandra.schema.CompressionParams@e6e0212, 
> extensions={}},comparator=comparator(),partitionColumns=[[] | [can_login 
> is_superuser salted_hash 
> member_of]],partitionKeyColumns=[ColumnDefinition{name=role, 
> type=org.apache.cassandra.db.marshal.UTF8Type, kind=PARTITION_KEY, 
> position=0}],clusteringColumns=[],keyValidator=org.apache.cassandra.db.marshal.UTF8Type,columnMetadata=[ColumnDefinition{name=role,
>  type=org.apache.cassandra.db.marshal.UTF8Type, kind=PARTITION_KEY, 
> position=0}, ColumnDefinition{name=salted_hash, 
> type=org.apache.cassandra.db.marshal.UTF8Type, kind=REGULAR, position=-1}, 
> ColumnDefinition{name=member_of, 
> type=org.apache.cassandra.db.marshal.SetType(org.apache.cassandra.db.marshal.UTF8Type),
>  kind=REGULAR, position=-1}, ColumnDefinition{name=can_login, 
> type=org.apache.cassandra.db.marshal.BooleanType, kind=REGULAR, position=-1}, 
> ColumnDefinition{name=is_superuser, 
> type=org.apache.cassandra.db.marshal.BooleanType, kind=REGULAR, 
> position=-1}],droppedColumns={},triggers=[],indexes=[]] to cfIdMap
> INFO  [InternalResponseStage:2] 2016-06-20 09:43:00,653 
> ColumnFamilyStore.java:381 - Initializing system_auth.roles
> DEBUG [InternalResponseStage:2] 2016-06-20 09:43:00,664 
> MigrationManager.java:556 - Gossiping my schema version 
> c2a2bb4f-7d31-3fb8-a216-00b41a643650
> DEBUG [InternalResponseStage:1] 2016-06-20 09:43:00,669 
> ColumnFamilyStore.java:831 - Enqueuing flush of keyspaces: 1566 (0%) on-heap, 
> 0 (0%) off-heap
> DEBUG [MemtableFlushWriter:2] 2016-06-20 09:43:00,669 Memtable.java:372 - 
> Writing Memtable-keyspaces@650010305(0.437KiB serialized bytes, 3 ops, 0%/0% 
> of on/off-heap limit)
> ERROR [main] 2016-06-20 09:43:00,670 CassandraDaemon.java:692 - Exception 
> encountered during startup
> java.lang.IllegalArgumentException: Unknown CF 
> 5bc52802-de25-35ed-aeab-188eecebb090
> at 
> org.apache.cassandra.db.Keyspace.getColumnFamilyStore(Keyspace.java:206) 
> ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.db.Keyspace.getColumnFamilyStore(Keyspace.java:199) 
> ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.cql3.restrictions.StatementRestrictions.(StatementRestrictions.java:168)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement$RawStatement.prepareRestrictions(SelectStatement.java:874)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement$RawStatement.prepare(SelectStatement.java:821)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.cql3.statements.SelectStatement$RawStatement.prepare(SelectStatement.java:809)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.auth.CassandraRoleManager.prepare(CassandraRoleManager.java:446)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.auth.CassandraRoleManager.setup(CassandraRoleManager.java:144)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.service.StorageService.doAuthSetup(StorageService.java:1084)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:1032)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:755)
>  ~[apache-cassandra-3.0.6.j

[jira] [Commented] (CASSANDRA-11303) New inbound throughput parameters for streaming

2016-06-24 Thread Satoshi Konno (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15348104#comment-15348104
 ] 

Satoshi Konno commented on CASSANDRA-11303:
---

Hi Paulo,

Many thanks for your review and refactoring. 

I checked the results of the dtests and unit tests, but some tests are still 
failing.
Please let me know if there is anything I can do.

> New inbound throughput parameters for streaming
> ---
>
> Key: CASSANDRA-11303
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11303
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Configuration
>Reporter: Satoshi Konno
>Priority: Minor
> Attachments: 11303_inbound_limit_debug_20160419.log, 
> 11303_inbound_nolimit_debug_20160419.log, 
> 11303_inbound_patch_for_trunk_20160419.diff, 
> 11303_inbound_patch_for_trunk_20160525.diff, 
> 200vs40inboundstreamthroughput.png, cassandra_inbound_stream.diff
>
>
> Hi,
> To specify stream throughputs of a node more clearly, I would like to add the 
> following new inbound parameters like existing outbound parameters in the 
> cassandra.yaml.
> - stream_throughput_inbound_megabits_per_sec
> - inter_dc_stream_throughput_outbound_megabits_per_sec  
> We use only the existing outbound parameters now, but it is difficult to 
> control the total throughput of a node. In our production network, critical 
> alerts occur when a node exceeds the specified total throughput, which is 
> the sum of the input and output throughputs.
> In our operation of Cassandra, the alerts occur during bootstrap or repair 
> when a new node is added. In the worst case, we have to stop the offending 
> node.
> I have attached the patch under consideration. I would like to add a new 
> limiter class, StreamInboundRateLimiter, and use it in the 
> StreamDeserializer class. I use Row::dataSize() to get the input throughput 
> in StreamDeserializer::newPartition(), but I am not sure whether dataSize() 
> returns the correct data size.
> Can someone please tell me how to do it?





[jira] [Commented] (CASSANDRA-12082) CommitLogStressTest failing post-CASSANDRA-8844

2016-06-24 Thread Branimir Lambov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15348102#comment-15348102
 ] 

Branimir Lambov commented on CASSANDRA-12082:
-

+1 on the change. These races should be fully fixed in CASSANDRA-10202.

> CommitLogStressTest failing post-CASSANDRA-8844
> ---
>
> Key: CASSANDRA-12082
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12082
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Joshua McKenzie
>Assignee: Joshua McKenzie
>Priority: Minor
> Fix For: 3.8
>
> Attachments: 0001-Fix-CommitLogStressTest.patch
>
>
> Test timing out after CASSANDRA-8844.





[jira] [Commented] (CASSANDRA-11993) Cannot read Snappy compressed tables with 3.6

2016-06-24 Thread Branimir Lambov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15348098#comment-15348098
 ] 

Branimir Lambov commented on CASSANDRA-11993:
-

You are right, whether or not OFF_HEAP is the right thing to do, this does 
become inconsistent with the rest of the machinery. 

Either choice would provide correctness, and I do not want to change it in 
other places in the code without some testing to show that this change makes 
sense everywhere. Updated patch here:

|[trunk, 
compressor-preferred|https://github.com/blambov/cassandra/tree/11993-preferred]|[utests|http://cassci.datastax.com/view/Dev/view/blambov/job/blambov-11993-preferred-testall/]|[dtests|http://cassci.datastax.com/view/Dev/view/blambov/job/blambov-11993-preferred-dtest/]|

> Cannot read Snappy compressed tables with 3.6
> -
>
> Key: CASSANDRA-11993
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11993
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Nimi Wariboko Jr.
>Assignee: Branimir Lambov
> Fix For: 3.6
>
>
> After upgrading to 3.6, I can no longer read/compact sstables compressed 
> with Snappy compression. The memtable_allocation_type makes no difference; 
> both offheap_buffers and heap_buffers cause the errors.
> {code}
> WARN  [SharedPool-Worker-5] 2016-06-10 15:45:18,731 
> AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-5,5,main]: {}
> org.xerial.snappy.SnappyError: [NOT_A_DIRECT_BUFFER] destination is not a 
> direct buffer
>   at org.xerial.snappy.Snappy.uncompress(Snappy.java:509) 
> ~[snappy-java-1.1.1.7.jar:na]
>   at 
> org.apache.cassandra.io.compress.SnappyCompressor.uncompress(SnappyCompressor.java:102)
>  ~[apache-cassandra-3.6.jar:3.6]
>   at 
> org.apache.cassandra.io.util.CompressedSegmentedFile$Mmap.readChunk(CompressedSegmentedFile.java:323)
>  ~[apache-cassandra-3.6.jar:3.6]
>   at org.apache.cassandra.cache.ChunkCache.load(ChunkCache.java:137) 
> ~[apache-cassandra-3.6.jar:3.6]
>   at org.apache.cassandra.cache.ChunkCache.load(ChunkCache.java:19) 
> ~[apache-cassandra-3.6.jar:3.6]
>   at 
> com.github.benmanes.caffeine.cache.BoundedLocalCache$BoundedLocalLoadingCache.lambda$new$0(BoundedLocalCache.java:2949)
>  ~[caffeine-2.2.6.jar:na]
>   at 
> com.github.benmanes.caffeine.cache.BoundedLocalCache.lambda$doComputeIfAbsent$15(BoundedLocalCache.java:1807)
>  ~[caffeine-2.2.6.jar:na]
>   at 
> java.util.concurrent.ConcurrentHashMap.compute(ConcurrentHashMap.java:1853) 
> ~[na:1.8.0_66]
>   at 
> com.github.benmanes.caffeine.cache.BoundedLocalCache.doComputeIfAbsent(BoundedLocalCache.java:1805)
>  ~[caffeine-2.2.6.jar:na]
>   at 
> com.github.benmanes.caffeine.cache.BoundedLocalCache.computeIfAbsent(BoundedLocalCache.java:1788)
>  ~[caffeine-2.2.6.jar:na]
>   at 
> com.github.benmanes.caffeine.cache.LocalCache.computeIfAbsent(LocalCache.java:97)
>  ~[caffeine-2.2.6.jar:na]
>   at 
> com.github.benmanes.caffeine.cache.LocalLoadingCache.get(LocalLoadingCache.java:66)
>  ~[caffeine-2.2.6.jar:na]
>   at 
> org.apache.cassandra.cache.ChunkCache$CachingRebufferer.rebuffer(ChunkCache.java:215)
>  ~[apache-cassandra-3.6.jar:3.6]
>   at 
> org.apache.cassandra.cache.ChunkCache$CachingRebufferer.rebuffer(ChunkCache.java:193)
>  ~[apache-cassandra-3.6.jar:3.6]
>   at 
> org.apache.cassandra.io.util.RandomAccessReader.reBufferAt(RandomAccessReader.java:78)
>  ~[apache-cassandra-3.6.jar:3.6]
>   at 
> org.apache.cassandra.io.util.RandomAccessReader.seek(RandomAccessReader.java:220)
>  ~[apache-cassandra-3.6.jar:3.6]
>   at 
> org.apache.cassandra.io.util.SegmentedFile.createReader(SegmentedFile.java:138)
>  ~[apache-cassandra-3.6.jar:3.6]
>   at 
> org.apache.cassandra.io.sstable.format.SSTableReader.getFileDataInput(SSTableReader.java:1779)
>  ~[apache-cassandra-3.6.jar:3.6]
>   at 
> org.apache.cassandra.db.columniterator.AbstractSSTableIterator.(AbstractSSTableIterator.java:103)
>  ~[apache-cassandra-3.6.jar:3.6]
>   at 
> org.apache.cassandra.db.columniterator.SSTableIterator.(SSTableIterator.java:44)
>  ~[apache-cassandra-3.6.jar:3.6]
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableReader.iterator(BigTableReader.java:72)
>  ~[apache-cassandra-3.6.jar:3.6]
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableReader.iterator(BigTableReader.java:65)
>  ~[apache-cassandra-3.6.jar:3.6]
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorWithLowerBound.initializeIterator(UnfilteredRowIteratorWithLowerBound.java:85)
>  ~[apache-cassandra-3.6.jar:3.6]
>   at 
> org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.maybeInit(LazilyInitializedUnfilteredRowIterator.java:48)
>  ~[apac

[jira] [Updated] (CASSANDRA-12004) Inconsistent timezone in logs

2016-06-24 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jérôme Mainaud updated CASSANDRA-12004:
---
Attachment: 12004-trunk.patch2.txt

My first intent was to just correct what I considered a bug, but I agree with 
your point of view.

Here is a patch for trunk. Line patterns are identical to STDOUT and SYSTEMLOG 
in logback.xml and logback-test.xml.

I kept the small pattern for logback-tools.xml, since there usually is a human 
being using the tool, but I changed it to print local time.
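For reference, aligning the appenders amounts to giving STDOUT the same layout as the SYSTEMLOG file appender. A hedged sketch (based on the stock conf/logback.xml layout; this is illustrative, not the attached 12004-trunk.patch2.txt):

{code:xml}
<!-- Sketch only: give STDOUT the same pattern as the file appender so
     both print local time with milliseconds. -->
<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
  <encoder>
    <pattern>%-5level [%thread] %date{ISO8601} %F:%L - %msg%n</pattern>
  </encoder>
</appender>
{code}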

> Inconsistent timezone in logs
> -
>
> Key: CASSANDRA-12004
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12004
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jérôme Mainaud
>Priority: Trivial
> Fix For: 2.1.x
>
> Attachments: 12004-trunk.patch2.txt, patch.txt
>
>
> An error in the provided logback.xml leads to inconsistent timestamp usage in logs.
> In log files, the local time zone is used.
> On the console, the UTC time zone is used (and milliseconds are missing).
> Example: the same log line (local time zone: CEST):
> in system.log
> {code}
> INFO  [main] 2016-06-14 14:01:51,638 StorageService.java:2081 - Node 
> localhost/127.0.0.1 state jump to NORMAL
> {code}
> in console
> {code}
> INFO  12:01:51 Node localhost/127.0.0.1 state jump to NORMAL
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11993) Cannot read Snappy compressed tables with 3.6

2016-06-24 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15348058#comment-15348058
 ] 

Stefania commented on CASSANDRA-11993:
--

Thanks for the patch. 

Are you positive we need to force {{BufferType.OFF_HEAP}} rather than letting 
the compressor choose via {{ChunkedReader.preferredBufferType()}}, as is done 
by RAR, {{ChecksummedDataInput}} and {{CompressedChecksummedDataInput}}? The 
LZ4 and Snappy compressors will request OFF_HEAP anyway.

I do agree that a changing buffer type is not a good thing, though, so if we 
force OFF_HEAP, what about the other call paths? Maybe another ticket?
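The failure mode is visible at the plain JDK level: a JNI decompressor such as Snappy can only address direct (off-heap) buffers, so handing it a heap buffer fails. A minimal JDK-only sketch; the {{requireDirect}} helper is illustrative and merely mirrors the precondition Snappy enforces, not the actual SnappyCompressor code:

```java
import java.nio.ByteBuffer;

public class BufferTypeCheck {
    // Illustrative stand-in for the check a native (JNI) decompressor
    // such as Snappy performs: the destination must be a direct buffer.
    static void requireDirect(ByteBuffer dst) {
        if (!dst.isDirect())
            throw new IllegalArgumentException(
                "NOT_A_DIRECT_BUFFER: destination is not a direct buffer");
    }

    public static void main(String[] args) {
        ByteBuffer offHeap = ByteBuffer.allocateDirect(64 * 1024); // OFF_HEAP
        ByteBuffer onHeap  = ByteBuffer.allocate(64 * 1024);       // ON_HEAP

        requireDirect(offHeap); // fine: direct buffer
        try {
            requireDirect(onHeap); // a heap buffer, as the chunk cache supplied
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
            // prints: NOT_A_DIRECT_BUFFER: destination is not a direct buffer
        }
    }
}
```

This is why forcing OFF_HEAP (or consistently honoring the compressor's preferred buffer type) makes the error disappear: the buffer handed to the decompressor never silently changes kind.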

> Cannot read Snappy compressed tables with 3.6
> -
>
> Key: CASSANDRA-11993
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11993
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Nimi Wariboko Jr.
>Assignee: Branimir Lambov
> Fix For: 3.6
>
>
> After upgrading to 3.6, I can no longer read/compact sstables compressed with 
> snappy compression. The memtable_allocation_type makes no difference; both 
> offheap_buffers and heap_buffers cause the errors.
> {code}
> WARN  [SharedPool-Worker-5] 2016-06-10 15:45:18,731 
> AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-5,5,main]: {}
> org.xerial.snappy.SnappyError: [NOT_A_DIRECT_BUFFER] destination is not a 
> direct buffer
>   at org.xerial.snappy.Snappy.uncompress(Snappy.java:509) 
> ~[snappy-java-1.1.1.7.jar:na]
>   at 
> org.apache.cassandra.io.compress.SnappyCompressor.uncompress(SnappyCompressor.java:102)
>  ~[apache-cassandra-3.6.jar:3.6]
>   at 
> org.apache.cassandra.io.util.CompressedSegmentedFile$Mmap.readChunk(CompressedSegmentedFile.java:323)
>  ~[apache-cassandra-3.6.jar:3.6]
>   at org.apache.cassandra.cache.ChunkCache.load(ChunkCache.java:137) 
> ~[apache-cassandra-3.6.jar:3.6]
>   at org.apache.cassandra.cache.ChunkCache.load(ChunkCache.java:19) 
> ~[apache-cassandra-3.6.jar:3.6]
>   at 
> com.github.benmanes.caffeine.cache.BoundedLocalCache$BoundedLocalLoadingCache.lambda$new$0(BoundedLocalCache.java:2949)
>  ~[caffeine-2.2.6.jar:na]
>   at 
> com.github.benmanes.caffeine.cache.BoundedLocalCache.lambda$doComputeIfAbsent$15(BoundedLocalCache.java:1807)
>  ~[caffeine-2.2.6.jar:na]
>   at 
> java.util.concurrent.ConcurrentHashMap.compute(ConcurrentHashMap.java:1853) 
> ~[na:1.8.0_66]
>   at 
> com.github.benmanes.caffeine.cache.BoundedLocalCache.doComputeIfAbsent(BoundedLocalCache.java:1805)
>  ~[caffeine-2.2.6.jar:na]
>   at 
> com.github.benmanes.caffeine.cache.BoundedLocalCache.computeIfAbsent(BoundedLocalCache.java:1788)
>  ~[caffeine-2.2.6.jar:na]
>   at 
> com.github.benmanes.caffeine.cache.LocalCache.computeIfAbsent(LocalCache.java:97)
>  ~[caffeine-2.2.6.jar:na]
>   at 
> com.github.benmanes.caffeine.cache.LocalLoadingCache.get(LocalLoadingCache.java:66)
>  ~[caffeine-2.2.6.jar:na]
>   at 
> org.apache.cassandra.cache.ChunkCache$CachingRebufferer.rebuffer(ChunkCache.java:215)
>  ~[apache-cassandra-3.6.jar:3.6]
>   at 
> org.apache.cassandra.cache.ChunkCache$CachingRebufferer.rebuffer(ChunkCache.java:193)
>  ~[apache-cassandra-3.6.jar:3.6]
>   at 
> org.apache.cassandra.io.util.RandomAccessReader.reBufferAt(RandomAccessReader.java:78)
>  ~[apache-cassandra-3.6.jar:3.6]
>   at 
> org.apache.cassandra.io.util.RandomAccessReader.seek(RandomAccessReader.java:220)
>  ~[apache-cassandra-3.6.jar:3.6]
>   at 
> org.apache.cassandra.io.util.SegmentedFile.createReader(SegmentedFile.java:138)
>  ~[apache-cassandra-3.6.jar:3.6]
>   at 
> org.apache.cassandra.io.sstable.format.SSTableReader.getFileDataInput(SSTableReader.java:1779)
>  ~[apache-cassandra-3.6.jar:3.6]
>   at 
> org.apache.cassandra.db.columniterator.AbstractSSTableIterator.(AbstractSSTableIterator.java:103)
>  ~[apache-cassandra-3.6.jar:3.6]
>   at 
> org.apache.cassandra.db.columniterator.SSTableIterator.(SSTableIterator.java:44)
>  ~[apache-cassandra-3.6.jar:3.6]
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableReader.iterator(BigTableReader.java:72)
>  ~[apache-cassandra-3.6.jar:3.6]
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableReader.iterator(BigTableReader.java:65)
>  ~[apache-cassandra-3.6.jar:3.6]
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorWithLowerBound.initializeIterator(UnfilteredRowIteratorWithLowerBound.java:85)
>  ~[apache-cassandra-3.6.jar:3.6]
>   at 
> org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.maybeInit(LazilyInitializedUnfilteredRowIterator.java:48)
>  ~[apache-cassandra-3.6.jar:3.6]
>   at 
> org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.computeNext(LazilyInitializedUnfilteredRowIterator

[jira] [Updated] (CASSANDRA-11993) Cannot read Snappy compressed tables with 3.6

2016-06-24 Thread Branimir Lambov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Branimir Lambov updated CASSANDRA-11993:

Reviewer: Stefania
  Status: Patch Available  (was: Open)

> Cannot read Snappy compressed tables with 3.6
> -
>
> Key: CASSANDRA-11993
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11993
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Nimi Wariboko Jr.
>Assignee: Branimir Lambov
> Fix For: 3.6
>
>
> After upgrading to 3.6, I can no longer read/compact sstables compressed with 
> snappy compression. The memtable_allocation_type makes no difference both 
> offheap_buffers and heap_buffers cause the errors.

[jira] [Commented] (CASSANDRA-11993) Cannot read Snappy compressed tables with 3.6

2016-06-24 Thread Branimir Lambov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15348031#comment-15348031
 ] 

Branimir Lambov commented on CASSANDRA-11993:
-

This was an oversight on my part -- the chunk cache should always use the same 
type of buffers, to make sure the buffer type does not change unexpectedly. Patch:
|[trunk 
patch|https://github.com/blambov/cassandra/tree/11993]|[utests|http://cassci.datastax.com/view/Dev/view/blambov/job/blambov-11993-testall/]|[dtests|http://cassci.datastax.com/view/Dev/view/blambov/job/blambov-11993-dtest/]|

> Cannot read Snappy compressed tables with 3.6
> -
>
> Key: CASSANDRA-11993
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11993
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Nimi Wariboko Jr.
>Assignee: Branimir Lambov
> Fix For: 3.6
>
>
> After upgrading to 3.6, I can no longer read/compact sstables compressed with 
> snappy compression. The memtable_allocation_type makes no difference both 
> offheap_buffers and heap_buffers cause the errors.

[jira] [Assigned] (CASSANDRA-11993) Cannot read Snappy compressed tables with 3.6

2016-06-24 Thread Branimir Lambov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Branimir Lambov reassigned CASSANDRA-11993:
---

Assignee: Branimir Lambov

> Cannot read Snappy compressed tables with 3.6
> -
>
> Key: CASSANDRA-11993
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11993
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Nimi Wariboko Jr.
>Assignee: Branimir Lambov
> Fix For: 3.6
>
>
> After upgrading to 3.6, I can no longer read/compact sstables compressed with 
> snappy compression. The memtable_allocation_type makes no difference both 
> offheap_buffers and heap_buffers cause the errors.

[jira] [Commented] (CASSANDRA-11993) Cannot read Snappy compressed tables with 3.6

2016-06-24 Thread Branimir Lambov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15347997#comment-15347997
 ] 

Branimir Lambov commented on CASSANDRA-11993:
-

The chunk cache will only take buffers from the pool at the size of the 
chunk, i.e. it will only do that if the chunk size is over 64k.

> Cannot read Snappy compressed tables with 3.6
> -
>
> Key: CASSANDRA-11993
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11993
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Nimi Wariboko Jr.
> Fix For: 3.6
>
>
> After upgrading to 3.6, I can no longer read/compact sstables compressed with 
> snappy compression. The memtable_allocation_type makes no difference both 
> offheap_buffers and heap_buffers cause the errors.

[jira] [Comment Edited] (CASSANDRA-3486) Node Tool command to stop repair

2016-06-24 Thread Stefan Podkowinski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15344556#comment-15344556
 ] 

Stefan Podkowinski edited comment on CASSANDRA-3486 at 6/24/16 8:36 AM:


I've now looked at this issue as a potential use case for CASSANDRA-12016 and 
added a test on top of it. The branch can be found at 
[WIP-3486|https://github.com/spodkowinski/cassandra/tree/WIP-3486] and the tests 
I'm talking about are in 
[ActiveRepairServiceMessagingTest.java|https://github.com/spodkowinski/cassandra/blob/WIP-3486/test/unit/org/apache/cassandra/service/ActiveRepairServiceMessagingTest.java]
 and 
[RepairRunnableTest.java|https://github.com/spodkowinski/cassandra/blob/WIP-3486/test/unit/org/apache/cassandra/repair/RepairRunnableTest.java].
 

My goal was to make coordination between different nodes in repair scenarios 
easier to test. The basic cases covered so far are pretty simple, but I'd like 
to add more edge cases in the future. Nonetheless, I wanted to share this early 
on in case [~pauloricardomg] and others have feedback on whether this approach 
would help make progress on this issue.




was (Author: spo...@gmail.com):
I've now looked at this issue as a potential use case for CASSANDRA-12016 and 
added a test on top of it. The branch can be found at 
[WIP-3486|https://github.com/spodkowinski/cassandra/tree/WIP-3486] and the test 
I'm talking about in 
[ActiveRepairServiceMessagingTest.java|https://github.com/spodkowinski/cassandra/blob/3a9ba2edcfe5a3a774089884d5fa7f4df4c9b70c/test/unit/org/apache/cassandra/service/ActiveRepairServiceMessagingTest.java].
 

My goal was to be able to make coordination between different nodes in repair 
scenarios easier to test. The basic cases covered so far are pretty simple, but 
I like to add more edge cases in the future. Nonetheless I wanted to share this 
early on in case [~pauloricardomg] and others have any feedback on how this 
approach would be helpful to make progress on this issue.



> Node Tool command to stop repair
> 
>
> Key: CASSANDRA-3486
> URL: https://issues.apache.org/jira/browse/CASSANDRA-3486
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
> Environment: JVM
>Reporter: Vijay
>Assignee: Paulo Motta
>Priority: Minor
>  Labels: repair
> Fix For: 2.1.x
>
> Attachments: 0001-stop-repair-3583.patch
>
>
> After CASSANDRA-1740, If the validation compaction is stopped then the repair 
> will hang. This ticket will allow users to kill the original repair.





[jira] [Commented] (CASSANDRA-12075) Include whether or not the client should retry the request when throwing a RequestExecutionException

2016-06-24 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15347976#comment-15347976
 ] 

Sylvain Lebresne commented on CASSANDRA-12075:
--

I think the idea is to indicate whether the query was idempotent in the first 
place or not. So a timeout on a counter update or a list append would say it 
isn't.

And I don't think that's crazy, since currently drivers often have to rely on 
the client declaring whether a query is idempotent, as they don't parse queries 
(and they kind of need to know, on timeout, for the purpose of retrying).

That said, I think it's mostly useful for timeout exceptions, not all 
{{RequestExecutionException}}s, as I believe the other exceptions are precise 
enough for clients to make their decision. I also don't think there are cases 
where we can meaningfully say a request should be retried on a different host.

But anyway, insofar as this ticket is about adding a boolean {{isIdempotent}} 
to timeout exceptions, I'm in favor of that. This is a protocol v5 thing 
though.
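As a purely hypothetical illustration of the shape such a flag could take on the client side (all names here are invented for this sketch, not the actual native-protocol or driver API):

```java
// Hypothetical sketch: a timeout error that carries whether the timed-out
// request was idempotent, so a retry policy can decide safely. The class
// and method names are invented for illustration only.
public class TimeoutInfo {
    private final boolean isIdempotent;

    public TimeoutInfo(boolean isIdempotent) {
        this.isIdempotent = isIdempotent;
    }

    // A retry policy would replay the request only when this returns true.
    public boolean safeToRetry() {
        return isIdempotent;
    }

    public static void main(String[] args) {
        TimeoutInfo counterUpdate = new TimeoutInfo(false); // counter add: replay double-counts
        TimeoutInfo plainInsert   = new TimeoutInfo(true);  // fixed-value INSERT: replay is safe
        System.out.println(counterUpdate.safeToRetry() + " " + plainInsert.safeToRetry());
        // prints: false true
    }
}
```

The point of carrying the flag in the exception rather than in client configuration is that the server knows the statement kind (counter update, list append, LWT) that the driver cannot see without parsing the query.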

> Include whether or not the client should retry the request when throwing a 
> RequestExecutionException
> 
>
> Key: CASSANDRA-12075
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12075
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Geoffrey Yu
>Assignee: Geoffrey Yu
>Priority: Minor
>
> Some requests that result in an error should not be retried by the client. 
> Right now if the client gets an error, it has no way of knowing whether or 
> not it should retry. We can include an extra field in each 
> {{RequestExecutionException}} that will indicate whether the client should 
> retry, retry on a different host, or not retry at all.





[jira] [Updated] (CASSANDRA-11907) 2i behaviour is different in different versions

2016-06-24 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-11907:

Reproduced In: 3.0.6, 2.1.14, 2.2.7  (was: 3.0.6, 2.2.7)
   Status: Patch Available  (was: Open)

> 2i behaviour is different in different versions
> ---
>
> Key: CASSANDRA-11907
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11907
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Tommy Stendahl
>Assignee: Alex Petrov
>
>  I think I have found more cases where 2i behaves differently in different 
> Cassandra versions; CASSANDRA-11510 solved one such case but I think there 
> are a few more.
> I get one behaviour with 2.1.14 and Trunk, which I think is the correct 
> one. With 2.2.7 and 3.0.6 the behaviour is different.
> To test this I used ccm to set up one-node clusters with the different 
> versions. I prepared each cluster with these commands:
> {code:sql}
> CREATE KEYSPACE test WITH replication = {'class': 'NetworkTopologyStrategy', 
> 'datacenter1': '1' };
> CREATE TABLE test.table1 (name text,class int,inter text,foo text,power 
> int,PRIMARY KEY (name, class, inter, foo)) WITH CLUSTERING ORDER BY (class 
> DESC, inter ASC);
> CREATE INDEX table1_power ON test.table1 (power) ;
> CREATE TABLE test.table2 (name text,class int,inter text,foo text,power 
> int,PRIMARY KEY (name, class, inter, foo)) WITH CLUSTERING ORDER BY (class 
> DESC, inter ASC);
> CREATE INDEX table2_inter ON test.table2 (inter) ;
> {code}
> I executed two select queries on each cluster:
> {code:sql}
> SELECT * FROM test.table1 where name='R1' AND class>0 AND class<4 AND 
> inter='int1' AND power=18 ALLOW FILTERING;
> SELECT * FROM test.table2 where name='R1' AND class>0 AND class<4 AND 
> inter='int1' AND foo='aa' ALLOW FILTERING;
> {code}
> On 2.1.14 and Trunk they were successful, but on 2.2.7 and 3.0.6 they
> failed, the first one with {{InvalidRequest: code=2200 [Invalid query] 
> message="Clustering column "inter" cannot be restricted (preceding column 
> "class" is restricted by a non-EQ relation)"}} and the second one with 
> {{InvalidRequest: code=2200 [Invalid query] message="Clustering column "foo" 
> cannot be restricted (preceding column "inter" is restricted by a non-EQ 
> relation)"}}.
> I could get the queries to execute successfully on 2.2.7 and 3.0.6 by 
> creating two more 2i:
> {code:sql}
> CREATE INDEX table1_inter ON test.table1 (inter) ;
> CREATE INDEX table2_foo ON test.table2 (foo) ;
> {code}





[jira] [Assigned] (CASSANDRA-9587) Serialize table schema as a sstable component

2016-06-24 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov reassigned CASSANDRA-9587:
--

Assignee: (was: Alex Petrov)

> Serialize table schema as a sstable component
> -
>
> Key: CASSANDRA-9587
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9587
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Sylvain Lebresne
> Fix For: 3.x
>
>
> Having the schema with each sstable would be tremendously useful for offline 
> tools and for debugging purposes.





[jira] [Commented] (CASSANDRA-12045) Cassandra failure during write query at consistency LOCAL_QUORUM

2016-06-24 Thread Raghavendra Pinninti (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15347873#comment-15347873
 ] 

Raghavendra Pinninti commented on CASSANDRA-12045:
--

I followed this link: 
https://support.datastax.com/hc/en-us/articles/207267063-Mutation-of-x-bytes-is-too-large-for-the-maxiumum-size-of-y-
and set commitlog_segment_size_in_mb: 64 on one node out of the three, then 
restarted it, but no luck :(
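One likely reason the change had no effect: the maximum mutation size is derived from the commit log segment size (by default, half of commitlog_segment_size_in_mb), and the setting is per-node, so it has to be raised in cassandra.yaml on every node in the cluster, each followed by a restart, not on a single node. A sketch of the relevant fragment (the value 64 is illustrative):

```yaml
# cassandra.yaml -- apply on EVERY node, then restart each node (rolling restart).
# The mutation size cap defaults to half of this value, so 64 MB segments
# allow mutations up to roughly 32 MB.
commitlog_segment_size_in_mb: 64
```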

>  Cassandra failure during write query at consistency LOCAL_QUORUM 
> --
>
> Key: CASSANDRA-12045
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12045
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL, Local Write-Read Paths
> Environment: Eclipse java environment
>Reporter: Raghavendra Pinninti
> Fix For: 3.x
>
>   Original Estimate: 12h
>  Remaining Estimate: 12h
>
> While I am writing an xml file into a Cassandra table column I am facing the 
> following exception. It is a 3-node cluster and all nodes are up.
> com.datastax.driver.core.exceptions.WriteFailureException: Cassandra failure 
> during write query at consistency LOCAL_QUORUM (2 responses were required but 
> only 0 replica responded, 1 failed)
> at com.datastax.driver.core.exceptions.WriteFailureException.copy(WriteFailureException.java:80)
> at com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
> at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:245)
> at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:55)
> at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:39)
> at DBConnection.oracle2Cassandra(DBConnection.java:267)
> at DBConnection.main(DBConnection.java:292)
> Caused by: com.datastax.driver.core.exceptions.WriteFailureException: Cassandra 
> failure during write query at consistency LOCAL_QUORUM (2 responses were 
> required but only 0 replica responded, 1 failed)
> at com.datastax.driver.core.exceptions.WriteFailureException.copy(WriteFailureException.java:91)
> at com.datastax.driver.core.Responses$Error.asException(Responses.java:119)
> at com.datastax.driver.core.DefaultResultSetFuture.onSet(DefaultResultSetFuture.java:180)
> at com.datastax.driver.core.RequestHandler.setFinalResult(RequestHandler.java:186)
> at com.datastax.driver.core.RequestHandler.access$2300(RequestHandler.java:44)
> at com.datastax.driver.core.RequestHandler$SpeculativeExecution.setFinalResult(RequestHandler.java:754)
> at com.datastax.driver.core.RequestHandler$SpeculativeExecution.onSet(RequestHandler.java:576)
> It would be great if someone could help me out of this situation. Thanks.


