[jira] [Resolved] (CASSANDRA-13791) unable to install apache-cassandra-3.11.0 in linux box

2017-08-23 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa resolved CASSANDRA-13791.

Resolution: Duplicate

This was caused (and resolved) by CASSANDRA-13072, where JNA was upgraded to 
{{4.4.0}}. That 4.4.0 JNA library was linked against a newer version of glibc 
({{GLIBC_2.14}}), which causes this error on systems with an older glibc. This 
is fixed in {{3.11.1}}, but you can work around the issue by downloading the 
4.2.2 JNA jar from 
[here|https://github.com/apache/cassandra/raw/00a777ec8ab701b843172e23a6cbdc4d6cf48f8d/lib/jna-4.2.2.jar]
 and placing it on the classpath ({{lib/}}), removing {{jna-4.4.0.jar}} in 
the process.
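
To confirm whether a given machine is affected, a quick standalone check like 
the following (a minimal sketch; {{JnaCheck}} is a hypothetical class name) 
will throw the same {{UnsatisfiedLinkError}} when run against the 4.4.0 jar 
and succeed against 4.2.2:

{code:java}
// Minimal sketch: referencing com.sun.jna.Native forces JNA to extract
// and link its native dispatch library. On an affected system this
// throws UnsatisfiedLinkError: version `GLIBC_2.14' not found.
import com.sun.jna.Native;

public class JnaCheck
{
    public static void main(String[] args)
    {
        System.out.println("JNA loaded, version " + Native.VERSION);
    }
}
{code}

Compile it and run with {{java -cp lib/jna-4.4.0.jar:. JnaCheck}} (and again 
with the 4.2.2 jar on the classpath) to see which one links.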


> unable to install apache-cassandra-3.11.0 in linux box
> ---
>
> Key: CASSANDRA-13791
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13791
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
> Environment: Linux
>Reporter: rajasekhar
>Priority: Critical
> Fix For: 3.11.0
>
> Attachments: log_cassdb.txt
>
>
> While installing Cassandra on a Linux server, I am getting the below error. 
> Could you please look into it and provide suggestions? PFA the attached log 
> for more information.
> [cassdb@alsc_dev_db bin]$ sh 
> /u01/Cassandra_home/apache-cassandra-3.11.0/bin/cassandra
> Error:
> ERROR [main] 2017-08-23 09:48:21,467 NativeLibraryLinux.java:62 - Failed to 
> link the C library against JNA. Native methods will be unavailable.
> java.lang.UnsatisfiedLinkError: 
> /tmp/jna--1367560132/jna4859101025087222330.tmp: /lib64/libc.so.6: version 
> `GLIBC_2.14' not found (required by 
> /tmp/jna--1367560132/jna4859101025087222330.tmp)
> at java.lang.ClassLoader$NativeLibrary.load(Native Method) 
> ~[na:1.8.0_71]
> at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1938) 
> ~[na:1.8.0_71]
>at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1821) 
> ~[na:1.8.0_71]
> at java.lang.Runtime.load0(Runtime.java:809) ~[na:1.8.0_71]
> at java.lang.System.load(System.java:1086) ~[na:1.8.0_71]
> at 
> com.sun.jna.Native.loadNativeDispatchLibraryFromClasspath(Native.java:947) 
> ~[jna-4.4.0.jar:4.4.0 (b0)]
> at com.sun.jna.Native.loadNativeDispatchLibrary(Native.java:922) 
> ~[jna-4.4.0.jar:4.4.0 (b0)]
> at com.sun.jna.Native.(Native.java:190) ~[jna-4.4.0.jar:4.4.0 
> (b0)]
> at 
> org.apache.cassandra.utils.NativeLibraryLinux.(NativeLibraryLinux.java:53)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.utils.NativeLibrary.(NativeLibrary.java:93) 
> [apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:196) 
> [apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:600)
>  [apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:689) 
> [apache-cassandra-3.11.0.jar:3.11.0]
> WARN  [main] 2017-08-23 09:48:21,468 StartupChecks.java:127 - jemalloc shared 
> library could not be preloaded to speed up memory allocations
> WARN  [main] 2017-08-23 09:48:21,469 StartupChecks.java:160 - JMX is not 
> enabled to receive remote connections. Please see cassandra-env.sh for more 
> info.
> ERROR [main] 2017-08-23 09:48:21,470 CassandraDaemon.java:706 - The native 
> library could not be initialized properly






[jira] [Created] (CASSANDRA-13791) unable to install apache-cassandra-3.11.0 in linux box

2017-08-23 Thread rajasekhar (JIRA)
rajasekhar created CASSANDRA-13791:
--

 Summary: unable to install apache-cassandra-3.11.0 in linux box
 Key: CASSANDRA-13791
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13791
 Project: Cassandra
  Issue Type: Bug
  Components: Configuration
 Environment: Linux
Reporter: rajasekhar
Priority: Critical
 Fix For: 3.11.0
 Attachments: log_cassdb.txt

While installing Cassandra on a Linux server, I am getting the below error. 
Could you please look into it and provide suggestions? PFA the attached log 
for more information.

[cassdb@alsc_dev_db bin]$ sh 
/u01/Cassandra_home/apache-cassandra-3.11.0/bin/cassandra

Error:

ERROR [main] 2017-08-23 09:48:21,467 NativeLibraryLinux.java:62 - Failed to 
link the C library against JNA. Native methods will be unavailable.
java.lang.UnsatisfiedLinkError: 
/tmp/jna--1367560132/jna4859101025087222330.tmp: /lib64/libc.so.6: version 
`GLIBC_2.14' not found (required by 
/tmp/jna--1367560132/jna4859101025087222330.tmp)
at java.lang.ClassLoader$NativeLibrary.load(Native Method) 
~[na:1.8.0_71]
at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1938) 
~[na:1.8.0_71]
   at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1821) 
~[na:1.8.0_71]
at java.lang.Runtime.load0(Runtime.java:809) ~[na:1.8.0_71]
at java.lang.System.load(System.java:1086) ~[na:1.8.0_71]
at 
com.sun.jna.Native.loadNativeDispatchLibraryFromClasspath(Native.java:947) 
~[jna-4.4.0.jar:4.4.0 (b0)]
at com.sun.jna.Native.loadNativeDispatchLibrary(Native.java:922) 
~[jna-4.4.0.jar:4.4.0 (b0)]
at com.sun.jna.Native.(Native.java:190) ~[jna-4.4.0.jar:4.4.0 
(b0)]
at 
org.apache.cassandra.utils.NativeLibraryLinux.(NativeLibraryLinux.java:53)
 ~[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.utils.NativeLibrary.(NativeLibrary.java:93) 
[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:196) 
[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:600) 
[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:689) 
[apache-cassandra-3.11.0.jar:3.11.0]
WARN  [main] 2017-08-23 09:48:21,468 StartupChecks.java:127 - jemalloc shared 
library could not be preloaded to speed up memory allocations
WARN  [main] 2017-08-23 09:48:21,469 StartupChecks.java:160 - JMX is not 
enabled to receive remote connections. Please see cassandra-env.sh for more 
info.
ERROR [main] 2017-08-23 09:48:21,470 CassandraDaemon.java:706 - The native 
library could not be initialized properly







[jira] [Commented] (CASSANDRA-10783) Allow literal value as parameter of UDF & UDA

2017-08-23 Thread Drew Kutcharian (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16139537#comment-16139537
 ] 

Drew Kutcharian commented on CASSANDRA-10783:
-

Any chance of this getting back-ported to 3.0.x?

> Allow literal value as parameter of UDF & UDA
> -
>
> Key: CASSANDRA-10783
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10783
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: DOAN DuyHai
>Assignee: Sylvain Lebresne
>Priority: Minor
>  Labels: CQL3, UDF, client-impacting, doc-impacting
> Fix For: 3.8
>
>
> I have defined the following UDF
> {code:sql}
> CREATE OR REPLACE FUNCTION maxOf(current int, testValue int) RETURNS NULL ON 
> NULL INPUT 
> RETURNS int 
> LANGUAGE java 
> AS 'return Math.max(current,testValue);';
> CREATE TABLE maxValue(id int primary key, val int);
> INSERT INTO maxValue(id, val) VALUES(1, 100);
> SELECT maxOf(val, 101) FROM maxValue WHERE id=1;
> {code}
> I got the following error message:
> {code}
> SyntaxException: <ErrorMessage code=2000 [Syntax error in CQL query] 
> message="line 1:19 no viable alternative at input '101' (SELECT maxOf(val1, 
> {code}
> It would be nice to allow literal values as parameters of UDFs and UDAs too.
> I was thinking about a use-case for a UDA groupBy() function where the end 
> user can *inject* a literal value at runtime to select which aggregation they 
> want to display, something similar to GROUP BY ... HAVING 






[jira] [Commented] (CASSANDRA-10496) Make DTCS/TWCS split partitions based on time during compaction

2017-08-23 Thread mck (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16139536#comment-16139536
 ] 

mck commented on CASSANDRA-10496:
-

Having completely forgotten about this ticket, I did the following (completely 
untested) experiment: 
https://github.com/thelastpickle/cassandra/tree/mck/cassandra-3.11_twcs-sstable-size

This takes a slightly different approach in that it splits compacted sstables 
based on the 'sstable_size_in_mb' option (which is really only used as a 
hint), but still sorts partitions over the splits by time. It wouldn't isolate 
specific "old" data into separate old sstables as the ticket description 
describes, but it helps in the situation where different TTLs are used within 
the same TWCS table, and would partially help in the ticket's described 
use-case. Along the same line of thinking, it's worth noting that 
CASSANDRA-10540 can help in some situations (depending on the extent and 
distribution of the problem).

> Make DTCS/TWCS split partitions based on time during compaction
> ---
>
> Key: CASSANDRA-10496
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10496
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>  Labels: dtcs
> Fix For: 4.x
>
>
> To avoid getting old data in new time windows with DTCS (or related, like 
> [TWCS|CASSANDRA-9666]), we need to split out old data into its own sstable 
> during compaction.
> My initial idea is to just create two sstables: when we create the compaction 
> task we state the start and end times of the window, and any data older than 
> the window is put into its own sstable.
> By creating a single sstable with old data, we will incrementally get the 
> windows correct - say we have an sstable with these timestamps:
> {{[100, 99, 98, 97, 75, 50, 10]}}
> and we are compacting in window {{[100, 80]}} - we would create two sstables:
> {{[100, 99, 98, 97]}}, {{[75, 50, 10]}}, and the first window is now 
> 'correct'. The next compaction would compact in window {{[80, 60]}} and 
> create sstables {{[75]}}, {{[50, 10]}} etc.
> We will probably also want to base the windows on the newest data in the 
> sstables so that we actually have older data than the window.
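
A minimal sketch of the split described in the example above (plain Java, not 
Cassandra's compaction code): everything at or above the window's lower bound 
stays together, and anything older is written out separately.

{code:java}
import java.util.List;

public class WindowSplit
{
    public static void main(String[] args)
    {
        // Timestamps from the example, compacting in window [100, 80].
        List<Long> timestamps = List.of(100L, 99L, 98L, 97L, 75L, 50L, 10L);
        long windowLowerBound = 80L;

        List<Long> inWindow = timestamps.stream()
                                        .filter(t -> t >= windowLowerBound)
                                        .toList();
        List<Long> older = timestamps.stream()
                                     .filter(t -> t < windowLowerBound)
                                     .toList();

        System.out.println(inWindow); // [100, 99, 98, 97]
        System.out.println(older);    // [75, 50, 10]
    }
}
{code}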






[jira] [Comment Edited] (CASSANDRA-11748) Schema version mismatch may lead to Cassandra OOM at bootstrap during a rolling upgrade process

2017-08-23 Thread Michael Fong (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16139527#comment-16139527
 ] 

Michael Fong edited comment on CASSANDRA-11748 at 8/24/17 4:08 AM:
---

Hi, [~mbyrd]

Thanks for looking further into this problem. And yes, the above-mentioned 
items were the symptoms we observed (and suspected) when the OOM issue 
happened. I fully agree with your analysis of this particular problem - thanks 
for answering everything so thoroughly! If we can locate the logs from back 
then, we will check and share them on this ticket. 

Thanks again for working on this issue!


was (Author: mcfongtw):
Hi, [~mbyrd]

Thanks for further looking into this problem. And, yes, above mentioned items 
were the symptoms we observed (and suspected) when OOM issue happened. I 
totally agree on your analysis on this particular problem - Thanks for 
answering them thoroughly! If we have located the log back them, we will check 
and share it on this ticket. 

Thanks again for working on this issue!

> Schema version mismatch may lead to Cassandra OOM at bootstrap during a 
> rolling upgrade process
> ---
>
> Key: CASSANDRA-11748
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11748
> Project: Cassandra
>  Issue Type: Bug
> Environment: Rolling upgrade process from 1.2.19 to 2.0.17. 
> CentOS 6.6
> Occurred in different C* node of different scale of deployment (2G ~ 5G)
>Reporter: Michael Fong
>Assignee: Matt Byrd
>Priority: Critical
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> We have observed, multiple times, a multi-node C* (v2.0.17) cluster running 
> into OOM at bootstrap during a rolling upgrade from 1.2.19 to 2.0.17. 
> Here is the outline of our rolling upgrade process:
> 1. Update the schema on a node, and wait until all nodes are in schema 
> version agreement - via nodetool describecluster
> 2. Restart a Cassandra node
> 3. After the restart, there is a chance that the restarted node has a 
> different schema version.
> 4. All nodes in the cluster start to rapidly exchange schema information, and 
> any node could run into OOM. 
> The following is the system.log from one of our 2-node cluster test 
> beds:
> --
> Before rebooting node 2:
> Node 1: DEBUG [MigrationStage:1] 2016-04-19 11:09:42,326 
> MigrationManager.java (line 328) Gossiping my schema version 
> 4cb463f8-5376-3baf-8e88-a5cc6a94f58f
> Node 2: DEBUG [MigrationStage:1] 2016-04-19 11:09:42,122 
> MigrationManager.java (line 328) Gossiping my schema version 
> 4cb463f8-5376-3baf-8e88-a5cc6a94f58f
> After rebooting node 2, 
> Node 2: DEBUG [main] 2016-04-19 11:18:18,016 MigrationManager.java (line 328) 
> Gossiping my schema version f5270873-ba1f-39c7-ab2e-a86db868b09b
> Node 2 keeps submitting the migration task to the other node, over 100 
> times.
> INFO [GossipStage:1] 2016-04-19 11:18:18,261 Gossiper.java (line 1011) Node 
> /192.168.88.33 has restarted, now UP
> INFO [GossipStage:1] 2016-04-19 11:18:18,262 TokenMetadata.java (line 414) 
> Updating topology for /192.168.88.33
> ...
> DEBUG [GossipStage:1] 2016-04-19 11:18:18,265 MigrationManager.java (line 
> 102) Submitting migration task for /192.168.88.33
> ... ( over 100+ times)
> --
> On the other hand, Node 1 keeps updating its gossip information, and then 
> receives and submits migration tasks: 
> INFO [RequestResponseStage:3] 2016-04-19 11:18:18,333 Gossiper.java (line 
> 978) InetAddress /192.168.88.34 is now UP
> ...
> DEBUG [MigrationStage:1] 2016-04-19 11:18:18,496 
> MigrationRequestVerbHandler.java (line 41) Received migration request from 
> /192.168.88.34.
> …… ( over 100+ times)
> DEBUG [OptionalTasks:1] 2016-04-19 11:19:18,337 MigrationManager.java (line 
> 127) submitting migration task for /192.168.88.34
> .  (over 50+ times)
> On a side note, we have 200+ column families defined in the Cassandra 
> database, which may be related to this amount of RPC traffic.
> P.S.2 The over-requested schema migration tasks eventually have 
> InternalResponseStage performing schema merge operations. Since each merge 
> requires a compaction, these operations are much slower to consume; the 
> back-pressure of incoming schema migration content objects thus consumes all 
> of the heap space and ultimately ends in OOM!
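
A hypothetical sketch of the kind of guard that would prevent this storm 
(illustrative only, not the actual fix): allow at most one in-flight schema 
pull per endpoint instead of submitting a new migration task on every gossip 
notification.

{code:java}
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class MigrationTaskGuard
{
    // Endpoints we are currently pulling a schema from.
    private final Set<String> inFlight = ConcurrentHashMap.newKeySet();

    // Returns false when a pull for this endpoint is already running,
    // so duplicate submissions are dropped instead of queued.
    public boolean trySubmit(String endpoint)
    {
        return inFlight.add(endpoint);
    }

    // Called when the schema merge for this endpoint completes.
    public void onComplete(String endpoint)
    {
        inFlight.remove(endpoint);
    }
}
{code}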






[jira] [Commented] (CASSANDRA-11748) Schema version mismatch may lead to Cassandra OOM at bootstrap during a rolling upgrade process

2017-08-23 Thread Michael Fong (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16139527#comment-16139527
 ] 

Michael Fong commented on CASSANDRA-11748:
--

Hi, [~mbyrd]

Thanks for looking further into this problem. And yes, the above-mentioned 
items were the symptoms we observed (and suspected) when the OOM issue 
happened. I fully agree with your analysis of this particular problem - thanks 
for answering everything so thoroughly! If we can locate the logs from back 
then, we will check and share them on this ticket. 

Thanks again for working on this issue!

> Schema version mismatch may lead to Cassandra OOM at bootstrap during a 
> rolling upgrade process
> ---
>
> Key: CASSANDRA-11748
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11748
> Project: Cassandra
>  Issue Type: Bug
> Environment: Rolling upgrade process from 1.2.19 to 2.0.17. 
> CentOS 6.6
> Occurred in different C* node of different scale of deployment (2G ~ 5G)
>Reporter: Michael Fong
>Assignee: Matt Byrd
>Priority: Critical
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> We have observed, multiple times, a multi-node C* (v2.0.17) cluster running 
> into OOM at bootstrap during a rolling upgrade from 1.2.19 to 2.0.17. 
> Here is the outline of our rolling upgrade process:
> 1. Update the schema on a node, and wait until all nodes are in schema 
> version agreement - via nodetool describecluster
> 2. Restart a Cassandra node
> 3. After the restart, there is a chance that the restarted node has a 
> different schema version.
> 4. All nodes in the cluster start to rapidly exchange schema information, and 
> any node could run into OOM. 
> The following is the system.log from one of our 2-node cluster test 
> beds:
> --
> Before rebooting node 2:
> Node 1: DEBUG [MigrationStage:1] 2016-04-19 11:09:42,326 
> MigrationManager.java (line 328) Gossiping my schema version 
> 4cb463f8-5376-3baf-8e88-a5cc6a94f58f
> Node 2: DEBUG [MigrationStage:1] 2016-04-19 11:09:42,122 
> MigrationManager.java (line 328) Gossiping my schema version 
> 4cb463f8-5376-3baf-8e88-a5cc6a94f58f
> After rebooting node 2, 
> Node 2: DEBUG [main] 2016-04-19 11:18:18,016 MigrationManager.java (line 328) 
> Gossiping my schema version f5270873-ba1f-39c7-ab2e-a86db868b09b
> Node 2 keeps submitting the migration task to the other node, over 100 
> times.
> INFO [GossipStage:1] 2016-04-19 11:18:18,261 Gossiper.java (line 1011) Node 
> /192.168.88.33 has restarted, now UP
> INFO [GossipStage:1] 2016-04-19 11:18:18,262 TokenMetadata.java (line 414) 
> Updating topology for /192.168.88.33
> ...
> DEBUG [GossipStage:1] 2016-04-19 11:18:18,265 MigrationManager.java (line 
> 102) Submitting migration task for /192.168.88.33
> ... ( over 100+ times)
> --
> On the other hand, Node 1 keeps updating its gossip information, and then 
> receives and submits migration tasks: 
> INFO [RequestResponseStage:3] 2016-04-19 11:18:18,333 Gossiper.java (line 
> 978) InetAddress /192.168.88.34 is now UP
> ...
> DEBUG [MigrationStage:1] 2016-04-19 11:18:18,496 
> MigrationRequestVerbHandler.java (line 41) Received migration request from 
> /192.168.88.34.
> …… ( over 100+ times)
> DEBUG [OptionalTasks:1] 2016-04-19 11:19:18,337 MigrationManager.java (line 
> 127) submitting migration task for /192.168.88.34
> .  (over 50+ times)
> On a side note, we have 200+ column families defined in the Cassandra 
> database, which may be related to this amount of RPC traffic.
> P.S.2 The over-requested schema migration tasks eventually have 
> InternalResponseStage performing schema merge operations. Since each merge 
> requires a compaction, these operations are much slower to consume; the 
> back-pressure of incoming schema migration content objects thus consumes all 
> of the heap space and ultimately ends in OOM!






[jira] [Updated] (CASSANDRA-13299) Potential OOMs and lock contention in write path streams

2017-08-23 Thread ZhaoYang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhaoYang updated CASSANDRA-13299:
-
Status: Awaiting Feedback  (was: Open)

> Potential OOMs and lock contention in write path streams
> 
>
> Key: CASSANDRA-13299
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13299
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Benjamin Roth
>Assignee: ZhaoYang
>
> I see a potential OOM when a stream (e.g. repair) goes through the write 
> path, as it does with MVs.
> StreamReceiveTask gets a bunch of SSTableReaders. These produce row 
> iterators, which in turn produce mutations. So every partition creates a 
> single mutation, which in the case of (very) big partitions can result in 
> (very) big mutations. Those are created on heap and stay there until they 
> have finished processing.
> I don't think it is necessary to create a single mutation for each partition. 
> Why don't we implement a PartitionUpdateGeneratorIterator that takes an 
> UnfilteredRowIterator and a max size and spits out PartitionUpdates to be 
> used to create and apply mutations?
> The max size should be something like min(reasonable_absolute_max_size, 
> max_mutation_size, commitlog_segment_size / 2). reasonable_absolute_max_size 
> could be something like 16 MB.
> A mutation shouldn't be too large, as it also affects MV partition locking. 
> The longer an MV partition is locked during a stream, the higher the chances 
> are that WTEs (write timeout exceptions) occur during streams.
> I could also imagine that a max number of updates per mutation, regardless of 
> size in bytes, could make sense to avoid lock contention.
> I'd love to get feedback and suggestions, incl. naming suggestions.
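
A minimal sketch of the chunking idea (hypothetical names; not Cassandra's 
internal API): wrap a row iterator and emit batches whose accumulated size 
stays under a cap, so each batch can become its own small mutation.

{code:java}
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.function.ToLongFunction;

public class ChunkingIterator<R> implements Iterator<List<R>>
{
    private final Iterator<R> rows;
    private final long maxBytes;
    private final ToLongFunction<R> sizeOf;

    public ChunkingIterator(Iterator<R> rows, long maxBytes, ToLongFunction<R> sizeOf)
    {
        this.rows = rows;
        this.maxBytes = maxBytes;
        this.sizeOf = sizeOf;
    }

    public boolean hasNext()
    {
        return rows.hasNext();
    }

    // Each batch gets at least one row and stops once maxBytes is
    // reached, so one huge partition becomes several bounded batches
    // instead of a single giant on-heap mutation.
    public List<R> next()
    {
        List<R> batch = new ArrayList<>();
        long size = 0;
        while (rows.hasNext() && (batch.isEmpty() || size < maxBytes))
        {
            R row = rows.next();
            size += sizeOf.applyAsLong(row);
            batch.add(row);
        }
        return batch;
    }
}
{code}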






[jira] [Comment Edited] (CASSANDRA-13299) Potential OOMs and lock contention in write path streams

2017-08-23 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134733#comment-16134733
 ] 

ZhaoYang edited comment on CASSANDRA-13299 at 8/24/17 2:20 AM:
---

[trunk|https://github.com/jasonstack/cassandra/commits/CASSANDRA-13299-trunk]
[dtest|https://github.com/riptano/cassandra-dtest/commits/CASSANDRA-13299]

{code}
Changes:

1. Throttle by the number of base unfiltereds; the default is 100.

2. A pair of open/close range tombstone markers could have any number of 
unshadowed rows in between. In the patch, when reaching the limit of each 
batch, if there is an open range-tombstone marker, it will generate a 
corresponding close marker. This avoids handling range-tombstone markers 
separately from rows, which would cost one more read-before-write for each 
pair of markers. It also helps to reduce the impact of a large range 
tombstone.

3. Partition deletion is only applied on the first mutation, to avoid reading 
the entire partition more than once.
{code}

Note:
A single partition deletion or range deletion could still cause a huge number 
of view rows to be removed, so the view mutation may fail to apply due to a 
WTE or max_mutation_size, but that can be resolved separately in 
CASSANDRA-12783. Here, I only address the issue of holding the entire 
partition in memory when repairing a base table with an MV.

Cherry-picked CASSANDRA-13787 to pass the dtests.


was (Author: jasonstack):
[trunk|https://github.com/jasonstack/cassandra/commits/CASSANDRA-13299-trunk]
[dtest|https://github.com/riptano/cassandra-dtest/commits/CASSANDRA-13299 ]

Changes:

1. Throttle by number of base unfiltered. default is 100. 
2. A pair of open/close range tombstone could have any number of unshadowed 
rows in between. In the patch, when reaching the limit of each batch, if there 
is an open range-tombstone-mark, it will generate a corresponding close marker 
for it. It's to avoid handling range-tombstone-mark separately from row which 
costs 1 more read-before-write for each pair of markers. This also help to 
reduce the impact of a large range tombstone.
3. Partition deletion is only applied on first mutation to avoid reading entire 
partition more than once.


Note:
One partition deletion or a range deletion could cause huge number of view rows 
to be removed, thus view mutation may fail to apply due to WTE or 
max_mutation_size, but it could be resolved separately in CASSANDRA-12783. 
Here, I only address the issue of holding entire partition into memory when 
repairing base with mv.

> Potential OOMs and lock contention in write path streams
> 
>
> Key: CASSANDRA-13299
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13299
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Benjamin Roth
>Assignee: ZhaoYang
>
> I see a potential OOM when a stream (e.g. repair) goes through the write 
> path, as it does with MVs.
> StreamReceiveTask gets a bunch of SSTableReaders. These produce row 
> iterators, which in turn produce mutations. So every partition creates a 
> single mutation, which in the case of (very) big partitions can result in 
> (very) big mutations. Those are created on heap and stay there until they 
> have finished processing.
> I don't think it is necessary to create a single mutation for each partition. 
> Why don't we implement a PartitionUpdateGeneratorIterator that takes an 
> UnfilteredRowIterator and a max size and spits out PartitionUpdates to be 
> used to create and apply mutations?
> The max size should be something like min(reasonable_absolute_max_size, 
> max_mutation_size, commitlog_segment_size / 2). reasonable_absolute_max_size 
> could be something like 16 MB.
> A mutation shouldn't be too large, as it also affects MV partition locking. 
> The longer an MV partition is locked during a stream, the higher the chances 
> are that WTEs (write timeout exceptions) occur during streams.
> I could also imagine that a max number of updates per mutation, regardless of 
> size in bytes, could make sense to avoid lock contention.
> I'd love to get feedback and suggestions, incl. naming suggestions.






[jira] [Commented] (CASSANDRA-13622) Better config validation/documentation

2017-08-23 Thread Kurt Greaves (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16139396#comment-16139396
 ] 

Kurt Greaves commented on CASSANDRA-13622:
--

Sorry, I missed the email for your last comment. CircleCI appears to have 
failed on 
[trunk|https://circleci.com/gh/jasonstack/cassandra/424#tests/containers/0]. 
It might be related, as it is testing DatabaseDescriptor. On that note, it 
probably wouldn't hurt to have some simple tests for these cases.

Also, the merge into trunk has an extra line in {{CHANGES.txt}} before the 
change; the {{3.0.15}} should be removed.

You might want to rebase as well (sorry for my slowness :/).

Also, I'm not a committer, so I will prod one to commit/review once the tests 
are written and passing.
Thanks.

> Better config validation/documentation
> --
>
> Key: CASSANDRA-13622
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13622
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
>Reporter: Kurt Greaves
>Assignee: ZhaoYang
>Priority: Minor
>  Labels: lhf
> Fix For: 4.0
>
>
> There are a number of properties in the yaml that are "in_mb", but that 
> resolve to bytes when calculated in {{DatabaseDescriptor.java}} and are 
> stored as ints. This means that their maximum value is 2047, as anything 
> higher overflows the int when converted to bytes.
> Where possible/reasonable we should convert these to longs, and store them 
> as longs. If there is no reason for the value to ever be >2047 we should at 
> least document that as the max value, or better yet make it an error to set 
> it higher than that. Note that although it's bad practice to increase a lot 
> of them to such high values, there may be cases where it is necessary, and in 
> those cases we should handle it appropriately rather than overflowing and 
> surprising the user - that is, causing it to break, but not in the way the 
> user expected it to :)
> Following are functions that currently could be at risk of the above:
> {code:java|title=DatabaseDescriptor.java}
> getThriftFramedTransportSize()
> getMaxValueSize()
> getCompactionLargePartitionWarningThreshold()
> getCommitLogSegmentSize()
> getNativeTransportMaxFrameSize()
> # These are in KB so max value of 2096128
> getBatchSizeWarnThreshold()
> getColumnIndexSize()
> getColumnIndexCacheSize()
> getMaxMutationSize()
> {code}
> Note we may not actually need to fix all of these, and there may be more. 
> This was just from a rough scan over the code.
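
A quick illustration of the overflow (plain Java, not project code): 2048 MB 
no longer fits in an int once converted to bytes, while long arithmetic is 
fine.

{code:java}
public class OverflowDemo
{
    public static void main(String[] args)
    {
        int mb = 2048;

        // int arithmetic: 2048 * 1024 * 1024 = 2^31, which overflows
        // a signed 32-bit int and wraps to -2147483648.
        int bytes = mb * 1024 * 1024;

        // long arithmetic gives the intended value, 2147483648.
        long safeBytes = mb * 1024L * 1024L;

        System.out.println(bytes);     // -2147483648
        System.out.println(safeBytes); // 2147483648
    }
}
{code}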






[jira] [Commented] (CASSANDRA-13418) Allow TWCS to ignore overlaps when dropping fully expired sstables

2017-08-23 Thread mck (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16139378#comment-16139378
 ] 

mck commented on CASSANDRA-13418:
-

[~rgerard]'s patch, again (a35b9e8):


|| branch || testall || dtest ||
| [trunk_13418|https://github.com/criteo-forks/cassandra/tree/CASSANDRA-13418]  
| [testall|https://circleci.com/gh/thelastpickle/cassandra/22]  | 
[dtest|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/214]
 |


> Allow TWCS to ignore overlaps when dropping fully expired sstables
> --
>
> Key: CASSANDRA-13418
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13418
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Corentin Chary
>  Labels: twcs
> Attachments: twcs-cleanup.png
>
>
> http://thelastpickle.com/blog/2016/12/08/TWCS-part1.html explains it well. If 
> you really want read-repairs, you're going to have sstables blocking the 
> expiration of other fully expired SSTables because they overlap.
> You can set unchecked_tombstone_compaction = true or set tombstone_threshold 
> to a very low value, and that will purge the blockers of old data that should 
> already have expired, thus removing the overlaps and allowing the other 
> SSTables to expire.
> The thing is that this is rather CPU-intensive and not optimal. If you have 
> time series, you might not care if all your data doesn't expire at exactly 
> the right time, or if data re-appears for some time, as long as it gets 
> deleted as soon as it can be. In this situation I believe it would be really 
> beneficial to allow users to simply ignore overlapping SSTables when looking 
> for fully expired ones.
> To the question: why would you need read-repairs?
> - Full repairs basically take longer than the TTL of the data on my dataset, 
> so they aren't really effective.
> - Even with a 10% chance of doing a repair, we found that this is enough 
> to greatly reduce the entropy of the most-used data (and if you have 
> timeseries, you're likely to have a dashboard running the same important 
> queries over and over again).
> - LOCAL_QUORUM is too expensive (needs >3 replicas), QUORUM is too slow.
> I'll try to come up with a patch demonstrating how this would work, try it on 
> our system and report the effects.
> cc: [~adejanovski], [~rgerard] as I know you worked on similar issues already.
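
A minimal sketch of the option being proposed (hypothetical names; not the 
actual patch): when overlaps are ignored, an sstable whose newest data is past 
its TTL is treated as fully expired even if other sstables overlap its range.

{code:java}
public class ExpiredCheck
{
    // overlapCount: number of other sstables overlapping this one.
    // maxLocalDeletionTime: when the newest cell in this sstable expires.
    public static boolean fullyExpired(int overlapCount,
                                       long maxLocalDeletionTime,
                                       long nowSeconds,
                                       boolean ignoreOverlaps)
    {
        boolean pastExpiry = maxLocalDeletionTime < nowSeconds;
        // Default TWCS behaviour: any overlap blocks expiration.
        // With ignoreOverlaps, being past expiry alone is enough.
        return pastExpiry && (ignoreOverlaps || overlapCount == 0);
    }
}
{code}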






[jira] [Commented] (CASSANDRA-13433) RPM distribution improvements and known issues

2017-08-23 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16139324#comment-16139324
 ] 

Michael Shuler commented on CASSANDRA-13433:


Feel free to attach your spec file, but I'd be most interested in the precise 
steps for a user to go from the stock CentOS/RHEL 6 AWS AMIs to installing and 
successfully starting the service and running cqlsh on these OS versions. If we 
go there, this needs to be documented on the download page. I don't believe 
just installing the Cassandra RPM will be able to pull dependencies, since it 
sounds like there is a step to add some other repository. I imagine the 
Cassandra versions that require JDK8 could also be problematic on CentOS 6?

Good starting points for a step-by-step how-to on installing Cassandra from RPM:
[CentOS 6 AMI|https://aws.amazon.com/marketplace/pp/B00A6KUVBW]
[RHEL 6 AMI|https://aws.amazon.com/marketplace/pp/B007ORSS8I]

I do think it would be interesting and helpful for users stuck on ancient OS 
versions, but it will require someone interested in those OS versions to do 
some testing, update the documentation, [add docker 
builds|https://github.com/apache/cassandra-builds] for these OS versions, and 
verify that what's built works properly (which might also be possible in 
docker). Thanks!

> RPM distribution improvements and known issues
> --
>
> Key: CASSANDRA-13433
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13433
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Packaging
>Reporter: Stefan Podkowinski
>Assignee: Stefan Podkowinski
>
> Starting with CASSANDRA-13252, new releases will be provided as both official 
> RPM and Debian packages. While the Debian packages are already well 
> established with our user base, the RPMs have just been released for the 
> first time and still require some attention. 
> Feel free to discuss RPM-related issues in this ticket and open a sub-task to 
> file a bug report. 
> Please note that native systemd support will be implemented with 
> CASSANDRA-13148 and this is not strictly an RPM-specific issue. We still 
> intend to offer non-systemd support based on the already working init scripts 
> that we ship. Therefore the first step is to make use of systemd backward 
> compatibility for SysV/LSB scripts, so we can provide RPMs for both systemd 
> and non-systemd environments.






[jira] [Commented] (CASSANDRA-13433) RPM distribution improvements and known issues

2017-08-23 Thread Nathaniel Tabernero (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16139278#comment-16139278
 ] 

Nathaniel Tabernero commented on CASSANDRA-13433:
-

We had the spec file depend on python27, which is from the centos-release-scl 
repository.

> RPM distribution improvements and known issues
> --
>
> Key: CASSANDRA-13433
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13433
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Packaging
>Reporter: Stefan Podkowinski
>Assignee: Stefan Podkowinski
>
> Starting with CASSANDRA-13252, new releases will be provided as both official 
> RPM and Debian packages. While the Debian packages are already well 
> established with our user base, the RPMs have just been released for the 
> first time and still require some attention. 
> Feel free to discuss RPM-related issues in this ticket and open a sub-task to 
> file a bug report. 
> Please note that native systemd support will be implemented with 
> CASSANDRA-13148 and this is not strictly an RPM-specific issue. We still 
> intend to offer non-systemd support based on the already working init scripts 
> that we ship. Therefore the first step is to make use of systemd backward 
> compatibility for SysV/LSB scripts, so we can provide RPMs for both systemd 
> and non-systemd environments.






[jira] [Updated] (CASSANDRA-13790) AssertionError in Bounds.java from withNewRight

2017-08-23 Thread Aaron Ten Clay (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Ten Clay updated CASSANDRA-13790:
---
Description: 
We are seeing this error logged on 1 of 6 nodes in our cluster. The triggering 
query is:

{noformat}
SELECT * FROM "CF" WHERE column1 IN ('00802F14247A') ALLOW FILTERING;
{noformat}

This is a legacy column family with a single row. The query works fine on the 
other 5 nodes. I can reproduce the issue directly in cqlsh.


{noformat}
ERROR [SharedPool-Worker-2] 2017-08-23 20:27:31,482 Message.java:542 - 
Unexpected exception during request; channel = [id: 0x50cdb7a4, 
L:/10.51.10.57:9042 - R:/10.51.10.41:34028]
java.lang.AssertionError: [min(-1),max(-9207579300395621549)]
at org.apache.cassandra.dht.Bounds.(Bounds.java:50) 
~[apache-cassandra-2.1.17.jar:2.1.17]
at org.apache.cassandra.dht.Bounds.(Bounds.java:43) 
~[apache-cassandra-2.1.17.jar:2.1.17]
at org.apache.cassandra.dht.Bounds.withNewRight(Bounds.java:156) 
~[apache-cassandra-2.1.17.jar:2.1.17]
at 
org.apache.cassandra.service.StorageProxy.getRangeSlice(StorageProxy.java:1768) 
~[apache-cassandra-2.1.17.jar:2.1.17]
at 
org.apache.cassandra.service.pager.RangeNamesQueryPager.queryNextPage(RangeNamesQueryPager.java:78)
 ~[apache-cassandra-2.1.17.jar:2.1.17]
at 
org.apache.cassandra.service.pager.AbstractQueryPager.fetchPage(AbstractQueryPager.java:85)
 ~[apache-cassandra-2.1.17.jar:2.1.17]
at 
org.apache.cassandra.service.pager.RangeNamesQueryPager.fetchPage(RangeNamesQueryPager.java:38)
 ~[apache-cassandra-2.1.17.jar:2.1.17]
at 
org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:231)
 ~[apache-cassandra-2.1.17.jar:2.1.17]
at 
org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:68)
 ~[apache-cassandra-2.1.17.jar:2.1.17]
at 
org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:238)
 ~[apache-cassandra-2.1.17.jar:2.1.17]
at 
org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:519)
 ~[apache-cassandra-2.1.17.jar:2.1.17]
at 
org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:138)
 ~[apache-cassandra-2.1.17.jar:2.1.17]
at 
org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:439)
 [apache-cassandra-2.1.17.jar:2.1.17]
at 
org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:335)
 [apache-cassandra-2.1.17.jar:2.1.17]
at 
io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
 [netty-all-4.0.44.Final.jar:4.0.44.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
 [netty-all-4.0.44.Final.jar:4.0.44.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:35)
 [netty-all-4.0.44.Final.jar:4.0.44.Final]
at 
io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:348)
 [netty-all-4.0.44.Final.jar:4.0.44.Final]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
[na:1.7.0_121]
at 
org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
 [apache-cassandra-2.1.17.jar:2.1.17]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
[apache-cassandra-2.1.17.jar:2.1.17]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_121]

{noformat}


  was:
We are seeing this error logged on 1 of 6 nodes in our cluster. The triggering 
query is:

{noformat}
SELECT * FROM "CF" WHERE column1 IN ('00802F14247A') ALLOW FILTERING;
{noformat}

This is a legacy column family with a single row. The query works fine on the 
other 5 nodes.


{noformat}
ERROR [SharedPool-Worker-2] 2017-08-23 20:27:31,482 Message.java:542 - 
Unexpected exception during request; channel = [id: 0x50cdb7a4, 
L:/10.51.10.57:9042 - R:/10.51.10.41:34028]
java.lang.AssertionError: [min(-1),max(-9207579300395621549)]
at org.apache.cassandra.dht.Bounds.(Bounds.java:50) 
~[apache-cassandra-2.1.17.jar:2.1.17]
at org.apache.cassandra.dht.Bounds.(Bounds.java:43) 
~[apache-cassandra-2.1.17.jar:2.1.17]
at org.apache.cassandra.dht.Bounds.withNewRight(Bounds.java:156) 
~[apache-cassandra-2.1.17.jar:2.1.17]
at 
org.apache.cassandra.service.StorageProxy.getRangeSlice(StorageProxy.java:1768) 
~[apache-cassandra-2.1.17.jar:2.1.17]
at 
org.apache.cassandra.service.pager.RangeNamesQueryPager.queryNextPage(RangeNamesQueryPager.java:78)
 ~[apache-cassandra-2.1.17.jar:2.1.17]
at 
org.apache.cassandra.service.pager.AbstractQueryPager.fetchPage(AbstractQueryPager.java:85)
 ~[apache-cassandra-2.1.17.jar:2.1.17]
  

[jira] [Updated] (CASSANDRA-13790) AssertionError in Bounds.java from withNewRight

2017-08-23 Thread Aaron Ten Clay (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Ten Clay updated CASSANDRA-13790:
---
Description: 
We are seeing this error logged on 1 of 6 nodes in our cluster. The triggering 
query is:

{noformat}
SELECT * FROM "CF" WHERE column1 IN ('00802F14247A') ALLOW FILTERING;
{noformat}

This is a legacy column family with a single row. The query works fine on the 
other 5 nodes.


{noformat}
ERROR [SharedPool-Worker-2] 2017-08-23 20:27:31,482 Message.java:542 - 
Unexpected exception during request; channel = [id: 0x50cdb7a4, 
L:/10.51.10.57:9042 - R:/10.51.10.41:34028]
java.lang.AssertionError: [min(-1),max(-9207579300395621549)]
at org.apache.cassandra.dht.Bounds.(Bounds.java:50) 
~[apache-cassandra-2.1.17.jar:2.1.17]
at org.apache.cassandra.dht.Bounds.(Bounds.java:43) 
~[apache-cassandra-2.1.17.jar:2.1.17]
at org.apache.cassandra.dht.Bounds.withNewRight(Bounds.java:156) 
~[apache-cassandra-2.1.17.jar:2.1.17]
at 
org.apache.cassandra.service.StorageProxy.getRangeSlice(StorageProxy.java:1768) 
~[apache-cassandra-2.1.17.jar:2.1.17]
at 
org.apache.cassandra.service.pager.RangeNamesQueryPager.queryNextPage(RangeNamesQueryPager.java:78)
 ~[apache-cassandra-2.1.17.jar:2.1.17]
at 
org.apache.cassandra.service.pager.AbstractQueryPager.fetchPage(AbstractQueryPager.java:85)
 ~[apache-cassandra-2.1.17.jar:2.1.17]
at 
org.apache.cassandra.service.pager.RangeNamesQueryPager.fetchPage(RangeNamesQueryPager.java:38)
 ~[apache-cassandra-2.1.17.jar:2.1.17]
at 
org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:231)
 ~[apache-cassandra-2.1.17.jar:2.1.17]
at 
org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:68)
 ~[apache-cassandra-2.1.17.jar:2.1.17]
at 
org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:238)
 ~[apache-cassandra-2.1.17.jar:2.1.17]
at 
org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:519)
 ~[apache-cassandra-2.1.17.jar:2.1.17]
at 
org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:138)
 ~[apache-cassandra-2.1.17.jar:2.1.17]
at 
org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:439)
 [apache-cassandra-2.1.17.jar:2.1.17]
at 
org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:335)
 [apache-cassandra-2.1.17.jar:2.1.17]
at 
io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
 [netty-all-4.0.44.Final.jar:4.0.44.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
 [netty-all-4.0.44.Final.jar:4.0.44.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:35)
 [netty-all-4.0.44.Final.jar:4.0.44.Final]
at 
io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:348)
 [netty-all-4.0.44.Final.jar:4.0.44.Final]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
[na:1.7.0_121]
at 
org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
 [apache-cassandra-2.1.17.jar:2.1.17]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
[apache-cassandra-2.1.17.jar:2.1.17]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_121]

{noformat}


  was:
We are seeing this error logged on 1 of 6 nodes in our cluster. I'm working to 
get the offending  CQL query text and will update this issue if/when I am able 
to suss out what application code is triggering this.


{noformat}
ERROR [SharedPool-Worker-2] 2017-08-23 20:27:31,482 Message.java:542 - 
Unexpected exception during request; channel = [id: 0x50cdb7a4, 
L:/10.51.10.57:9042 - R:/10.51.10.41:34028]
java.lang.AssertionError: [min(-1),max(-9207579300395621549)]
at org.apache.cassandra.dht.Bounds.(Bounds.java:50) 
~[apache-cassandra-2.1.17.jar:2.1.17]
at org.apache.cassandra.dht.Bounds.(Bounds.java:43) 
~[apache-cassandra-2.1.17.jar:2.1.17]
at org.apache.cassandra.dht.Bounds.withNewRight(Bounds.java:156) 
~[apache-cassandra-2.1.17.jar:2.1.17]
at 
org.apache.cassandra.service.StorageProxy.getRangeSlice(StorageProxy.java:1768) 
~[apache-cassandra-2.1.17.jar:2.1.17]
at 
org.apache.cassandra.service.pager.RangeNamesQueryPager.queryNextPage(RangeNamesQueryPager.java:78)
 ~[apache-cassandra-2.1.17.jar:2.1.17]
at 
org.apache.cassandra.service.pager.AbstractQueryPager.fetchPage(AbstractQueryPager.java:85)
 ~[apache-cassandra-2.1.17.jar:2.1.17]
at 
org.apache.cassandra.service.pager.RangeNamesQueryPager.fetchPage(RangeNamesQueryPager.java:38)
 

[jira] [Updated] (CASSANDRA-13790) AssertionError in Bounds.java from withNewRight

2017-08-23 Thread Aaron Ten Clay (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Ten Clay updated CASSANDRA-13790:
---
Description: 
We are seeing this error logged on 1 of 6 nodes in our cluster. I'm working to 
get the offending CQL query text and will update this issue if/when I am able 
to suss out what application code is triggering this.


{noformat}
ERROR [SharedPool-Worker-2] 2017-08-23 20:27:31,482 Message.java:542 - 
Unexpected exception during request; channel = [id: 0x50cdb7a4, 
L:/10.51.10.57:9042 - R:/10.51.10.41:34028]
java.lang.AssertionError: [min(-1),max(-9207579300395621549)]
at org.apache.cassandra.dht.Bounds.(Bounds.java:50) 
~[apache-cassandra-2.1.17.jar:2.1.17]
at org.apache.cassandra.dht.Bounds.(Bounds.java:43) 
~[apache-cassandra-2.1.17.jar:2.1.17]
at org.apache.cassandra.dht.Bounds.withNewRight(Bounds.java:156) 
~[apache-cassandra-2.1.17.jar:2.1.17]
at 
org.apache.cassandra.service.StorageProxy.getRangeSlice(StorageProxy.java:1768) 
~[apache-cassandra-2.1.17.jar:2.1.17]
at 
org.apache.cassandra.service.pager.RangeNamesQueryPager.queryNextPage(RangeNamesQueryPager.java:78)
 ~[apache-cassandra-2.1.17.jar:2.1.17]
at 
org.apache.cassandra.service.pager.AbstractQueryPager.fetchPage(AbstractQueryPager.java:85)
 ~[apache-cassandra-2.1.17.jar:2.1.17]
at 
org.apache.cassandra.service.pager.RangeNamesQueryPager.fetchPage(RangeNamesQueryPager.java:38)
 ~[apache-cassandra-2.1.17.jar:2.1.17]
at 
org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:231)
 ~[apache-cassandra-2.1.17.jar:2.1.17]
at 
org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:68)
 ~[apache-cassandra-2.1.17.jar:2.1.17]
at 
org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:238)
 ~[apache-cassandra-2.1.17.jar:2.1.17]
at 
org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:519)
 ~[apache-cassandra-2.1.17.jar:2.1.17]
at 
org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:138)
 ~[apache-cassandra-2.1.17.jar:2.1.17]
at 
org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:439)
 [apache-cassandra-2.1.17.jar:2.1.17]
at 
org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:335)
 [apache-cassandra-2.1.17.jar:2.1.17]
at 
io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
 [netty-all-4.0.44.Final.jar:4.0.44.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
 [netty-all-4.0.44.Final.jar:4.0.44.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:35)
 [netty-all-4.0.44.Final.jar:4.0.44.Final]
at 
io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:348)
 [netty-all-4.0.44.Final.jar:4.0.44.Final]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
[na:1.7.0_121]
at 
org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
 [apache-cassandra-2.1.17.jar:2.1.17]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
[apache-cassandra-2.1.17.jar:2.1.17]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_121]

{noformat}


  was:
We are seeing this error logged on one of 6 nodes in our cluster. I'm working 
to get the offending  CQL query text and will update this issue if/when I am 
able to suss out what application code is triggering this.


{noformat}
ERROR [SharedPool-Worker-2] 2017-08-23 20:27:31,482 Message.java:542 - 
Unexpected exception during request; channel = [id: 0x50cdb7a4, 
L:/10.51.10.57:9042 - R:/10.51.10.41:34028]
java.lang.AssertionError: [min(-1),max(-9207579300395621549)]
at org.apache.cassandra.dht.Bounds.(Bounds.java:50) 
~[apache-cassandra-2.1.17.jar:2.1.17]
at org.apache.cassandra.dht.Bounds.(Bounds.java:43) 
~[apache-cassandra-2.1.17.jar:2.1.17]
at org.apache.cassandra.dht.Bounds.withNewRight(Bounds.java:156) 
~[apache-cassandra-2.1.17.jar:2.1.17]
at 
org.apache.cassandra.service.StorageProxy.getRangeSlice(StorageProxy.java:1768) 
~[apache-cassandra-2.1.17.jar:2.1.17]
at 
org.apache.cassandra.service.pager.RangeNamesQueryPager.queryNextPage(RangeNamesQueryPager.java:78)
 ~[apache-cassandra-2.1.17.jar:2.1.17]
at 
org.apache.cassandra.service.pager.AbstractQueryPager.fetchPage(AbstractQueryPager.java:85)
 ~[apache-cassandra-2.1.17.jar:2.1.17]
at 
org.apache.cassandra.service.pager.RangeNamesQueryPager.fetchPage(RangeNamesQueryPager.java:38)
 ~[apache-cassandra-2.1.17.jar:2.1.17]
at 

[jira] [Created] (CASSANDRA-13790) AssertionError in Bounds.java from withNewRight

2017-08-23 Thread Aaron Ten Clay (JIRA)
Aaron Ten Clay created CASSANDRA-13790:
--

 Summary: AssertionError in Bounds.java from withNewRight
 Key: CASSANDRA-13790
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13790
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Ubuntu 12.04.5 LTS
Cassandra 2.1.17 | CQL spec 3.2.1 
Datastax Ruby driver 3.0.3
Reporter: Aaron Ten Clay


We are seeing this error logged on one of 6 nodes in our cluster. I'm working 
to get the offending CQL query text and will update this issue if/when I am 
able to suss out what application code is triggering this.


{noformat}
ERROR [SharedPool-Worker-2] 2017-08-23 20:27:31,482 Message.java:542 - 
Unexpected exception during request; channel = [id: 0x50cdb7a4, 
L:/10.51.10.57:9042 - R:/10.51.10.41:34028]
java.lang.AssertionError: [min(-1),max(-9207579300395621549)]
at org.apache.cassandra.dht.Bounds.(Bounds.java:50) 
~[apache-cassandra-2.1.17.jar:2.1.17]
at org.apache.cassandra.dht.Bounds.(Bounds.java:43) 
~[apache-cassandra-2.1.17.jar:2.1.17]
at org.apache.cassandra.dht.Bounds.withNewRight(Bounds.java:156) 
~[apache-cassandra-2.1.17.jar:2.1.17]
at 
org.apache.cassandra.service.StorageProxy.getRangeSlice(StorageProxy.java:1768) 
~[apache-cassandra-2.1.17.jar:2.1.17]
at 
org.apache.cassandra.service.pager.RangeNamesQueryPager.queryNextPage(RangeNamesQueryPager.java:78)
 ~[apache-cassandra-2.1.17.jar:2.1.17]
at 
org.apache.cassandra.service.pager.AbstractQueryPager.fetchPage(AbstractQueryPager.java:85)
 ~[apache-cassandra-2.1.17.jar:2.1.17]
at 
org.apache.cassandra.service.pager.RangeNamesQueryPager.fetchPage(RangeNamesQueryPager.java:38)
 ~[apache-cassandra-2.1.17.jar:2.1.17]
at 
org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:231)
 ~[apache-cassandra-2.1.17.jar:2.1.17]
at 
org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:68)
 ~[apache-cassandra-2.1.17.jar:2.1.17]
at 
org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:238)
 ~[apache-cassandra-2.1.17.jar:2.1.17]
at 
org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:519)
 ~[apache-cassandra-2.1.17.jar:2.1.17]
at 
org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:138)
 ~[apache-cassandra-2.1.17.jar:2.1.17]
at 
org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:439)
 [apache-cassandra-2.1.17.jar:2.1.17]
at 
org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:335)
 [apache-cassandra-2.1.17.jar:2.1.17]
at 
io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
 [netty-all-4.0.44.Final.jar:4.0.44.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
 [netty-all-4.0.44.Final.jar:4.0.44.Final]
at 
io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:35)
 [netty-all-4.0.44.Final.jar:4.0.44.Final]
at 
io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:348)
 [netty-all-4.0.44.Final.jar:4.0.44.Final]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
[na:1.7.0_121]
at 
org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
 [apache-cassandra-2.1.17.jar:2.1.17]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
[apache-cassandra-2.1.17.jar:2.1.17]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_121]

{noformat}







[jira] [Resolved] (CASSANDRA-5290) Refactor inter-node SSL support to make it possible to enable it without total cluster downtime

2017-08-23 Thread Jason Brown (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Brown resolved CASSANDRA-5290.

Resolution: Duplicate

> Refactor inter-node SSL support to make it possible to enable it without 
> total cluster downtime
> -
>
> Key: CASSANDRA-5290
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5290
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Priority: Minor
>
> This should be possible by doing something similar to what compression does 
> today:
> * node A connects to node B, indicating that it wants SSL
> * nodes A and B upgrade/wrap their sockets
> * node B must still be able to talk to A (and all other nodes) without SSL
> Nothing secret is shared during the handshake phase.
> One difference compared to compression is that after the cluster is rolled, 
> there must be a way to disallow any non-SSL talk at all, to avoid plaintext 
> communication due to configuration mistakes. This should be doable via 
> configuration and perhaps JMX.
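
A minimal sketch of the rollout states described above (hypothetical names; 
not the actual implementation): accept both SSL and plaintext while the 
cluster is being rolled, then lock down to SSL-only once every node speaks 
SSL.

{code:java}
public class HandshakePolicy
{
    enum InternodeEncryption { NONE, OPTIONAL, REQUIRED }

    // During the roll the mode is OPTIONAL, so both SSL and plaintext
    // peers are accepted; afterwards the operator flips to REQUIRED so
    // no plaintext connection can slip through by misconfiguration.
    static boolean acceptConnection(InternodeEncryption mode, boolean peerWantsSsl)
    {
        switch (mode)
        {
            case NONE:     return !peerWantsSsl;
            case OPTIONAL: return true;
            case REQUIRED: return peerWantsSsl;
            default:       return false;
        }
    }
}
{code}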






[jira] [Updated] (CASSANDRA-9375) force minimum timeout value

2017-08-23 Thread Jason Brown (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Brown updated CASSANDRA-9375:
---
   Resolution: Fixed
Fix Version/s: (was: 2.1.x)
   4.0
   Status: Resolved  (was: Patch Available)

> force minimum timeout value 
> 
>
> Key: CASSANDRA-9375
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9375
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Brandon Williams
>Assignee: Varun Barala
>Priority: Trivial
>  Labels: patch
> Fix For: 4.0
>
> Attachments: CASSANDRA-9375_after_review, 
> CASSANDRA-9375_after_review_2.patch, CASSANDRA-9375.patch
>
>
> Granted, this is a nonsensical setting, but the error message makes it tough 
> to discern what's wrong:
> {noformat}
> ERROR 17:13:28,726 Exception encountered during startup
> java.lang.ExceptionInInitializerError
>  at 
> org.apache.cassandra.net.MessagingService.instance(MessagingService.java:310)
>  at 
> org.apache.cassandra.service.StorageService.(StorageService.java:233)
>  at 
> org.apache.cassandra.service.StorageService.(StorageService.java:141)
>  at 
> org.apache.cassandra.locator.DynamicEndpointSnitch.(DynamicEndpointSnitch.java:87)
>  at 
> org.apache.cassandra.locator.DynamicEndpointSnitch.(DynamicEndpointSnitch.java:63)
>  at 
> org.apache.cassandra.config.DatabaseDescriptor.createEndpointSnitch(DatabaseDescriptor.java:518)
>  at 
> org.apache.cassandra.config.DatabaseDescriptor.applyConfig(DatabaseDescriptor.java:350)
>  at 
> org.apache.cassandra.config.DatabaseDescriptor.(DatabaseDescriptor.java:112)
>  at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:213)
>  at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:567)
>  at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:656)
> Caused by: java.lang.IllegalArgumentException
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor.scheduleWithFixedDelay(ScheduledThreadPoolExecutor.java:586)
>  at 
> org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor.scheduleWithFixedDelay(DebuggableScheduledThreadPoolExecutor.java:64)
>  at org.apache.cassandra.utils.ExpiringMap.(ExpiringMap.java:103)
>  at 
> org.apache.cassandra.net.MessagingService.(MessagingService.java:360)
>  at org.apache.cassandra.net.MessagingService.(MessagingService.java:68)
>  at 
> org.apache.cassandra.net.MessagingService$MSHandle.(MessagingService.java:306)
>  ... 11 more
> java.lang.ExceptionInInitializerError
>  at 
> org.apache.cassandra.net.MessagingService.instance(MessagingService.java:310)
>  at 
> org.apache.cassandra.service.StorageService.(StorageService.java:233)
>  at 
> org.apache.cassandra.service.StorageService.(StorageService.java:141)
>  at 
> org.apache.cassandra.locator.DynamicEndpointSnitch.(DynamicEndpointSnitch.java:87)
>  at 
> org.apache.cassandra.locator.DynamicEndpointSnitch.(DynamicEndpointSnitch.java:63)
>  at 
> org.apache.cassandra.config.DatabaseDescriptor.createEndpointSnitch(DatabaseDescriptor.java:518)
>  at 
> org.apache.cassandra.config.DatabaseDescriptor.applyConfig(DatabaseDescriptor.java:350)
>  at 
> org.apache.cassandra.config.DatabaseDescriptor.(DatabaseDescriptor.java:112)
>  at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:213)
>  at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:567)
>  at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:656)
> Caused by: java.lang.IllegalArgumentException
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor.scheduleWithFixedDelay(ScheduledThreadPoolExecutor.java:586)
>  at 
> org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor.scheduleWithFixedDelay(DebuggableScheduledThreadPoolExecutor.java:64)
>  at org.apache.cassandra.utils.ExpiringMap.(ExpiringMap.java:103)
>  at 
> org.apache.cassandra.net.MessagingService.(MessagingService.java:360)
>  at org.apache.cassandra.net.MessagingService.(MessagingService.java:68)
>  at 
> org.apache.cassandra.net.MessagingService$MSHandle.(MessagingService.java:306)
>  ... 11 more
> Exception encountered during startup: null
> {noformat}
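
The opaque {{Exception encountered during startup: null}} bottoms out in {{scheduleWithFixedDelay}}, which rejects a non-positive period with a message-less {{IllegalArgumentException}}. A minimal sketch of the failure mode, assuming (as the {{ExpiringMap}} frame suggests) that the reaper task is scheduled at half the configured timeout, so 1ms collapses to 0 under integer division:

{code}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Minimal sketch of the failure mode. With a 1ms timeout the derived reaper
// period is 1 / 2 == 0, and scheduleWithFixedDelay requires delay > 0, so it
// throws an IllegalArgumentException with no message; hence the unhelpful
// "Exception encountered during startup: null".
public class TimeoutRepro
{
    public static void main(String[] args)
    {
        ScheduledExecutorService service = Executors.newSingleThreadScheduledExecutor();
        long timeoutMs = 1;            // e.g. read_request_timeout_in_ms: 1
        long periodMs = timeoutMs / 2; // integer division yields 0
        service.scheduleWithFixedDelay(() -> {}, periodMs, periodMs, TimeUnit.MILLISECONDS);
    }
}
{code}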



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-9375) force minimum timeout value

2017-08-23 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16139228#comment-16139228
 ] 

Jason Brown commented on CASSANDRA-9375:


I made the executive decision to commit to trunk (4.0) only. Added a NEWS.txt 
entry, as well. 

committed as sha {{d2dcd7f884cc997905c820d7cef8c9fc886ff4f7}}. thanks!

> force minimum timeout value 
> 
>
> Key: CASSANDRA-9375
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9375
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Brandon Williams
>Assignee: Varun Barala
>Priority: Trivial
>  Labels: patch
> Fix For: 4.0
>
> Attachments: CASSANDRA-9375_after_review, 
> CASSANDRA-9375_after_review_2.patch, CASSANDRA-9375.patch
>
>
> Granted, this is a nonsensical setting, but the error message makes it tough 
> to discern what's wrong:
> {noformat}
> ERROR 17:13:28,726 Exception encountered during startup
> java.lang.ExceptionInInitializerError
>  at 
> org.apache.cassandra.net.MessagingService.instance(MessagingService.java:310)
>  at 
> org.apache.cassandra.service.StorageService.(StorageService.java:233)
>  at 
> org.apache.cassandra.service.StorageService.(StorageService.java:141)
>  at 
> org.apache.cassandra.locator.DynamicEndpointSnitch.(DynamicEndpointSnitch.java:87)
>  at 
> org.apache.cassandra.locator.DynamicEndpointSnitch.(DynamicEndpointSnitch.java:63)
>  at 
> org.apache.cassandra.config.DatabaseDescriptor.createEndpointSnitch(DatabaseDescriptor.java:518)
>  at 
> org.apache.cassandra.config.DatabaseDescriptor.applyConfig(DatabaseDescriptor.java:350)
>  at 
> org.apache.cassandra.config.DatabaseDescriptor.(DatabaseDescriptor.java:112)
>  at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:213)
>  at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:567)
>  at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:656)
> Caused by: java.lang.IllegalArgumentException
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor.scheduleWithFixedDelay(ScheduledThreadPoolExecutor.java:586)
>  at 
> org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor.scheduleWithFixedDelay(DebuggableScheduledThreadPoolExecutor.java:64)
>  at org.apache.cassandra.utils.ExpiringMap.(ExpiringMap.java:103)
>  at 
> org.apache.cassandra.net.MessagingService.(MessagingService.java:360)
>  at org.apache.cassandra.net.MessagingService.(MessagingService.java:68)
>  at 
> org.apache.cassandra.net.MessagingService$MSHandle.(MessagingService.java:306)
>  ... 11 more
> java.lang.ExceptionInInitializerError
>  at 
> org.apache.cassandra.net.MessagingService.instance(MessagingService.java:310)
>  at 
> org.apache.cassandra.service.StorageService.(StorageService.java:233)
>  at 
> org.apache.cassandra.service.StorageService.(StorageService.java:141)
>  at 
> org.apache.cassandra.locator.DynamicEndpointSnitch.(DynamicEndpointSnitch.java:87)
>  at 
> org.apache.cassandra.locator.DynamicEndpointSnitch.(DynamicEndpointSnitch.java:63)
>  at 
> org.apache.cassandra.config.DatabaseDescriptor.createEndpointSnitch(DatabaseDescriptor.java:518)
>  at 
> org.apache.cassandra.config.DatabaseDescriptor.applyConfig(DatabaseDescriptor.java:350)
>  at 
> org.apache.cassandra.config.DatabaseDescriptor.(DatabaseDescriptor.java:112)
>  at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:213)
>  at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:567)
>  at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:656)
> Caused by: java.lang.IllegalArgumentException
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor.scheduleWithFixedDelay(ScheduledThreadPoolExecutor.java:586)
>  at 
> org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor.scheduleWithFixedDelay(DebuggableScheduledThreadPoolExecutor.java:64)
>  at org.apache.cassandra.utils.ExpiringMap.(ExpiringMap.java:103)
>  at 
> org.apache.cassandra.net.MessagingService.(MessagingService.java:360)
>  at org.apache.cassandra.net.MessagingService.(MessagingService.java:68)
>  at 
> org.apache.cassandra.net.MessagingService$MSHandle.(MessagingService.java:306)
>  ... 11 more
> Exception encountered during startup: null
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



cassandra git commit: force minimum timeout value

2017-08-23 Thread jasobrown
Repository: cassandra
Updated Branches:
  refs/heads/trunk fc92db2b9 -> d2dcd7f88


force minimum timeout value

patch by Varun Barala; reviewed by jasobrown for CASSANDRA-9375


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d2dcd7f8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d2dcd7f8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d2dcd7f8

Branch: refs/heads/trunk
Commit: d2dcd7f884cc997905c820d7cef8c9fc886ff4f7
Parents: fc92db2
Author: Jason Brown 
Authored: Wed Aug 23 15:08:11 2017 -0700
Committer: Jason Brown 
Committed: Wed Aug 23 15:08:11 2017 -0700

--
 CHANGES.txt |  1 +
 NEWS.txt|  3 +
 conf/cassandra.yaml | 19 +--
 .../cassandra/config/DatabaseDescriptor.java| 58 
 .../config/DatabaseDescriptorTest.java  | 40 ++
 5 files changed, 115 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d2dcd7f8/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index a14e390..d0ec78d 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 4.0
+ * force minimum timeout value (CASSANDRA-9375)
  * use netty for streaming (CASSANDRA-12229)
  * Use netty for internode messaging (CASSANDRA-8457)
  * Add bytes repaired/unrepaired to nodetool tablestats (CASSANDRA-13774)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/d2dcd7f8/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index 4d30631..253d773 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -60,6 +60,9 @@ Upgrading
    - Config option index_interval has been removed (it was deprecated since 2.0)
    - Deprecated repair JMX APIs are removed.
    - The version of snappy-java has been upgraded to 1.1.2.6
+   - The minimum value for internode message timeouts is 10ms. Previously, any
+     positive value was allowed. See cassandra.yaml entries like
+     read_request_timeout_in_ms for more details.
 
 3.11.0
 ==

http://git-wip-us.apache.org/repos/asf/cassandra/blob/d2dcd7f8/conf/cassandra.yaml
--
diff --git a/conf/cassandra.yaml b/conf/cassandra.yaml
index bdc68d1..3db82a1 100644
--- a/conf/cassandra.yaml
+++ b/conf/cassandra.yaml
@@ -780,22 +780,29 @@ sstable_preemptive_open_interval_in_mb: 50
 # When unset, the default is 200 Mbps or 25 MB/s
 # inter_dc_stream_throughput_outbound_megabits_per_sec: 200
 
-# How long the coordinator should wait for read operations to complete
+# How long the coordinator should wait for read operations to complete.
+# Lowest acceptable value is 10 ms.
 read_request_timeout_in_ms: 5000
-# How long the coordinator should wait for seq or index scans to complete
+# How long the coordinator should wait for seq or index scans to complete.
+# Lowest acceptable value is 10 ms.
 range_request_timeout_in_ms: 10000
-# How long the coordinator should wait for writes to complete
+# How long the coordinator should wait for writes to complete.
+# Lowest acceptable value is 10 ms.
 write_request_timeout_in_ms: 2000
-# How long the coordinator should wait for counter writes to complete
+# How long the coordinator should wait for counter writes to complete.
+# Lowest acceptable value is 10 ms.
 counter_write_request_timeout_in_ms: 5000
 # How long a coordinator should continue to retry a CAS operation
-# that contends with other proposals for the same row
+# that contends with other proposals for the same row.
+# Lowest acceptable value is 10 ms.
 cas_contention_timeout_in_ms: 1000
 # How long the coordinator should wait for truncates to complete
 # (This can be much longer, because unless auto_snapshot is disabled
 # we need to flush first so we can snapshot before removing the data.)
+# Lowest acceptable value is 10 ms.
 truncate_request_timeout_in_ms: 60000
-# The default timeout for other, miscellaneous operations
+# The default timeout for other, miscellaneous operations.
+# Lowest acceptable value is 10 ms.
 request_timeout_in_ms: 10000
 
 # How long before a node logs slow queries. Select queries that take longer than

http://git-wip-us.apache.org/repos/asf/cassandra/blob/d2dcd7f8/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--
diff --git a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java 
b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
index 302a528..a839224 100644
--- a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
+++ 
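
The {{DatabaseDescriptor.java}} hunk is cut off above. As a hedged sketch, the enforcement amounts to flooring each configured timeout at the lowest acceptable value (10ms, per the cassandra.yaml comments) and logging a warning; names here are illustrative, not the committed code:

{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hedged sketch of the lower-bound enforcement: a configured timeout below
// the 10ms floor is raised to the floor with a warning instead of being
// allowed to break startup later. Names are illustrative, not the committed code.
final class TimeoutFloor
{
    private static final Logger logger = LoggerFactory.getLogger(TimeoutFloor.class);
    static final long LOWEST_ACCEPTABLE_TIMEOUT_MS = 10L;

    static long floorTimeout(String optionName, long configuredMs)
    {
        if (configuredMs < LOWEST_ACCEPTABLE_TIMEOUT_MS)
        {
            logger.warn("{} of {}ms is below the lowest acceptable value, using {}ms instead",
                        optionName, configuredMs, LOWEST_ACCEPTABLE_TIMEOUT_MS);
            return LOWEST_ACCEPTABLE_TIMEOUT_MS;
        }
        return configuredMs;
    }
}
{code}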

[jira] [Updated] (CASSANDRA-9375) force minimum timeout value

2017-08-23 Thread Jason Brown (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Brown updated CASSANDRA-9375:
---
Summary: force minimum timeout value  (was: setting timeouts to 1ms 
prevents startup)

> force minimum timeout value 
> 
>
> Key: CASSANDRA-9375
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9375
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Brandon Williams
>Assignee: Varun Barala
>Priority: Trivial
>  Labels: patch
> Fix For: 2.1.x
>
> Attachments: CASSANDRA-9375_after_review, 
> CASSANDRA-9375_after_review_2.patch, CASSANDRA-9375.patch
>
>
> Granted, this is a nonsensical setting, but the error message makes it tough 
> to discern what's wrong:
> {noformat}
> ERROR 17:13:28,726 Exception encountered during startup
> java.lang.ExceptionInInitializerError
>  at 
> org.apache.cassandra.net.MessagingService.instance(MessagingService.java:310)
>  at 
> org.apache.cassandra.service.StorageService.(StorageService.java:233)
>  at 
> org.apache.cassandra.service.StorageService.(StorageService.java:141)
>  at 
> org.apache.cassandra.locator.DynamicEndpointSnitch.(DynamicEndpointSnitch.java:87)
>  at 
> org.apache.cassandra.locator.DynamicEndpointSnitch.(DynamicEndpointSnitch.java:63)
>  at 
> org.apache.cassandra.config.DatabaseDescriptor.createEndpointSnitch(DatabaseDescriptor.java:518)
>  at 
> org.apache.cassandra.config.DatabaseDescriptor.applyConfig(DatabaseDescriptor.java:350)
>  at 
> org.apache.cassandra.config.DatabaseDescriptor.(DatabaseDescriptor.java:112)
>  at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:213)
>  at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:567)
>  at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:656)
> Caused by: java.lang.IllegalArgumentException
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor.scheduleWithFixedDelay(ScheduledThreadPoolExecutor.java:586)
>  at 
> org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor.scheduleWithFixedDelay(DebuggableScheduledThreadPoolExecutor.java:64)
>  at org.apache.cassandra.utils.ExpiringMap.(ExpiringMap.java:103)
>  at 
> org.apache.cassandra.net.MessagingService.(MessagingService.java:360)
>  at org.apache.cassandra.net.MessagingService.(MessagingService.java:68)
>  at 
> org.apache.cassandra.net.MessagingService$MSHandle.(MessagingService.java:306)
>  ... 11 more
> java.lang.ExceptionInInitializerError
>  at 
> org.apache.cassandra.net.MessagingService.instance(MessagingService.java:310)
>  at 
> org.apache.cassandra.service.StorageService.(StorageService.java:233)
>  at 
> org.apache.cassandra.service.StorageService.(StorageService.java:141)
>  at 
> org.apache.cassandra.locator.DynamicEndpointSnitch.(DynamicEndpointSnitch.java:87)
>  at 
> org.apache.cassandra.locator.DynamicEndpointSnitch.(DynamicEndpointSnitch.java:63)
>  at 
> org.apache.cassandra.config.DatabaseDescriptor.createEndpointSnitch(DatabaseDescriptor.java:518)
>  at 
> org.apache.cassandra.config.DatabaseDescriptor.applyConfig(DatabaseDescriptor.java:350)
>  at 
> org.apache.cassandra.config.DatabaseDescriptor.(DatabaseDescriptor.java:112)
>  at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:213)
>  at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:567)
>  at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:656)
> Caused by: java.lang.IllegalArgumentException
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor.scheduleWithFixedDelay(ScheduledThreadPoolExecutor.java:586)
>  at 
> org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor.scheduleWithFixedDelay(DebuggableScheduledThreadPoolExecutor.java:64)
>  at org.apache.cassandra.utils.ExpiringMap.(ExpiringMap.java:103)
>  at 
> org.apache.cassandra.net.MessagingService.(MessagingService.java:360)
>  at org.apache.cassandra.net.MessagingService.(MessagingService.java:68)
>  at 
> org.apache.cassandra.net.MessagingService$MSHandle.(MessagingService.java:306)
>  ... 11 more
> Exception encountered during startup: null
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13789) Reduce memory copies and object creations when acting on ByteBufs

2017-08-23 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16139175#comment-16139175
 ] 

Jason Brown commented on CASSANDRA-13789:
-

Running utests with patch applied to trunk: [circleci jobs for 
branch|https://circleci.com/gh/jasobrown/cassandra/tree/13789]

> Reduce memory copies and object creations when acting on ByteBufs
> --
>
> Key: CASSANDRA-13789
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13789
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Norman Maurer
>Assignee: Norman Maurer
> Attachments: 
> 0001-Reduce-memory-copies-and-object-creations-when-actin.patch
>
>
> There are multiple "low-hanging fruits" when it comes to reducing memory 
> copies and object allocations when acting on ByteBufs.
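
For illustration, one generic example of this kind of low-hanging fruit (not necessarily a change from the attached patch): decoding a string directly from a {{ByteBuf}} rather than copying through an intermediate {{byte[]}}:

{code}
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import java.nio.charset.StandardCharsets;

public class ByteBufCopies
{
    // Extra allocation plus copy: readBytes() fills a temporary byte[].
    static String stringWithCopy(ByteBuf buf, int length)
    {
        byte[] tmp = new byte[length];
        buf.readBytes(tmp);
        return new String(tmp, StandardCharsets.UTF_8);
    }

    // Decodes straight from the buffer's backing storage, no byte[] copy.
    static String stringWithoutCopy(ByteBuf buf, int length)
    {
        String s = buf.toString(buf.readerIndex(), length, StandardCharsets.UTF_8);
        buf.skipBytes(length); // toString(index, ...) does not move readerIndex
        return s;
    }

    public static void main(String[] args)
    {
        ByteBuf buf = Unpooled.copiedBuffer("hello", StandardCharsets.UTF_8);
        System.out.println(stringWithCopy(buf.duplicate(), 5));
        System.out.println(stringWithoutCopy(buf, 5));
    }
}
{code}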



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13752) Corrupted SSTables created in 3.11

2017-08-23 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16139111#comment-16139111
 ] 

Jeff Jirsa commented on CASSANDRA-13752:


I'll take the review, in conjunction with CASSANDRA-13756.


> Corrupted SSTables created in 3.11
> --
>
> Key: CASSANDRA-13752
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13752
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Hannu Kröger
>Assignee: Hannu Kröger
>Priority: Blocker
>
> We have discovered issues with corrupted SSTables. 
> {code}
> ERROR [SSTableBatchOpen:22] 2017-08-03 20:19:53,195 SSTableReader.java:577 - 
> Cannot read sstable 
> /cassandra/data/mykeyspace/mytable-7a4992800d5611e7b782cb90016f2d17/mc-35556-big=[Data.db,
>  Statistics.db, Summary.db, Digest.crc32, CompressionInfo.db, TOC.txt, 
> Index.db, Filter.db]; other IO error, skipping table
> java.io.EOFException: EOF after 1898 bytes out of 21093
> at 
> org.apache.cassandra.io.util.RebufferingInputStream.readFully(RebufferingInputStream.java:68)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.io.util.RebufferingInputStream.readFully(RebufferingInputStream.java:60)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:402) 
> ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(ByteBufferUtil.java:377)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.io.sstable.metadata.StatsMetadata$StatsMetadataSerializer.deserialize(StatsMetadata.java:325)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.io.sstable.metadata.StatsMetadata$StatsMetadataSerializer.deserialize(StatsMetadata.java:231)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.io.sstable.metadata.MetadataSerializer.deserialize(MetadataSerializer.java:122)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.io.sstable.metadata.MetadataSerializer.deserialize(MetadataSerializer.java:93)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:488)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:396)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.io.sstable.format.SSTableReader$5.run(SSTableReader.java:561)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_111]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> [na:1.8.0_111]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_111]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_111]
> at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:81)
>  [apache-cassandra-3.11.0.jar:3.11.0]
> {code}
> Files look like this:
> {code}
> -rw-r--r--. 1 cassandra cassandra 3899251 Aug  7 08:37 
> mc-6166-big-CompressionInfo.db
> -rw-r--r--. 1 cassandra cassandra 16874421686 Aug  7 08:37 mc-6166-big-Data.db
> -rw-r--r--. 1 cassandra cassandra  10 Aug  7 08:37 
> mc-6166-big-Digest.crc32
> -rw-r--r--. 1 cassandra cassandra 2930904 Aug  7 08:37 
> mc-6166-big-Filter.db
> -rw-r--r--. 1 cassandra cassandra   75880 Aug  7 08:37 
> mc-6166-big-Index.db
> -rw-r--r--. 1 cassandra cassandra   13762 Aug  7 08:37 
> mc-6166-big-Statistics.db
> -rw-r--r--. 1 cassandra cassandra  882008 Aug  7 08:37 
> mc-6166-big-Summary.db
> -rw-r--r--. 1 cassandra cassandra  92 Aug  7 08:37 mc-6166-big-TOC.txt
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Assigned] (CASSANDRA-13752) Corrupted SSTables created in 3.11

2017-08-23 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa reassigned CASSANDRA-13752:
--

Assignee: Hannu Kröger  (was: Jeff Jirsa)

> Corrupted SSTables created in 3.11
> --
>
> Key: CASSANDRA-13752
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13752
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Hannu Kröger
>Assignee: Hannu Kröger
>Priority: Blocker
>
> We have discovered issues with corrupted SSTables. 
> {code}
> ERROR [SSTableBatchOpen:22] 2017-08-03 20:19:53,195 SSTableReader.java:577 - 
> Cannot read sstable 
> /cassandra/data/mykeyspace/mytable-7a4992800d5611e7b782cb90016f2d17/mc-35556-big=[Data.db,
>  Statistics.db, Summary.db, Digest.crc32, CompressionInfo.db, TOC.txt, 
> Index.db, Filter.db]; other IO error, skipping table
> java.io.EOFException: EOF after 1898 bytes out of 21093
> at 
> org.apache.cassandra.io.util.RebufferingInputStream.readFully(RebufferingInputStream.java:68)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.io.util.RebufferingInputStream.readFully(RebufferingInputStream.java:60)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:402) 
> ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(ByteBufferUtil.java:377)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.io.sstable.metadata.StatsMetadata$StatsMetadataSerializer.deserialize(StatsMetadata.java:325)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.io.sstable.metadata.StatsMetadata$StatsMetadataSerializer.deserialize(StatsMetadata.java:231)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.io.sstable.metadata.MetadataSerializer.deserialize(MetadataSerializer.java:122)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.io.sstable.metadata.MetadataSerializer.deserialize(MetadataSerializer.java:93)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:488)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:396)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.io.sstable.format.SSTableReader$5.run(SSTableReader.java:561)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_111]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> [na:1.8.0_111]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_111]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_111]
> at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:81)
>  [apache-cassandra-3.11.0.jar:3.11.0]
> {code}
> Files look like this:
> {code}
> -rw-r--r--. 1 cassandra cassandra 3899251 Aug  7 08:37 
> mc-6166-big-CompressionInfo.db
> -rw-r--r--. 1 cassandra cassandra 16874421686 Aug  7 08:37 mc-6166-big-Data.db
> -rw-r--r--. 1 cassandra cassandra  10 Aug  7 08:37 
> mc-6166-big-Digest.crc32
> -rw-r--r--. 1 cassandra cassandra 2930904 Aug  7 08:37 
> mc-6166-big-Filter.db
> -rw-r--r--. 1 cassandra cassandra   75880 Aug  7 08:37 
> mc-6166-big-Index.db
> -rw-r--r--. 1 cassandra cassandra   13762 Aug  7 08:37 
> mc-6166-big-Statistics.db
> -rw-r--r--. 1 cassandra cassandra  882008 Aug  7 08:37 
> mc-6166-big-Summary.db
> -rw-r--r--. 1 cassandra cassandra  92 Aug  7 08:37 mc-6166-big-TOC.txt
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Assigned] (CASSANDRA-13752) Corrupted SSTables created in 3.11

2017-08-23 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa reassigned CASSANDRA-13752:
--

Assignee: Jeff Jirsa

> Corrupted SSTables created in 3.11
> --
>
> Key: CASSANDRA-13752
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13752
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Hannu Kröger
>Assignee: Jeff Jirsa
>Priority: Blocker
>
> We have discovered issues with corrupted SSTables. 
> {code}
> ERROR [SSTableBatchOpen:22] 2017-08-03 20:19:53,195 SSTableReader.java:577 - 
> Cannot read sstable 
> /cassandra/data/mykeyspace/mytable-7a4992800d5611e7b782cb90016f2d17/mc-35556-big=[Data.db,
>  Statistics.db, Summary.db, Digest.crc32, CompressionInfo.db, TOC.txt, 
> Index.db, Filter.db]; other IO error, skipping table
> java.io.EOFException: EOF after 1898 bytes out of 21093
> at 
> org.apache.cassandra.io.util.RebufferingInputStream.readFully(RebufferingInputStream.java:68)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.io.util.RebufferingInputStream.readFully(RebufferingInputStream.java:60)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:402) 
> ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(ByteBufferUtil.java:377)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.io.sstable.metadata.StatsMetadata$StatsMetadataSerializer.deserialize(StatsMetadata.java:325)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.io.sstable.metadata.StatsMetadata$StatsMetadataSerializer.deserialize(StatsMetadata.java:231)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.io.sstable.metadata.MetadataSerializer.deserialize(MetadataSerializer.java:122)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.io.sstable.metadata.MetadataSerializer.deserialize(MetadataSerializer.java:93)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:488)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:396)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.io.sstable.format.SSTableReader$5.run(SSTableReader.java:561)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_111]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> [na:1.8.0_111]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_111]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_111]
> at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:81)
>  [apache-cassandra-3.11.0.jar:3.11.0]
> {code}
> Files look like this:
> {code}
> -rw-r--r--. 1 cassandra cassandra 3899251 Aug  7 08:37 
> mc-6166-big-CompressionInfo.db
> -rw-r--r--. 1 cassandra cassandra 16874421686 Aug  7 08:37 mc-6166-big-Data.db
> -rw-r--r--. 1 cassandra cassandra  10 Aug  7 08:37 
> mc-6166-big-Digest.crc32
> -rw-r--r--. 1 cassandra cassandra 2930904 Aug  7 08:37 
> mc-6166-big-Filter.db
> -rw-r--r--. 1 cassandra cassandra   75880 Aug  7 08:37 
> mc-6166-big-Index.db
> -rw-r--r--. 1 cassandra cassandra   13762 Aug  7 08:37 
> mc-6166-big-Statistics.db
> -rw-r--r--. 1 cassandra cassandra  882008 Aug  7 08:37 
> mc-6166-big-Summary.db
> -rw-r--r--. 1 cassandra cassandra  92 Aug  7 08:37 mc-6166-big-TOC.txt
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13752) Corrupted SSTables created in 3.11

2017-08-23 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16139006#comment-16139006
 ] 

Hannu Kröger commented on CASSANDRA-13752:
--

[~whangsf] I ported the fix to 3.0 as well (assuming the same race condition 
can happen there too).

> Corrupted SSTables created in 3.11
> --
>
> Key: CASSANDRA-13752
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13752
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Hannu Kröger
>Priority: Blocker
>
> We have discovered issues with corrupted SSTables. 
> {code}
> ERROR [SSTableBatchOpen:22] 2017-08-03 20:19:53,195 SSTableReader.java:577 - 
> Cannot read sstable 
> /cassandra/data/mykeyspace/mytable-7a4992800d5611e7b782cb90016f2d17/mc-35556-big=[Data.db,
>  Statistics.db, Summary.db, Digest.crc32, CompressionInfo.db, TOC.txt, 
> Index.db, Filter.db]; other IO error, skipping table
> java.io.EOFException: EOF after 1898 bytes out of 21093
> at 
> org.apache.cassandra.io.util.RebufferingInputStream.readFully(RebufferingInputStream.java:68)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.io.util.RebufferingInputStream.readFully(RebufferingInputStream.java:60)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:402) 
> ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(ByteBufferUtil.java:377)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.io.sstable.metadata.StatsMetadata$StatsMetadataSerializer.deserialize(StatsMetadata.java:325)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.io.sstable.metadata.StatsMetadata$StatsMetadataSerializer.deserialize(StatsMetadata.java:231)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.io.sstable.metadata.MetadataSerializer.deserialize(MetadataSerializer.java:122)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.io.sstable.metadata.MetadataSerializer.deserialize(MetadataSerializer.java:93)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:488)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:396)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.io.sstable.format.SSTableReader$5.run(SSTableReader.java:561)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_111]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> [na:1.8.0_111]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_111]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_111]
> at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:81)
>  [apache-cassandra-3.11.0.jar:3.11.0]
> {code}
> Files look like this:
> {code}
> -rw-r--r--. 1 cassandra cassandra 3899251 Aug  7 08:37 
> mc-6166-big-CompressionInfo.db
> -rw-r--r--. 1 cassandra cassandra 16874421686 Aug  7 08:37 mc-6166-big-Data.db
> -rw-r--r--. 1 cassandra cassandra  10 Aug  7 08:37 
> mc-6166-big-Digest.crc32
> -rw-r--r--. 1 cassandra cassandra 2930904 Aug  7 08:37 
> mc-6166-big-Filter.db
> -rw-r--r--. 1 cassandra cassandra   75880 Aug  7 08:37 
> mc-6166-big-Index.db
> -rw-r--r--. 1 cassandra cassandra   13762 Aug  7 08:37 
> mc-6166-big-Statistics.db
> -rw-r--r--. 1 cassandra cassandra  882008 Aug  7 08:37 
> mc-6166-big-Summary.db
> -rw-r--r--. 1 cassandra cassandra  92 Aug  7 08:37 mc-6166-big-TOC.txt
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-13752) Corrupted SSTables created in 3.11

2017-08-23 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136624#comment-16136624
 ] 

Hannu Kröger edited comment on CASSANDRA-13752 at 8/23/17 8:10 PM:
---

I created a branch with a potential fix for this particular problem:

||Branch||utest||dtest||
|[3.11|https://github.com/hkroger/cassandra/tree/cassandra-3.11-13752]|[3.11 
circle|https://circleci.com/gh/hkroger/cassandra/tree/cassandra-3.11-13752]|???|
|[3.0|https://github.com/hkroger/cassandra/tree/cassandra-3.0-13752]|[3.0 
circle|https://circleci.com/gh/hkroger/cassandra/tree/cassandra-3.0-13752]|???|



was (Author: hkroger):
I created a branch with a potential fix for this particular problem:

||Branch||utest||dtest||
|[3.11|https://github.com/hkroger/cassandra/tree/cassandra-3.11-13752]|[3.11 
circle|https://circleci.com/gh/hkroger/cassandra/tree/cassandra-3.11-13752]|???|


> Corrupted SSTables created in 3.11
> --
>
> Key: CASSANDRA-13752
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13752
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Hannu Kröger
>Priority: Blocker
>
> We have discovered issues with corrupted SSTables. 
> {code}
> ERROR [SSTableBatchOpen:22] 2017-08-03 20:19:53,195 SSTableReader.java:577 - 
> Cannot read sstable 
> /cassandra/data/mykeyspace/mytable-7a4992800d5611e7b782cb90016f2d17/mc-35556-big=[Data.db,
>  Statistics.db, Summary.db, Digest.crc32, CompressionInfo.db, TOC.txt, 
> Index.db, Filter.db]; other IO error, skipping table
> java.io.EOFException: EOF after 1898 bytes out of 21093
> at 
> org.apache.cassandra.io.util.RebufferingInputStream.readFully(RebufferingInputStream.java:68)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.io.util.RebufferingInputStream.readFully(RebufferingInputStream.java:60)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:402) 
> ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(ByteBufferUtil.java:377)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.io.sstable.metadata.StatsMetadata$StatsMetadataSerializer.deserialize(StatsMetadata.java:325)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.io.sstable.metadata.StatsMetadata$StatsMetadataSerializer.deserialize(StatsMetadata.java:231)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.io.sstable.metadata.MetadataSerializer.deserialize(MetadataSerializer.java:122)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.io.sstable.metadata.MetadataSerializer.deserialize(MetadataSerializer.java:93)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:488)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:396)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.io.sstable.format.SSTableReader$5.run(SSTableReader.java:561)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_111]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> [na:1.8.0_111]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_111]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_111]
> at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:81)
>  [apache-cassandra-3.11.0.jar:3.11.0]
> {code}
> Files look like this:
> {code}
> -rw-r--r--. 1 cassandra cassandra 3899251 Aug  7 08:37 
> mc-6166-big-CompressionInfo.db
> -rw-r--r--. 1 cassandra cassandra 16874421686 Aug  7 08:37 mc-6166-big-Data.db
> -rw-r--r--. 1 cassandra cassandra  10 Aug  7 08:37 
> mc-6166-big-Digest.crc32
> -rw-r--r--. 1 cassandra cassandra 2930904 Aug  7 08:37 
> mc-6166-big-Filter.db
> -rw-r--r--. 1 cassandra cassandra   75880 Aug  7 08:37 
> mc-6166-big-Index.db
> -rw-r--r--. 1 cassandra cassandra   13762 Aug  7 08:37 
> mc-6166-big-Statistics.db
> -rw-r--r--. 1 cassandra cassandra  882008 Aug  7 08:37 
> mc-6166-big-Summary.db
> -rw-r--r--. 1 cassandra cassandra  92 Aug  7 08:37 mc-6166-big-TOC.txt
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: 

[jira] [Comment Edited] (CASSANDRA-13738) Load is over calculated after each IndexSummaryRedistribution

2017-08-23 Thread Jay Zhuang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16115899#comment-16115899
 ] 

Jay Zhuang edited comment on CASSANDRA-13738 at 8/23/17 8:04 PM:
-

Updated the unit test to make it stable:
| branch | utest |
| [13738-2.2|https://github.com/cooldoger/cassandra/tree/13738-2.2] | 
[!https://circleci.com/gh/cooldoger/cassandra/tree/13738-2.2.svg?style=svg!|https://circleci.com/gh/cooldoger/cassandra/tree/13738-2.2]
 |
| [13738-3.0|https://github.com/cooldoger/cassandra/tree/13738-3.0] | 
[!https://circleci.com/gh/cooldoger/cassandra/tree/13738-3.0.svg?style=svg!|https://circleci.com/gh/cooldoger/cassandra/tree/13738-3.0]
 |
| [13738-3.11|https://github.com/cooldoger/cassandra/tree/13738-3.11] | 
[!https://circleci.com/gh/cooldoger/cassandra/tree/13738-3.11.svg?style=svg!|https://circleci.com/gh/cooldoger/cassandra/tree/13738-3.11]
 |
| [13738-trunk|https://github.com/cooldoger/cassandra/tree/31738-trunk] | 
[!https://circleci.com/gh/cooldoger/cassandra/tree/13738-trunk.svg?style=svg!|https://circleci.com/gh/cooldoger/cassandra/tree/13738-trunk]
 |



was (Author: jay.zhuang):
Updated the unit test to make it stable:
| branch | utest |
| [13738-2.2|https://github.com/cooldoger/cassandra/tree/13738-2.2] | 
[circleci#70 passed|https://circleci.com/gh/cooldoger/cassandra/70] |
| [13738-3.0|https://github.com/cooldoger/cassandra/tree/13738-3.0] | 
[circleci#69 passed|https://circleci.com/gh/cooldoger/cassandra/69] |
| [13738-3.11|https://github.com/cooldoger/cassandra/tree/13738-3.11] | 
[circleci#68 passed|https://circleci.com/gh/cooldoger/cassandra/68] |
| [13738-trunk|https://github.com/cooldoger/cassandra/tree/trunk] | 
[circleci#67 passed|https://circleci.com/gh/cooldoger/cassandra/67] |

> Load is over calculated after each IndexSummaryRedistribution
> -
>
> Key: CASSANDRA-13738
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13738
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Jay Zhuang
>Assignee: Jay Zhuang
> Fix For: 2.2.x, 3.0.x, 3.11.x, 4.x
>
> Attachments: sizeIssue.png
>
>
> For example, here is one of our clusters with about 500GB per node, but 
> {{nodetool status}} shows far more load than there actually is, and the 
> number keeps increasing. Restarting the process resets the reported load, 
> but it keeps increasing again afterwards:
> {noformat}
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  AddressLoad   Tokens   Owns (effective)  Host ID  
>  Rack
> UN  IP1*   13.52 TB   256  100.0%
> c4c31e0a-3f01-49f7-8a22-33043737975d  rac1
> UN  IP2*   14.25 TB   256  100.0%
> efec4980-ec9e-4424-8a21-ce7ddaf80aa0  rac1
> UN  IP3*   13.52 TB   256  100.0%
> 7dbcfdfc-9c07-4b1a-a4b9-970b715ebed8  rac1
> UN  IP4*   22.13 TB   256  100.0%
> 8879e6c4-93e3-4cc5-b957-f999c6b9b563  rac1
> UN  IP5*   18.02 TB   256  100.0%
> 4a1eaf22-4a83-4736-9e1c-12f898d685fa  rac1
> UN  IP6*   11.68 TB   256  100.0%
> d633c591-28af-42cc-bc5e-47d1c8bcf50f  rac1
> {noformat}
> !sizeIssue.png|test!
> The root cause: when the SSTable index summary is redistributed (which 
> typically runs hourly), the updated SSTable size is added to the tracked 
> load again instead of replacing the old value.
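
A hedged sketch of the accounting error (type and method names are illustrative, not the actual tracker code); the fix is to account for the size delta instead of re-adding the whole size:

{code}
// Illustrative only, not the actual tracker code: the buggy path re-adds the
// redistributed SSTable's new size on top of what is already tracked, so the
// reported load grows on every redistribution; the fix applies the delta.
final class LoadTracker
{
    private long loadBytes;

    void addSSTable(long sizeBytes)
    {
        loadBytes += sizeBytes;
    }

    // Buggy: after redistribution the whole new size is added again.
    void onRedistributionBuggy(long newSize)
    {
        loadBytes += newSize;
    }

    // Fixed: replace the old size with the new one.
    void onRedistributionFixed(long oldSize, long newSize)
    {
        loadBytes += newSize - oldSize;
    }

    long load()
    {
        return loadBytes;
    }
}
{code}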



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-13785) Compaction fails for SSTables with large number of keys

2017-08-23 Thread Jay Zhuang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16137614#comment-16137614
 ] 

Jay Zhuang edited comment on CASSANDRA-13785 at 8/23/17 7:58 PM:
-

I'm able to reproduce the problem with a unit test; here is the patch:
| branch | dTest |
| [13785-3.0|https://github.com/cooldoger/cassandra/tree/13785-3.0] | 
[!https://circleci.com/gh/cooldoger/cassandra/tree/13785-3.0.svg?style=svg!|https://circleci.com/gh/cooldoger/cassandra/tree/13785-3.0]
 |
| [13785-3.11|https://github.com/cooldoger/cassandra/tree/13785-3.11] | 
[!https://circleci.com/gh/cooldoger/cassandra/tree/13785-3.11.svg?style=svg!|https://circleci.com/gh/cooldoger/cassandra/tree/13785-3.11]
 |
| [13785-trunk|https://github.com/cooldoger/cassandra/tree/13785-trunk] | 
[!https://circleci.com/gh/cooldoger/cassandra/tree/13785-trunk.svg?style=svg!|https://circleci.com/gh/cooldoger/cassandra/tree/13785-trunk]
 |

[~iamaleksey] would you please review it?


was (Author: jay.zhuang):
I'm able to reproduce the problem with a unit test; here is the patch:
| branch | dTest |
| [13785-3.0|https://github.com/cooldoger/cassandra/tree/13785-3.0] | 
[circleci#76 passed|https://circleci.com/gh/cooldoger/cassandra/76] |
| [13785-3.11|https://github.com/cooldoger/cassandra/tree/13785-3.11] | 
[circleci#77 running|https://circleci.com/gh/cooldoger/cassandra/77] |
| [13785-trunk|https://github.com/cooldoger/cassandra/tree/13785-trunk] | 
[circleci#78 running|https://circleci.com/gh/cooldoger/cassandra/78] |

[~iamaleksey] would you please review it?

> Compaction fails for SSTables with large number of keys
> ---
>
> Key: CASSANDRA-13785
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13785
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Jay Zhuang
>Assignee: Jay Zhuang
>
> Every few minutes there are "LEAK DETECTED" messages in the log:
> {noformat}
> ERROR [Reference-Reaper:1] 2017-08-18 17:18:40,357 Ref.java:223 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@3ed22d7) to class 
> org.apache.cassandra.utils.concurrent.WrappedSharedCloseable$Tidy@1022568824:[Memory@[0..159b6ba4),
>  Memory@[0..d8123468)] was not released before the reference was garbage 
> collected
> ERROR [Reference-Reaper:1] 2017-08-18 17:20:49,693 Ref.java:223 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@6470405b) to class 
> org.apache.cassandra.utils.concurrent.WrappedSharedCloseable$Tidy@97898152:[Memory@[0..159b6ba4),
>  Memory@[0..d8123468)] was not released before the reference was garbage 
> collected
> ERROR [Reference-Reaper:1] 2017-08-18 17:22:38,519 Ref.java:223 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@6fc4af5f) to class 
> org.apache.cassandra.utils.concurrent.WrappedSharedCloseable$Tidy@1247404854:[Memory@[0..159b6ba4),
>  Memory@[0..d8123468)] was not released before the reference was garbage 
> collected
> {noformat}
> Debugged the issue and found it's triggered by failed compactions: if the 
> compacted SSTable has more than ~51M ({{Integer.MAX_VALUE / 40}}) keys, it 
> will fail to create the IndexSummary: 
> [IndexSummary:84|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/io/sstable/IndexSummary.java#L84].
> Compaction then retries every few minutes and keeps failing.
> The root cause is while [creating 
> SafeMemoryWriter|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/io/sstable/IndexSummaryBuilder.java#L112]
>  with {{> Integer.MAX_VALUE}} space, it returns the trailing 
> {{Integer.MAX_VALUE}} space 
> [SafeMemoryWriter.java:83|https://github.com/apache/cassandra/blob/6a1b1f26b7174e8c9bf86a96514ab626ce2a4117/src/java/org/apache/cassandra/io/util/SafeMemoryWriter.java#L83],
>  which makes the first 
> [entries.length()|https://github.com/apache/cassandra/blob/6a1b1f26b7174e8c9bf86a96514ab626ce2a4117/src/java/org/apache/cassandra/io/sstable/IndexSummaryBuilder.java#L173]
>  not 0. So the assert fails here: 
> [IndexSummary:84|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/io/sstable/IndexSummary.java#L84]
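
To make the threshold concrete, a back-of-the-envelope check (assuming roughly 40 bytes per summary entry, as the {{Integer.MAX_VALUE / 40}} expression implies):

{code}
public class IndexSummaryThreshold
{
    public static void main(String[] args)
    {
        // Integer.MAX_VALUE bytes of capacity at ~40 bytes per summary entry
        long maxKeys = Integer.MAX_VALUE / 40L;
        System.out.println(maxKeys); // 53687091, the ~51M-key ballpark cited above
    }
}
{code}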



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13752) Corrupted SSTables created in 3.11

2017-08-23 Thread Andrew Whang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16138959#comment-16138959
 ] 

Andrew Whang commented on CASSANDRA-13752:
--

Can we get this patched to 3.0 as well?

> Corrupted SSTables created in 3.11
> --
>
> Key: CASSANDRA-13752
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13752
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Hannu Kröger
>Priority: Blocker
>
> We have discovered issues with corrupted SSTables. 
> {code}
> ERROR [SSTableBatchOpen:22] 2017-08-03 20:19:53,195 SSTableReader.java:577 - 
> Cannot read sstable 
> /cassandra/data/mykeyspace/mytable-7a4992800d5611e7b782cb90016f2d17/mc-35556-big=[Data.db,
>  Statistics.db, Summary.db, Digest.crc32, CompressionInfo.db, TOC.txt, 
> Index.db, Filter.db]; other IO error, skipping table
> java.io.EOFException: EOF after 1898 bytes out of 21093
> at 
> org.apache.cassandra.io.util.RebufferingInputStream.readFully(RebufferingInputStream.java:68)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.io.util.RebufferingInputStream.readFully(RebufferingInputStream.java:60)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:402) 
> ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(ByteBufferUtil.java:377)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.io.sstable.metadata.StatsMetadata$StatsMetadataSerializer.deserialize(StatsMetadata.java:325)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.io.sstable.metadata.StatsMetadata$StatsMetadataSerializer.deserialize(StatsMetadata.java:231)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.io.sstable.metadata.MetadataSerializer.deserialize(MetadataSerializer.java:122)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.io.sstable.metadata.MetadataSerializer.deserialize(MetadataSerializer.java:93)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:488)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:396)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> org.apache.cassandra.io.sstable.format.SSTableReader$5.run(SSTableReader.java:561)
>  ~[apache-cassandra-3.11.0.jar:3.11.0]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_111]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> [na:1.8.0_111]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_111]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_111]
> at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:81)
>  [apache-cassandra-3.11.0.jar:3.11.0]
> {code}
> Files look like this:
> {code}
> -rw-r--r--. 1 cassandra cassandra 3899251 Aug  7 08:37 
> mc-6166-big-CompressionInfo.db
> -rw-r--r--. 1 cassandra cassandra 16874421686 Aug  7 08:37 mc-6166-big-Data.db
> -rw-r--r--. 1 cassandra cassandra  10 Aug  7 08:37 
> mc-6166-big-Digest.crc32
> -rw-r--r--. 1 cassandra cassandra 2930904 Aug  7 08:37 
> mc-6166-big-Filter.db
> -rw-r--r--. 1 cassandra cassandra   75880 Aug  7 08:37 
> mc-6166-big-Index.db
> -rw-r--r--. 1 cassandra cassandra   13762 Aug  7 08:37 
> mc-6166-big-Statistics.db
> -rw-r--r--. 1 cassandra cassandra  882008 Aug  7 08:37 
> mc-6166-big-Summary.db
> -rw-r--r--. 1 cassandra cassandra  92 Aug  7 08:37 mc-6166-big-TOC.txt
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-9375) setting timeouts to 1ms prevents startup

2017-08-23 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16138917#comment-16138917
 ] 

Jason Brown commented on CASSANDRA-9375:


[~jjirsa] Since this change will affect somebody's configured values (forces 
the value to the minimum acceptable), should we only apply this change to 
trunk? Meaning, will we muck with some operator's sense of the universe if we 
commit this to 3.0? I wonder if we should just commit to 3.11 and 4.0, or just 
4.0. wdyt?

At minimum, we should add an entry to NEWS.txt, whichever version this lands 
in.

> setting timeouts to 1ms prevents startup
> 
>
> Key: CASSANDRA-9375
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9375
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Brandon Williams
>Assignee: Varun Barala
>Priority: Trivial
>  Labels: patch
> Fix For: 2.1.x
>
> Attachments: CASSANDRA-9375_after_review, 
> CASSANDRA-9375_after_review_2.patch, CASSANDRA-9375.patch
>
>
> Granted, this is a nonsensical setting, but the error message makes it tough 
> to discern what's wrong:
> {noformat}
> ERROR 17:13:28,726 Exception encountered during startup
> java.lang.ExceptionInInitializerError
>  at 
> org.apache.cassandra.net.MessagingService.instance(MessagingService.java:310)
>  at 
> org.apache.cassandra.service.StorageService.(StorageService.java:233)
>  at 
> org.apache.cassandra.service.StorageService.(StorageService.java:141)
>  at 
> org.apache.cassandra.locator.DynamicEndpointSnitch.(DynamicEndpointSnitch.java:87)
>  at 
> org.apache.cassandra.locator.DynamicEndpointSnitch.(DynamicEndpointSnitch.java:63)
>  at 
> org.apache.cassandra.config.DatabaseDescriptor.createEndpointSnitch(DatabaseDescriptor.java:518)
>  at 
> org.apache.cassandra.config.DatabaseDescriptor.applyConfig(DatabaseDescriptor.java:350)
>  at 
> org.apache.cassandra.config.DatabaseDescriptor.(DatabaseDescriptor.java:112)
>  at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:213)
>  at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:567)
>  at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:656)
> Caused by: java.lang.IllegalArgumentException
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor.scheduleWithFixedDelay(ScheduledThreadPoolExecutor.java:586)
>  at 
> org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor.scheduleWithFixedDelay(DebuggableScheduledThreadPoolExecutor.java:64)
>  at org.apache.cassandra.utils.ExpiringMap.(ExpiringMap.java:103)
>  at 
> org.apache.cassandra.net.MessagingService.(MessagingService.java:360)
>  at org.apache.cassandra.net.MessagingService.(MessagingService.java:68)
>  at 
> org.apache.cassandra.net.MessagingService$MSHandle.(MessagingService.java:306)
>  ... 11 more
> java.lang.ExceptionInInitializerError
>  at 
> org.apache.cassandra.net.MessagingService.instance(MessagingService.java:310)
>  at 
> org.apache.cassandra.service.StorageService.(StorageService.java:233)
>  at 
> org.apache.cassandra.service.StorageService.(StorageService.java:141)
>  at 
> org.apache.cassandra.locator.DynamicEndpointSnitch.(DynamicEndpointSnitch.java:87)
>  at 
> org.apache.cassandra.locator.DynamicEndpointSnitch.(DynamicEndpointSnitch.java:63)
>  at 
> org.apache.cassandra.config.DatabaseDescriptor.createEndpointSnitch(DatabaseDescriptor.java:518)
>  at 
> org.apache.cassandra.config.DatabaseDescriptor.applyConfig(DatabaseDescriptor.java:350)
>  at 
> org.apache.cassandra.config.DatabaseDescriptor.(DatabaseDescriptor.java:112)
>  at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:213)
>  at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:567)
>  at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:656)
> Caused by: java.lang.IllegalArgumentException
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor.scheduleWithFixedDelay(ScheduledThreadPoolExecutor.java:586)
>  at 
> org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor.scheduleWithFixedDelay(DebuggableScheduledThreadPoolExecutor.java:64)
>  at org.apache.cassandra.utils.ExpiringMap.(ExpiringMap.java:103)
>  at 
> org.apache.cassandra.net.MessagingService.(MessagingService.java:360)
>  at org.apache.cassandra.net.MessagingService.(MessagingService.java:68)
>  at 
> org.apache.cassandra.net.MessagingService$MSHandle.(MessagingService.java:306)
>  ... 11 more
> Exception encountered during startup: null
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: 

[jira] [Comment Edited] (CASSANDRA-13578) mx4j configuration minor improvement

2017-08-23 Thread Jay Zhuang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16137858#comment-16137858
 ] 

Jay Zhuang edited comment on CASSANDRA-13578 at 8/23/17 6:16 PM:
-

[~jjordan] that makes sense. I updated the patch to make it backward 
compatible; would you please review?
I tested it with both the new way ({{MX4J_PORT=8081}}) and the older way 
({{MX4J_PORT=-Dmx4jport=8081}}); both work fine:
| branch | uTest |
| [13578-trunk|https://github.com/cooldoger/cassandra/tree/13578-trunk] | 
[!https://circleci.com/gh/cooldoger/cassandra/tree/13578-trunk.svg?style=svg!|https://circleci.com/gh/cooldoger/cassandra/tree/13578-trunk]
 |





was (Author: jay.zhuang):
[~jjordan] that makes sense. I updated the patch to make it backward 
compatible; would you please review?
I tested it with both the new way ({{MX4J_PORT=8081}}) and the old way 
({{MX4J_PORT=-Dmx4jport=8081}}); both work fine:
| branch | uTest |
| [13578-trunk|https://github.com/cooldoger/cassandra/tree/13578-trunk] | 
[circleci#80|https://circleci.com/gh/cooldoger/cassandra/80] |

> mx4j configuration minor improvement
> 
>
> Key: CASSANDRA-13578
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13578
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Jay Zhuang
>Assignee: Jay Zhuang
>Priority: Minor
> Fix For: 4.x
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Assigned] (CASSANDRA-11091) Insufficient disk space in memtable flush should trigger disk fail policy

2017-08-23 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani reassigned CASSANDRA-11091:
--

Assignee: (was: Dimitar Dimitrov)

> Insufficient disk space in memtable flush should trigger disk fail policy
> -
>
> Key: CASSANDRA-11091
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11091
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Richard Low
>
> If there's insufficient disk space to flush, 
> DiskAwareRunnable.getWriteDirectory throws and the flush fails. The 
> commitlogs then grow indefinitely because the latch is never counted down.
> This should be an FSError so the disk fail policy is triggered. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Assigned] (CASSANDRA-13692) CompactionAwareWriter.getWriteDirectory throws incompatible exceptions

2017-08-23 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani reassigned CASSANDRA-13692:
--

Assignee: Dimitar Dimitrov

> CompactionAwareWriter.getWriteDirectory throws incompatible exceptions
> --
>
> Key: CASSANDRA-13692
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13692
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Hao Zhong
>Assignee: Dimitar Dimitrov
>  Labels: lhf
>
> {{CompactionAwareWriter.getWriteDirectory}} throws a RuntimeException:
> {code}
> public Directories.DataDirectory getWriteDirectory(Iterable<SSTableReader> sstables, long estimatedWriteSize)
> {
>     File directory = null;
>     for (SSTableReader sstable : sstables)
>     {
>         if (directory == null)
>             directory = sstable.descriptor.directory;
>         if (!directory.equals(sstable.descriptor.directory))
>         {
>             logger.trace("All sstables not from the same disk - putting results in {}", directory);
>             break;
>         }
>     }
>     Directories.DataDirectory d = getDirectories().getDataDirectoryForFile(directory);
>     if (d != null)
>     {
>         long availableSpace = d.getAvailableSpace();
>         if (availableSpace < estimatedWriteSize)
>             throw new RuntimeException(String.format("Not enough space to write %s to %s (%s available)",
>                                                      FBUtilities.prettyPrintMemory(estimatedWriteSize),
>                                                      d.location,
>                                                      FBUtilities.prettyPrintMemory(availableSpace)));
>         logger.trace("putting compaction results in {}", directory);
>         return d;
>     }
>     d = getDirectories().getWriteableLocation(estimatedWriteSize);
>     if (d == null)
>         throw new RuntimeException(String.format("Not enough disk space to store %s",
>                                                  FBUtilities.prettyPrintMemory(estimatedWriteSize)));
>     return d;
> }
> {code}
> However, the thrown exception does not trigger the failure policy. 
> CASSANDRA-11448 fixed a similar problem. The buggy code is:
> {code}
> protected Directories.DataDirectory getWriteDirectory(long writeSize)
> {
>     Directories.DataDirectory directory = getDirectories().getWriteableLocation(writeSize);
>     if (directory == null)
>         throw new RuntimeException("Insufficient disk space to write " + writeSize + " bytes");
>     return directory;
> }
> {code}
> The fixed code is:
> {code}
> protected Directories.DataDirectory getWriteDirectory(long writeSize)
> {
>     Directories.DataDirectory directory = getDirectories().getWriteableLocation(writeSize);
>     if (directory == null)
>         throw new FSWriteError(new IOException("Insufficient disk space to write " + writeSize + " bytes"), "");
>     return directory;
> }
> {code}
> The fixed code throws an {{FSWriteError}} and triggers the failure policy.
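> A minimal sketch of the analogous fix for {{getWriteDirectory(Iterable, long)}} 
> above (hypothetical, mirroring the CASSANDRA-11448 pattern; not a committed 
> patch):
> {code}
> // Hypothetical sketch only: wrap both failures in FSWriteError so the disk
> // failure policy is triggered, as in the CASSANDRA-11448 fix above.
> if (availableSpace < estimatedWriteSize)
>     throw new FSWriteError(new IOException(String.format("Not enough space to write %s to %s (%s available)",
>                                                          FBUtilities.prettyPrintMemory(estimatedWriteSize),
>                                                          d.location,
>                                                          FBUtilities.prettyPrintMemory(availableSpace))), "");
> ...
> if (d == null)
>     throw new FSWriteError(new IOException(String.format("Not enough disk space to store %s",
>                                                          FBUtilities.prettyPrintMemory(estimatedWriteSize))), "");
> {code}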



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Assigned] (CASSANDRA-13176) DROP INDEX seemingly doesn't stop existing Index build

2017-08-23 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani reassigned CASSANDRA-13176:
--

Assignee: Dimitar Dimitrov

> DROP INDEX seemingly doesn't stop existing Index build
> --
>
> Key: CASSANDRA-13176
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13176
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction, CQL
> Environment: CentOS Linux, JRE 1.8
>Reporter: Soumya Sanyal
>Assignee: Dimitar Dimitrov
>
> There appears to be an edge case with secondary indexes (non SASI). I 
> originally issued a CREATE INDEX on a column, and upon listening to advice 
> from folks in the #cassandra room, decided against it, and issued a DROP 
> INDEX. 
> I didn't check the cluster overnight, but this morning, I found out that our 
> cluster CPU usage was pegged around 80%. Looking at compaction stats, I saw 
> that the index build was still ongoing. We had to restart the entire cluster 
> for the changes to take effect.
> Version: 3.9



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Assigned] (CASSANDRA-11091) Insufficient disk space in memtable flush should trigger disk fail policy

2017-08-23 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani reassigned CASSANDRA-11091:
--

Assignee: Dimitar Dimitrov

> Insufficient disk space in memtable flush should trigger disk fail policy
> -
>
> Key: CASSANDRA-11091
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11091
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Richard Low
>Assignee: Dimitar Dimitrov
>
> If there's insufficient disk space to flush, 
> DiskAwareRunnable.getWriteDirectory throws and the flush fails. The 
> commitlogs then grow indefinitely because the latch is never counted down.
> This should be an FSError so the disk fail policy is triggered. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13719) Potential AssertionError during ReadRepair of range tombstone and partition deletions

2017-08-23 Thread Branimir Lambov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Branimir Lambov updated CASSANDRA-13719:

Status: Ready to Commit  (was: Patch Available)

> Potential AssertionError during ReadRepair of range tombstone and partition 
> deletions
> -
>
> Key: CASSANDRA-13719
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13719
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Sylvain Lebresne
>Assignee: Sylvain Lebresne
> Fix For: 3.0.x, 3.11.x
>
>
> When reconciling range tombstones for read repair in 
> {{DataResolver.RepairMergeListener.MergeListener}}, when we check if there is 
> ongoing deletion repair for a source, we don't look for partition level 
> deletions which throw off the logic and can throw an AssertionError.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13719) Potential AssertionError during ReadRepair of range tombstone and partition deletions

2017-08-23 Thread Branimir Lambov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16138531#comment-16138531
 ] 

Branimir Lambov commented on CASSANDRA-13719:
-

LGTM

> Potential AssertionError during ReadRepair of range tombstone and partition 
> deletions
> -
>
> Key: CASSANDRA-13719
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13719
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Sylvain Lebresne
>Assignee: Sylvain Lebresne
> Fix For: 3.0.x, 3.11.x
>
>
> When reconciling range tombstones for read repair in 
> {{DataResolver.RepairMergeListener.MergeListener}}, when we check if there is 
> ongoing deletion repair for a source, we don't look for partition level 
> deletions which throw off the logic and can throw an AssertionError.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13622) Better config validation/documentation

2017-08-23 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16138526#comment-16138526
 ] 

ZhaoYang commented on CASSANDRA-13622:
--

[~KurtG] could you have a look at the final patch before committing? Thanks.

> Better config validation/documentation
> --
>
> Key: CASSANDRA-13622
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13622
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
>Reporter: Kurt Greaves
>Assignee: ZhaoYang
>Priority: Minor
>  Labels: lhf
> Fix For: 4.0
>
>
> There are a number of properties in the yaml that are "in_mb" but resolve to 
> bytes when calculated in {{DatabaseDescriptor.java}}, and are stored in ints. 
> This means their maximum value is 2047, as anything higher overflows the int 
> when converted to bytes (see the illustration after the list below).
> Where possible/reasonable we should convert these to longs and store them as 
> longs. If there is no reason for the value to ever be >2047 we should at 
> least document that as the max value, or better yet make it an error to set 
> it higher than that. Note that although it's bad practice to increase a lot 
> of them to such high values, there may be cases where it is necessary, and in 
> those cases we should handle it appropriately rather than overflowing and 
> surprising the user. That is, causing it to break but not in the way the user 
> expected it to :)
> Following are functions that currently could be at risk of the above:
> {code:java|title=DatabaseDescriptor.java}
> getThriftFramedTransportSize()
> getMaxValueSize()
> getCompactionLargePartitionWarningThreshold()
> getCommitLogSegmentSize()
> getNativeTransportMaxFrameSize()
> # These are in KB so max value of 2096128
> getBatchSizeWarnThreshold()
> getColumnIndexSize()
> getColumnIndexCacheSize()
> getMaxMutationSize()
> {code}
> Note we may not actually need to fix all of these, and there may be more. 
> This was just from a rough scan over the code.
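> A self-contained illustration of the overflow (hypothetical field names, not 
> the actual {{DatabaseDescriptor}} code):
> {code:java}
> public class MbToBytes
> {
>     public static void main(String[] args)
>     {
>         int sizeInMb = 2048;                    // hypothetical yaml value in MB
>         int asInt = sizeInMb * 1024 * 1024;     // 2^31 overflows int: -2147483648
>         long asLong = sizeInMb * 1024L * 1024L; // widened to long: 2147483648
>         System.out.println(asInt + " vs " + asLong);
>     }
> }
> {code}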



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13640) CQLSH error when using 'login' to switch users

2017-08-23 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16138494#comment-16138494
 ] 

Andrés de la Peña commented on CASSANDRA-13640:
---

Right. Here are the patches for 3.0, 3.11, trunk and dtest:
||[3.0|https://github.com/apache/cassandra/compare/cassandra-3.0...adelapena:13640-3.0]||[3.11|https://github.com/apache/cassandra/compare/cassandra-3.11...adelapena:13640-3.11]||[trunk|https://github.com/apache/cassandra/compare/trunk...adelapena:13640-trunk]||[dtest|https://github.com/apache/cassandra-dtest/compare/master...adelapena:13640]||

Thanks for the review.

> CQLSH error when using 'login' to switch users
> --
>
> Key: CASSANDRA-13640
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13640
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Andrés de la Peña
>Assignee: Andrés de la Peña
>Priority: Minor
> Fix For: 3.0.x
>
>
> Using {{PasswordAuthenticator}} and {{CassandraAuthorizer}}:
> {code}
> bin/cqlsh -u cassandra -p cassandra
> Connected to Test Cluster at 127.0.0.1:9042.
> [cqlsh 5.0.1 | Cassandra 3.0.14-SNAPSHOT | CQL spec 3.4.0 | Native protocol 
> v4]
> Use HELP for help.
> cassandra@cqlsh> create role super with superuser = true and password = 'p' 
> and login = true;
> cassandra@cqlsh> login super;
> Password:
> super@cqlsh> list roles;
> 'Row' object has no attribute 'values'
> {code}
> When we initialize the Shell, we configure certain settings on the session 
> object such as
> {code}
> self.session.default_timeout = request_timeout
> self.session.row_factory = ordered_dict_factory
> self.session.default_consistency_level = cassandra.ConsistencyLevel.ONE
> {code}
> However, once we perform a LOGIN cmd, which calls do_login(..), we create a 
> new cluster/session object but actually never set those settings on the new 
> session.
> It isn't failing on 3.x. 
> As a workaround, it is possible to logout and log back in and things work 
> correctly.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13737) Node start can fail if the base table of a materialized view is not found

2017-08-23 Thread Jeremiah Jordan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremiah Jordan updated CASSANDRA-13737:

Fix Version/s: (was: 3.11.x)
   (was: 4.x)
   (was: 3.0.x)
   4.0
   3.11.1
   3.0.15

> Node start can fail if the base table of a materialized view is not found
> -
>
> Key: CASSANDRA-13737
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13737
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata, Materialized Views
>Reporter: Andrés de la Peña
>Assignee: Andrés de la Peña
> Fix For: 3.0.15, 3.11.1, 4.0
>
>
> Node start can fail if the base table of a materialized view is not found, 
> which is something that can happen under certain circumstances. There is a 
> dtest reproducing the problem:
> {code}
> cluster = self.cluster
> cluster.populate(3)
> cluster.start()
> node1, node2, node3 = self.cluster.nodelist()
> session = self.patient_cql_connection(node1, 
> consistency_level=ConsistencyLevel.QUORUM)
> create_ks(session, 'ks', 3)
> session.execute('CREATE TABLE users (username varchar PRIMARY KEY, state 
> varchar)')
> node3.stop(wait_other_notice=True)
> # create a materialized view only in nodes 1 and 2
> session.execute(('CREATE MATERIALIZED VIEW users_by_state AS '
>  'SELECT * FROM users WHERE state IS NOT NULL AND username IS 
> NOT NULL '
>  'PRIMARY KEY (state, username)'))
> node1.stop(wait_other_notice=True)
> node2.stop(wait_other_notice=True)
> # drop the base table only in node 3
> node3.start(wait_for_binary_proto=True)
> session = self.patient_cql_connection(node3, 
> consistency_level=ConsistencyLevel.QUORUM)
> session.execute('DROP TABLE ks.users')
> cluster.stop()
> cluster.start()  # Fails
> {code}
> This is the error during node start:
> {code}
> java.lang.IllegalArgumentException: Unknown CF 
> 958ebc30-76e4-11e7-869a-9d8367a71c76
>   at 
> org.apache.cassandra.db.Keyspace.getColumnFamilyStore(Keyspace.java:215) 
> ~[main/:na]
>   at 
> org.apache.cassandra.db.view.ViewManager.addView(ViewManager.java:143) 
> ~[main/:na]
>   at 
> org.apache.cassandra.db.view.ViewManager.reload(ViewManager.java:113) 
> ~[main/:na]
>   at org.apache.cassandra.schema.Schema.alterKeyspace(Schema.java:618) 
> ~[main/:na]
>   at org.apache.cassandra.schema.Schema.lambda$merge$18(Schema.java:591) 
> ~[main/:na]
>   at 
> java.util.Collections$UnmodifiableMap$UnmodifiableEntrySet.lambda$entryConsumer$0(Collections.java:1575)
>  ~[na:1.8.0_131]
>   at java.util.HashMap$EntrySet.forEach(HashMap.java:1043) ~[na:1.8.0_131]
>   at 
> java.util.Collections$UnmodifiableMap$UnmodifiableEntrySet.forEach(Collections.java:1580)
>  ~[na:1.8.0_131]
>   at org.apache.cassandra.schema.Schema.merge(Schema.java:591) ~[main/:na]
>   at 
> org.apache.cassandra.schema.Schema.mergeAndAnnounceVersion(Schema.java:564) 
> ~[main/:na]
>   at 
> org.apache.cassandra.schema.MigrationTask$1.response(MigrationTask.java:89) 
> ~[main/:na]
>   at 
> org.apache.cassandra.net.ResponseVerbHandler.doVerb(ResponseVerbHandler.java:53)
>  ~[main/:na]
>   at 
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:72) 
> ~[main/:na]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_131]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_131]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_131]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_131]
>   at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:81)
>  [main/:na]
>   at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_131]
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-12373) 3.0 breaks CQL compatibility with super columns families

2017-08-23 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16138397#comment-16138397
 ] 

Jeremiah Jordan commented on CASSANDRA-12373:
-

bq. That's where CASSANDRA-10857 comes in, as its goal is exactly to allow 
exposing that internal layout and this will work for all compact tables, SCF 
included (and yes, all the data will be exposed this way).

(y)

> 3.0 breaks CQL compatibility with super columns families
> 
>
> Key: CASSANDRA-12373
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12373
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Sylvain Lebresne
>Assignee: Alex Petrov
> Fix For: 3.0.x, 3.11.x
>
>
> This is a follow-up to CASSANDRA-12335 to fix the CQL side of super column 
> compatibility.
> The details and a proposed solution can be found in the comments of 
> CASSANDRA-12335 but the crux of the issue is that super column famillies show 
> up differently in CQL in 3.0.x/3.x compared to 2.x, hence breaking backward 
> compatibilty.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13630) support large internode messages with netty

2017-08-23 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16138399#comment-16138399
 ] 

Ariel Weisberg commented on CASSANDRA-13630:


bq. We do not have 1x amplification in pre-4.0 code; 
You always had to have the entire message in memory as a POJO so I count that 
as 1x. We don't lazily materialize result messages or message contents with the 
exception of paging.

bq. The cost of the amplification is hidden by that reusable backing buffer, 
but it's still there.
64k is a constant though. I'm not concerned with CPU time and performance; I am 
concerned about OOMing. You can get predictable behavior for a cluster once all 
connections are provisioned. With CASSANDRA-8457 the memory utilization is no 
longer constant; it's linear in message size multiplied by fan-out.
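
To make the constant-vs-linear point concrete (hypothetical numbers, 
illustration only):
{code:java}
public class Amplification
{
    public static void main(String[] args)
    {
        long messageBytes = 16L << 20; // one 16 MiB message
        int fanOut = 64;               // peers the message is written to
        // old model: a constant 64 KiB backing buffer per connection
        System.out.println("constant buffers: " + (64L * 1024 * fanOut) + " bytes");
        // buffering whole messages: grows with message size times fan-out
        System.out.println("per-message:      " + (messageBytes * fanOut) + " bytes");
    }
}
{code}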

> support large internode messages with netty
> ---
>
> Key: CASSANDRA-13630
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13630
> Project: Cassandra
>  Issue Type: Task
>  Components: Streaming and Messaging
>Reporter: Jason Brown
>Assignee: Jason Brown
> Fix For: 4.0
>
>
> As part of CASSANDRA-8457, we decided to punt on large messages to reduce the 
> scope of that ticket. However, we still need that functionality to ship a 
> correctly operating internode messaging subsystem.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13776) Adding a field to a UDT can corrupt the tables using it

2017-08-23 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16138392#comment-16138392
 ] 

Robert Stupp commented on CASSANDRA-13776:
--

Nice work, Benjamin!
The new unit tests cover the issue.

Just a few things:
* We should sort an array (or ArrayList) in 
{{SerializationHeader.orderByDescendingGeneration()}} and use static 
comparator instances (see the sketch below)
* Checks on component count are missing for tuples and composites in 
{{AbstractTypeVersionComparator}}
* Tests should cover all type "types" in {{AbstractTypeVersionComparatorTest}}
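
A minimal sketch of the first point, assuming a generation accessor 
(illustrative only, not the actual {{SerializationHeader}} code):
{code:java}
import java.util.Arrays;
import java.util.Comparator;

class Versioned
{
    final int generation;
    Versioned(int generation) { this.generation = generation; }
}

class Sketch
{
    // one shared, stateless comparator instead of allocating a new one per call
    private static final Comparator<Versioned> DESC_GENERATION =
        Comparator.comparingInt((Versioned v) -> v.generation).reversed();

    static Versioned[] orderByDescendingGeneration(Versioned[] input)
    {
        Versioned[] sorted = Arrays.copyOf(input, input.length);
        Arrays.sort(sorted, DESC_GENERATION); // sorts the array, no boxing into a List
        return sorted;
    }
}
{code}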

Pushed some commits 
[here|https://github.com/snazy/cassandra/commits/13776-3.0-review].

+1 with the above changes. CI (internal one) looks good.


> Adding a field to a UDT can corrupt the tables using it
> -
>
> Key: CASSANDRA-13776
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13776
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
>Priority: Critical
>
> Adding a field to a UDT which is used as a {{Set}} element or as a {{Map}} 
> element can corrupt the table.
> The problem can be reproduced using the following test case:
> {code}
> @Test
> public void testReadAfterAlteringUserTypeNestedWithinSet() throws Throwable
> {
>     String ut1 = createType("CREATE TYPE %s (a int)");
>     String columnType = KEYSPACE + "." + ut1;
>     try
>     {
>         createTable("CREATE TABLE %s (x int PRIMARY KEY, y set<frozen<" + columnType + ">>)");
>         disableCompaction();
>         execute("INSERT INTO %s (x, y) VALUES(1, ?)", set(userType(1), userType(2)));
>         assertRows(execute("SELECT * FROM %s"), row(1, set(userType(1), userType(2))));
>         flush();
>         assertRows(execute("SELECT * FROM %s WHERE x = 1"),
>                    row(1, set(userType(1), userType(2))));
>         execute("ALTER TYPE " + KEYSPACE + "." + ut1 + " ADD b int");
>         execute("UPDATE %s SET y = y + ? WHERE x = 1",
>                 set(userType(1, 1), userType(1, 2), userType(2, 1)));
>         flush();
>         assertRows(execute("SELECT * FROM %s WHERE x = 1"),
>                    row(1, set(userType(1),
>                               userType(1, 1),
>                               userType(1, 2),
>                               userType(2),
>                               userType(2, 1))));
>         compact();
>         assertRows(execute("SELECT * FROM %s WHERE x = 1"),
>                    row(1, set(userType(1),
>                               userType(1, 1),
>                               userType(1, 2),
>                               userType(2),
>                               userType(2, 1))));
>     }
>     finally
>     {
>         enableCompaction();
>     }
> }
> {code} 
> There are in fact 2 problems:
> # When the {{sets}} from the 2 versions are merged, the {{ColumnDefinition}} 
> picked up can be the older one, in which case sorting the tuples may lead to 
> an {{IndexOutOfBoundsException}}.
> # During compaction, the old column definition can be the one kept for the 
> SSTable metadata. If that is the case, the SSTable will not be readable any 
> more and will be marked as {{corrupted}}.
> If one of the tables using the type has a Materialized View attached to it, 
> the MV updates can also fail with {{IndexOutOfBoundsException}}.
> This problem can be reproduced using the following test:
> {code}
> @Test
> public void testAlteringUserTypeNestedWithinSetWithView() throws Throwable
> {
>     String columnType = typeWithKs(createType("CREATE TYPE %s (a int)"));
>     createTable("CREATE TABLE %s (pk int, c int, v int, s set<frozen<" + columnType + ">>, PRIMARY KEY (pk, c))");
>     execute("CREATE MATERIALIZED VIEW " + keyspace() + ".view1 AS SELECT c, pk, v FROM %s WHERE pk IS NOT NULL AND c IS NOT NULL AND v IS NOT NULL PRIMARY KEY (c, pk)");
>     execute("INSERT INTO %s (pk, c, v, s) VALUES(?, ?, ?, ?)", 1, 1, 1, set(userType(1), userType(2)));
>     flush();
>     execute("ALTER TYPE " + columnType + " ADD b int");
>     execute("UPDATE %s SET s = s + ?, v = ? WHERE pk = ? AND c = ?",
>             set(userType(1, 1), userType(1, 2), userType(2, 1)), 2, 1, 1);
>     assertRows(execute("SELECT * FROM %s WHERE pk = ? AND c = ?", 1, 1),
>                row(1, 1, 2, set(userType(1),
>                                 userType(1, 1),
>                                 userType(1, 2),
>                                 userType(2),
> 

[jira] [Updated] (CASSANDRA-13789) Reduce memory copies and object creations when acting on ByteBufs

2017-08-23 Thread Norman Maurer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Norman Maurer updated CASSANDRA-13789:
--
Assignee: Norman Maurer
  Status: Patch Available  (was: Open)

> Reduce memory copies and object creations when acting on ByteBufs
> --
>
> Key: CASSANDRA-13789
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13789
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Norman Maurer
>Assignee: Norman Maurer
> Attachments: 
> 0001-Reduce-memory-copies-and-object-creations-when-actin.patch
>
>
> There are multiple "low-hanging fruits" when it comes to reducing memory copies 
> and object allocations when acting on ByteBufs.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13776) Adding a field to a UDT can corrupt the tables using it

2017-08-23 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-13776:
-
Status: Ready to Commit  (was: Patch Available)

> Adding a field to a UDT can corrupt the tables using it
> -
>
> Key: CASSANDRA-13776
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13776
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
>Priority: Critical
>
> Adding a field to a UDT which is used as a {{Set}} element or as a {{Map}} 
> element can corrupt the table.
> The problem can be reproduced using the following test case:
> {code}
> @Test
> public void testReadAfterAlteringUserTypeNestedWithinSet() throws Throwable
> {
>     String ut1 = createType("CREATE TYPE %s (a int)");
>     String columnType = KEYSPACE + "." + ut1;
>     try
>     {
>         createTable("CREATE TABLE %s (x int PRIMARY KEY, y set<frozen<" + columnType + ">>)");
>         disableCompaction();
>         execute("INSERT INTO %s (x, y) VALUES(1, ?)", set(userType(1), userType(2)));
>         assertRows(execute("SELECT * FROM %s"), row(1, set(userType(1), userType(2))));
>         flush();
>         assertRows(execute("SELECT * FROM %s WHERE x = 1"),
>                    row(1, set(userType(1), userType(2))));
>         execute("ALTER TYPE " + KEYSPACE + "." + ut1 + " ADD b int");
>         execute("UPDATE %s SET y = y + ? WHERE x = 1",
>                 set(userType(1, 1), userType(1, 2), userType(2, 1)));
>         flush();
>         assertRows(execute("SELECT * FROM %s WHERE x = 1"),
>                    row(1, set(userType(1),
>                               userType(1, 1),
>                               userType(1, 2),
>                               userType(2),
>                               userType(2, 1))));
>         compact();
>         assertRows(execute("SELECT * FROM %s WHERE x = 1"),
>                    row(1, set(userType(1),
>                               userType(1, 1),
>                               userType(1, 2),
>                               userType(2),
>                               userType(2, 1))));
>     }
>     finally
>     {
>         enableCompaction();
>     }
> }
> {code} 
> There are in fact 2 problems:
> # When the {{sets}} from the 2 versions are merged, the {{ColumnDefinition}} 
> picked up can be the older one, in which case sorting the tuples may lead to 
> an {{IndexOutOfBoundsException}}.
> # During compaction, the old column definition can be the one kept for the 
> SSTable metadata. If that is the case, the SSTable will not be readable any 
> more and will be marked as {{corrupted}}.
> If one of the tables using the type has a Materialized View attached to it, 
> the MV updates can also fail with {{IndexOutOfBoundsException}}.
> This problem can be reproduced using the following test:
> {code}
> @Test
> public void testAlteringUserTypeNestedWithinSetWithView() throws Throwable
> {
>     String columnType = typeWithKs(createType("CREATE TYPE %s (a int)"));
>     createTable("CREATE TABLE %s (pk int, c int, v int, s set<frozen<" + columnType + ">>, PRIMARY KEY (pk, c))");
>     execute("CREATE MATERIALIZED VIEW " + keyspace() + ".view1 AS SELECT c, pk, v FROM %s WHERE pk IS NOT NULL AND c IS NOT NULL AND v IS NOT NULL PRIMARY KEY (c, pk)");
>     execute("INSERT INTO %s (pk, c, v, s) VALUES(?, ?, ?, ?)", 1, 1, 1, set(userType(1), userType(2)));
>     flush();
>     execute("ALTER TYPE " + columnType + " ADD b int");
>     execute("UPDATE %s SET s = s + ?, v = ? WHERE pk = ? AND c = ?",
>             set(userType(1, 1), userType(1, 2), userType(2, 1)), 2, 1, 1);
>     assertRows(execute("SELECT * FROM %s WHERE pk = ? AND c = ?", 1, 1),
>                row(1, 1, 2, set(userType(1),
>                                 userType(1, 1),
>                                 userType(1, 2),
>                                 userType(2),
>                                 userType(2, 1))));
> }
> {code}  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-13789) Reduce memory copies and object creations when acting on ByteBufs

2017-08-23 Thread Norman Maurer (JIRA)
Norman Maurer created CASSANDRA-13789:
-

 Summary: Reduce memory copies and object creations when acting on  
ByteBufs
 Key: CASSANDRA-13789
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13789
 Project: Cassandra
  Issue Type: Improvement
Reporter: Norman Maurer
 Attachments: 
0001-Reduce-memory-copies-and-object-creations-when-actin.patch

There are multiple "low-hanging fruits" when it comes to reducing memory copies 
and object allocations when acting on ByteBufs.
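
For illustration, one typical pattern of this kind (hypothetical example, not 
taken from the attached patch):
{code:java}
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

public class SliceVsCopy
{
    public static void main(String[] args)
    {
        ByteBuf buf = Unpooled.wrappedBuffer(new byte[] { 1, 2, 3, 4, 5, 6 });

        // copies 4 bytes into a fresh array: extra allocation plus a memcpy
        byte[] copy = new byte[4];
        buf.getBytes(0, copy);

        // zero-copy view over the same backing memory
        ByteBuf slice = buf.slice(0, 4);
        System.out.println(slice.getByte(0)); // 1, without copying
    }
}
{code}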



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-12373) 3.0 breaks CQL compatibility with super columns families

2017-08-23 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16138350#comment-16138350
 ] 

Sylvain Lebresne commented on CASSANDRA-12373:
--

bq. but in 3.x we started exposing the defined columns as "static" and the 
undefined ones as column1/value.

That's not exactly true. It's how we handle CS tables internally, but it's not 
how they are exposed. That's where CASSANDRA-10857 comes in, as its goal is 
exactly to allow exposing that internal layout and this will work for all 
compact tables, SCF included (and yes, all the data will be exposed this way). 

> 3.0 breaks CQL compatibility with super columns families
> 
>
> Key: CASSANDRA-12373
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12373
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Sylvain Lebresne
>Assignee: Alex Petrov
> Fix For: 3.0.x, 3.11.x
>
>
> This is a follow-up to CASSANDRA-12335 to fix the CQL side of super column 
> compatibility.
> The details and a proposed solution can be found in the comments of 
> CASSANDRA-12335 but the crux of the issue is that super column famillies show 
> up differently in CQL in 3.0.x/3.x compared to 2.x, hence breaking backward 
> compatibilty.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Resolved] (CASSANDRA-10735) Support netty openssl (netty-tcnative) for client encryption

2017-08-23 Thread Jason Brown (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Brown resolved CASSANDRA-10735.
-
Resolution: Done

Changes to SSL were required for CASSANDRA-8457, so this got pulled in, rather 
simply, when I was fixing dtests for CASSANDRA-8457.

> Support netty openssl (netty-tcnative) for client encryption
> 
>
> Key: CASSANDRA-10735
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10735
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Andy Tolbert
>Assignee: Jason Brown
> Fix For: 4.0
>
> Attachments: nettysslbench.png, nettysslbench_small.png, 
> nettyssl-bench.tgz, netty-ssl-trunk.tgz, sslbench12-03.png
>
>
> The java-driver recently added support for using netty openssl via 
> [netty-tcnative|http://netty.io/wiki/forked-tomcat-native.html] in 
> [JAVA-841|https://datastax-oss.atlassian.net/browse/JAVA-841], this shows a 
> very measurable improvement (numbers incoming on that ticket). It seems 
> likely that this can offer improvement if implemented C* side as well.
> Since netty-tcnative has platform specific requirements, this should not be 
> made the default, but rather be an option that one can use.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13719) Potential AssertionError during ReadRepair of range tombstone and partition deletions

2017-08-23 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16138323#comment-16138323
 ] 

Sylvain Lebresne commented on CASSANDRA-13719:
--

You are right, those were genuine problems, thanks. I force-pushed an update to 
the same branches to fix them and added an additional unit test for this 
problem.

> Potential AssertionError during ReadRepair of range tombstone and partition 
> deletions
> -
>
> Key: CASSANDRA-13719
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13719
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Sylvain Lebresne
>Assignee: Sylvain Lebresne
> Fix For: 3.0.x, 3.11.x
>
>
> When reconciling range tombstones for read repair in 
> {{DataResolver.RepairMergeListener.MergeListener}}, when we check if there is 
> ongoing deletion repair for a source, we don't look for partition level 
> deletions which throw off the logic and can throw an AssertionError.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Resolved] (CASSANDRA-10734) Support openssl for internode encryption.

2017-08-23 Thread Jason Brown (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Brown resolved CASSANDRA-10734.
-
   Resolution: Done
Fix Version/s: (was: 4.x)
   4.0

Done as part of CASSANDRA-8457

> Support openssl for internode encryption.
> -
>
> Key: CASSANDRA-10734
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10734
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Andy Tolbert
>Assignee: Jason Brown
> Fix For: 4.0
>
>
> It could be a nice improvement to support using openssl for SSL internode 
> encryption.
> This should not be made the default, but rather be an option that one can use.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13630) support large internode messages with netty

2017-08-23 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16138305#comment-16138305
 ] 

Jason Brown commented on CASSANDRA-13630:
-

bq. I thought worst case memory amplification from this NIO approach was 2x 
message size which is worse than our current 1x message size, but it's not, 
it's cluster size * message size if a message is fanned out to all nodes in the 
cluster. 

We do not have 1x amplification in pre-4.0 code; it's always been messageSize 
times the number of target peers. In `OutboundTcpConnection` we wrote into a 
[backing buffer of 
64k|https://github.com/apache/cassandra/blob/cassandra-3.11/src/java/org/apache/cassandra/net/OutboundTcpConnection.java#L457]
 for each outbound peer and flushed when the buffer filled up (see 
`BufferedDataOutputStreamPlus`). The cost of the amplification is hidden by 
that reusable backing buffer, but it's still there.

With CASSANDRA-8457, everything gets its own distinct buffer, allocated once 
per-message, which is serialized to and then flushed. With this ticket we'll 
move back to the previous model where there's a backing buffer that's used for 
aggregating small messages or chunks of larger messages. That buffer, of 
course, is not reused, but that's because of the asynchronous nature of NIO vs 
blocking IO. 

(FTR, I have thought about moving serialization outside of the "outbound 
connections" (either `OutboundTcpConnection` or netty handlers) - where we 
serialize before sending to the outbound channels and send a slice of a buffer 
to those channels. That way you only serialize once (less repetitive CPU work), 
as well as potentially consume less memory. But I think that's a different 
ticket.)

bq. I really wonder if that be a shared pool of threads and we size it 
generously

Yeah, I thought about this. The problem is that because the deserialization is 
blocking, you basically need one thread in the pool for each "blocker"; else 
you starve some deserialization activities. Hence, I just used a background 
thread. Not my favorite choice, but I'm not sure a "well-sized" pool will be 
sufficient. 

Reading over your comments on the code itself this morning.


> support large internode messages with netty
> ---
>
> Key: CASSANDRA-13630
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13630
> Project: Cassandra
>  Issue Type: Task
>  Components: Streaming and Messaging
>Reporter: Jason Brown
>Assignee: Jason Brown
> Fix For: 4.0
>
>
> As part of CASSANDRA-8457, we decided to punt on large messages to reduce the 
> scope of that ticket. However, we still need that functionality to ship a 
> correctly operating internode messaging subsystem.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13787) RangeTombstoneMarker and PartitionDeletion are not properly included in MV

2017-08-23 Thread ZhaoYang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhaoYang updated CASSANDRA-13787:
-
Status: Patch Available  (was: In Progress)

> RangeTombstoneMarker and PartitionDeletion are not properly included in MV
> 
>
> Key: CASSANDRA-13787
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13787
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths, Materialized Views
>Reporter: ZhaoYang
>Assignee: ZhaoYang
>
> Found two problems related to MV tombstones. 
> 1. A range-tombstone marker is ignored after shadowing the first row, so 
> subsequent base rows are not shadowed in TableViews.
> If the range tombstone was not flushed, it was used as a deleted row to 
> shadow new updates. That works correctly.
> After the range tombstone was flushed, it was used as a RangeTombstoneMarker 
> and was skipped after shadowing the first update. The bound of the 
> RangeTombstoneMarker seems wrong: it contained a full clustering, but it 
> should contain a range, or there should be multiple RangeTombstoneMarkers 
> for the multiple slices (aka. the new updates).
> 2. The partition tombstone is not used when there is no existing live data, 
> which will resurrect deleted cells. This was found in 11500 and included in 
> that patch.
> In order not to make the 11500 patch more complicated, I will try to fix the 
> range/partition tombstone issue here.
> {code:title=Tests to reproduce}
> @Test
> public void testExistingRangeTombstoneWithFlush() throws Throwable
> {
> testExistingRangeTombstone(true);
> }
> @Test
> public void testExistingRangeTombstoneWithoutFlush() throws Throwable
> {
> testExistingRangeTombstone(false);
> }
> public void testExistingRangeTombstone(boolean flush) throws Throwable
> {
> createTable("CREATE TABLE %s (k1 int, c1 int, c2 int, v1 int, v2 int, 
> PRIMARY KEY (k1, c1, c2))");
> execute("USE " + keyspace());
> executeNet(protocolVersion, "USE " + keyspace());
> createView("view1",
>"CREATE MATERIALIZED VIEW view1 AS SELECT * FROM %%s WHERE 
> k1 IS NOT NULL AND c1 IS NOT NULL AND c2 IS NOT NULL PRIMARY KEY (k1, c2, 
> c1)");
> updateView("DELETE FROM %s USING TIMESTAMP 10 WHERE k1 = 1 and c1=1");
> if (flush)
> 
> Keyspace.open(keyspace()).getColumnFamilyStore(currentTable()).forceBlockingFlush();
> String table = KEYSPACE + "." + currentTable();
> updateView("BEGIN BATCH " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 0, 
> 0, 0, 0) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 0, 
> 1, 0, 1) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 1, 
> 0, 1, 0) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 1, 
> 1, 1, 1) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 1, 
> 2, 1, 2) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 1, 
> 3, 1, 3) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 2, 
> 0, 2, 0) USING TIMESTAMP 5; " +
> "APPLY BATCH");
> assertRowsIgnoringOrder(execute("select * from %s"),
> row(1, 0, 0, 0, 0),
> row(1, 0, 1, 0, 1),
> row(1, 2, 0, 2, 0));
> assertRowsIgnoringOrder(execute("select k1,c1,c2,v1,v2 from view1"),
> row(1, 0, 0, 0, 0),
> row(1, 0, 1, 0, 1),
> row(1, 2, 0, 2, 0));
> }
> @Test
> public void testRangeDeletionWithFlush() throws Throwable
> {
> testExistingParitionDeletion(true);
> }
> @Test
> public void testRangeDeletionWithoutFlush() throws Throwable
> {
> testExistingParitionDeletion(false);
> }
> public void testExistingParitionDeletion(boolean flush) throws Throwable
> {
> // for partition range deletion, need to know that existing row is 
> shadowed instead of not existed.
> createTable("CREATE TABLE %s (a int, b int, c int, d int, PRIMARY KEY 
> (a))");
> execute("USE " + keyspace());
> executeNet(protocolVersion, "USE " + keyspace());
> createView("mv_test1",
>"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a 
> IS NOT NULL AND b IS NOT NULL PRIMARY KEY (a, b)");
> Keyspace ks = Keyspace.open(keyspace());
> ks.getColumnFamilyStore("mv_test1").disableAutoCompaction();
> 

[jira] [Updated] (CASSANDRA-13787) RangeTombstoneMarker and PartitionDeletion are not properly included in MV

2017-08-23 Thread ZhaoYang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhaoYang updated CASSANDRA-13787:
-
Reviewer: Sylvain Lebresne

> RangeTombstoneMarker and PartitionDeletion are not properly included in MV
> 
>
> Key: CASSANDRA-13787
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13787
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths, Materialized Views
>Reporter: ZhaoYang
>Assignee: ZhaoYang
>
> Found two problems related to MV tombstones. 
> 1. A range-tombstone marker is ignored after shadowing the first row, so 
> subsequent base rows are not shadowed in TableViews.
> If the range tombstone was not flushed, it was used as a deleted row to 
> shadow new updates. That works correctly.
> After the range tombstone was flushed, it was used as a RangeTombstoneMarker 
> and was skipped after shadowing the first update. The bound of the 
> RangeTombstoneMarker seems wrong: it contained a full clustering, but it 
> should contain a range, or there should be multiple RangeTombstoneMarkers 
> for the multiple slices (aka. the new updates).
> 2. The partition tombstone is not used when there is no existing live data, 
> which will resurrect deleted cells. This was found in 11500 and included in 
> that patch.
> In order not to make the 11500 patch more complicated, I will try to fix the 
> range/partition tombstone issue here.
> {code:title=Tests to reproduce}
> @Test
> public void testExistingRangeTombstoneWithFlush() throws Throwable
> {
> testExistingRangeTombstone(true);
> }
> @Test
> public void testExistingRangeTombstoneWithoutFlush() throws Throwable
> {
> testExistingRangeTombstone(false);
> }
> public void testExistingRangeTombstone(boolean flush) throws Throwable
> {
> createTable("CREATE TABLE %s (k1 int, c1 int, c2 int, v1 int, v2 int, 
> PRIMARY KEY (k1, c1, c2))");
> execute("USE " + keyspace());
> executeNet(protocolVersion, "USE " + keyspace());
> createView("view1",
>"CREATE MATERIALIZED VIEW view1 AS SELECT * FROM %%s WHERE 
> k1 IS NOT NULL AND c1 IS NOT NULL AND c2 IS NOT NULL PRIMARY KEY (k1, c2, 
> c1)");
> updateView("DELETE FROM %s USING TIMESTAMP 10 WHERE k1 = 1 and c1=1");
> if (flush)
> 
> Keyspace.open(keyspace()).getColumnFamilyStore(currentTable()).forceBlockingFlush();
> String table = KEYSPACE + "." + currentTable();
> updateView("BEGIN BATCH " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 0, 
> 0, 0, 0) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 0, 
> 1, 0, 1) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 1, 
> 0, 1, 0) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 1, 
> 1, 1, 1) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 1, 
> 2, 1, 2) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 1, 
> 3, 1, 3) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 2, 
> 0, 2, 0) USING TIMESTAMP 5; " +
> "APPLY BATCH");
> assertRowsIgnoringOrder(execute("select * from %s"),
> row(1, 0, 0, 0, 0),
> row(1, 0, 1, 0, 1),
> row(1, 2, 0, 2, 0));
> assertRowsIgnoringOrder(execute("select k1,c1,c2,v1,v2 from view1"),
> row(1, 0, 0, 0, 0),
> row(1, 0, 1, 0, 1),
> row(1, 2, 0, 2, 0));
> }
> @Test
> public void testRangeDeletionWithFlush() throws Throwable
> {
> testExistingParitionDeletion(true);
> }
> @Test
> public void testRangeDeletionWithoutFlush() throws Throwable
> {
> testExistingParitionDeletion(false);
> }
> public void testExistingParitionDeletion(boolean flush) throws Throwable
> {
> // for partition range deletion, need to know that existing row is 
> shadowed instead of not existed.
> createTable("CREATE TABLE %s (a int, b int, c int, d int, PRIMARY KEY 
> (a))");
> execute("USE " + keyspace());
> executeNet(protocolVersion, "USE " + keyspace());
> createView("mv_test1",
>"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a 
> IS NOT NULL AND b IS NOT NULL PRIMARY KEY (a, b)");
> Keyspace ks = Keyspace.open(keyspace());
> ks.getColumnFamilyStore("mv_test1").disableAutoCompaction();
> execute("INSERT INTO 

[jira] [Comment Edited] (CASSANDRA-13787) RangeTombstoneMarker and PartitionDeletion are not properly included in MV

2017-08-23 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16138159#comment-16138159
 ] 

ZhaoYang edited comment on CASSANDRA-13787 at 8/23/17 11:27 AM:


The first problem is not exactly a MV bug; it's an SSTableReader issue with 
handling multiple slices, which is probably only exercised by MV. The existing 
code consumes the {{original RangeTombstoneMarker (eg. Deletion@c1=1@10)}} for 
the first slice to generate a RangeTombstoneMarker (eg. Deletion@c1=1=0@10) 
with the first slice's clustering. So no tombstones are generated for the 
other slices. The fix is to change the ClusteringPrefix comparison, to make 
sure tombstones are generated for each slice within the range of the original 
RangeTombstoneMarker.

The second problem is that if there is no existing live data, an empty existing 
row is given to the ViewUpdateGenerator, which may resurrect deleted cells. The 
fix is to include the current open deletion in the empty existing row, so the 
view can shadow the deleted cells.


[Draft|https://github.com/jasonstack/cassandra/commits/CASSANDRA-13787-3.0] 
patch based on cassandra-3.0. This needs to be fixed in 3.0/3.11/trunk.




was (Author: jasonstack):
The first problem is not exactly a MV bug, it's SSTableReader issue of handling 
multiple slices which is probably only used by MV. The existing code consumes 
the {{original RangeTombstoneMarker(eg. Deletion@c1=1@10)}} for the first slice 
to generate a RangeTombstoneMarker(eg. Deletion@c1=1=0@10) with first 
slice's clustering. So there is no tombstones generated for other slices. The 
fix is to the ClusteringPrefix comparison, to make sure tombstones are 
generated probably for each slices within the range of original 
RangeTombstoneMarker.

The second problem is that if there is no existing live data, empty existing 
row is given to the ViewUpdateGenerator which may resurrect deleted cells. The 
fix is to include current-open-deletion into the empty existing row. So view 
could shadow the deleted cells.


[Draft|https://github.com/jasonstack/cassandra/commits/CASSANDRA-13787-3.0] 
patch base on cassandra-3.0 . This needs to be fixed in 3.0/3.11/trunk.



> RangeTombstoneMarker and PartitionDeletion are not properly included in MV
> 
>
> Key: CASSANDRA-13787
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13787
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths, Materialized Views
>Reporter: ZhaoYang
>Assignee: ZhaoYang
>
> Found two problems related to MV tombstones. 
> 1. A range-tombstone marker is ignored after shadowing the first row, so 
> subsequent base rows are not shadowed in TableViews.
> If the range tombstone was not flushed, it was used as a deleted row to 
> shadow new updates. That works correctly.
> After the range tombstone was flushed, it was used as a RangeTombstoneMarker 
> and was skipped after shadowing the first update. The bound of the 
> RangeTombstoneMarker seems wrong: it contained a full clustering, but it 
> should contain a range, or there should be multiple RangeTombstoneMarkers 
> for the multiple slices (aka. the new updates).
> 2. The partition tombstone is not used when there is no existing live data, 
> which will resurrect deleted cells. This was found in 11500 and included in 
> that patch.
> In order not to make the 11500 patch more complicated, I will try to fix the 
> range/partition tombstone issue here.
> {code:title=Tests to reproduce}
> @Test
> public void testExistingRangeTombstoneWithFlush() throws Throwable
> {
> testExistingRangeTombstone(true);
> }
> @Test
> public void testExistingRangeTombstoneWithoutFlush() throws Throwable
> {
> testExistingRangeTombstone(false);
> }
> public void testExistingRangeTombstone(boolean flush) throws Throwable
> {
> createTable("CREATE TABLE %s (k1 int, c1 int, c2 int, v1 int, v2 int, 
> PRIMARY KEY (k1, c1, c2))");
> execute("USE " + keyspace());
> executeNet(protocolVersion, "USE " + keyspace());
> createView("view1",
>"CREATE MATERIALIZED VIEW view1 AS SELECT * FROM %%s WHERE 
> k1 IS NOT NULL AND c1 IS NOT NULL AND c2 IS NOT NULL PRIMARY KEY (k1, c2, 
> c1)");
> updateView("DELETE FROM %s USING TIMESTAMP 10 WHERE k1 = 1 and c1=1");
> if (flush)
> 
> Keyspace.open(keyspace()).getColumnFamilyStore(currentTable()).forceBlockingFlush();
> String table = KEYSPACE + "." + currentTable();
> updateView("BEGIN BATCH " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 0, 
> 0, 0, 0) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 0, 
> 1, 0, 1) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " 

[jira] [Updated] (CASSANDRA-11428) Eliminate Allocations

2017-08-23 Thread Jason Brown (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Brown updated CASSANDRA-11428:

Fix Version/s: (was: 3.0.x)
   3.6

> Eliminate Allocations
> -
>
> Key: CASSANDRA-11428
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11428
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: T Jake Luciani
>Assignee: Nitsan Wakart
>Priority: Minor
> Fix For: 3.6
>
> Attachments: benchmarks.tar.gz, pom.xml
>
>
> Linking relevant issues under this master ticket.  For small changes I'd like 
> to test and commit these in bulk 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-12245) initial view build can be parallel

2017-08-23 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16138233#comment-16138233
 ] 

Andrés de la Peña commented on CASSANDRA-12245:
---

[Here|https://github.com/apache/cassandra/compare/trunk...adelapena:12245-trunk]
 is a new version of the patch addressing the review comments. The updated 
dtests are 
[here|https://github.com/apache/cassandra-dtest/compare/master...adelapena:12245].

bq. It would be nice to maybe try to reuse the {{Splitter}} methods if 
possible, so we can reuse tests, or if that's not straightforward maybe put the 
methods on splitter and add some tests to make sure it's working correctly.

I have moved the [methods to split the 
ranges|https://github.com/adelapena/cassandra/blob/af6b2547b58b8215c65be7546173f3c48b398072/src/java/org/apache/cassandra/dht/Splitter.java#L137-L187]
 to the {{Splitter}}, reusing its 
[{{valueForToken}}|https://github.com/adelapena/cassandra/blob/af6b2547b58b8215c65be7546173f3c48b398072/src/java/org/apache/cassandra/dht/Splitter.java#L47]
 method. Tests 
[here|https://github.com/adelapena/cassandra/blob/af6b2547b58b8215c65be7546173f3c48b398072/test/unit/org/apache/cassandra/dht/SplitterTest.java#L162-L224].

bq. Can probably remove the generation field from the builds in progress table 
and remove this comment

Removed.

bq. {{views_builds_in_progress_v2}} sounds a bit hacky, so perhaps we should 
call it {{system.view_builds_in_progress}} (remove the s) and also add a NOTICE 
entry informing the previous table was replaced and data files can be removed.

Renamed to {{system.view_builds_in_progress}}. Added a [NEWS.txt 
entry|https://github.com/adelapena/cassandra/blob/af6b2547b58b8215c65be7546173f3c48b398072/NEWS.txt#L183-L185]
 informing about the replacement.

bq. I'm a bit concerned about starving the compaction executor for a long 
period during view build of large base tables, so we should probably have 
another option like {{concurrent_view_builders}} with a conservative default and 
perhaps control the concurrency at the {{ViewBuilderController}}. WDYT?

Agree. I have added a new dedicated 
[executor|https://github.com/adelapena/cassandra/blob/af6b2547b58b8215c65be7546173f3c48b398072/src/java/org/apache/cassandra/db/compaction/CompactionManager.java#L126]
 in the {{CompactionManager}}, similar to the executors used for validation and 
cache cleanup. The concurrency of this executor is determined by the new config 
property 
[{{concurrent_materialized_view_builders}}|https://github.com/adelapena/cassandra/blob/af6b2547b58b8215c65be7546173f3c48b398072/conf/cassandra.yaml#L756],
 which defaults to a perhaps overly conservative value of {{1}}. This 
property can be modified through both JMX and the new 
[{{setconcurrentviewbuilders}}|https://github.com/adelapena/cassandra/blob/af6b2547b58b8215c65be7546173f3c48b398072/src/java/org/apache/cassandra/tools/nodetool/SetConcurrentViewBuilders.java]
 and 
[{{getconcurrentviewbuilders}}|https://github.com/adelapena/cassandra/blob/af6b2547b58b8215c65be7546173f3c48b398072/src/java/org/apache/cassandra/tools/nodetool/GetConcurrentViewBuilders.java]
 nodetool commands. These commands are tested 
[here|https://github.com/adelapena/cassandra-dtest/blob/cf893982c361fd7b6018b2570d3a5a33badd5424/nodetool_test.py#L162-L190].
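
For illustration only, here is a minimal sketch of a runtime-resizable 
dedicated pool of the kind that backs such a setting, using plain 
{{java.util.concurrent}} rather than Cassandra's internal executor classes 
(the class and method names below are made up):

{code}
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public final class ViewBuildExecutorSketch
{
    // Dedicated pool for view-build tasks, sized by the
    // concurrent_materialized_view_builders setting (default 1).
    private final ThreadPoolExecutor executor =
        new ThreadPoolExecutor(1, 1, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<>());

    // Runtime resize, as a JMX/nodetool setter would perform it.
    public synchronized void setConcurrentViewBuilders(int concurrency)
    {
        if (concurrency <= 0)
            throw new IllegalArgumentException("concurrency must be positive");
        if (concurrency > executor.getCorePoolSize())
        {
            // growing: raise the maximum before the core size
            executor.setMaximumPoolSize(concurrency);
            executor.setCorePoolSize(concurrency);
        }
        else
        {
            // shrinking: lower the core size before the maximum
            executor.setCorePoolSize(concurrency);
            executor.setMaximumPoolSize(concurrency);
        }
    }

    public void submit(Runnable viewBuildTask)
    {
        executor.execute(viewBuildTask);
    }
}
{code}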

I'm not sure whether it still makes sense for the builder task to extend 
{{CompactionInfo.Holder}}. If so, I'm also not sure how to use 
{{prevToken.size(range.right)}} (which returns a {{double}}) to create 
{{CompactionInfo}} objects. WDYT?

bq. ViewBuilder seems to be reimplementing some of the logic of 
PartitionRangeReadCommand, so I wonder if we should take this chance to simplify 
and use that instead of manually constructing the commands via 
ReducingKeyIterator and multiple SinglePartitionReadCommands? We can totally do 
this in another ticket if you prefer.

I would prefer to do this in another ticket.

bq. Perform view marking on ViewBuilderController instead of ViewBuilder

I have moved the marking of system tables (and the retries in case of failure) 
from the {{ViewBuilderTask}} to the {{ViewBuilder}}, using a callback to do the 
marking. I think the code is clearer this way.
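
Schematically, the division of labour looks like the sketch below (all names 
are illustrative, not the actual patch code): the task only builds and signals 
completion, while its owner marks the system tables and retries on failure.

{code}
import java.util.concurrent.CompletableFuture;

public final class ViewBuildMarkingSketch
{
    // The per-range build task knows nothing about system tables.
    static CompletableFuture<Void> runBuildTask(Runnable buildRange)
    {
        return CompletableFuture.runAsync(buildRange);
    }

    // The owning builder marks the built status via a callback, with retries.
    static void buildAndMark(Runnable buildRange, Runnable markBuilt)
    {
        runBuildTask(buildRange).thenRun(() -> retry(markBuilt, 3));
    }

    private static void retry(Runnable action, int attempts)
    {
        for (int i = 0; i < attempts; i++)
        {
            try { action.run(); return; }
            catch (RuntimeException e) { /* log, back off, retry */ }
        }
    }
}
{code}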

bq. Updating the view built status at every key is perhaps a bit inefficient 
and unnecessary, so perhaps we should update it every 1000 keys or so.

Done, it is updated [every 1000 
keys|https://github.com/adelapena/cassandra/blob/af6b2547b58b8215c65be7546173f3c48b398072/src/java/org/apache/cassandra/db/view/ViewBuilderTask.java#L62].
 It doesn't seem to make a great difference in some small benchmarks that I 
have run.
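
The batching itself is simple counter arithmetic; here is a sketch of the idea 
(the 1000-key interval matches the patch, everything else below is 
hypothetical):

{code}
public final class ProgressCheckpointSketch
{
    private static final int KEYS_BETWEEN_CHECKPOINTS = 1000;
    private long keysBuilt = 0;

    // Called once per partition key processed during the view build.
    public void onKeyBuilt(String lastTokenAsString)
    {
        keysBuilt++;
        // Persisting on every key is wasteful; checkpointing periodically means
        // a restarted build repeats at most the last batch of keys.
        if (keysBuilt % KEYS_BETWEEN_CHECKPOINTS == 0)
            persistProgress(lastTokenAsString, keysBuilt);
    }

    private void persistProgress(String lastToken, long built)
    {
        // stand-in for the system-table update
        System.out.printf("checkpoint: %d keys built, last token %s%n", built, lastToken);
    }
}
{code}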

bq. Would be nice to update the {{interrupt_build_process_test}} to stop 
halfway through the build (instead of the start of the build) and verify that 
it's being resumed correctly with the new changes.

Updated.

[jira] [Commented] (CASSANDRA-13418) Allow TWCS to ignore overlaps when dropping fully expired sstables

2017-08-23 Thread Romain GERARD (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16138221#comment-16138221
 ] 

Romain GERARD commented on CASSANDRA-13418:
---

{quote}I think so yes. Not because it's my opinion, but because that's the 
general minimalist style of the C* codebase.{quote}
Understood, here is the update: 
https://github.com/criteo-forks/cassandra/commit/a35b9e818a5294f7fd99588bd407fe909b3402a7

> Allow TWCS to ignore overlaps when dropping fully expired sstables
> --
>
> Key: CASSANDRA-13418
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13418
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Corentin Chary
>  Labels: twcs
> Attachments: twcs-cleanup.png
>
>
> http://thelastpickle.com/blog/2016/12/08/TWCS-part1.html explains it well. If 
> you really want read-repairs you're going to have sstables blocking the 
> expiration of other fully expired SSTables because they overlap.
> You can set unchecked_tombstone_compaction = true or tombstone_threshold to a 
> very low value and that will purge the blockers of old data that should 
> already have expired, thus removing the overlaps and allowing the other 
> SSTables to expire.
> The thing is that this is rather CPU intensive and not optimal. If you have 
> time series, you might not care if all your data doesn't exactly expire at 
> the right time, or if data re-appears for some time, as long as it gets 
> deleted as soon as it can. And in this situation I believe it would be really 
> beneficial to allow users to simply ignore overlapping SSTables when looking 
> for fully expired ones.
> To the question: why would you need read-repairs?
> - Full repairs basically take longer than the TTL of the data on my dataset, 
> so this isn't really effective.
> - Even with a 10% chance of doing a repair, we found out that this would be 
> enough to greatly reduce the entropy of the most used data (and if you have 
> timeseries, you're likely to have a dashboard doing the same important 
> queries over and over again).
> - LOCAL_QUORUM is too expensive (need >3 replicas), QUORUM is too slow.
> I'll try to come up with a patch demonstrating how this would work, try it on 
> our system and report the effects.
> cc: [~adejanovski], [~rgerard] as I know you worked on similar issues already.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13770) AssertionError: Lower bound INCL_START_BOUND during select by index

2017-08-23 Thread Kurt Greaves (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Greaves updated CASSANDRA-13770:
-
Description: 
We are getting the following error:

{code}
DEBUG [Native-Transport-Requests-1] 2017-08-17 07:47:01,815 
ReadCallback.java:132 - Failed; received 0 of 1 responses
WARN  [ReadStage-2] 2017-08-17 07:47:01,816 
AbstractLocalAwareExecutorService.java:167 - Uncaught exception on thread 
Thread[ReadStage-2,5,main]: {}
java.lang.AssertionError: Lower bound 
[INCL_START_BOUND(00283543383338354144333637373731373445443036424134424442463445393233453846394634283836453642373436354546423435334544363636443236344644313935333032363338314542363200,
 ab570080-831f-11e7-a81f-417b646547c3, , 1x) ]is bigger than first returned 
value [Row: 
partition_key=00283543383338354144333637373731373445443036424134424442463445393233453846394634283836453642373436354546423435334544363636443236344644313935333032363338314542363200,
 version=null, file_path=null, file_name=null | ] for sstable 
/var/lib/cassandra/data/catalog/file-aa90a340831f11e7aca2ed895c1dab3f/.idx_file_path_hash/mc-51-big-Data.db
at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorWithLowerBound.computeNext(UnfilteredRowIteratorWithLowerBound.java:124)
 ~[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorWithLowerBound.computeNext(UnfilteredRowIteratorWithLowerBound.java:47)
 ~[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
~[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:374)
 ~[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.utils.MergeIterator$ManyToOne.advance(MergeIterator.java:186)
 ~[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:155)
 ~[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
~[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:500)
 ~[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator.computeNext(UnfilteredRowIterators.java:360)
 ~[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
~[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.db.rows.UnfilteredRowIterator.isEmpty(UnfilteredRowIterator.java:67)
 ~[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.db.SinglePartitionReadCommand.withSSTablesIterated(SinglePartitionReadCommand.java:695)
 ~[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.db.SinglePartitionReadCommand.queryMemtableAndDiskInternal(SinglePartitionReadCommand.java:639)
 ~[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.db.SinglePartitionReadCommand.queryMemtableAndDisk(SinglePartitionReadCommand.java:514)
 ~[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.index.internal.CassandraIndexSearcher.queryIndex(CassandraIndexSearcher.java:81)
 ~[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.index.internal.CassandraIndexSearcher.search(CassandraIndexSearcher.java:63)
 ~[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.db.ReadCommand.executeLocally(ReadCommand.java:408) 
~[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1882)
 ~[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2587)
 ~[apache-cassandra-3.11.0.jar:3.11.0]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_141]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:162)
 ~[apache-cassandra-3.11.0.jar:3.11.0]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:134)
 [apache-cassandra-3.11.0.jar:3.11.0]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) 
[apache-cassandra-3.11.0.jar:3.11.0]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_141]
{code}
The related table is:
{code}
CREATE TABLE catalog.file (
path_hash text,
file_hash text,
version timeuuid,
file_path text,
file_name text,
allocations_size bigint,
change_time timestamp,
creation_time timestamp,
dacl frozen,
ea_size bigint,
end_of_file 

[jira] [Comment Edited] (CASSANDRA-13787) RangeTombstoneMarker and ParitionDeletion is not properly included in MV

2017-08-23 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16138159#comment-16138159
 ] 

ZhaoYang edited comment on CASSANDRA-13787 at 8/23/17 9:54 AM:
---

The first problem is not exactly an MV bug; it's an SSTableReader issue in 
handling multiple slices, which is probably only used by MV. The existing code 
consumes the {{original RangeTombstoneMarker (e.g. Deletion@c1=1@10)}} for the 
first slice to generate a RangeTombstoneMarker (e.g. Deletion@c1=1=0@10) with 
the first slice's clustering, so no tombstones are generated for the other 
slices. The fix adjusts the ClusteringPrefix comparison to make sure tombstones 
are generated properly for each slice within the range of the original 
RangeTombstoneMarker.

The second problem is that if there is no existing live data, an empty 
existing row is given to the ViewUpdateGenerator, which may resurrect deleted 
cells. The fix is to include the current open deletion in the empty existing 
row, so the view can shadow the deleted cells.


[Draft|https://github.com/jasonstack/cassandra/commits/CASSANDRA-13787-3.0] 
patch based on cassandra-3.0. This needs to be fixed in 3.0/3.11/trunk.




was (Author: jasonstack):
The first problem is not exactly an MV bug; it's an SSTableReader issue in 
handling multiple slices, which is probably only used by MV. The existing code 
consumes the original RangeTombstoneMarker for the first slice to generate a 
RangeTombstoneMarker with the first slice's clustering, so no tombstones are 
generated for the other slices. The fix adjusts the ClusteringPrefix comparison 
to make sure tombstones are generated properly for each slice within the range 
of the original RangeTombstoneMarker.

The second problem is that if there is no existing live data, an empty 
existing row is given to the ViewUpdateGenerator, which may resurrect deleted 
cells. The fix is to include the current open deletion in the empty existing 
row, so the view can shadow the deleted cells.


[Draft|https://github.com/jasonstack/cassandra/commits/CASSANDRA-13787-3.0] 
patch based on cassandra-3.0. This needs to be fixed in 3.0/3.11/trunk.



> RangeTombstoneMarker and ParitionDeletion is not properly included in MV
> 
>
> Key: CASSANDRA-13787
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13787
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths, Materialized Views
>Reporter: ZhaoYang
>Assignee: ZhaoYang
>
> Found two problems related to MV tombstones.
> 1. A Range-tombstone-Marker is ignored after shadowing the first row, so 
> subsequent base rows are not shadowed in TableViews.
> If the range tombstone was not flushed, it was used as a deleted row to 
> shadow new updates, and it works correctly.
> After the range tombstone was flushed, it was used as a 
> RangeTombstoneMarker and skipped after shadowing the first update. The bound 
> of the RangeTombstoneMarker seems wrong: it contained a full clustering, but 
> it should contain a range, or there should be multiple RangeTombstoneMarkers 
> for the multiple slices (aka the new updates).
> 2. The partition tombstone is not used when there is no existing live data, 
> so it will resurrect deleted cells. This was found in 11500 and included in 
> that patch.
> In order not to make the 11500 patch more complicated, I will try to fix the 
> range/partition tombstone issue here.
> {code:title=Tests to reproduce}
> @Test
> public void testExistingRangeTombstoneWithFlush() throws Throwable
> {
> testExistingRangeTombstone(true);
> }
> @Test
> public void testExistingRangeTombstoneWithoutFlush() throws Throwable
> {
> testExistingRangeTombstone(false);
> }
> public void testExistingRangeTombstone(boolean flush) throws Throwable
> {
> createTable("CREATE TABLE %s (k1 int, c1 int, c2 int, v1 int, v2 int, 
> PRIMARY KEY (k1, c1, c2))");
> execute("USE " + keyspace());
> executeNet(protocolVersion, "USE " + keyspace());
> createView("view1",
>"CREATE MATERIALIZED VIEW view1 AS SELECT * FROM %%s WHERE 
> k1 IS NOT NULL AND c1 IS NOT NULL AND c2 IS NOT NULL PRIMARY KEY (k1, c2, 
> c1)");
> updateView("DELETE FROM %s USING TIMESTAMP 10 WHERE k1 = 1 and c1=1");
> if (flush)
> 
> Keyspace.open(keyspace()).getColumnFamilyStore(currentTable()).forceBlockingFlush();
> String table = KEYSPACE + "." + currentTable();
> updateView("BEGIN BATCH " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 0, 
> 0, 0, 0) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 0, 
> 1, 0, 1) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 1, 
> 0, 1, 0) 

[jira] [Comment Edited] (CASSANDRA-13787) RangeTombstoneMarker and ParitionDeletion is not properly included in MV

2017-08-23 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16138159#comment-16138159
 ] 

ZhaoYang edited comment on CASSANDRA-13787 at 8/23/17 9:47 AM:
---

The first problem is not exactly an MV bug; it's an SSTableReader issue in 
handling multiple slices, which is probably only used by MV. The existing code 
consumes the original RangeTombstoneMarker for the first slice to generate a 
RangeTombstoneMarker with the first slice's clustering, so no tombstones are 
generated for the other slices. The fix adjusts the ClusteringPrefix comparison 
to make sure tombstones are generated properly for each slice within the range 
of the original RangeTombstoneMarker.

The second problem is that if there is no existing live data, an empty 
existing row is given to the ViewUpdateGenerator, which may resurrect deleted 
cells. The fix is to include the current open deletion in the empty existing 
row, so the view can shadow the deleted cells.


[Draft|https://github.com/jasonstack/cassandra/commits/CASSANDRA-13787-3.0] 
patch based on cassandra-3.0. This needs to be fixed in 3.0/3.11/trunk.




was (Author: jasonstack):
The first problem is not exactly an MV bug; it's an SSTableReader issue in 
handling multiple slices. The existing code consumes the original 
RangeTombstoneMarker for the first slice to generate a RangeTombstoneMarker 
with the first slice's clustering, so no tombstones are generated for the 
other slices. The fix adjusts the ClusteringPrefix comparison to make sure 
tombstones are generated properly for each slice within the range of the 
original RangeTombstoneMarker.

The second problem is that if there is no existing live data, an empty 
existing row is given to the ViewUpdateGenerator, which may resurrect deleted 
cells. The fix is to include the current open deletion in the empty existing 
row, so the view can shadow the deleted cells.


[Draft|https://github.com/jasonstack/cassandra/commits/CASSANDRA-13787-3.0] 
patch based on cassandra-3.0. This needs to be fixed in 3.0/3.11/trunk.



> RangeTombstoneMarker and ParitionDeletion is not properly included in MV
> 
>
> Key: CASSANDRA-13787
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13787
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths, Materialized Views
>Reporter: ZhaoYang
>Assignee: ZhaoYang
>
> Found two problems related to MV tombstones.
> 1. A Range-tombstone-Marker is ignored after shadowing the first row, so 
> subsequent base rows are not shadowed in TableViews.
> If the range tombstone was not flushed, it was used as a deleted row to 
> shadow new updates, and it works correctly.
> After the range tombstone was flushed, it was used as a 
> RangeTombstoneMarker and skipped after shadowing the first update. The bound 
> of the RangeTombstoneMarker seems wrong: it contained a full clustering, but 
> it should contain a range, or there should be multiple RangeTombstoneMarkers 
> for the multiple slices (aka the new updates).
> 2. The partition tombstone is not used when there is no existing live data, 
> so it will resurrect deleted cells. This was found in 11500 and included in 
> that patch.
> In order not to make the 11500 patch more complicated, I will try to fix the 
> range/partition tombstone issue here.
> {code:title=Tests to reproduce}
> @Test
> public void testExistingRangeTombstoneWithFlush() throws Throwable
> {
> testExistingRangeTombstone(true);
> }
> @Test
> public void testExistingRangeTombstoneWithoutFlush() throws Throwable
> {
> testExistingRangeTombstone(false);
> }
> public void testExistingRangeTombstone(boolean flush) throws Throwable
> {
> createTable("CREATE TABLE %s (k1 int, c1 int, c2 int, v1 int, v2 int, 
> PRIMARY KEY (k1, c1, c2))");
> execute("USE " + keyspace());
> executeNet(protocolVersion, "USE " + keyspace());
> createView("view1",
>"CREATE MATERIALIZED VIEW view1 AS SELECT * FROM %%s WHERE 
> k1 IS NOT NULL AND c1 IS NOT NULL AND c2 IS NOT NULL PRIMARY KEY (k1, c2, 
> c1)");
> updateView("DELETE FROM %s USING TIMESTAMP 10 WHERE k1 = 1 and c1=1");
> if (flush)
> 
> Keyspace.open(keyspace()).getColumnFamilyStore(currentTable()).forceBlockingFlush();
> String table = KEYSPACE + "." + currentTable();
> updateView("BEGIN BATCH " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 0, 
> 0, 0, 0) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 0, 
> 1, 0, 1) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 1, 
> 0, 1, 0) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) 

[jira] [Commented] (CASSANDRA-13787) RangeTombstoneMarker and ParitionDeletion is not properly included in MV

2017-08-23 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16138159#comment-16138159
 ] 

ZhaoYang commented on CASSANDRA-13787:
--

The first problem is not exactly an MV bug; it's an SSTableReader issue in 
handling multiple slices. The existing code consumes the original 
RangeTombstoneMarker for the first slice to generate a RangeTombstoneMarker 
with the first slice's clustering, so no tombstones are generated for the 
other slices. The fix adjusts the ClusteringPrefix comparison to make sure 
tombstones are generated properly for each slice within the range of the 
original RangeTombstoneMarker.

The second problem is that if there is no existing live data, an empty 
existing row is given to the ViewUpdateGenerator, which may resurrect deleted 
cells. The fix is to include the current open deletion in the empty existing 
row, so the view can shadow the deleted cells.


[Draft|https://github.com/jasonstack/cassandra/commits/CASSANDRA-13787-3.0] 
patch based on cassandra-3.0. This needs to be fixed in 3.0/3.11/trunk.



> RangeTombstoneMarker and ParitionDeletion is not properly included in MV
> 
>
> Key: CASSANDRA-13787
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13787
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths, Materialized Views
>Reporter: ZhaoYang
>Assignee: ZhaoYang
>
> Found two problems related to MV tombstones.
> 1. A Range-tombstone-Marker is ignored after shadowing the first row, so 
> subsequent base rows are not shadowed in TableViews.
> If the range tombstone was not flushed, it was used as a deleted row to 
> shadow new updates, and it works correctly.
> After the range tombstone was flushed, it was used as a 
> RangeTombstoneMarker and skipped after shadowing the first update. The bound 
> of the RangeTombstoneMarker seems wrong: it contained a full clustering, but 
> it should contain a range, or there should be multiple RangeTombstoneMarkers 
> for the multiple slices (aka the new updates).
> 2. The partition tombstone is not used when there is no existing live data, 
> so it will resurrect deleted cells. This was found in 11500 and included in 
> that patch.
> In order not to make the 11500 patch more complicated, I will try to fix the 
> range/partition tombstone issue here.
> {code:title=Tests to reproduce}
> @Test
> public void testExistingRangeTombstoneWithFlush() throws Throwable
> {
> testExistingRangeTombstone(true);
> }
> @Test
> public void testExistingRangeTombstoneWithoutFlush() throws Throwable
> {
> testExistingRangeTombstone(false);
> }
> public void testExistingRangeTombstone(boolean flush) throws Throwable
> {
> createTable("CREATE TABLE %s (k1 int, c1 int, c2 int, v1 int, v2 int, 
> PRIMARY KEY (k1, c1, c2))");
> execute("USE " + keyspace());
> executeNet(protocolVersion, "USE " + keyspace());
> createView("view1",
>"CREATE MATERIALIZED VIEW view1 AS SELECT * FROM %%s WHERE 
> k1 IS NOT NULL AND c1 IS NOT NULL AND c2 IS NOT NULL PRIMARY KEY (k1, c2, 
> c1)");
> updateView("DELETE FROM %s USING TIMESTAMP 10 WHERE k1 = 1 and c1=1");
> if (flush)
> 
> Keyspace.open(keyspace()).getColumnFamilyStore(currentTable()).forceBlockingFlush();
> String table = KEYSPACE + "." + currentTable();
> updateView("BEGIN BATCH " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 0, 
> 0, 0, 0) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 0, 
> 1, 0, 1) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 1, 
> 0, 1, 0) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 1, 
> 1, 1, 1) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 1, 
> 2, 1, 2) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 1, 
> 3, 1, 3) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 2, 
> 0, 2, 0) USING TIMESTAMP 5; " +
> "APPLY BATCH");
> assertRowsIgnoringOrder(execute("select * from %s"),
> row(1, 0, 0, 0, 0),
> row(1, 0, 1, 0, 1),
> row(1, 2, 0, 2, 0));
> assertRowsIgnoringOrder(execute("select k1,c1,c2,v1,v2 from view1"),
> row(1, 0, 0, 0, 0),
> row(1, 0, 1, 0, 1),
> row(1, 2, 0, 2, 0));
> }
> @Test
> public void testRangeDeletionWithFlush() throws Throwable
> {
> 

[jira] [Updated] (CASSANDRA-13787) RangeTombstoneMarker and ParitionDeletion is not properly included in MV

2017-08-23 Thread ZhaoYang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhaoYang updated CASSANDRA-13787:
-
Component/s: Local Write-Read Paths

> RangeTombstoneMarker and ParitionDeletion is not properly included in MV
> 
>
> Key: CASSANDRA-13787
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13787
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths, Materialized Views
>Reporter: ZhaoYang
>Assignee: ZhaoYang
>
> Found two problems related to MV tombstones.
> 1. A Range-tombstone-Marker is ignored after shadowing the first row, so 
> subsequent base rows are not shadowed in TableViews.
> If the range tombstone was not flushed, it was used as a deleted row to 
> shadow new updates, and it works correctly.
> After the range tombstone was flushed, it was used as a 
> RangeTombstoneMarker and skipped after shadowing the first update. The bound 
> of the RangeTombstoneMarker seems wrong: it contained a full clustering, but 
> it should contain a range, or there should be multiple RangeTombstoneMarkers 
> for the multiple slices (aka the new updates).
> 2. The partition tombstone is not used when there is no existing live data, 
> so it will resurrect deleted cells. This was found in 11500 and included in 
> that patch.
> In order not to make the 11500 patch more complicated, I will try to fix the 
> range/partition tombstone issue here.
> {code:title=Tests to reproduce}
> @Test
> public void testExistingRangeTombstoneWithFlush() throws Throwable
> {
> testExistingRangeTombstone(true);
> }
> @Test
> public void testExistingRangeTombstoneWithoutFlush() throws Throwable
> {
> testExistingRangeTombstone(false);
> }
> public void testExistingRangeTombstone(boolean flush) throws Throwable
> {
> createTable("CREATE TABLE %s (k1 int, c1 int, c2 int, v1 int, v2 int, 
> PRIMARY KEY (k1, c1, c2))");
> execute("USE " + keyspace());
> executeNet(protocolVersion, "USE " + keyspace());
> createView("view1",
>"CREATE MATERIALIZED VIEW view1 AS SELECT * FROM %%s WHERE 
> k1 IS NOT NULL AND c1 IS NOT NULL AND c2 IS NOT NULL PRIMARY KEY (k1, c2, 
> c1)");
> updateView("DELETE FROM %s USING TIMESTAMP 10 WHERE k1 = 1 and c1=1");
> if (flush)
> 
> Keyspace.open(keyspace()).getColumnFamilyStore(currentTable()).forceBlockingFlush();
> String table = KEYSPACE + "." + currentTable();
> updateView("BEGIN BATCH " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 0, 
> 0, 0, 0) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 0, 
> 1, 0, 1) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 1, 
> 0, 1, 0) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 1, 
> 1, 1, 1) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 1, 
> 2, 1, 2) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 1, 
> 3, 1, 3) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 2, 
> 0, 2, 0) USING TIMESTAMP 5; " +
> "APPLY BATCH");
> assertRowsIgnoringOrder(execute("select * from %s"),
> row(1, 0, 0, 0, 0),
> row(1, 0, 1, 0, 1),
> row(1, 2, 0, 2, 0));
> assertRowsIgnoringOrder(execute("select k1,c1,c2,v1,v2 from view1"),
> row(1, 0, 0, 0, 0),
> row(1, 0, 1, 0, 1),
> row(1, 2, 0, 2, 0));
> }
> @Test
> public void testRangeDeletionWithFlush() throws Throwable
> {
> testExistingParitionDeletion(true);
> }
> @Test
> public void testRangeDeletionWithoutFlush() throws Throwable
> {
> testExistingParitionDeletion(false);
> }
> public void testExistingParitionDeletion(boolean flush) throws Throwable
> {
> // for partition range deletion, need to know that existing row is 
> shadowed instead of not existed.
> createTable("CREATE TABLE %s (a int, b int, c int, d int, PRIMARY KEY 
> (a))");
> execute("USE " + keyspace());
> executeNet(protocolVersion, "USE " + keyspace());
> createView("mv_test1",
>"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a 
> IS NOT NULL AND b IS NOT NULL PRIMARY KEY (a, b)");
> Keyspace ks = Keyspace.open(keyspace());
> ks.getColumnFamilyStore("mv_test1").disableAutoCompaction();
> 

[jira] [Updated] (CASSANDRA-13787) RangeTombstoneMarker and ParitionDeletion is not properly included in MV

2017-08-23 Thread ZhaoYang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhaoYang updated CASSANDRA-13787:
-
Description: 
Found two problems related to MV tombstones.

1. A Range-tombstone-Marker is ignored after shadowing the first row, so 
subsequent base rows are not shadowed in TableViews.

If the range tombstone was not flushed, it was used as a deleted row to 
shadow new updates, and it works correctly.
After the range tombstone was flushed, it was used as a RangeTombstoneMarker 
and skipped after shadowing the first update. The bound of the 
RangeTombstoneMarker seems wrong: it contained a full clustering, but it 
should contain a range, or there should be multiple RangeTombstoneMarkers for 
the multiple slices (aka the new updates).

2. The partition tombstone is not used when there is no existing live data, so 
it will resurrect deleted cells. This was found in 11500 and included in that 
patch.


In order not to make the 11500 patch more complicated, I will try to fix the 
range/partition tombstone issue here.


{code:title=Tests to reproduce}
@Test
public void testExistingRangeTombstoneWithFlush() throws Throwable
{
testExistingRangeTombstone(true);
}

@Test
public void testExistingRangeTombstoneWithoutFlush() throws Throwable
{
testExistingRangeTombstone(false);
}

public void testExistingRangeTombstone(boolean flush) throws Throwable
{
createTable("CREATE TABLE %s (k1 int, c1 int, c2 int, v1 int, v2 int, 
PRIMARY KEY (k1, c1, c2))");

execute("USE " + keyspace());
executeNet(protocolVersion, "USE " + keyspace());

createView("view1",
   "CREATE MATERIALIZED VIEW view1 AS SELECT * FROM %%s WHERE 
k1 IS NOT NULL AND c1 IS NOT NULL AND c2 IS NOT NULL PRIMARY KEY (k1, c2, c1)");

updateView("DELETE FROM %s USING TIMESTAMP 10 WHERE k1 = 1 and c1=1");


if (flush)

Keyspace.open(keyspace()).getColumnFamilyStore(currentTable()).forceBlockingFlush();

String table = KEYSPACE + "." + currentTable();
updateView("BEGIN BATCH " +
"INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 0, 
0, 0, 0) USING TIMESTAMP 5; " +
"INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 0, 
1, 0, 1) USING TIMESTAMP 5; " +
"INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 1, 
0, 1, 0) USING TIMESTAMP 5; " +
"INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 1, 
1, 1, 1) USING TIMESTAMP 5; " +
"INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 1, 
2, 1, 2) USING TIMESTAMP 5; " +
"INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 1, 
3, 1, 3) USING TIMESTAMP 5; " +
"INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 2, 
0, 2, 0) USING TIMESTAMP 5; " +
"APPLY BATCH");

assertRowsIgnoringOrder(execute("select * from %s"),
row(1, 0, 0, 0, 0),
row(1, 0, 1, 0, 1),
row(1, 2, 0, 2, 0));
assertRowsIgnoringOrder(execute("select k1,c1,c2,v1,v2 from view1"),
row(1, 0, 0, 0, 0),
row(1, 0, 1, 0, 1),
row(1, 2, 0, 2, 0));
}

@Test
public void testRangeDeletionWithFlush() throws Throwable
{
testExistingParitionDeletion(true);
}

@Test
public void testRangeDeletionWithoutFlush() throws Throwable
{
testExistingParitionDeletion(false);
}

public void testExistingParitionDeletion(boolean flush) throws Throwable
{
// for partition range deletion, need to know that existing row is 
shadowed instead of not existed.
createTable("CREATE TABLE %s (a int, b int, c int, d int, PRIMARY KEY 
(a))");

execute("USE " + keyspace());
executeNet(protocolVersion, "USE " + keyspace());

createView("mv_test1",
   "CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a IS 
NOT NULL AND b IS NOT NULL PRIMARY KEY (a, b)");

Keyspace ks = Keyspace.open(keyspace());
ks.getColumnFamilyStore("mv_test1").disableAutoCompaction();

execute("INSERT INTO %s (a, b, c, d) VALUES (?, ?, ?, ?) using 
timestamp 0", 1, 1, 1, 1);
if (flush)
FBUtilities.waitOnFutures(ks.flush());

assertRowsIgnoringOrder(execute("SELECT * FROM mv_test1"), row(1, 1, 1, 
1));

// remove view row
updateView("UPDATE %s using timestamp 1 set b = null WHERE a=1");
if (flush)
FBUtilities.waitOnFutures(ks.flush());

assertRowsIgnoringOrder(execute("SELECT * FROM mv_test1"));
// remove base row, no view updated generated.
updateView("DELETE FROM %s using timestamp 2 where a=1");
if (flush)

[jira] [Created] (CASSANDRA-13788) Seemingly valid Java UDF fails compilation with error "type cannot be resolved. It is indirectly referenced from required .class files"

2017-08-23 Thread jaikiran pai (JIRA)
jaikiran pai created CASSANDRA-13788:


 Summary: Seemingly valid Java UDF fails compilation with error 
"type cannot be resolved. It is indirectly referenced from required .class 
files"
 Key: CASSANDRA-13788
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13788
 Project: Cassandra
  Issue Type: Bug
  Components: CQL
 Environment: Cassandra 3.11.0 

java version "1.8.0_131"
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)

Reporter: jaikiran pai


We are moving to Cassandra 3.11.0 from Cassandra 2.x. We have a Java UDF which 
is straightforward and looks something like:

{code}
CREATE FUNCTION utf8_text_size (val TEXT)
CALLED ON NULL INPUT
RETURNS INT
LANGUAGE java
AS 'if (val == null) { return 0; }
    try {
        return val.getBytes("UTF-8").length;
    } catch (Exception e) {
        throw new RuntimeException("Failed to compute size of UTF-8 text", e);
    }';
{code}

This works fine in Cassandra 2.x. In Cassandra 3.11.0, when this UDF is 
created, we keep running into this exception while the UDF is being 
(internally) compiled:

{code}
InvalidRequest: Error from server: code=2200 [Invalid query] message="Java 
source compilation failed:
Line 1: The type java.io.UnsupportedEncodingException cannot be resolved. It is 
indirectly referenced from required .class files
Line 1: The type java.nio.charset.Charset cannot be resolved. It is indirectly 
referenced from required .class files
Line 1: The method getBytes(String) from the type String refers to the missing 
type UnsupportedEncodingException
{code}


I realize there have been changes to the UDF support in Cassandra 3.x and I 
have also read this [1] article related to it. However, I don't see anything 
wrong with the above UDF. In fact, I enabled TRACE logging of 
{{org.apache.cassandra.cql3.functions}}, which is where the 
{{JavaBasedUDFunction}} resides, to see what the generated source looks like. 
Here's what it looks like (I have modified the class name etc., but nothing else):

{code}
package org.myapp;

import java.nio.ByteBuffer;
import java.util.*;

import org.apache.cassandra.cql3.functions.JavaUDF;
import org.apache.cassandra.cql3.functions.UDFContext;
import org.apache.cassandra.transport.ProtocolVersion;

import com.datastax.driver.core.TypeCodec;
import com.datastax.driver.core.TupleValue;
import com.datastax.driver.core.UDTValue;

public final class CassandraUDFTest extends JavaUDF
{
public CassandraUDFTest(TypeCodec<Object> returnCodec, TypeCodec<Object>[] 
argCodecs, UDFContext udfContext)
{
super(returnCodec, argCodecs, udfContext);
}

protected ByteBuffer executeImpl(ProtocolVersion protocolVersion, 
List<ByteBuffer> params)
{
Integer result = xsome_keyspace_2eutf8_text_size_3232115_9(
/* parameter 'val' */
(String) super.compose(protocolVersion, 0, params.get(0))
);
return super.decompose(protocolVersion, result);
}

protected Object executeAggregateImpl(ProtocolVersion protocolVersion, 
Object firstParam, List<ByteBuffer> params)
{
Integer result = xsome_keyspace_2eutf8_text_size_3232115_9(
/* parameter 'val' */
(String) firstParam
);
return result;
}

private Integer xsome_keyspace_2eutf8_text_size_3232115_9(String val)
{
if (val == null) {return 0;}try {   
 return val.getBytes("UTF-8").length;} catch (Exception e) 
{throw new RuntimeException("Failed to compute size of UTF-8 
text", e);}
}
}
{code}

I then went ahead and compiled this generated class from the command line 
using the (Oracle) Java compiler as follows:

{code}
javac -cp "/opt/cassandra/apache-cassandra-3.11.0/lib/*" 
org/myapp/CassandraUDFTest.java
{code}

and it compiled fine without any errors. 

Looking at the {{JavaBasedUDFunction}}, which compiles this UDF at runtime, 
it's using the Eclipse JDT compiler. I haven't looked into why that compiler 
would be running into these compilation errors.



[1] http://batey.info/cassandra-udfs.html



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-13788) Seemingly valid Java UDF fails compilation with error "type cannot be resolved. It is indirectly referenced from required .class files"

2017-08-23 Thread jaikiran pai (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16138004#comment-16138004
 ] 

jaikiran pai edited comment on CASSANDRA-13788 at 8/23/17 7:23 AM:
---

Looked at the Cassandra UDF processing code in a bit more detail. It turns out 
that the {{UDFFunction}} class is backed by a {{UDFClassLoader}}, which has a 
whitelist (as well as a blacklist) of classes/packages allowed for use in Java 
UDFs [1]. It turns out that both:
{code}
java.io.UnsupportedEncodingException
java.nio.charset.Charset
{code}
which are what is being indirectly used in that UDF, are missing from the 
whitelist (they aren't part of the blacklist either).

Would it be possible to add these to the whitelist, or to provide a way to 
configure the whitelist (at the server level)? As far as I can see, given how 
limited the whitelist is, it's going to be very restrictive to write even 
trivial UDFs in Java (like the one above).

[1] 
https://github.com/apache/cassandra/blob/8b3a60b9a7dbefeecc06bace617279612ec7092d/src/java/org/apache/cassandra/cql3/functions/UDFunction.java#L80
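
For readers unfamiliar with the mechanism, the effect of such a whitelist can 
be sketched roughly as follows (this is an illustration only, not the actual 
{{UDFClassLoader}} code; the allowed prefixes below are a hypothetical subset):

{code}
public final class WhitelistClassLoaderSketch extends ClassLoader
{
    private static final String[] ALLOWED_PREFIXES = {
        "java.lang.", "java.math.", "java.util."   // hypothetical subset
    };

    public WhitelistClassLoaderSketch(ClassLoader parent)
    {
        super(parent);
    }

    private static boolean allowed(String className)
    {
        for (String prefix : ALLOWED_PREFIXES)
            if (className.startsWith(prefix))
                return true;
        return false;
    }

    @Override
    protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException
    {
        // Every type the UDF references (directly or indirectly) is resolved
        // through this loader; anything off the whitelist fails to resolve,
        // which is what surfaces as the JDT "cannot be resolved" error.
        if (!allowed(name))
            throw new ClassNotFoundException(name + " is not on the UDF whitelist");
        return super.loadClass(name, resolve);
    }
}
{code}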


was (Author: jaikiran):
Looked at the Cassandra UDF processing code in a bit more detail. It turns out 
that the {{UDFFunction}} class is backed by a {{UDFClassLoader}}, which has a 
whitelist (as well as a blacklist) of classes/packages allowed for use in Java 
UDFs. It turns out that both:
{code}
java.io.UnsupportedEncodingException
java.nio.charset.Charset
{code}
which are what is being indirectly used in that UDF, are missing from the 
whitelist (they aren't part of the blacklist either).

Would it be possible to add these to the whitelist, or to provide a way to 
configure the whitelist (at the server level)? As far as I can see, given how 
limited the whitelist is, it's going to be very restrictive to write even 
trivial UDFs in Java (like the one above).


> Seemingly valid Java UDF fails compilation with error "type cannot be 
> resolved. It is indirectly referenced from required .class files"
> ---
>
> Key: CASSANDRA-13788
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13788
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
> Environment: Cassandra 3.11.0 
> java version "1.8.0_131"
> Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
> Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
>Reporter: jaikiran pai
>
> We are moving to Cassandra 3.11.0 from Cassandra 2.x. We have a Java UDF 
> which is straightforward and looks something like:
> {code}
> CREATE FUNCTION utf8_text_size (val TEXT)
> CALLED ON NULL INPUT
> RETURNS INT
> LANGUAGE java
> AS 'if (val == null) {return 0;}try { 
>return val.getBytes("UTF-8").length;} catch 
> (Exception e) {throw new RuntimeException("Failed to compute 
> size of UTF-8 text", e);}';
> {code}
> This works fine in Cassandra 2.x. In Cassandra 3.11.0 when this UDF is being 
> created, we keep running into this exception when the UDF is being 
> (internally) compiled:
> {code}
> InvalidRequest: Error from server: code=2200 [Invalid query] message="Java 
> source compilation failed:
> Line 1: The type java.io.UnsupportedEncodingException cannot be resolved. It 
> is indirectly referenced from required .class files
> Line 1: The type java.nio.charset.Charset cannot be resolved. It is 
> indirectly referenced from required .class files
> Line 1: The method getBytes(String) from the type String refers to the 
> missing type UnsupportedEncodingException
> {code}
> I realize there have been changes to the UDF support in Cassandra 3.x and I 
> also have read this[1] article related to it. However, I don't see anything 
> wrong with the above UDF. In fact, I enabled TRACE logging of 
> {{org.apache.cassandra.cql3.functions}} which is where the 
> {{JavaBasedUDFunction}} resides to see what the generated source looks like. 
> Here's what it looks like (I have modified the classname etc, but nothing 
> else):
> {code}
> package org.myapp;
> import java.nio.ByteBuffer;
> import java.util.*;
> import org.apache.cassandra.cql3.functions.JavaUDF;
> import org.apache.cassandra.cql3.functions.UDFContext;
> import org.apache.cassandra.transport.ProtocolVersion;
> import com.datastax.driver.core.TypeCodec;
> import com.datastax.driver.core.TupleValue;
> import com.datastax.driver.core.UDTValue;
> public final class CassandraUDFTest extends JavaUDF
> {
> public CassandraUDFTest(TypeCodec returnCodec, 
> TypeCodec[] argCodecs, UDFContext udfContext)
> {
> super(returnCodec, argCodecs, udfContext);
> }
> protected ByteBuffer executeImpl(ProtocolVersion 

[jira] [Commented] (CASSANDRA-13788) Seemingly valid Java UDF fails compilation with error "type cannot be resolved. It is indirectly referenced from required .class files"

2017-08-23 Thread jaikiran pai (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16138005#comment-16138005
 ] 

jaikiran pai commented on CASSANDRA-13788:
--

Based on my previous comment, this really isn't a bug and is behaving as per 
the expectations of the UDF handling code, so I'm changing this JIRA's type to 
Improvement instead.


> Seemingly valid Java UDF fails compilation with error "type cannot be 
> resolved. It is indirectly referenced from required .class files"
> ---
>
> Key: CASSANDRA-13788
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13788
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
> Environment: Cassandra 3.11.0 
> java version "1.8.0_131"
> Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
> Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
>Reporter: jaikiran pai
>
> We are moving to Cassandra 3.11.0 from Cassandra 2.x. We have a Java UDF 
> which is straightforward and looks something like:
> {code}
> CREATE FUNCTION utf8_text_size (val TEXT)
> CALLED ON NULL INPUT
> RETURNS INT
> LANGUAGE java
> AS 'if (val == null) {return 0;}try { 
>return val.getBytes("UTF-8").length;} catch 
> (Exception e) {throw new RuntimeException("Failed to compute 
> size of UTF-8 text", e);}';
> {code}
> This works fine in Cassandra 2.x. In Cassandra 3.11.0 when this UDF is being 
> created, we keep running into this exception when the UDF is being 
> (internally) compiled:
> {code}
> InvalidRequest: Error from server: code=2200 [Invalid query] message="Java 
> source compilation failed:
> Line 1: The type java.io.UnsupportedEncodingException cannot be resolved. It 
> is indirectly referenced from required .class files
> Line 1: The type java.nio.charset.Charset cannot be resolved. It is 
> indirectly referenced from required .class files
> Line 1: The method getBytes(String) from the type String refers to the 
> missing type UnsupportedEncodingException
> {code}
> I realize there have been changes to the UDF support in Cassandra 3.x and I 
> also have read this[1] article related to it. However, I don't see anything 
> wrong with the above UDF. In fact, I enabled TRACE logging of 
> {{org.apache.cassandra.cql3.functions}} which is where the 
> {{JavaBasedUDFunction}} resides to see what the generated source looks like. 
> Here's what it looks like (I have modified the classname etc, but nothing 
> else):
> {code}
> package org.myapp;
> import java.nio.ByteBuffer;
> import java.util.*;
> import org.apache.cassandra.cql3.functions.JavaUDF;
> import org.apache.cassandra.cql3.functions.UDFContext;
> import org.apache.cassandra.transport.ProtocolVersion;
> import com.datastax.driver.core.TypeCodec;
> import com.datastax.driver.core.TupleValue;
> import com.datastax.driver.core.UDTValue;
> public final class CassandraUDFTest extends JavaUDF
> {
> public CassandraUDFTest(TypeCodec returnCodec, 
> TypeCodec[] argCodecs, UDFContext udfContext)
> {
> super(returnCodec, argCodecs, udfContext);
> }
> protected ByteBuffer executeImpl(ProtocolVersion protocolVersion, 
> List params)
> {
> Integer result = xsome_keyspace_2eutf8_text_size_3232115_9(
> /* parameter 'val' */
> (String) super.compose(protocolVersion, 0, params.get(0))
> );
> return super.decompose(protocolVersion, result);
> }
> protected Object executeAggregateImpl(ProtocolVersion protocolVersion, 
> Object firstParam, List params)
> {
> Integer result = xsome_keyspace_2eutf8_text_size_3232115_9(
> /* parameter 'val' */
> (String) firstParam
> );
> return result;
> }
> private Integer xsome_keyspace_2eutf8_text_size_3232115_9(String val)
> {
> if (val == null) {return 0;}try { 
>return val.getBytes("UTF-8").length;} catch (Exception 
> e) {throw new RuntimeException("Failed to compute size of 
> UTF-8 text", e);}
> }
> }
> {code}
> I then went ahead and compiled this generated class from command line using 
> the (Oracle) Java compiler as follows:
> {code}
> javac -cp "/opt/cassandra/apache-cassandra-3.11.0/lib/*" 
> org/myapp/CassandraUDFTest.java
> {code}
> and it compiled fine without any errors. 
> Looking at the {{JavaBasedUDFunction}} which compiles this UDF at runtime, 
> it's using Eclipse JDT compiler. I haven't looked into why it would be 
> running into these compilation errors.
> [1] http://batey.info/cassandra-udfs.html



--
This message was sent by Atlassian JIRA

[jira] [Updated] (CASSANDRA-13788) Seemingly valid Java UDF fails compilation with error "type cannot be resolved. It is indirectly referenced from required .class files"

2017-08-23 Thread jaikiran pai (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jaikiran pai updated CASSANDRA-13788:
-
Issue Type: Improvement  (was: Bug)

> Seemingly valid Java UDF fails compilation with error "type cannot be 
> resolved. It is indirectly referenced from required .class files"
> ---
>
> Key: CASSANDRA-13788
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13788
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
> Environment: Cassandra 3.11.0 
> java version "1.8.0_131"
> Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
> Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
>Reporter: jaikiran pai
>
> We are moving to Cassandra 3.11.0 from Cassandra 2.x. We have a Java UDF 
> which is straightforward and looks something like:
> {code}
> CREATE FUNCTION utf8_text_size (val TEXT)
> CALLED ON NULL INPUT
> RETURNS INT
> LANGUAGE java
> AS 'if (val == null) {return 0;}try { 
>return val.getBytes("UTF-8").length;} catch 
> (Exception e) {throw new RuntimeException("Failed to compute 
> size of UTF-8 text", e);}';
> {code}
> This works fine in Cassandra 2.x. In Cassandra 3.11.0 when this UDF is being 
> created, we keep running into this exception when the UDF is being 
> (internally) compiled:
> {code}
> InvalidRequest: Error from server: code=2200 [Invalid query] message="Java 
> source compilation failed:
> Line 1: The type java.io.UnsupportedEncodingException cannot be resolved. It 
> is indirectly referenced from required .class files
> Line 1: The type java.nio.charset.Charset cannot be resolved. It is 
> indirectly referenced from required .class files
> Line 1: The method getBytes(String) from the type String refers to the 
> missing type UnsupportedEncodingException
> {code}
> I realize there have been changes to the UDF support in Cassandra 3.x and I 
> also have read this[1] article related to it. However, I don't see anything 
> wrong with the above UDF. In fact, I enabled TRACE logging of 
> {{org.apache.cassandra.cql3.functions}} which is where the 
> {{JavaBasedUDFunction}} resides to see what the generated source looks like. 
> Here's what it looks like (I have modified the classname etc, but nothing 
> else):
> {code}
> package org.myapp;
> import java.nio.ByteBuffer;
> import java.util.*;
> import org.apache.cassandra.cql3.functions.JavaUDF;
> import org.apache.cassandra.cql3.functions.UDFContext;
> import org.apache.cassandra.transport.ProtocolVersion;
> import com.datastax.driver.core.TypeCodec;
> import com.datastax.driver.core.TupleValue;
> import com.datastax.driver.core.UDTValue;
> public final class CassandraUDFTest extends JavaUDF
> {
> public CassandraUDFTest(TypeCodec returnCodec, 
> TypeCodec[] argCodecs, UDFContext udfContext)
> {
> super(returnCodec, argCodecs, udfContext);
> }
> protected ByteBuffer executeImpl(ProtocolVersion protocolVersion, 
> List params)
> {
> Integer result = xsome_keyspace_2eutf8_text_size_3232115_9(
> /* parameter 'val' */
> (String) super.compose(protocolVersion, 0, params.get(0))
> );
> return super.decompose(protocolVersion, result);
> }
> protected Object executeAggregateImpl(ProtocolVersion protocolVersion, 
> Object firstParam, List params)
> {
> Integer result = xsome_keyspace_2eutf8_text_size_3232115_9(
> /* parameter 'val' */
> (String) firstParam
> );
> return result;
> }
> private Integer xsome_keyspace_2eutf8_text_size_3232115_9(String val)
> {
> if (val == null) {return 0;}try { 
>return val.getBytes("UTF-8").length;} catch (Exception 
> e) {throw new RuntimeException("Failed to compute size of 
> UTF-8 text", e);}
> }
> }
> {code}
> I then went ahead and compiled this generated class from command line using 
> the (Oracle) Java compiler as follows:
> {code}
> javac -cp "/opt/cassandra/apache-cassandra-3.11.0/lib/*" 
> org/myapp/CassandraUDFTest.java
> {code}
> and it compiled fine without any errors. 
> Looking at the {{JavaBasedUDFunction}} which compiles this UDF at runtime, 
> it's using Eclipse JDT compiler. I haven't looked into why it would be 
> running into these compilation errors.
> [1] http://batey.info/cassandra-udfs.html



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional 

[jira] [Commented] (CASSANDRA-13788) Seemingly valid Java UDF fails compilation with error "type cannot be resolved. It is indirectly referenced from required .class files"

2017-08-23 Thread jaikiran pai (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16138004#comment-16138004
 ] 

jaikiran pai commented on CASSANDRA-13788:
--

Looked at the Cassandra UDF processing code in a bit more detail. It turns out 
that the {{UDFFunction}} class is backed by a {{UDFClassLoader}}, which has a 
whitelist (as well as a blacklist) of classes/packages allowed for use in Java 
UDFs. It turns out that both:
{code}
java.io.UnsupportedEncodingException
java.nio.charset.Charset
{code}
which are what is being indirectly used in that UDF, are missing from the 
whitelist (they aren't part of the blacklist either).

Would it be possible to add these to the whitelist, or to provide a way to 
configure the whitelist (at the server level)? As far as I can see, given how 
limited the whitelist is, it's going to be very restrictive to write even 
trivial UDFs in Java (like the one above).
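
As a possible workaround until then (untested against the UDF sandbox, and 
assuming {{java.lang}} is on the whitelist), the UTF-8 byte length can be 
computed from code points without touching {{String.getBytes}} or any 
{{java.nio.charset}} type:

{code}
public final class Utf8LengthSketch
{
    // UTF-8 byte length computed from code points, using only java.lang.
    public static int utf8Length(String val)
    {
        if (val == null)
            return 0;
        int bytes = 0;
        for (int i = 0; i < val.length(); )
        {
            int cp = val.codePointAt(i);
            i += Character.charCount(cp);
            if (cp < 0x80)          bytes += 1;  // ASCII
            else if (cp < 0x800)    bytes += 2;  // two-byte sequences
            else if (cp < 0x10000)  bytes += 3;  // rest of the BMP
            else                    bytes += 4;  // supplementary planes
        }
        return bytes;
    }

    public static void main(String[] args)
    {
        System.out.println(utf8Length("héllo"));  // prints 6
    }
}
{code}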


> Seemingly valid Java UDF fails compilation with error "type cannot be 
> resolved. It is indirectly referenced from required .class files"
> ---
>
> Key: CASSANDRA-13788
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13788
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: Cassandra 3.11.0 
> java version "1.8.0_131"
> Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
> Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
>Reporter: jaikiran pai
>
> We are moving to Cassandra 3.11.0 from Cassandra 2.x. We have a Java UDF 
> which is straightforward and looks something like:
> {code}
> CREATE FUNCTION utf8_text_size (val TEXT)
> CALLED ON NULL INPUT
> RETURNS INT
> LANGUAGE java
> AS 'if (val == null) {return 0;}try { 
>return val.getBytes("UTF-8").length;} catch 
> (Exception e) {throw new RuntimeException("Failed to compute 
> size of UTF-8 text", e);}';
> {code}
> This works fine in Cassandra 2.x. In Cassandra 3.11.0 when this UDF is being 
> created, we keep running into this exception when the UDF is being 
> (internally) compiled:
> {code}
> InvalidRequest: Error from server: code=2200 [Invalid query] message="Java 
> source compilation failed:
> Line 1: The type java.io.UnsupportedEncodingException cannot be resolved. It 
> is indirectly referenced from required .class files
> Line 1: The type java.nio.charset.Charset cannot be resolved. It is 
> indirectly referenced from required .class files
> Line 1: The method getBytes(String) from the type String refers to the 
> missing type UnsupportedEncodingException
> {code}
> I realize there have been changes to the UDF support in Cassandra 3.x and I 
> also have read this[1] article related to it. However, I don't see anything 
> wrong with the above UDF. In fact, I enabled TRACE logging of 
> {{org.apache.cassandra.cql3.functions}} which is where the 
> {{JavaBasedUDFunction}} resides to see what the generated source looks like. 
> Here's what it looks like (I have modified the classname etc, but nothing 
> else):
> {code}
> package org.myapp;
> import java.nio.ByteBuffer;
> import java.util.*;
> import org.apache.cassandra.cql3.functions.JavaUDF;
> import org.apache.cassandra.cql3.functions.UDFContext;
> import org.apache.cassandra.transport.ProtocolVersion;
> import com.datastax.driver.core.TypeCodec;
> import com.datastax.driver.core.TupleValue;
> import com.datastax.driver.core.UDTValue;
> public final class CassandraUDFTest extends JavaUDF
> {
> public CassandraUDFTest(TypeCodec returnCodec, 
> TypeCodec[] argCodecs, UDFContext udfContext)
> {
> super(returnCodec, argCodecs, udfContext);
> }
> protected ByteBuffer executeImpl(ProtocolVersion protocolVersion, 
> List params)
> {
> Integer result = xsome_keyspace_2eutf8_text_size_3232115_9(
> /* parameter 'val' */
> (String) super.compose(protocolVersion, 0, params.get(0))
> );
> return super.decompose(protocolVersion, result);
> }
> protected Object executeAggregateImpl(ProtocolVersion protocolVersion, 
> Object firstParam, List params)
> {
> Integer result = xsome_keyspace_2eutf8_text_size_3232115_9(
> /* parameter 'val' */
> (String) firstParam
> );
> return result;
> }
> private Integer xsome_keyspace_2eutf8_text_size_3232115_9(String val)
> {
> if (val == null) {return 0;}try { 
>return val.getBytes("UTF-8").length;} catch (Exception 
> e) {throw new RuntimeException("Failed to compute size of 
> UTF-8 text", e);}
> }
> }
> {code}

[jira] [Commented] (CASSANDRA-13786) Validation compactions can cause orphan sstable warnings

2017-08-23 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16137976#comment-16137976
 ] 

Marcus Eriksson commented on CASSANDRA-13786:
-

Let's just drop it to DEBUG? It's pretty unclear what a user needs to do to 
avoid/fix it, but keeping it around to make debugging easier might make sense?

> Validation compactions can cause orphan sstable warnings
> 
>
> Key: CASSANDRA-13786
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13786
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
>Priority: Minor
> Fix For: 4.0
>
>
> I've seen LevelledCompactionStrategy occasionally logging: 
> {quote} from level 0 is not on corresponding level in the 
> leveled manifest. This is not a problem per se, but may indicate an orphaned 
> sstable due to a failed compaction not cleaned up properly."{quote} warnings 
> from a ValidationExecutor thread.
> What's happening here is that a compaction running concurrently with the 
> validation is promoting (or demoting) sstables as part of an incremental 
> repair, and an sstable has changed hands by the time the validation 
> compaction gets around to getting scanners for it. The sstable 
> isolation/synchronization done by validation compactions is a lot looser than 
> normal compactions, so seeing this happen isn't very surprising. Given that 
> it's harmless, and not unexpected, I think it would be best to not log these 
> during validation compactions.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13787) RangeTombstoneMarker and ParitionDeletion is not properly included in MV

2017-08-23 Thread ZhaoYang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhaoYang updated CASSANDRA-13787:
-
Description: 
Found two problems related to MV tombstones. 

1. A RangeTombstoneMarker is ignored after shadowing the first row, so 
subsequent base rows are not shadowed in TableViews.

If the range tombstone was not yet flushed, it was applied as a deleted row to 
shadow new updates, and this works correctly.
After the range tombstone was flushed, it was applied as a RangeTombstoneMarker 
and skipped once it had shadowed the first update. The bound of the 
RangeTombstoneMarker seems wrong: it contained a full clustering, but it should 
contain a range.

2. The partition tombstone is not applied when there is no existing live data, 
so it resurrects deleted cells. This was found in CASSANDRA-11500 and included 
in that patch.

In order not to make the CASSANDRA-11500 patch more complicated, I will try to 
fix the range/partition tombstone issues here.


{code:title=Tests to reproduce}
@Test
public void testExistingRangeTombstoneWithFlush() throws Throwable
{
    testExistingRangeTombstone(true);
}

@Test
public void testExistingRangeTombstoneWithoutFlush() throws Throwable
{
    testExistingRangeTombstone(false);
}

public void testExistingRangeTombstone(boolean flush) throws Throwable
{
    createTable("CREATE TABLE %s (k1 int, c1 int, c2 int, v1 int, v2 int, PRIMARY KEY (k1, c1, c2))");

    execute("USE " + keyspace());
    executeNet(protocolVersion, "USE " + keyspace());

    createView("view1",
               "CREATE MATERIALIZED VIEW view1 AS SELECT * FROM %%s WHERE k1 IS NOT NULL AND c1 IS NOT NULL AND c2 IS NOT NULL PRIMARY KEY (k1, c2, c1)");

    // range tombstone covering all rows with k1 = 1, c1 = 1
    updateView("DELETE FROM %s USING TIMESTAMP 10 WHERE k1 = 1 and c1=1");

    if (flush)
        Keyspace.open(keyspace()).getColumnFamilyStore(currentTable()).forceBlockingFlush();

    String table = KEYSPACE + "." + currentTable();
    updateView("BEGIN BATCH " +
               "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 0, 0, 0, 0) USING TIMESTAMP 5; " +
               "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 0, 1, 0, 1) USING TIMESTAMP 5; " +
               "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 1, 0, 1, 0) USING TIMESTAMP 5; " +
               "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 1, 1, 1, 1) USING TIMESTAMP 5; " +
               "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 1, 2, 1, 2) USING TIMESTAMP 5; " +
               "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 1, 3, 1, 3) USING TIMESTAMP 5; " +
               "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 2, 0, 2, 0) USING TIMESTAMP 5; " +
               "APPLY BATCH");

    // rows with c1 = 1 must stay shadowed by the range tombstone, in base and view alike
    assertRowsIgnoringOrder(execute("select * from %s"),
                            row(1, 0, 0, 0, 0),
                            row(1, 0, 1, 0, 1),
                            row(1, 2, 0, 2, 0));
    assertRowsIgnoringOrder(execute("select k1,c1,c2,v1,v2 from view1"),
                            row(1, 0, 0, 0, 0),
                            row(1, 0, 1, 0, 1),
                            row(1, 2, 0, 2, 0));
}

@Test
public void testRangeDeletionWithFlush() throws Throwable
{
    testExistingPartitionDeletion(true);
}

@Test
public void testRangeDeletionWithoutFlush() throws Throwable
{
    testExistingPartitionDeletion(false);
}

public void testExistingPartitionDeletion(boolean flush) throws Throwable
{
    // for partition deletion, we need to know that the existing row is
    // shadowed rather than non-existent.
    createTable("CREATE TABLE %s (a int, b int, c int, d int, PRIMARY KEY (a))");

    execute("USE " + keyspace());
    executeNet(protocolVersion, "USE " + keyspace());

    createView("mv_test1",
               "CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a IS NOT NULL AND b IS NOT NULL PRIMARY KEY (a, b)");

    Keyspace ks = Keyspace.open(keyspace());
    ks.getColumnFamilyStore("mv_test1").disableAutoCompaction();

    execute("INSERT INTO %s (a, b, c, d) VALUES (?, ?, ?, ?) using timestamp 0", 1, 1, 1, 1);
    if (flush)
        FBUtilities.waitOnFutures(ks.flush());

    assertRowsIgnoringOrder(execute("SELECT * FROM mv_test1"), row(1, 1, 1, 1));

    // remove the view row
    updateView("UPDATE %s using timestamp 1 set b = null WHERE a=1");
    if (flush)
        FBUtilities.waitOnFutures(ks.flush());

    assertRowsIgnoringOrder(execute("SELECT * FROM mv_test1"));
    // remove the base row; no view update is generated
    updateView("DELETE FROM %s using timestamp 2 where a=1");
    if (flush)
        FBUtilities.waitOnFutures(ks.flush());

    assertRowsIgnoringOrder(execute("SELECT * FROM