[jira] [Commented] (CASSANDRA-14482) ZSTD Compressor support in Cassandra

2018-06-07 Thread Vinay Chella (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16505719#comment-16505719
 ] 

Vinay Chella commented on CASSANDRA-14482:
--

[~sushm...@gmail.com] We will also try zstd within C* internally and share our 
results. 

[~zznate] As we discussed at the meetup, can you assign this ticket to me? I 
am happy to work on it. 

> ZSTD Compressor support in Cassandra
> 
>
> Key: CASSANDRA-14482
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14482
> Project: Cassandra
>  Issue Type: Wish
>  Components: Libraries
>Reporter: Sushma A Devendrappa
>Priority: Major
> Fix For: 3.11.x
>
>
> ZStandard has a great speed and compression ratio tradeoff. 
> ZStandard is open source compression from Facebook.
> More about ZSTD
> [https://github.com/facebook/zstd]
> https://code.facebook.com/posts/1658392934479273/smaller-and-faster-data-compression-with-zstandard/
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14482) ZSTD Compressor support in Cassandra

2018-06-07 Thread Sushma A Devendrappa (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16505708#comment-16505708
 ] 

Sushma A Devendrappa commented on CASSANDRA-14482:
--

[~jolynch] Thanks for showing interest. I was planning to move this ticket 
from the wish list to an actionable item.

It is interesting to see that there is already some implementation for this. I 
will soon share some benchmark results comparing Deflate and ZSTD in Java 
using JNI. 

> ZSTD Compressor support in Cassandra
> 
>
> Key: CASSANDRA-14482
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14482
> Project: Cassandra
>  Issue Type: Wish
>  Components: Libraries
>Reporter: Sushma A Devendrappa
>Priority: Major
> Fix For: 3.11.x
>
>
> ZStandard has a great speed and compression ratio tradeoff. 
> ZStandard is open source compression from Facebook.
> More about ZSTD
> [https://github.com/facebook/zstd]
> https://code.facebook.com/posts/1658392934479273/smaller-and-faster-data-compression-with-zstandard/
>  






[jira] [Updated] (CASSANDRA-13371) Remove legacy auth tables support

2018-06-07 Thread Kurt Greaves (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Greaves updated CASSANDRA-13371:
-
Issue Type: Bug  (was: Improvement)

> Remove legacy auth tables support
> -
>
> Key: CASSANDRA-13371
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13371
> Project: Cassandra
>  Issue Type: Bug
>  Components: Auth
>Reporter: Stefan Podkowinski
>Assignee: Stefan Podkowinski
>Priority: Major
>  Labels: security
> Fix For: 3.0.15, 3.11.1, 4.0
>
>
> Starting with Cassandra 3.0, we include support for converting 
> pre-CASSANDRA-7653 user tables, until they are dropped by the operator. 
> Converting e.g. permissions happens by simply copying all of them from 
> {{permissions}} -> {{role_permissions}}, until the {{permissions}} table has 
> been dropped.
> Upgrading to 4.0 will only be possible from 3.0 upwards, so I think it's safe 
> to assume that the new permissions table has already been populated, whether 
> the old table was dropped or not. Therefore, I'd suggest just getting rid of 
> the legacy support.






[jira] [Updated] (CASSANDRA-14500) Debian package to include systemd file and conf

2018-06-07 Thread Lerh Chuan Low (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lerh Chuan Low updated CASSANDRA-14500:
---
Status: Patch Available  (was: Awaiting Feedback)

> Debian package to include systemd file and conf
> ---
>
> Key: CASSANDRA-14500
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14500
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Packaging
>Reporter: Lerh Chuan Low
>Assignee: Lerh Chuan Low
>Priority: Minor
>
> I've been testing Cassandra on trunk on Debian stretch, and have been 
> creating my own systemd service files for Cassandra. My Cassandra clusters 
> would sometimes die due to too many open files. 
> As it turns out after some digging, this is because systemd ignores 
> */etc/security/limits.conf*. It relies on a drop-in configuration file under 
> the unit's *.d* directory. There's more information here: 
> [https://www.freedesktop.org/software/systemd/man/systemd-system.conf.html]. 
> So, for example, for */etc/systemd/system/cassandra.service*, the ulimits are 
> read from */etc/systemd/system/cassandra.service.d/cassandra.conf*. 
> Crosschecking against the limits of my Cassandra process, the limits in 
> */etc/security/limits.conf* really were not respected. If I make the change 
> above, then it works as expected. */etc/security/limits.conf* is shipped in 
> Cassandra's debian package. 
> Given that far more distributions now use systemd (Ubuntu included), I was 
> wondering if it's worth the effort to change Cassandra's debian packaging to 
> use systemd (or at least include a systemd service file). I'm not totally 
> familiar with whether it's common or normal to include a service file in 
> packaging, so I'm happy to be corrected/cancelled depending on what people 
> think. 






[jira] [Updated] (CASSANDRA-14500) Debian package to include systemd file and conf

2018-06-07 Thread Lerh Chuan Low (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lerh Chuan Low updated CASSANDRA-14500:
---
Status: Awaiting Feedback  (was: Open)

> Debian package to include systemd file and conf
> ---
>
> Key: CASSANDRA-14500
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14500
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Packaging
>Reporter: Lerh Chuan Low
>Assignee: Lerh Chuan Low
>Priority: Minor
>
> I've been testing Cassandra on trunk on Debian stretch, and have been 
> creating my own systemd service files for Cassandra. My Cassandra clusters 
> would sometimes die due to too many open files. 
> As it turns out after some digging, this is because systemd ignores 
> */etc/security/limits.conf*. It relies on a drop-in configuration file under 
> the unit's *.d* directory. There's more information here: 
> [https://www.freedesktop.org/software/systemd/man/systemd-system.conf.html]. 
> So, for example, for */etc/systemd/system/cassandra.service*, the ulimits are 
> read from */etc/systemd/system/cassandra.service.d/cassandra.conf*. 
> Crosschecking against the limits of my Cassandra process, the limits in 
> */etc/security/limits.conf* really were not respected. If I make the change 
> above, then it works as expected. */etc/security/limits.conf* is shipped in 
> Cassandra's debian package. 
> Given that far more distributions now use systemd (Ubuntu included), I was 
> wondering if it's worth the effort to change Cassandra's debian packaging to 
> use systemd (or at least include a systemd service file). I'm not totally 
> familiar with whether it's common or normal to include a service file in 
> packaging, so I'm happy to be corrected/cancelled depending on what people 
> think. 






[jira] [Comment Edited] (CASSANDRA-14448) Improve the performance of CAS

2018-06-07 Thread Dikang Gu (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16505576#comment-16505576
 ] 

Dikang Gu edited comment on CASSANDRA-14448 at 6/8/18 1:01 AM:
---

An initial patch here: 
[trunk|https://github.com/DikangGu/cassandra/commit/da43bb7fc1336e8f2f7e04d84be1a44271eafba9].
 It introduces a new PAXOS_PREPARE_AND_READ verb, to allow in-place upgrade.

I also ran some tests in our internal test cluster, which spans 5 different 
data centers in the US, with 1 replica in each data center. The test client 
performed 10K operations.

I tested two types of use cases: non-contention and contention. In the 
non-contention case, each operation updates a different key. In the contention 
case, there are 5 unique keys in total, and each thread picks one to update. 

As a result, there is about a 40% latency improvement in the non-contention 
test. The contention test shows some latency improvement as well, and far 
fewer timeouts.

 
*non-contention test*

| |C* 3.0.15 (no patch), sync commit| | |C* 3.0.15 (no patch), async commit| | |fastpaxos (prepare+read combined), sync commit| | |fastpaxos (prepare+read combined), async commit| | |
|10K CAS|1 thread|5 threads|10 threads|1 thread|5 threads|10 threads|1 thread|5 threads|10 threads|1 thread|5 threads|10 threads|
|Total Time (ms)|1915163|378791|185424|1464694|291807|140827|1422831|284674|142433|948230|187583|94047|
|mean|192.31|188.99|185.08|146.62|145.8|140.41|142.4|142.46|142.23|94.31|93.96|93.93|
|P75|196.52|195.6|194.99|148.08|147.29|146.68|147.46|147.06|146.81|98.86|98.27|98.2|
|P95|209.68|208.23|207.77|160.8|160.25|158.89|147.98|147.48|147.25|99.51|98.69|98.57|
|P99|211.43|212.21|208.61|161.71|160.94|159.9|148.42|147.85|147.6|100.18|99.12|98.86|

*contention test*

| |C* 3.0.15 (no patch), sync commit| | |C* 3.0.15 (no patch), async commit| | |fastpaxos (prepare+read combined), sync commit| | |fastpaxos (prepare+read combined), async commit| | |
|10K CAS|1 thread|5 threads|10 threads|1 thread|5 threads|10 threads|1 thread|5 threads|10 threads|1 thread|5 threads|10 threads|
|Total Time (ms)|1886023|364048|478343|1462954|450799|481059|1432742|285417|330649|937701|305705|30|
|mean|193.41|183.24|510.41|148.48|232.41|493.72|143.28|143.91|334.54|95.55|154.45|301.69|
|P75|199.69|186.09|1008.11|150.5|150.49|1010.68|142.28|149.5|432.55|96.113|101.42|418.88|
|P95|200.58|187.09|1160.3|151.03|1029.75|1167.68|151.19|150.27|1076.05|102.11|615.83|1039.38|
|P99|201.17|189.71|1217.71|151.92|1146.28|1228.08|151.66|150.59|1141.52|102.65|1048.34|1129.77|
|Timeouts|0|0|2093|0|443|2282|0|0|863|0|193|752|

 


was (Author: dikanggu):
An initial patch here: 
[trunk|https://github.com/DikangGu/cassandra/commit/da43bb7fc1336e8f2f7e04d84be1a44271eafba9].
 It introduces a new PAXOS_PREPARE_AND_READ verb, to allow in-place upgrade.

I also ran some tests in our internal test cluster, which across 5 different 
data centers in US, with 1 replica in each data center. The test client was 
doing 10K operations.

I test two types of use cases, non-contention and contention ones. For 
non-contention use case, each operation updates different key. For contention 
use case, there are 5 unique keys in total, each thread picks one to update. 

As the result, there is about +40% latency improvements for non-contention 
test. For contention test, there are some latency improvements as well, and the 
timeouts are much less as well.

 
 non-contention test
 
| |C* 3.0.15 (no patch), sync commit| | |C* 3.0.15 (no patch), async commit| | |fastpaxos (prepare+read combined), sync commit| | |fastpaxos (prepare+read combined), async commit| | |
|10K CAS|1 thread|5 threads|10 threads|1 thread|5 threads|10 threads|1 thread|5 threads|10 threads|1 thread|5 threads|10 threads|
|Total Time (ms)|1915163|378791|185424|1464694|291807|140827|1422831|284674|142433|948230|187583|94047|
|mean|192.31|188.99|185.08|146.62|145.8|140.41|142.4|142.46|142.23|94.31|93.96|93.93|
|P75|196.52|195.6|194.99|148.08|147.29|146.68|147.46|147.06|146.81|98.86|98.27|98.2|
|P95|209.68|208.23|207.77|160.8|160.25|158.89|147.98|147.48|147.25|99.51|98.69|98.57|
|P99|211.43|212.21|208.61|161.71|160.94|159.9|148.42|147.85|147.6|100.18|99.12|98.86|

contention test
 
| |C* 3.0.15 (no patch), sync commit| | |C* 3.0.15 (no patch), async commit| | |fastpaxos (prepare+read combined), sync commit| | |fastpaxos (prepare+read combined), async commit| | |
|10K CAS|1 thread|5 threads|10 threads|1 thread|5 threads|10 threads|1 thread|5 threads|10 threads|1 thread|5 threads|10 threads|
|Total Time (ms)|1886023|364048|478343|1462954|450799|481059|1432742|285417|330649|937701|305705|30|
|mean|193.41|183.24|510.41|148.48|232.41|493.72|143.28|143.91|334.54|95.55|154.45|301.69|
|P75|199.69|186.09|1008.11|150.5|150.49|1010.68|142.28|149.5|432.55|96.113|101.42|418.88|
|P95|200.58|187.09|1160.3|151.03|1029.75|1167.68|151.19|150.27|1076.05|102.11|615.83|1039.38|
|P99|201.17|189.71|1217.71|151.92|1146.28|1228.08|151.66|150.59|1141.52|102.65|1048.34|1129.77|
|Timeouts|0|0|2093|0|443|2282|0|0|863|0|193|752|

[jira] [Commented] (CASSANDRA-14448) Improve the performance of CAS

2018-06-07 Thread Dikang Gu (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16505576#comment-16505576
 ] 

Dikang Gu commented on CASSANDRA-14448:
---

An initial patch here: 
[trunk|https://github.com/DikangGu/cassandra/commit/da43bb7fc1336e8f2f7e04d84be1a44271eafba9].
 It introduces a new PAXOS_PREPARE_AND_READ verb, to allow in-place upgrade.

I also ran some tests in our internal test cluster, which spans 5 different 
data centers in the US, with 1 replica in each data center. The test client 
performed 10K operations.

I tested two types of use cases: non-contention and contention. In the 
non-contention case, each operation updates a different key. In the contention 
case, there are 5 unique keys in total, and each thread picks one to update. 

As a result, there is about a 40% latency improvement in the non-contention 
test. The contention test shows some latency improvement as well, and far 
fewer timeouts.

 
 non-contention test
 
| |C* 3.0.15 (no patch), sync commit| | |C* 3.0.15 (no patch), async commit| | |fastpaxos (prepare+read combined), sync commit| | |fastpaxos (prepare+read combined), async commit| | |
|10K CAS|1 thread|5 threads|10 threads|1 thread|5 threads|10 threads|1 thread|5 threads|10 threads|1 thread|5 threads|10 threads|
|Total Time (ms)|1915163|378791|185424|1464694|291807|140827|1422831|284674|142433|948230|187583|94047|
|mean|192.31|188.99|185.08|146.62|145.8|140.41|142.4|142.46|142.23|94.31|93.96|93.93|
|P75|196.52|195.6|194.99|148.08|147.29|146.68|147.46|147.06|146.81|98.86|98.27|98.2|
|P95|209.68|208.23|207.77|160.8|160.25|158.89|147.98|147.48|147.25|99.51|98.69|98.57|
|P99|211.43|212.21|208.61|161.71|160.94|159.9|148.42|147.85|147.6|100.18|99.12|98.86|

contention test
 
| |C* 3.0.15 (no patch), sync commit| | |C* 3.0.15 (no patch), async commit| | |fastpaxos (prepare+read combined), sync commit| | |fastpaxos (prepare+read combined), async commit| | |
|10K CAS|1 thread|5 threads|10 threads|1 thread|5 threads|10 threads|1 thread|5 threads|10 threads|1 thread|5 threads|10 threads|
|Total Time (ms)|1886023|364048|478343|1462954|450799|481059|1432742|285417|330649|937701|305705|30|
|mean|193.41|183.24|510.41|148.48|232.41|493.72|143.28|143.91|334.54|95.55|154.45|301.69|
|P75|199.69|186.09|1008.11|150.5|150.49|1010.68|142.28|149.5|432.55|96.113|101.42|418.88|
|P95|200.58|187.09|1160.3|151.03|1029.75|1167.68|151.19|150.27|1076.05|102.11|615.83|1039.38|
|P99|201.17|189.71|1217.71|151.92|1146.28|1228.08|151.66|150.59|1141.52|102.65|1048.34|1129.77|
|Timeouts|0|0|2093|0|443|2282|0|0|863|0|193|752|


 
 

> Improve the performance of CAS
> --
>
> Key: CASSANDRA-14448
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14448
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Coordination
>Reporter: Dikang Gu
>Assignee: Dikang Gu
>Priority: Major
>
> I'm working on some performance improvements for lightweight transactions 
> (compare and set).
>  
> As you know, the current CAS requires 4 round trips to finish, which is not 
> efficient, especially in the cross-DC case.
> 1) Prepare
> 2) Quorum read current value
> 3) Propose new value
> 4) Commit
>  
> I'm proposing the following improvements to reduce it to 2 round trips:
> 1) Combine prepare and quorum read together: use only one round trip to 
> decide the ballot and also piggyback the current value in the response.
> 2) Propose the new value, and then send out the commit request 
> asynchronously, so the client will not wait for the ack of the commit. In 
> case of commit failures, we still have a chance to retry/repair it through 
> hints or subsequent read/CAS events.
>  
> After the improvement, we should be able to finish the CAS operation in 2 
> round trips. There can be further improvements as well; this can be a 
> starting point.






[jira] [Commented] (CASSANDRA-13981) Enable Cassandra for Persistent Memory

2018-06-07 Thread shylaja kokoori (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-13981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1650#comment-1650
 ] 

shylaja kokoori commented on CASSANDRA-13981:
-

Uploading patch version 2.1. A branch with this patch applied is at 
https://github.com/shyla226/cassandra/tree/13981. This patch ties up work that 
was in progress to improve the performance and persistent-memory footprint of 
the PCJ-based design.

We are now exploring the design & sketch that Jason Brown has proposed.

[^in-mem-cassandra-2.1.patch]

[^readme2.1.txt]

> Enable Cassandra for Persistent Memory 
> ---
>
> Key: CASSANDRA-13981
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13981
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Core
>Reporter: Preetika Tyagi
>Assignee: Preetika Tyagi
>Priority: Major
> Fix For: 4.x
>
> Attachments: in-mem-cassandra-1.0.patch, in-mem-cassandra-2.0.patch, 
> in-mem-cassandra-2.1.patch, readme.txt, readme2.1.txt, readme2_0.txt
>
>
> Currently, Cassandra relies on disks for data storage and hence it needs data 
> serialization, compaction, bloom filters and partition summary/index for 
> speedy access of the data. However, with persistent memory, data can be 
> stored directly in the form of Java objects and collections, which can 
> greatly simplify the retrieval mechanism of the data. What we are proposing 
> is to make use of faster and scalable B+ tree-based data collections built 
> for persistent memory in Java (PCJ: https://github.com/pmem/pcj) and enable a 
> complete in-memory version of Cassandra, while still keeping the data 
> persistent.
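PCJ itself is a Java library, but the ordered-collection access pattern the proposal relies on can be sketched language-agnostically. The toy sorted key-value store below shows B+-tree-style ordered lookup via binary search; durability, the part persistent memory would add, is deliberately not modeled.

```python
import bisect

class SortedKV:
    """Minimal ordered key-value store: sorted keys plus binary search,
    the access pattern a B+ tree provides. Persistence is not modeled."""

    def __init__(self):
        self._keys = []
        self._vals = []

    def put(self, key, value):
        # Locate the insertion point; overwrite if the key already exists.
        i = bisect.bisect_left(self._keys, key)
        if i < len(self._keys) and self._keys[i] == key:
            self._vals[i] = value
        else:
            self._keys.insert(i, key)
            self._vals.insert(i, value)

    def get(self, key, default=None):
        i = bisect.bisect_left(self._keys, key)
        if i < len(self._keys) and self._keys[i] == key:
            return self._vals[i]
        return default

kv = SortedKV()
kv.put("k2", "b")
kv.put("k1", "a")
print(kv.get("k1"))  # a
```

Because keys stay sorted, range scans (the operation that makes B+ trees attractive for storage engines) are simple slices over `_keys`.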






[jira] [Updated] (CASSANDRA-13981) Enable Cassandra for Persistent Memory

2018-06-07 Thread shylaja kokoori (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shylaja kokoori updated CASSANDRA-13981:

Attachment: readme2.1.txt

> Enable Cassandra for Persistent Memory 
> ---
>
> Key: CASSANDRA-13981
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13981
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Core
>Reporter: Preetika Tyagi
>Assignee: Preetika Tyagi
>Priority: Major
> Fix For: 4.x
>
> Attachments: in-mem-cassandra-1.0.patch, in-mem-cassandra-2.0.patch, 
> in-mem-cassandra-2.1.patch, readme.txt, readme2.1.txt, readme2_0.txt
>
>
> Currently, Cassandra relies on disks for data storage and hence it needs data 
> serialization, compaction, bloom filters and partition summary/index for 
> speedy access of the data. However, with persistent memory, data can be 
> stored directly in the form of Java objects and collections, which can 
> greatly simplify the retrieval mechanism of the data. What we are proposing 
> is to make use of faster and scalable B+ tree-based data collections built 
> for persistent memory in Java (PCJ: https://github.com/pmem/pcj) and enable a 
> complete in-memory version of Cassandra, while still keeping the data 
> persistent.






[jira] [Updated] (CASSANDRA-13981) Enable Cassandra for Persistent Memory

2018-06-07 Thread shylaja kokoori (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shylaja kokoori updated CASSANDRA-13981:

Attachment: in-mem-cassandra-2.1.patch

> Enable Cassandra for Persistent Memory 
> ---
>
> Key: CASSANDRA-13981
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13981
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Core
>Reporter: Preetika Tyagi
>Assignee: Preetika Tyagi
>Priority: Major
> Fix For: 4.x
>
> Attachments: in-mem-cassandra-1.0.patch, in-mem-cassandra-2.0.patch, 
> in-mem-cassandra-2.1.patch, readme.txt, readme2_0.txt
>
>
> Currently, Cassandra relies on disks for data storage and hence it needs data 
> serialization, compaction, bloom filters and partition summary/index for 
> speedy access of the data. However, with persistent memory, data can be 
> stored directly in the form of Java objects and collections, which can 
> greatly simplify the retrieval mechanism of the data. What we are proposing 
> is to make use of faster and scalable B+ tree-based data collections built 
> for persistent memory in Java (PCJ: https://github.com/pmem/pcj) and enable a 
> complete in-memory version of Cassandra, while still keeping the data 
> persistent.






[jira] [Commented] (CASSANDRA-14491) Determine how to test cqlsh in a Python 2.7 environment, including dtests

2018-06-07 Thread Patrick Bannister (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16505513#comment-16505513
 ] 

Patrick Bannister commented on CASSANDRA-14491:
---

It's not critical for the full set of dtests to run under Python 2.7, but we 
need the cqlsh tests to run if we're going to maintain support for cqlsh to run 
on Python 2.7. The cqlsh dtests provide a significant portion of the test 
coverage of cqlsh.

I think the dtests should stay on Python 3. The goal is for the cqlsh tests to 
be runnable on Python 2.7, not to revert.

Ideally, I'll make minimal modifications that are Python 2/3 cross-compatible 
in a single implementation. If the resulting product is good, then we'll get 
to enjoy a more complete test environment; if it's awful, then we can stay on 
Python 3.

For what it's worth: I just spent about ninety minutes and changed a few dozen 
lines of code, and I already have cqlsh_tests/cqlsh_tests.py mostly running 
under Python 2.7. Hopefully the delta to run on Python 2.7 will be 
insignificant.
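A common shape for such single-implementation, 2/3-compatible code is `__future__` imports plus small compatibility helpers. The sketch below is illustrative only and not taken from the actual dtest code; the helper name is made up.

```python
from __future__ import print_function, unicode_literals

import sys

def ensure_text(s, encoding="utf-8"):
    """Return a text (unicode) string on both Python 2 and Python 3."""
    if isinstance(s, bytes):
        return s.decode(encoding)
    return s

# The same source file runs unmodified under either interpreter.
print(ensure_text(b"cqlsh"), "running on Python", sys.version_info[0])
```

Helpers like this keep the byte/text boundary explicit, which is where most 2-to-3 porting bugs in tools like cqlsh tend to hide.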

> Determine how to test cqlsh in a Python 2.7 environment, including dtests
> -
>
> Key: CASSANDRA-14491
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14491
> Project: Cassandra
>  Issue Type: Sub-task
> Environment:  
>  
> We need to test with at least two versions of Python:
>  * Python 2.7
>  * Python 3.x (need to determine what versions of Python 3 are available by 
> default on Ubuntu and RHEL/CentOS)
> Additionally, it is recommended to test on at least three platforms:
>  * Ubuntu or other Debian derivative
>  * RHEL, CentOS, or other Red Hat derivative
>  * Windows (unless a consensus has formed around not testing on Windows?)
>Reporter: Patrick Bannister
>Priority: Major
>  Labels: cqlsh, test
> Fix For: 4.x
>
>
> It appears that a consensus is forming around maintaining Python 2.7 
> compatibility for cqlsh. However, the dtests now run in a Python 3 
> environment. We need to identify an option for testing infrastructure for 
> testing cqlsh on Python 2.7, including the dtests.
> Based on experience updating the cqlsh dtests, it is strongly recommended to 
> test in more than one environment - for example, for Linux, we should test on 
> a Debian derivative as well as a Red Hat derivative.






[jira] [Commented] (CASSANDRA-14491) Determine how to test cqlsh in a Python 2.7 environment, including dtests

2018-06-07 Thread Jason Brown (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16505459#comment-16505459
 ] 

Jason Brown commented on CASSANDRA-14491:
-

[~mkjellman] spent a lot of time getting the dtests onto 3.x. How critical is 
it that the full set of dtests runs on 2.7?

> Determine how to test cqlsh in a Python 2.7 environment, including dtests
> -
>
> Key: CASSANDRA-14491
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14491
> Project: Cassandra
>  Issue Type: Sub-task
> Environment:  
>  
> We need to test with at least two versions of Python:
>  * Python 2.7
>  * Python 3.x (need to determine what versions of Python 3 are available by 
> default on Ubuntu and RHEL/CentOS)
> Additionally, it is recommended to test on at least three platforms:
>  * Ubuntu or other Debian derivative
>  * RHEL, CentOS, or other Red Hat derivative
>  * Windows (unless a consensus has formed around not testing on Windows?)
>Reporter: Patrick Bannister
>Priority: Major
>  Labels: cqlsh, test
> Fix For: 4.x
>
>
> It appears that a consensus is forming around maintaining Python 2.7 
> compatibility for cqlsh. However, the dtests now run in a Python 3 
> environment. We need to identify an option for testing infrastructure for 
> testing cqlsh on Python 2.7, including the dtests.
> Based on experience updating the cqlsh dtests, it is strongly recommended to 
> test in more than one environment - for example, for Linux, we should test on 
> a Debian derivative as well as a Red Hat derivative.






[jira] [Commented] (CASSANDRA-14491) Determine how to test cqlsh in a Python 2.7 environment, including dtests

2018-06-07 Thread Patrick Bannister (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16505374#comment-16505374
 ] 

Patrick Bannister commented on CASSANDRA-14491:
---

I'm starting work to try and get the cqlsh_tests dtests working on Python 2.7, 
so I can run them against the Python 2/3 cqlsh port in a Python 2.7 environment.

> Determine how to test cqlsh in a Python 2.7 environment, including dtests
> -
>
> Key: CASSANDRA-14491
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14491
> Project: Cassandra
>  Issue Type: Sub-task
> Environment:  
>  
> We need to test with at least two versions of Python:
>  * Python 2.7
>  * Python 3.x (need to determine what versions of Python 3 are available by 
> default on Ubuntu and RHEL/CentOS)
> Additionally, it is recommended to test on at least three platforms:
>  * Ubuntu or other Debian derivative
>  * RHEL, CentOS, or other Red Hat derivative
>  * Windows (unless a consensus has formed around not testing on Windows?)
>Reporter: Patrick Bannister
>Priority: Major
>  Labels: cqlsh, test
> Fix For: 4.x
>
>
> It appears that a consensus is forming around maintaining Python 2.7 
> compatibility for cqlsh. However, the dtests now run in a Python 3 
> environment. We need to identify an option for testing infrastructure for 
> testing cqlsh on Python 2.7, including the dtests.
> Based on experience updating the cqlsh dtests, it is strongly recommended to 
> test in more than one environment - for example, for Linux, we should test on 
> a Debian derivative as well as a Red Hat derivative.






[jira] [Commented] (CASSANDRA-14482) ZSTD Compressor support in Cassandra

2018-06-07 Thread Joseph Lynch (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16505322#comment-16505322
 ] 

Joseph Lynch commented on CASSANDRA-14482:
--

[~sushm...@gmail.com] this is a really interesting idea, especially if the 
[benchmarks|https://github.com/facebook/zstd#benchmarks] hold up, as it would 
mean that zstd should strictly dominate deflate (3x faster with a better 
compression ratio), sort of like how LZ4 strictly dominates Snappy these days. 
It looks like someone has already created a zstd compressor [implementation 
for Cassandra|https://github.com/MatejTymes/cassandra-zstd#installation]; I'd 
be curious how that benchmarks against deflate.
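As a sanity check on such claims, a micro-benchmark harness is easy to sketch. The version below measures only stdlib zlib (the Deflate path); a zstd entry would require the third-party `zstandard` package and is only indicated in a comment, so this is a harness shape, not a definitive comparison.

```python
import time
import zlib

def bench(name, compress, data, reps=3):
    """Best-of-reps compression timing plus compression ratio for one codec."""
    best, blob = float("inf"), b""
    for _ in range(reps):
        t0 = time.perf_counter()
        blob = compress(data)
        best = min(best, time.perf_counter() - t0)
    ratio = len(data) / float(len(blob))
    print("%s: ratio=%.2f best=%.2f ms" % (name, ratio, best * 1000))
    return ratio

data = b"cassandra sstable block " * 4096  # repetitive input, compresses well

deflate_ratio = bench("deflate-6", lambda d: zlib.compress(d, 6), data)
# A zstd entry would come from the third-party `zstandard` package, e.g.
# zstandard.ZstdCompressor(level=3).compress -- omitted to stay stdlib-only.
```

Real SSTable blocks are far less compressible than this synthetic input, so absolute ratios here say nothing about production behavior; only the relative codec comparison would be meaningful.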

> ZSTD Compressor support in Cassandra
> 
>
> Key: CASSANDRA-14482
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14482
> Project: Cassandra
>  Issue Type: Wish
>  Components: Libraries
>Reporter: Sushma A Devendrappa
>Priority: Major
> Fix For: 3.11.x
>
>
> ZStandard has a great speed and compression ratio tradeoff. 
> ZStandard is open source compression from Facebook.
> More about ZSTD
> [https://github.com/facebook/zstd]
> https://code.facebook.com/posts/1658392934479273/smaller-and-faster-data-compression-with-zstandard/
>  






[jira] [Updated] (CASSANDRA-14505) Removal of last element on a List deletes the entire row

2018-06-07 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

André Paris updated CASSANDRA-14505:

Fix Version/s: (was: 3.11.x)

> Removal of last element on a List deletes the entire row
> 
>
> Key: CASSANDRA-14505
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14505
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: * Java: 1.8.0_171
>  * SO: Ubuntu 18.04 LTS
>  * Cassandra: 3.11.2 
>Reporter: André Paris
>Priority: Major
>
> The behavior of an element removal from a list by an UPDATE differs by how 
> the row was created:
> Given the table
> {code}
> CREATE TABLE table_test (
>     id int PRIMARY KEY,
>     list list<text>
> )
> {code}
> If the row is created by an INSERT, the row remains after the UPDATE that 
> removes the last element of the list:
> {code}
> cqlsh:ks_test> INSERT INTO table_test (id, list) VALUES (1, ['foo']);
> cqlsh:ks_test> SELECT * FROM table_test;
>
>  id | list
> ----+---------
>   1 | ['foo']
>
> (1 rows)
> cqlsh:ks_test> UPDATE table_test SET list = list - ['foo'] WHERE id=1;
> cqlsh:ks_test> SELECT * FROM table_test;
>
>  id | list
> ----+------
>   1 | null
>
> (1 rows)
> {code}
>  
> But, if the row is created by an UPDATE, the row is deleted after the UPDATE 
> that removes the last element of the list:
> {code}
> cqlsh:ks_test> UPDATE table_test SET list = list + ['foo'] WHERE id=2;
> cqlsh:ks_test> SELECT * FROM table_test;
>
>  id | list
> ----+---------
>   2 | ['foo']
>
> (1 rows)
> cqlsh:ks_test> UPDATE table_test SET list = list - ['foo'] WHERE id=2;
> cqlsh:ks_test> SELECT * FROM table_test;
>
>  id | list
> ----+------
>
> (0 rows)
> {code}
>  
> Thanks in advance.
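A plausible reading of the asymmetry above (a toy model of the storage semantics, not Cassandra's actual code): INSERT writes a row-level liveness marker in addition to the collection cells, while UPDATE writes only the cells; once the last cell is deleted, a row with no marker and no live cells has nothing left to make it visible. All names below are hypothetical and exist only to illustrate the reported behavior.

```python
# Toy model of row visibility: a row is returned by SELECT if it carries a
# primary-key liveness marker (written by INSERT) or has at least one live cell.
class Row:
    def __init__(self, has_liveness_marker: bool):
        self.has_liveness_marker = has_liveness_marker
        self.cells = {}  # column name -> value

    def is_visible(self) -> bool:
        return self.has_liveness_marker or bool(self.cells)

# INSERT writes the liveness marker plus the list cells.
inserted = Row(has_liveness_marker=True)
inserted.cells["list"] = ["foo"]

# UPDATE only writes the cells; no row-level marker is created.
updated = Row(has_liveness_marker=False)
updated.cells["list"] = ["foo"]

# Removing the last list element deletes the collection cells in both rows.
for row in (inserted, updated):
    del row.cells["list"]

print(inserted.is_visible())  # True  -> SELECT still shows "1 | null"
print(updated.is_visible())   # False -> row vanishes, "(0 rows)"
```

Under this model the observed difference is expected behavior rather than data loss, though whether that matches the reporter's expectation is exactly what the ticket is asking.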






[jira] [Comment Edited] (CASSANDRA-14470) Repair validation failed/unable to create merkle tree

2018-06-07 Thread Harry Hough (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16505205#comment-16505205
 ] 

Harry Hough edited comment on CASSANDRA-14470 at 6/7/18 8:30 PM:
-

CPU of all nodes immediately spikes to 100% so this sounds like it is this same 
issue.


{code:java}
root@gra-c-3:/var/log/cassandra# nodetool compactionstats
pending tasks: 202
{code}

All the tasks are validation compactions. I ran this after it had mostly calmed 
down so I assume there were more at the start of the repair.



was (Author: ozzieisaacs):
CPU of all nodes immediately spikes to 100% so this sounds like it is this same 
issue.


{code:java}
root@gra-c-3:/var/log/cassandra# nodetool compactionstats
pending tasks: 202
{code}

All the tasks are validation compactions.
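For context on what those validation compactions produce: during repair, each replica reads its data and builds a merkle tree of partition hashes, and the replicas' trees are compared to find mismatched token ranges. A minimal sketch of the idea follows (a deliberate simplification; Cassandra's actual tree is range-based and depth-bounded):

```python
import hashlib

def merkle_root(leaves: list[bytes]) -> bytes:
    """Fold a list of leaf payloads pairwise up to a single root hash."""
    level = [hashlib.sha256(leaf).digest() for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:  # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [hashlib.sha256(a + b).digest()
                 for a, b in zip(level[::2], level[1::2])]
    return level[0]

# Two replicas hash the same partitions -> identical roots, nothing to stream.
replica_a = [b"k1:v1", b"k2:v2", b"k3:v3"]
replica_b = [b"k1:v1", b"k2:v2", b"k3:v3"]
assert merkle_root(replica_a) == merkle_root(replica_b)

# One divergent partition changes the root -> the repair detects a mismatch.
replica_b[1] = b"k2:v2-stale"
assert merkle_root(replica_a) != merkle_root(replica_b)
```

Building these trees requires reading and hashing the data, which is why kicking off a repair across many tables can pin every replica's CPU at once.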


> Repair validation failed/unable to create merkle tree
> -
>
> Key: CASSANDRA-14470
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14470
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Harry Hough
>Priority: Major
>
> I had trouble repairing with a full repair across all nodes and keyspaces so 
> I swapped to doing table by table. This table will not repair even after 
> scrub/restart of all nodes. I am using command:
> {code:java}
> nodetool repair -full -seq keyspace table
> {code}
> {code:java}
> [2018-05-25 19:26:36,525] Repair session 0198ee50-6050-11e8-a3b7-9d0793eab507 
> for range [(165598500763544933,166800441975877433], 
> (-5455068259072262254,-5445777107512274819], 
> (-4614366950466274594,-4609359222424798148], 
> (3417371506258365094,3421921915575816226], 
> (5221788898381458942,5222846663270250559], 
> (3421921915575816226,3429175540277204991], 
> (3276484330153091115,3282213186258578546], 
> (-3306169730424140596,-3303439264231406101], 
> (5228704360821395206,5242415853745535023], 
> (5808045095951939338,5808562658315740708], 
> (-3303439264231406101,-3302592736123212969]] finished (progress: 1%)
> [2018-05-25 19:27:23,848] Repair session 0180f980-6050-11e8-a3b7-9d0793eab507 
> for range [(-8495158945319933291,-8482949618583319581], 
> (1803296697741516342,1805330812863783941], 
> (8633191319643427141,8637771071728131257], 
> (2214097236323810344,2218253238829661319], 
> (8637771071728131257,8639627594735133685], 
> (2195525904029414718,2214097236323810344], 
> (-8500127431270773970,-8495158945319933291], 
> (7151693083782264341,7152162989417914407], 
> (-8482949618583319581,-8481973749935314249]] finished (progress: 1%)
> [2018-05-25 19:30:32,590] Repair session 01ac9d62-6050-11e8-a3b7-9d0793eab507 
> for range [(7887346492105510731,7893062759268864220], 
> (-153277717939330979,-151986584968539220], 
> (-6351665356961460262,-6336288442758847669], 
> (7881942012672602731,7887346492105510731], 
> (-5884528383037906783,-5878097817437987368], 
> (6054625594262089428,6060773114960761336], 
> (-6354401100436622515,-6351665356961460262], 
> (3358411934943460772,336336663817876], 
> (6255644242745576360,6278718135193665575], 
> (-6321106762570843270,-6316788220143151823], 
> (1754319239259058661,1759314644652031521], 
> (7893062759268864220,7894890594190784729], 
> (-8012293411840276426,-8011781808288431224]] failed with error [repair 
> #01ac9d62-6050-11e8-a3b7-9d0793eab507 on keyspace/table, 
> [(7887346492105510731,7893062759268864220], 
> (-153277717939330979,-151986584968539220], 
> (-6351665356961460262,-6336288442758847669], 
> (7881942012672602731,7887346492105510731],
> (-5884528383037906783,-5878097817437987368], 
> (6054625594262089428,6060773114960761336], 
> (-6354401100436622515,-6351665356961460262], 
> (3358411934943460772,336336663817876], 
> (6255644242745576360,6278718135193665575], 
> (-6321106762570843270,-6316788220143151823], 
> (1754319239259058661,1759314644652031521], 
> (7893062759268864220,7894890594190784729], 
> (-8012293411840276426,-8011781808288431224]]] Validation failed in 
> /192.168.8.64 (progress: 1%)
> [2018-05-25 19:30:38,744] Repair session 01ab16c1-6050-11e8-a3b7-9d0793eab507 
> for range [(4474598255414218354,4477186372547790770], 
> (-8368931070988054567,-8367389908801757978], 
> (4445104759712094068,4445123832517144036], 
> (6749641233379918040,6749879473217708908], 
> (717627050679001698,729408043324000761], 
> (8984622403893999385,8990662643404904110], 
> (4457612694557846994,4474598255414218354], 
> (5589049422573545528,5593079877787783784], 
> (3609693317839644945,3613727999875360405], 
> (8499016262183246473,8504603366117127178], 
> (-5421277973540712245,-5417725796037372830], 
> (5586405751301680690,5589049422573545528], 
> (-2611069890590917549,-2603911539353128123], 
> (2424772330724108233,2427564448454334730], 
> (3172651438220766183,3175226710613527829], 
> (4445123832517144036,4457612694557846994], 
> 

[jira] [Comment Edited] (CASSANDRA-14470) Repair validation failed/unable to create merkle tree

2018-06-07 Thread Harry Hough (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16505205#comment-16505205
 ] 

Harry Hough edited comment on CASSANDRA-14470 at 6/7/18 8:27 PM:
-

CPU of all nodes immediately spikes to 100% so this sounds like it is this same 
issue.


{code:java}
root@gra-c-3:/var/log/cassandra# nodetool compactionstats
pending tasks: 202
{code}

All the tasks are validation compactions.



was (Author: ozzieisaacs):
CPU of all nodes immediately spikes to 100% so this sounds like it is this same 
issue.

> Repair validation failed/unable to create merkle tree
> -
>
> Key: CASSANDRA-14470
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14470
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Harry Hough
>Priority: Major
>
> I had trouble repairing with a full repair across all nodes and keyspaces so 
> I swapped to doing table by table. This table will not repair even after 
> scrub/restart of all nodes. I am using command:
> {code:java}
> nodetool repair -full -seq keyspace table
> {code}
> {code:java}
> [2018-05-25 19:26:36,525] Repair session 0198ee50-6050-11e8-a3b7-9d0793eab507 
> for range [(165598500763544933,166800441975877433], 
> (-5455068259072262254,-5445777107512274819], 
> (-4614366950466274594,-4609359222424798148], 
> (3417371506258365094,3421921915575816226], 
> (5221788898381458942,5222846663270250559], 
> (3421921915575816226,3429175540277204991], 
> (3276484330153091115,3282213186258578546], 
> (-3306169730424140596,-3303439264231406101], 
> (5228704360821395206,5242415853745535023], 
> (5808045095951939338,5808562658315740708], 
> (-3303439264231406101,-3302592736123212969]] finished (progress: 1%)
> [2018-05-25 19:27:23,848] Repair session 0180f980-6050-11e8-a3b7-9d0793eab507 
> for range [(-8495158945319933291,-8482949618583319581], 
> (1803296697741516342,1805330812863783941], 
> (8633191319643427141,8637771071728131257], 
> (2214097236323810344,2218253238829661319], 
> (8637771071728131257,8639627594735133685], 
> (2195525904029414718,2214097236323810344], 
> (-8500127431270773970,-8495158945319933291], 
> (7151693083782264341,7152162989417914407], 
> (-8482949618583319581,-8481973749935314249]] finished (progress: 1%)
> [2018-05-25 19:30:32,590] Repair session 01ac9d62-6050-11e8-a3b7-9d0793eab507 
> for range [(7887346492105510731,7893062759268864220], 
> (-153277717939330979,-151986584968539220], 
> (-6351665356961460262,-6336288442758847669], 
> (7881942012672602731,7887346492105510731], 
> (-5884528383037906783,-5878097817437987368], 
> (6054625594262089428,6060773114960761336], 
> (-6354401100436622515,-6351665356961460262], 
> (3358411934943460772,336336663817876], 
> (6255644242745576360,6278718135193665575], 
> (-6321106762570843270,-6316788220143151823], 
> (1754319239259058661,1759314644652031521], 
> (7893062759268864220,7894890594190784729], 
> (-8012293411840276426,-8011781808288431224]] failed with error [repair 
> #01ac9d62-6050-11e8-a3b7-9d0793eab507 on keyspace/table, 
> [(7887346492105510731,7893062759268864220], 
> (-153277717939330979,-151986584968539220], 
> (-6351665356961460262,-6336288442758847669], 
> (7881942012672602731,7887346492105510731],
> (-5884528383037906783,-5878097817437987368], 
> (6054625594262089428,6060773114960761336], 
> (-6354401100436622515,-6351665356961460262], 
> (3358411934943460772,336336663817876], 
> (6255644242745576360,6278718135193665575], 
> (-6321106762570843270,-6316788220143151823], 
> (1754319239259058661,1759314644652031521], 
> (7893062759268864220,7894890594190784729], 
> (-8012293411840276426,-8011781808288431224]]] Validation failed in 
> /192.168.8.64 (progress: 1%)
> [2018-05-25 19:30:38,744] Repair session 01ab16c1-6050-11e8-a3b7-9d0793eab507 
> for range [(4474598255414218354,4477186372547790770], 
> (-8368931070988054567,-8367389908801757978], 
> (4445104759712094068,4445123832517144036], 
> (6749641233379918040,6749879473217708908], 
> (717627050679001698,729408043324000761], 
> (8984622403893999385,8990662643404904110], 
> (4457612694557846994,4474598255414218354], 
> (5589049422573545528,5593079877787783784], 
> (3609693317839644945,3613727999875360405], 
> (8499016262183246473,8504603366117127178], 
> (-5421277973540712245,-5417725796037372830], 
> (5586405751301680690,5589049422573545528], 
> (-2611069890590917549,-2603911539353128123], 
> (2424772330724108233,2427564448454334730], 
> (3172651438220766183,3175226710613527829], 
> (4445123832517144036,4457612694557846994], 
> (-6827531712183440570,-6800863837312326365], 
> (5593079877787783784,5596020904874304252], 
> (716705770783505310,717627050679001698], 
> (115377252345874298,119626359210683992], 
> (239394377432130766,240250561347730054]] failed with error [repair 
> 

[jira] [Commented] (CASSANDRA-14470) Repair validation failed/unable to create merkle tree

2018-06-07 Thread Harry Hough (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16505205#comment-16505205
 ] 

Harry Hough commented on CASSANDRA-14470:
-

CPU of all nodes immediately spikes to 100% so this sounds like it is this same 
issue.

> Repair validation failed/unable to create merkle tree
> -
>
> Key: CASSANDRA-14470
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14470
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Harry Hough
>Priority: Major
>
> I had trouble repairing with a full repair across all nodes and keyspaces so 
> I swapped to doing table by table. This table will not repair even after 
> scrub/restart of all nodes. I am using command:
> {code:java}
> nodetool repair -full -seq keyspace table
> {code}
> {code:java}
> [2018-05-25 19:26:36,525] Repair session 0198ee50-6050-11e8-a3b7-9d0793eab507 
> for range [(165598500763544933,166800441975877433], 
> (-5455068259072262254,-5445777107512274819], 
> (-4614366950466274594,-4609359222424798148], 
> (3417371506258365094,3421921915575816226], 
> (5221788898381458942,5222846663270250559], 
> (3421921915575816226,3429175540277204991], 
> (3276484330153091115,3282213186258578546], 
> (-3306169730424140596,-3303439264231406101], 
> (5228704360821395206,5242415853745535023], 
> (5808045095951939338,5808562658315740708], 
> (-3303439264231406101,-3302592736123212969]] finished (progress: 1%)
> [2018-05-25 19:27:23,848] Repair session 0180f980-6050-11e8-a3b7-9d0793eab507 
> for range [(-8495158945319933291,-8482949618583319581], 
> (1803296697741516342,1805330812863783941], 
> (8633191319643427141,8637771071728131257], 
> (2214097236323810344,2218253238829661319], 
> (8637771071728131257,8639627594735133685], 
> (2195525904029414718,2214097236323810344], 
> (-8500127431270773970,-8495158945319933291], 
> (7151693083782264341,7152162989417914407], 
> (-8482949618583319581,-8481973749935314249]] finished (progress: 1%)
> [2018-05-25 19:30:32,590] Repair session 01ac9d62-6050-11e8-a3b7-9d0793eab507 
> for range [(7887346492105510731,7893062759268864220], 
> (-153277717939330979,-151986584968539220], 
> (-6351665356961460262,-6336288442758847669], 
> (7881942012672602731,7887346492105510731], 
> (-5884528383037906783,-5878097817437987368], 
> (6054625594262089428,6060773114960761336], 
> (-6354401100436622515,-6351665356961460262], 
> (3358411934943460772,336336663817876], 
> (6255644242745576360,6278718135193665575], 
> (-6321106762570843270,-6316788220143151823], 
> (1754319239259058661,1759314644652031521], 
> (7893062759268864220,7894890594190784729], 
> (-8012293411840276426,-8011781808288431224]] failed with error [repair 
> #01ac9d62-6050-11e8-a3b7-9d0793eab507 on keyspace/table, 
> [(7887346492105510731,7893062759268864220], 
> (-153277717939330979,-151986584968539220], 
> (-6351665356961460262,-6336288442758847669], 
> (7881942012672602731,7887346492105510731],
> (-5884528383037906783,-5878097817437987368], 
> (6054625594262089428,6060773114960761336], 
> (-6354401100436622515,-6351665356961460262], 
> (3358411934943460772,336336663817876], 
> (6255644242745576360,6278718135193665575], 
> (-6321106762570843270,-6316788220143151823], 
> (1754319239259058661,1759314644652031521], 
> (7893062759268864220,7894890594190784729], 
> (-8012293411840276426,-8011781808288431224]]] Validation failed in 
> /192.168.8.64 (progress: 1%)
> [2018-05-25 19:30:38,744] Repair session 01ab16c1-6050-11e8-a3b7-9d0793eab507 
> for range [(4474598255414218354,4477186372547790770], 
> (-8368931070988054567,-8367389908801757978], 
> (4445104759712094068,4445123832517144036], 
> (6749641233379918040,6749879473217708908], 
> (717627050679001698,729408043324000761], 
> (8984622403893999385,8990662643404904110], 
> (4457612694557846994,4474598255414218354], 
> (5589049422573545528,5593079877787783784], 
> (3609693317839644945,3613727999875360405], 
> (8499016262183246473,8504603366117127178], 
> (-5421277973540712245,-5417725796037372830], 
> (5586405751301680690,5589049422573545528], 
> (-2611069890590917549,-2603911539353128123], 
> (2424772330724108233,2427564448454334730], 
> (3172651438220766183,3175226710613527829], 
> (4445123832517144036,4457612694557846994], 
> (-6827531712183440570,-6800863837312326365], 
> (5593079877787783784,5596020904874304252], 
> (716705770783505310,717627050679001698], 
> (115377252345874298,119626359210683992], 
> (239394377432130766,240250561347730054]] failed with error [repair 
> #01ab16c1-6050-11e8-a3b7-9d0793eab507 on keyspace/table, 
> [(4474598255414218354,4477186372547790770], 
> (-8368931070988054567,-8367389908801757978], 
> (4445104759712094068,4445123832517144036], 
> (6749641233379918040,6749879473217708908], 
> (717627050679001698,729408043324000761], 
> 

[jira] [Updated] (CASSANDRA-14505) Removal of last element on a List deletes the entire row

2018-06-07 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

André Paris updated CASSANDRA-14505:

Description: 
The behavior of an element removal from a list by an UPDATE differs by how the 
row was created:

Given the table

{{CREATE TABLE table_test (}}
 {{    id int PRIMARY KEY,}}
 {{    list list<text>}}
 {{)}}

If the row is created by an INSERT, the row remains after the UPDATE to remove 
the last element on the list:

{{cqlsh:ks_test> INSERT INTO table_test (id, list ) VALUES ( 1, ['foo']) ;}}
 {{cqlsh:ks_test> SELECT * FROM table_test;}}

{{ id | list}}
 {{----+---------}}
 {{  1 | ['foo'] }}

{{(1 rows)}}
 {{cqlsh:ks_test> UPDATE table_test SET list = list - ['foo'] WHERE id=1;}}
 {{cqlsh:ks_test> SELECT * FROM table_test;}}

{{ id | list}}
 {{----+------}}
 {{  1 | null}}

{{(1 rows)}}

 

But, if the row is created by an UPDATE, the row is deleted after the UPDATE to 
remove the last element on the list:

{{cqlsh:ks_test> UPDATE table_test SET list = list + ['foo'] WHERE id=2;}}
 {{cqlsh:ks_test> SELECT * FROM table_test;}}

{{ id | list}}
 {{----+---------}}
 {{  2 | ['foo'] }}

{{(1 rows)}}
 {{cqlsh:ks_test> UPDATE table_test SET list = list - ['foo'] WHERE id=2;}}
 {{cqlsh:ks_test> SELECT * FROM table_test;}}

{{ id | list}}
 {{----+------}}

{{(0 rows)}}

 

Thanks in advance.

  was:
The behavior of an element removal from a list by an UPDATE differs by how the 
row was created:

Given the table

{{CREATE TABLE table_test (}}
 {{    id int PRIMARY KEY,}}
 {{    list list<text>}}
 {{)}}

If the row is created by an INSERT, the row remains after the UPDATE to remove 
the last element on the list:

{{cqlsh:ks_test> INSERT INTO table_test (id, list ) VALUES ( 1, ['foo']) ;}}
 {{cqlsh:ks_test> SELECT * FROM table_test;}}

{{ id | list}}
 {{----+---------}}
 {{  1 | ['foo'] }}

{{(1 rows)}}
 {{cqlsh:ks_test> UPDATE table_test SET list = list - ['foo'] WHERE id=1;}}
 {{cqlsh:ks_test> SELECT * FROM table_test;}}

{{ id | list}}
 {{----+------}}
 {{  1 | null}}

{{(1 rows)}}

 

But, if the row is created by an UPDATE, the row is deleted after the UPDATE to 
remove the last element on the list:

{{cqlsh:ks_test> UPDATE table_test SET list = list + ['foo'] WHERE id=2;}}
 {{cqlsh:ks_test> SELECT * FROM table_test;}}

{{ id | list}}
 {{----+---------}}
 {{  2 | ['foo'] }}

{{(1 rows)}}
 {{cqlsh:ks_test> UPDATE table_test SET list = list - ['foo'] WHERE id=2;}}
 {{cqlsh:ks_test> SELECT * FROM table_test;}}

{{ id | list}}
 {{----+------}}

{{(0 rows)}}

 

Thanks in advance.


> Removal of last element on a List deletes the entire row
> 
>
> Key: CASSANDRA-14505
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14505
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: * Java: 1.8.0_171
>  * SO: Ubuntu 18.04 LTS
>  * Cassandra: 3.11.2 
>Reporter: André Paris
>Priority: Major
> Fix For: 3.11.x
>
>
> The behavior of an element removal from a list by an UPDATE differs by how 
> the row was created:
> Given the table
> {{CREATE TABLE table_test (}}
>  {{    id int PRIMARY KEY,}}
>  {{    list list<text>}}
>  {{)}}
> If the row is created by an INSERT, the row remains after the UPDATE to 
> remove the last element on the list:
> {{cqlsh:ks_test> INSERT INTO table_test (id, list ) VALUES ( 1, ['foo']) ;}}
>  {{cqlsh:ks_test> SELECT * FROM table_test;}}
> {{ id | list}}
>  {{----+---------}}
>  {{  1 | ['foo'] }}
> {{(1 rows)}}
>  {{cqlsh:ks_test> UPDATE table_test SET list = list - ['foo'] WHERE id=1;}}
>  {{cqlsh:ks_test> SELECT * FROM table_test;}}
> {{ id | list}}
>  {{----+------}}
>  {{  1 | null}}
> {{(1 rows)}}
>  
> But, if the row is created by an UPDATE, the row is deleted after the UPDATE 
> to remove the last element on the list:
> {{cqlsh:ks_test> UPDATE table_test SET list = list + ['foo'] WHERE id=2;}}
>  {{cqlsh:ks_test> SELECT * FROM table_test;}}
> {{ id | list}}
>  {{----+---------}}
>  {{  2 | ['foo'] }}
> {{(1 rows)}}
>  {{cqlsh:ks_test> UPDATE table_test SET list = list - ['foo'] WHERE id=2;}}
>  {{cqlsh:ks_test> SELECT * FROM table_test;}}
> {{ id | list}}
>  {{----+------}}
> {{(0 rows)}}
>  
> Thanks in advance.






[jira] [Updated] (CASSANDRA-14505) Removal of last element on a List deletes the entire row

2018-06-07 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

André Paris updated CASSANDRA-14505:

Description: 
The behavior of an element removal from a list by an UPDATE differs by how the 
row was created:

Given the table

{{CREATE TABLE table_test (}}
 {{    id int PRIMARY KEY,}}
 {{    list list<text>}}
 {{)}}

If the row is created by an INSERT, the row remains after the UPDATE to remove 
the last element on the list:

{{cqlsh:ks_test> INSERT INTO table_test (id, list ) VALUES ( 1, ['foo']) ;}}
 {{cqlsh:ks_test> SELECT * FROM table_test;}}

{{ id | list}}
 {{----+---------}}
 {{  1 | ['foo'] }}

{{(1 rows)}}
 {{cqlsh:ks_test> UPDATE table_test SET list = list - ['foo'] WHERE id=1;}}
 {{cqlsh:ks_test> SELECT * FROM table_test;}}

{{ id | list}}
 {{----+------}}
 {{  1 | null}}

{{(1 rows)}}

 

But, if the row is created by an UPDATE, the row is deleted after the UPDATE to 
remove the last element on the list:

{{cqlsh:ks_test> UPDATE table_test SET list = list + ['foo'] WHERE id=2;}}
 {{cqlsh:ks_test> SELECT * FROM table_test;}}

{{ id | list}}
 {{----+---------}}
 {{  2 | ['foo'] }}

{{(1 rows)}}
 {{cqlsh:ks_test> UPDATE table_test SET list = list - ['foo'] WHERE id=2;}}
 {{cqlsh:ks_test> SELECT * FROM table_test;}}

{{ id | list}}
 {{----+------}}

{{(0 rows)}}

 

Thanks in advance.

  was:
The behavior of an element removal from a list by an UPDATE differs by how the 
row was created:

Given the table

{{CREATE TABLE table_test (}}
 {{    id int PRIMARY KEY,}}
 {{    list list<text>}}
 {{)}}

If the row is created by an INSERT, the row remains after the UPDATE to remove 
the last element on the list:

{{cqlsh:ks_test> INSERT INTO table_test (id, list ) VALUES ( 1, ['foo']) ;}}
 {{cqlsh:ks_test> SELECT * FROM table_test;}}

{{ id | list}}
 {{----+---------}}
 {{  1 | ['foo'] }}

{{(1 rows)}}
 {{cqlsh:ks_test> UPDATE table_test SET list = list - ['foo'] WHERE id=1;}}
 {{cqlsh:ks_test> SELECT * FROM table_test;}}

{{ id | list}}
 {{----+------}}
 {{  1 | null}}

{{(1 rows)}}

 

But, if the row is created by an UPDATE, the row is deleted after the UPDATE to 
remove the last element on the list:

{{cqlsh:ks_test> UPDATE table_test SET list = list + ['foo'] WHERE id=2;}}
 {{cqlsh:ks_test> SELECT * FROM table_test;}}

{{ id | list}}
 {{----+---------}}
 {{  2 | ['foo'] }}

{{(1 rows)}}
 {{cqlsh:ks_test> UPDATE table_test SET list = list - ['foo'] WHERE id=2;}}
 {{cqlsh:ks_test> SELECT * FROM table_test;}}

{{ id | list}}
 {{----+------}}

{{(0 rows)}}

 

Thanks in advance.


> Removal of last element on a List deletes the entire row
> 
>
> Key: CASSANDRA-14505
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14505
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: * Java: 1.8.0_171
>  * SO: Ubuntu 18.04 LTS
>  * Cassandra: 3.11.2 
>Reporter: André Paris
>Priority: Major
> Fix For: 3.11.x
>
>
> The behavior of an element removal from a list by an UPDATE differs by how 
> the row was created:
> Given the table
> {{CREATE TABLE table_test (}}
>  {{    id int PRIMARY KEY,}}
>  {{    list list<text>}}
>  {{)}}
> If the row is created by an INSERT, the row remains after the UPDATE to 
> remove the last element on the list:
> {{cqlsh:ks_test> INSERT INTO table_test (id, list ) VALUES ( 1, ['foo']) ;}}
>  {{cqlsh:ks_test> SELECT * FROM table_test;}}
> {{ id | list}}
>  {{----+---------}}
>  {{  1 | ['foo'] }}
> {{(1 rows)}}
>  {{cqlsh:ks_test> UPDATE table_test SET list = list - ['foo'] WHERE id=1;}}
>  {{cqlsh:ks_test> SELECT * FROM table_test;}}
> {{ id | list}}
>  {{----+------}}
>  {{  1 | null}}
> {{(1 rows)}}
>  
> But, if the row is created by an UPDATE, the row is deleted after the UPDATE 
> to remove the last element on the list:
> {{cqlsh:ks_test> UPDATE table_test SET list = list + ['foo'] WHERE id=2;}}
>  {{cqlsh:ks_test> SELECT * FROM table_test;}}
> {{ id | list}}
>  {{----+---------}}
>  {{  2 | ['foo'] }}
> {{(1 rows)}}
>  {{cqlsh:ks_test> UPDATE table_test SET list = list - ['foo'] WHERE id=2;}}
>  {{cqlsh:ks_test> SELECT * FROM table_test;}}
> {{ id | list}}
>  {{----+------}}
> {{(0 rows)}}
>  
> Thanks in advance.






[jira] [Updated] (CASSANDRA-14505) Removal of last element on a List deletes the entire row

2018-06-07 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

André Paris updated CASSANDRA-14505:

Description: 
The behavior of an element removal from a list by an UPDATE differs by how the 
row was created:

Given the table

{{CREATE TABLE table_test (}}
 {{    id int PRIMARY KEY,}}
 {{    list list<text>}}
 {{)}}

If the row is created by an INSERT, the row remains after the UPDATE to remove 
the last element on the list:

{{cqlsh:ks_test> INSERT INTO table_test (id, list ) VALUES ( 1, ['foo']) ;}}
 {{cqlsh:ks_test> SELECT * FROM table_test;}}

{{ id | list}}
 {{----+---------}}
 {{  1 | ['foo'] }}

{{(1 rows)}}
 {{cqlsh:ks_test> UPDATE table_test SET list = list - ['foo'] WHERE id=1;}}
 {{cqlsh:ks_test> SELECT * FROM table_test;}}

{{ id | list}}
 {{----+------}}
 {{  1 | null}}

{{(1 rows)}}

 

But, if the row is created by an UPDATE, the row is deleted after the UPDATE to 
remove the last element on the list:

{{cqlsh:ks_test> UPDATE table_test SET list = list + ['foo'] WHERE id=2;}}
 {{cqlsh:ks_test> SELECT * FROM table_test;}}

{{ id | list}}
 {{----+---------}}
 {{  2 | ['foo'] }}

{{(1 rows)}}
 {{cqlsh:ks_test> UPDATE table_test SET list = list - ['foo'] WHERE id=2;}}
 {{cqlsh:ks_test> SELECT * FROM table_test;}}

{{ id | list}}
 {{----+------}}

{{(0 rows)}}

 

Thanks in advance.

  was:
The behavior of an element removal from a list by an UPDATE differs by how the 
row was created:

Given the table

{{CREATE TABLE table_test (}}
 {{    id int PRIMARY KEY,}}
 {{    list list<text>}}
 {{)}}

If the row is created by an INSERT, the row remains after the UPDATE to remove 
the last element on the list:

{{cqlsh:ks_test> INSERT INTO table_test (id, list ) VALUES ( 1, ['foo']) ;}}
 {{cqlsh:ks_test> SELECT * FROM table_test;}}

{{ id | list}}
 {{----+---------}}
 {{  1 | ['foo'] }}

{{(1 rows)}}
 {{cqlsh:ks_test> UPDATE table_test SET list = list - ['foo'] WHERE id=1;}}
 {{cqlsh:ks_test> SELECT * FROM table_test;}}

{{ id | list}}
 {{----+------}}
 {{  1 | null}}

{{(1 rows)}}

 

But, if the row is created by an UPDATE, the row is deleted after the UPDATE to 
remove the last element on the list:

{{cqlsh:ks_test> UPDATE table_test SET list = list + ['foo'] WHERE id=2;}}
 {{cqlsh:ks_test> SELECT * FROM table_test;}}

{{ id | list}}
 {{----+---------}}
 {{  2 | ['foo'] }}

{{(1 rows)}}
 {{cqlsh:ks_test> UPDATE table_test SET list = list - ['foo'] WHERE id=2;}}
 {{cqlsh:ks_test> SELECT * FROM table_test;}}

{{ id | list}}
 {{----+------}}

{{(0 rows)}}

 

Thanks in advance.


> Removal of last element on a List deletes the entire row
> 
>
> Key: CASSANDRA-14505
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14505
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: * Java: 1.8.0_171
>  * SO: Ubuntu 18.04 LTS
>  * Cassandra: 3.11.2 
>Reporter: André Paris
>Priority: Major
> Fix For: 3.11.x
>
>
> The behavior of an element removal from a list by an UPDATE differs by how 
> the row was created:
> Given the table
> {{CREATE TABLE table_test (}}
>  {{    id int PRIMARY KEY,}}
>  {{    list list<text>}}
>  {{)}}
> If the row is created by an INSERT, the row remains after the UPDATE to 
> remove the last element on the list:
> {{cqlsh:ks_test> INSERT INTO table_test (id, list ) VALUES ( 1, ['foo']) ;}}
>  {{cqlsh:ks_test> SELECT * FROM table_test;}}
> {{ id | list}}
>  {{----+---------}}
>  {{  1 | ['foo'] }}
> {{(1 rows)}}
>  {{cqlsh:ks_test> UPDATE table_test SET list = list - ['foo'] WHERE id=1;}}
>  {{cqlsh:ks_test> SELECT * FROM table_test;}}
> {{ id | list}}
>  {{----+------}}
>  {{  1 | null}}
> {{(1 rows)}}
>  
> But, if the row is created by an UPDATE, the row is deleted after the UPDATE 
> to remove the last element on the list:
> {{cqlsh:ks_test> UPDATE table_test SET list = list + ['foo'] WHERE id=2;}}
>  {{cqlsh:ks_test> SELECT * FROM table_test;}}
> {{ id | list}}
>  {{----+---------}}
>  {{  2 | ['foo'] }}
> {{(1 rows)}}
>  {{cqlsh:ks_test> UPDATE table_test SET list = list - ['foo'] WHERE id=2;}}
>  {{cqlsh:ks_test> SELECT * FROM table_test;}}
> {{ id | list}}
>  {{----+------}}
> {{(0 rows)}}
>  
> Thanks in advance.






[jira] [Updated] (CASSANDRA-14505) Removal of last element on a List deletes the entire row

2018-06-07 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

André Paris updated CASSANDRA-14505:

Description: 
The behavior of an element removal from a list by an UPDATE differs by how the 
row was created:

Given the table

{{CREATE TABLE table_test (}}
 {{    id int PRIMARY KEY,}}
 {{    list list<text>}}
 {{)}}

If the row is created by an INSERT, the row remains after the UPDATE to remove 
the last element on the list:

{{cqlsh:ks_test> INSERT INTO table_test (id, list ) VALUES ( 1, ['foo']) ;}}
 {{cqlsh:ks_test> SELECT * FROM table_test;}}

{{ id | list}}
 {{----+---------}}
 {{  1 | ['foo'] }}

{{(1 rows)}}
 {{cqlsh:ks_test> UPDATE table_test SET list = list - ['foo'] WHERE id=1;}}
 {{cqlsh:ks_test> SELECT * FROM table_test;}}

{{ id | list}}
 {{----+------}}
 {{  1 | null}}

{{(1 rows)}}

 

But, if the row is created by an UPDATE, the row is deleted after the UPDATE to 
remove the last element on the list:

{{cqlsh:ks_test> UPDATE table_test SET list = list + ['foo'] WHERE id=2;}}
 {{cqlsh:ks_test> SELECT * FROM table_test;}}

{{ id | list}}
 {{----+---------}}
 {{  2 | ['foo'] }}

{{(1 rows)}}
 {{cqlsh:ks_test> UPDATE table_test SET list = list - ['foo'] WHERE id=2;}}
 {{cqlsh:ks_test> SELECT * FROM table_test;}}

{{ id | list}}
 {{----+------}}

{{(0 rows)}}

 

Thanks in advance.

  was:
The behavior of an element removal from a list by an UPDATE differs by how the 
row was created:

Given the table

{{CREATE TABLE table_test (}}
{{    id int PRIMARY KEY,}}
{{    list list<text>}}
{{)}}

If the row is created by an INSERT, the row remains after the UPDATE to remove 
the last element on the list:

{{cqlsh:ks_test> INSERT INTO table_test (id, list ) VALUES ( 1, ['foo']) ;}}
{{cqlsh:ks_test> SELECT * FROM table_test;}}

{{ id | list}}
{{----+---------}}
{{  1 | ['foo'] }}

{{(1 rows)}}
{{cqlsh:ks_test> UPDATE table_test SET list = list - ['foo'] WHERE id=1;}}
{{cqlsh:ks_test> SELECT * FROM table_test;}}

{{ id | list}}
{{----+------}}
{{  1 | null}}

{{(1 rows)}}

 

But, if the row is created by an UPDATE, the row is deleted after the UPDATE to 
remove the last element on the list:

{{cqlsh:ks_test> UPDATE table_test SET list = list + ['foo'] WHERE id=2;}}
{{cqlsh:ks_test> SELECT * FROM table_test;}}

{{ id | list}}
{{----+---------}}
{{  2 | ['foo'] }}

{{(1 rows)}}
{{cqlsh:ks_test> UPDATE table_test SET list = list - ['foo'] WHERE id=2;}}
{{cqlsh:ks_test> SELECT * FROM table_test;}}

{{ id | list}}
{{----+------}}

{{(0 rows)}}


> Removal of last element on a List deletes the entire row
> 
>
> Key: CASSANDRA-14505
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14505
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: * Java: 1.8.0_171
>  * SO: Ubuntu 18.04 LTS
>  * Cassandra: 3.11.2 
>Reporter: André Paris
>Priority: Major
> Fix For: 3.11.x
>
>
> The behavior of an element removal from a list by an UPDATE differs by how 
> the row was created:
> Given the table
> {{CREATE TABLE table_test (}}
>  {{    id int PRIMARY KEY,}}
>  {{    list list<text>}}
>  {{)}}
> If the row is created by an INSERT, the row remains after the UPDATE to 
> remove the last element on the list:
> {{cqlsh:ks_test> INSERT INTO table_test (id, list ) VALUES ( 1, ['foo']) ;}}
>  {{cqlsh:ks_test> SELECT * FROM table_test;}}
> {{ id | list}}
>  {{+-}}
>  {{  1 | ['foo'] }}
> {{(1 rows)}}
>  {{cqlsh:ks_test> UPDATE table_test SET list = list - ['foo'] WHERE id=1;}}
>  {{cqlsh:ks_test> SELECT * FROM table_test;}}
> {{ id | list}}
>  {{----+------}}
>  {{  1 | null}}
> {{(1 rows)}}
>  
> But, if the row is created by an UPDATE, the row is deleted after the UPDATE 
> to remove the last element on the list:
> {{cqlsh:ks_test> UPDATE table_test SET list = list + ['foo'] WHERE id=2;}}
>  {{cqlsh:ks_test> SELECT * FROM table_test;}}
> {{ id | list}}
>  {{----+---------}}
>  {{  2 | ['foo'] }}
> {{(1 rows)}}
>  {{cqlsh:ks_test> UPDATE table_test SET list = list - ['foo'] WHERE id=2;}}
>  {{cqlsh:ks_test> SELECT * FROM table_test;}}
> {{ id | list}}
>  {{----+------}}
> {{(0 rows)}}
>  
> Thanks in advance.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-14505) Removal of last element on a List deletes the entire row

2018-06-07 Thread JIRA
André Paris created CASSANDRA-14505:
---

 Summary: Removal of last element on a List deletes the entire row
 Key: CASSANDRA-14505
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14505
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: * Java: 1.8.0_171
 * SO Ubuntu: 18.04 LTS
 * Cassandra: 3.11.2 
Reporter: André Paris
 Fix For: 3.11.x


The behavior of removing an element from a list with an UPDATE differs 
depending on how the row was created:

Given the table

{{CREATE TABLE table_test (}}
{{    id int PRIMARY KEY,}}
{{    list list<text>}}
{{)}}

If the row was created by an INSERT, the row remains after an UPDATE that 
removes the last element of the list:

{{cqlsh:ks_test> INSERT INTO table_test (id, list ) VALUES ( 1, ['foo']) ;}}
{{cqlsh:ks_test> SELECT * FROM table_test;}}

{{ id | list}}
{{----+---------}}
{{  1 | ['foo']}}

{{(1 rows)}}
{{cqlsh:ks_test> UPDATE table_test SET list = list - ['foo'] WHERE id=1;}}
{{cqlsh:ks_test> SELECT * FROM table_test;}}

{{ id | list}}
{{----+------}}
{{  1 | null}}

{{(1 rows)}}

 

But if the row was created by an UPDATE, the row is deleted after an UPDATE 
that removes the last element of the list:

{{cqlsh:ks_test> UPDATE table_test SET list = list + ['foo'] WHERE id=2;}}
{{cqlsh:ks_test> SELECT * FROM table_test;}}

{{ id | list}}
{{----+---------}}
{{  2 | ['foo']}}

{{(1 rows)}}
{{cqlsh:ks_test> UPDATE table_test SET list = list - ['foo'] WHERE id=2;}}
{{cqlsh:ks_test> SELECT * FROM table_test;}}

{{ id | list}}
{{----+------}}

{{(0 rows)}}






[jira] [Updated] (CASSANDRA-14505) Removal of last element on a List deletes the entire row

2018-06-07 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

André Paris updated CASSANDRA-14505:

Environment: 
* Java: 1.8.0_171
 * SO: Ubuntu 18.04 LTS
 * Cassandra: 3.11.2 

  was:
* Java: 1.8.0_171
 * SO Ubuntu: 18.04 LTS
 * Cassandra: 3.11.2 


> Removal of last element on a List deletes the entire row
> 
>
> Key: CASSANDRA-14505
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14505
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: * Java: 1.8.0_171
>  * SO: Ubuntu 18.04 LTS
>  * Cassandra: 3.11.2 
>Reporter: André Paris
>Priority: Major
> Fix For: 3.11.x
>
>
> The behavior of an element removal from a list by an UPDATE differs by how 
> the row was created:
> Given the table
> {{CREATE TABLE table_test (}}
> {{    id int PRIMARY KEY,}}
> {{    list list<text>}}
> {{)}}
> If the row is created by an INSERT, the row remains after the UPDATE to 
> remove the last element on the list:
> {{cqlsh:ks_test> INSERT INTO table_test (id, list ) VALUES ( 1, ['foo']) ;}}
> {{cqlsh:ks_test> SELECT * FROM table_test;}}
> {{ id | list}}
> {{----+---------}}
> {{  1 | ['foo'] }}
> {{(1 rows)}}
> {{cqlsh:ks_test> UPDATE table_test SET list = list - ['foo'] WHERE id=1;}}
> {{cqlsh:ks_test> SELECT * FROM table_test;}}
> {{ id | list}}
> {{----+------}}
> {{  1 | null}}
> {{(1 rows)}}
>  
> But, if the row is created by an UPDATE, the row is deleted after the UPDATE 
> to remove the last element on the list:
> {{cqlsh:ks_test> UPDATE table_test SET list = list + ['foo'] WHERE id=2;}}
> {{cqlsh:ks_test> SELECT * FROM table_test;}}
> {{ id | list}}
> {{----+---------}}
> {{  2 | ['foo'] }}
> {{(1 rows)}}
> {{cqlsh:ks_test> UPDATE table_test SET list = list - ['foo'] WHERE id=2;}}
> {{cqlsh:ks_test> SELECT * FROM table_test;}}
> {{ id | list}}
> {{----+------}}
> {{(0 rows)}}






[jira] [Updated] (CASSANDRA-14504) fqltool should open chronicle queue read only and a GC bug

2018-06-07 Thread Ariel Weisberg (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg updated CASSANDRA-14504:
---
Status: Patch Available  (was: In Progress)

[Fixes|https://github.com/apache/cassandra/compare/trunk...aweisberg:cassandra-14504-trunk?expand=1]
[CircleCI|https://circleci.com/gh/aweisberg/cassandra/tree/cassandra-14504-trunk]

> fqltool should open chronicle queue read only and a GC bug
> --
>
> Key: CASSANDRA-14504
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14504
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Ariel Weisberg
>Assignee: Ariel Weisberg
>Priority: Major
> Fix For: 4.0
>
>
> There are two issues with fqltool.
> The first is that it doesn't open the chronicle queue read only so it won't 
> work if it doesn't have write permissions and it's not clear if it's safe to 
> open the queue to write if the server is also still appending.
> The next issue is that NativeBytesStore.toTemporaryDirectByteBuffer() returns 
> a ByteBuffer that doesn't strongly reference the memory it refers to, 
> resulting in it sometimes being reclaimed and containing the wrong data when 
> we go to read from it. At least that is the theory. The simple solution is to 
> use toByteArray(), and that seems to make it work consistently.






[jira] [Updated] (CASSANDRA-14504) fqltool should open chronicle queue read only and a GC bug

2018-06-07 Thread Ariel Weisberg (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg updated CASSANDRA-14504:
---
Reviewer: Sam Tunnicliffe

> fqltool should open chronicle queue read only and a GC bug
> --
>
> Key: CASSANDRA-14504
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14504
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Ariel Weisberg
>Assignee: Ariel Weisberg
>Priority: Major
> Fix For: 4.0
>
>
> There are two issues with fqltool.
> The first is that it doesn't open the chronicle queue read only so it won't 
> work if it doesn't have write permissions and it's not clear if it's safe to 
> open the queue to write if the server is also still appending.
> The next issue is that NativeBytesStore.toTemporaryDirectByteBuffer() returns 
> a ByteBuffer that doesn't strongly reference the memory it refers to, 
> resulting in it sometimes being reclaimed and containing the wrong data when 
> we go to read from it. At least that is the theory. The simple solution is to 
> use toByteArray(), and that seems to make it work consistently.






[jira] [Commented] (CASSANDRA-13929) BTree$Builder / io.netty.util.Recycler$Stack leaking memory

2018-06-07 Thread Jay Zhuang (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-13929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16505023#comment-16505023
 ] 

Jay Zhuang commented on CASSANDRA-13929:


Thanks [~jasobrown] for the review and fix.

Yes, nulling out the {{values}} array before reuse is good practice, +1 on 
that. The dtest seems to be failing on the 3.11 branch, but I don't think it's 
caused by this patch. Please let me know if I can commit.
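A toy recycler makes the point concrete; everything here is an illustrative assumption (the names and structure are not the real {{BTree.Builder}} or netty {{Recycler}}), but it shows why clearing references when an object goes back to the pool keeps idle pooled instances from pinning memory:

```java
import java.util.ArrayDeque;

// Minimal object pool. recycle() nulls the working array, mirroring the
// "null out the values array before reusing" suggestion discussed here.
public class ToyRecycler {
    public static final ArrayDeque<Builder> POOL = new ArrayDeque<>();

    public static class Builder {
        public Object[] values;          // potentially large working state

        public void recycle() {
            values = null;               // don't let the pooled instance pin memory
            POOL.push(this);
        }
    }

    public static Builder allocate(int size) {
        Builder b = POOL.isEmpty() ? new Builder() : POOL.pop();
        b.values = new Object[size];
        return b;
    }
}
```

Without the nulling line, every idle pooled Builder would keep its last working array strongly reachable for as long as the pool lives, which is the shape of the retention MAT reported.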

> BTree$Builder / io.netty.util.Recycler$Stack leaking memory
> ---
>
> Key: CASSANDRA-13929
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13929
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Thomas Steinmaurer
>Assignee: Jay Zhuang
>Priority: Major
> Fix For: 3.11.3
>
> Attachments: cassandra_3.11.0_min_memory_utilization.jpg, 
> cassandra_3.11.1_NORECYCLE_memory_utilization.jpg, 
> cassandra_3.11.1_mat_dominator_classes.png, 
> cassandra_3.11.1_mat_dominator_classes_FIXED.png, 
> cassandra_3.11.1_snapshot_heaputilization.png, 
> cassandra_3.11.1_vs_3.11.2recyclernullingpatch.png, 
> cassandra_heapcpu_memleak_patching_test_30d.png, 
> dtest_example_80_request.png, dtest_example_80_request_fix.png, 
> dtest_example_heap.png, memleak_heapdump_recyclerstack.png
>
>
> Different to CASSANDRA-13754, there seems to be another memory leak in 
> 3.11.0+ in BTree$Builder / io.netty.util.Recycler$Stack.
> * heap utilization increase after upgrading to 3.11.0 => 
> cassandra_3.11.0_min_memory_utilization.jpg
> * No difference after upgrading to 3.11.1 (snapshot build) => 
> cassandra_3.11.1_snapshot_heaputilization.png; thus most likely after fixing 
> CASSANDRA-13754, more visible now
> * MAT shows io.netty.util.Recycler$Stack as top contributing class => 
> cassandra_3.11.1_mat_dominator_classes.png
> * With -Xmx8G (CMS) and our load pattern, we have to do a rolling restart 
> after ~ 72 hours
> Verified the following fix, namely explicitly unreferencing the 
> _recycleHandle_ member (making it non-final). In 
> _org.apache.cassandra.utils.btree.BTree.Builder.recycle()_
> {code}
> public void recycle()
> {
> if (recycleHandle != null)
> {
> this.cleanup();
> builderRecycler.recycle(this, recycleHandle);
> recycleHandle = null; // ADDED
> }
> }
> {code}
> Patched a single node in our loadtest cluster with this change and after ~ 10 
> hours uptime, no sign of the previously offending class in MAT anymore => 
> cassandra_3.11.1_mat_dominator_classes_FIXED.png
> Can't say if this has any other side effects etc., but I doubt it.






[jira] [Created] (CASSANDRA-14504) fqltool should open chronicle queue read only and a GC bug

2018-06-07 Thread Ariel Weisberg (JIRA)
Ariel Weisberg created CASSANDRA-14504:
--

 Summary: fqltool should open chronicle queue read only and a GC bug
 Key: CASSANDRA-14504
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14504
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Ariel Weisberg
Assignee: Ariel Weisberg
 Fix For: 4.0


There are two issues with fqltool.

The first is that it doesn't open the chronicle queue read-only, so it won't 
work if it doesn't have write permissions, and it's not clear that it's safe to 
open the queue for writing if the server is also still appending.

The next issue is that NativeBytesStore.toTemporaryDirectByteBuffer() returns a 
ByteBuffer that doesn't strongly reference the memory it refers to, resulting 
in it sometimes being reclaimed and containing the wrong data when we go to 
read from it. At least that is the theory. The simple solution is to use 
toByteArray(), and that seems to make it work consistently.
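The buffer problem can be sketched with a heap-backed stand-in (assumption: {{TemporaryViewDemo}} is an illustrative analogue; the real issue involves native memory behind NativeBytesStore.toTemporaryDirectByteBuffer()). A "temporary" view aliases reusable storage, so its contents change out from under the reader once the store is reused, while a toByteArray()-style copy stays stable:

```java
import java.nio.ByteBuffer;

public class TemporaryViewDemo {
    // Shared backing storage, reused on every call (stands in for pooled memory).
    static final ByteBuffer STORE = ByteBuffer.allocate(4);

    // Returns a view that aliases STORE -- cheap, but only valid until the next call.
    public static ByteBuffer temporaryView(byte[] data) {
        STORE.clear();
        STORE.put(data).flip();
        return STORE.duplicate();
    }

    // Copies the bytes out, like toByteArray() -- safe to hold onto.
    public static byte[] copyOut(byte[] data) {
        ByteBuffer view = temporaryView(data);
        byte[] copy = new byte[view.remaining()];
        view.get(copy);
        return copy;
    }
}
```

Reusing the store silently rewrites what the aliased view reads, which is exactly the "wrong data when we go to read" symptom; the copy is immune.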






[jira] [Commented] (CASSANDRA-14466) Enable Direct I/O

2018-06-07 Thread Robert Stupp (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16504956#comment-16504956
 ] 

Robert Stupp commented on CASSANDRA-14466:
--

[~mulugetam], let's hold off on reviewing a patch until CASSANDRA-9608 is 
actually in or nearly ready. For C* 4.0, we'd need a solution that works with 
both Java 8 _and_ 11 - i.e. works as now for Java 8 and can use direct I/O with 
Java 11 (see the recent discussion on the dev-ML). I think a multi-release JAR 
is probably the easiest way forward, and I have something lying around that's 
not yet ready - but the final solution obviously depends on whether reflection 
or separate classes for 8 + 11 are easier to implement.

> Enable Direct I/O 
> --
>
> Key: CASSANDRA-14466
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14466
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Local Write-Read Paths
>Reporter: Mulugeta Mammo
>Priority: Major
> Attachments: direct_io.patch
>
>
> Hi,
> JDK 10 introduced a new API for Direct IO that enables applications to bypass 
> the file system cache and potentially improve performance. Details of this 
> feature can be found at [https://bugs.openjdk.java.net/browse/JDK-8164900].
> This patch uses the JDK 10 API to enable Direct IO for the Cassandra read 
> path. By default, we have disabled this feature; but it can be enabled using 
> a  new configuration parameter, enable_direct_io_for_read_path. We have 
> conducted a Cassandra read-only stress test and measured a throughput gain of 
> up to 60% on flash drives.
> The patch requires JDK 10 Cassandra Support - 
> https://issues.apache.org/jira/browse/CASSANDRA-9608 
> Please review the patch and let us know your feedback.
> Thanks,
> [^direct_io.patch]
>  






[jira] [Commented] (CASSANDRA-14466) Enable Direct I/O

2018-06-07 Thread Mulugeta Mammo (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16504903#comment-16504903
 ] 

Mulugeta Mammo commented on CASSANDRA-14466:


[~aweisberg] Just created two pull requests - one for experimental JDK 9 and 10 
support and the other for O_DIRECT support (based on reflection).

> Enable Direct I/O 
> --
>
> Key: CASSANDRA-14466
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14466
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Local Write-Read Paths
>Reporter: Mulugeta Mammo
>Priority: Major
> Attachments: direct_io.patch
>
>
> Hi,
> JDK 10 introduced a new API for Direct IO that enables applications to bypass 
> the file system cache and potentially improve performance. Details of this 
> feature can be found at [https://bugs.openjdk.java.net/browse/JDK-8164900].
> This patch uses the JDK 10 API to enable Direct IO for the Cassandra read 
> path. By default, we have disabled this feature; but it can be enabled using 
> a  new configuration parameter, enable_direct_io_for_read_path. We have 
> conducted a Cassandra read-only stress test and measured a throughput gain of 
> up to 60% on flash drives.
> The patch requires JDK 10 Cassandra Support - 
> https://issues.apache.org/jira/browse/CASSANDRA-9608 
> Please review the patch and let us know your feedback.
> Thanks,
> [^direct_io.patch]
>  






[jira] [Commented] (CASSANDRA-13929) BTree$Builder / io.netty.util.Recycler$Stack leaking memory

2018-06-07 Thread Jason Brown (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-13929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16504890#comment-16504890
 ] 

Jason Brown commented on CASSANDRA-13929:
-

[~tsteinmaurer] ok, not a problem. As long as you don't see streaming getting 
obviously slower (1 hour -> 2 hours, for example), I'll take that as a data 
point.

> BTree$Builder / io.netty.util.Recycler$Stack leaking memory
> ---
>
> Key: CASSANDRA-13929
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13929
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Thomas Steinmaurer
>Assignee: Jay Zhuang
>Priority: Major
> Fix For: 3.11.3
>
> Attachments: cassandra_3.11.0_min_memory_utilization.jpg, 
> cassandra_3.11.1_NORECYCLE_memory_utilization.jpg, 
> cassandra_3.11.1_mat_dominator_classes.png, 
> cassandra_3.11.1_mat_dominator_classes_FIXED.png, 
> cassandra_3.11.1_snapshot_heaputilization.png, 
> cassandra_3.11.1_vs_3.11.2recyclernullingpatch.png, 
> cassandra_heapcpu_memleak_patching_test_30d.png, 
> dtest_example_80_request.png, dtest_example_80_request_fix.png, 
> dtest_example_heap.png, memleak_heapdump_recyclerstack.png
>
>
> Different to CASSANDRA-13754, there seems to be another memory leak in 
> 3.11.0+ in BTree$Builder / io.netty.util.Recycler$Stack.
> * heap utilization increase after upgrading to 3.11.0 => 
> cassandra_3.11.0_min_memory_utilization.jpg
> * No difference after upgrading to 3.11.1 (snapshot build) => 
> cassandra_3.11.1_snapshot_heaputilization.png; thus most likely after fixing 
> CASSANDRA-13754, more visible now
> * MAT shows io.netty.util.Recycler$Stack as top contributing class => 
> cassandra_3.11.1_mat_dominator_classes.png
> * With -Xmx8G (CMS) and our load pattern, we have to do a rolling restart 
> after ~ 72 hours
> Verified the following fix, namely explicitly unreferencing the 
> _recycleHandle_ member (making it non-final). In 
> _org.apache.cassandra.utils.btree.BTree.Builder.recycle()_
> {code}
> public void recycle()
> {
> if (recycleHandle != null)
> {
> this.cleanup();
> builderRecycler.recycle(this, recycleHandle);
> recycleHandle = null; // ADDED
> }
> }
> {code}
> Patched a single node in our loadtest cluster with this change and after ~ 10 
> hours uptime, no sign of the previously offending class in MAT anymore => 
> cassandra_3.11.1_mat_dominator_classes_FIXED.png
> Can't say if this has any other side effects etc., but I doubt it.






[jira] [Commented] (CASSANDRA-13929) BTree$Builder / io.netty.util.Recycler$Stack leaking memory

2018-06-07 Thread Thomas Steinmaurer (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-13929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16504873#comment-16504873
 ] 

Thomas Steinmaurer commented on CASSANDRA-13929:


[~jasobrown], sorry I don't have any streaming benchmarks before/after.

> BTree$Builder / io.netty.util.Recycler$Stack leaking memory
> ---
>
> Key: CASSANDRA-13929
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13929
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Thomas Steinmaurer
>Assignee: Jay Zhuang
>Priority: Major
> Fix For: 3.11.3
>
> Attachments: cassandra_3.11.0_min_memory_utilization.jpg, 
> cassandra_3.11.1_NORECYCLE_memory_utilization.jpg, 
> cassandra_3.11.1_mat_dominator_classes.png, 
> cassandra_3.11.1_mat_dominator_classes_FIXED.png, 
> cassandra_3.11.1_snapshot_heaputilization.png, 
> cassandra_3.11.1_vs_3.11.2recyclernullingpatch.png, 
> cassandra_heapcpu_memleak_patching_test_30d.png, 
> dtest_example_80_request.png, dtest_example_80_request_fix.png, 
> dtest_example_heap.png, memleak_heapdump_recyclerstack.png
>
>
> Different to CASSANDRA-13754, there seems to be another memory leak in 
> 3.11.0+ in BTree$Builder / io.netty.util.Recycler$Stack.
> * heap utilization increase after upgrading to 3.11.0 => 
> cassandra_3.11.0_min_memory_utilization.jpg
> * No difference after upgrading to 3.11.1 (snapshot build) => 
> cassandra_3.11.1_snapshot_heaputilization.png; thus most likely after fixing 
> CASSANDRA-13754, more visible now
> * MAT shows io.netty.util.Recycler$Stack as top contributing class => 
> cassandra_3.11.1_mat_dominator_classes.png
> * With -Xmx8G (CMS) and our load pattern, we have to do a rolling restart 
> after ~ 72 hours
> Verified the following fix, namely explicitly unreferencing the 
> _recycleHandle_ member (making it non-final). In 
> _org.apache.cassandra.utils.btree.BTree.Builder.recycle()_
> {code}
> public void recycle()
> {
> if (recycleHandle != null)
> {
> this.cleanup();
> builderRecycler.recycle(this, recycleHandle);
> recycleHandle = null; // ADDED
> }
> }
> {code}
> Patched a single node in our loadtest cluster with this change and after ~ 10 
> hours uptime, no sign of the previously offending class in MAT anymore => 
> cassandra_3.11.1_mat_dominator_classes_FIXED.png
> Can't say if this has any other side effects etc., but I doubt it.






[jira] [Comment Edited] (CASSANDRA-14466) Enable Direct I/O

2018-06-07 Thread Brian O'Neill (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16504784#comment-16504784
 ] 

Brian O'Neill edited comment on CASSANDRA-14466 at 6/7/18 3:57 PM:
---

Although I don't know how Cassandra behaves in this case, my own experiments 
with O_DIRECT have shown that it only improves performance in very few 
applications. In particular, applications which do very little computation (low 
CPU load) and are accessing only the fastest available SSDs. Cassandra tends to 
be a bit heavy on the CPU load, and so I'm skeptical that switching to O_DIRECT 
by itself is the reason performance improved. In addition, I've only seen 
significant improvement with O_DIRECT when bypassing the file system and 
accessing the block device directly.

As was already suggested, switching off read-ahead might be the reason why 
you're seeing improved performance. Although the file system is generally able 
to adapt to usage patterns, I've found that explicitly setting 
POSIX_FADV_RANDOM really helps for random access workloads. This behavior is 
implicit with O_DIRECT. Although I see that Cassandra has utility code to call 
fadvise, only the POSIX_FADV_DONTNEED option ever appears to be used.

Considering that the test machine had 128GB of RAM, you really want the OS to 
manage the cache instead of Cassandra. How large was the JVM heap when running 
the test? For far larger data sizes, caching more data in the Java heap will 
lead to GC problems.



> Enable Direct I/O 
> --
>
> Key: CASSANDRA-14466
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14466
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Local Write-Read Paths
>Reporter: Mulugeta Mammo
>Priority: Major
> Attachments: direct_io.patch
>
>
> Hi,
> JDK 10 introduced a new API for Direct IO that enables applications to bypass 
> the file system cache and potentially improve performance. Details of this 
> feature can be found at [https://bugs.openjdk.java.net/browse/JDK-8164900].
> This patch uses the JDK 10 API to enable Direct IO for the Cassandra read 
> path. By default, we have disabled this feature; but it can be enabled using 
> a  new configuration parameter, enable_direct_io_for_read_path. We have 
> conducted a Cassandra read-only stress test and measured a throughput gain of 
> up to 60% on flash drives.
> The patch requires JDK 10 Cassandra Support - 
> https://issues.apache.org/jira/browse/CASSANDRA-9608 
> Please review the patch and let us know your feedback.
> Thanks,
> [^direct_io.patch]
>  






[jira] [Commented] (CASSANDRA-13929) BTree$Builder / io.netty.util.Recycler$Stack leaking memory

2018-06-07 Thread Jason Brown (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-13929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16504842#comment-16504842
 ] 

Jason Brown commented on CASSANDRA-13929:
-

running [~jay.zhuang]'s patch with a few minor cleanups:

||3.11||trunk||
|[branch|https://github.com/jasobrown/cassandra/tree/13929-3.11]|[branch|https://github.com/jasobrown/cassandra/tree/13929-trunk]|
|[utests dtests|https://circleci.com/gh/jasobrown/workflows/cassandra/tree/13929-3.11]|[utests dtests|https://circleci.com/gh/jasobrown/workflows/cassandra/tree/13929-trunk]|


> BTree$Builder / io.netty.util.Recycler$Stack leaking memory
> ---
>
> Key: CASSANDRA-13929
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13929
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Thomas Steinmaurer
>Assignee: Jay Zhuang
>Priority: Major
> Fix For: 3.11.3
>
> Attachments: cassandra_3.11.0_min_memory_utilization.jpg, 
> cassandra_3.11.1_NORECYCLE_memory_utilization.jpg, 
> cassandra_3.11.1_mat_dominator_classes.png, 
> cassandra_3.11.1_mat_dominator_classes_FIXED.png, 
> cassandra_3.11.1_snapshot_heaputilization.png, 
> cassandra_3.11.1_vs_3.11.2recyclernullingpatch.png, 
> cassandra_heapcpu_memleak_patching_test_30d.png, 
> dtest_example_80_request.png, dtest_example_80_request_fix.png, 
> dtest_example_heap.png, memleak_heapdump_recyclerstack.png
>
>
> Different to CASSANDRA-13754, there seems to be another memory leak in 
> 3.11.0+ in BTree$Builder / io.netty.util.Recycler$Stack.
> * heap utilization increase after upgrading to 3.11.0 => 
> cassandra_3.11.0_min_memory_utilization.jpg
> * No difference after upgrading to 3.11.1 (snapshot build) => 
> cassandra_3.11.1_snapshot_heaputilization.png; thus most likely after fixing 
> CASSANDRA-13754, more visible now
> * MAT shows io.netty.util.Recycler$Stack as top contributing class => 
> cassandra_3.11.1_mat_dominator_classes.png
> * With -Xmx8G (CMS) and our load pattern, we have to do a rolling restart 
> after ~ 72 hours
> Verified the following fix, namely explicitly unreferencing the 
> _recycleHandle_ member (making it non-final). In 
> _org.apache.cassandra.utils.btree.BTree.Builder.recycle()_
> {code}
> public void recycle()
> {
> if (recycleHandle != null)
> {
> this.cleanup();
> builderRecycler.recycle(this, recycleHandle);
> recycleHandle = null; // ADDED
> }
> }
> {code}
> Patched a single node in our loadtest cluster with this change and after ~ 10 
> hours uptime, no sign of the previously offending class in MAT anymore => 
> cassandra_3.11.1_mat_dominator_classes_FIXED.png
> Can't say if this has any other side effects etc., but I doubt it.






[jira] [Updated] (CASSANDRA-14238) Flaky Unittest: org.apache.cassandra.db.compaction.BlacklistingCompactionsTest

2018-06-07 Thread Marcus Eriksson (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-14238:

   Resolution: Fixed
Fix Version/s: (was: 2.2.x)
   2.2.13
   Status: Resolved  (was: Patch Available)

committed as {{fc7a69b65399597a2bf9c6025f035f8fe26724c7}} to 2.2 and merged up 
with -s ours, thanks!

> Flaky Unittest: org.apache.cassandra.db.compaction.BlacklistingCompactionsTest
> --
>
> Key: CASSANDRA-14238
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14238
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Jay Zhuang
>Assignee: Marcus Eriksson
>Priority: Minor
>  Labels: testing
> Fix For: 2.2.13
>
>
> The unittest is flaky
> {noformat}
> [junit] Testcase: 
> testBlacklistingWithSizeTieredCompactionStrategy(org.apache.cassandra.db.compaction.BlacklistingCompactionsTest):
>  FAILED
> [junit] expected:<8> but was:<25>
> [junit] junit.framework.AssertionFailedError: expected:<8> but was:<25>
> [junit] at 
> org.apache.cassandra.db.compaction.BlacklistingCompactionsTest.testBlacklisting(BlacklistingCompactionsTest.java:170)
> [junit] at 
> org.apache.cassandra.db.compaction.BlacklistingCompactionsTest.testBlacklistingWithSizeTieredCompactionStrategy(BlacklistingCompactionsTest.java:71)
> {noformat}






[02/10] cassandra git commit: Fix flaky BlacklistingCompactionsTest

2018-06-07 Thread marcuse
Fix flaky BlacklistingCompactionsTest

Patch by marcuse; reviewed by Jay Zhuang for CASSANDRA-14238


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/fc7a69b6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/fc7a69b6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/fc7a69b6

Branch: refs/heads/cassandra-3.0
Commit: fc7a69b65399597a2bf9c6025f035f8fe26724c7
Parents: 81b6c9e
Author: Marcus Eriksson 
Authored: Thu May 31 16:27:46 2018 +0200
Committer: Marcus Eriksson 
Committed: Thu Jun 7 08:35:42 2018 -0700

--
 .../cassandra/db/compaction/BlacklistingCompactionsTest.java | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/fc7a69b6/test/unit/org/apache/cassandra/db/compaction/BlacklistingCompactionsTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/db/compaction/BlacklistingCompactionsTest.java 
b/test/unit/org/apache/cassandra/db/compaction/BlacklistingCompactionsTest.java
index 2b6a62a..c482116 100644
--- 
a/test/unit/org/apache/cassandra/db/compaction/BlacklistingCompactionsTest.java
+++ 
b/test/unit/org/apache/cassandra/db/compaction/BlacklistingCompactionsTest.java
@@ -48,6 +48,8 @@ public class BlacklistingCompactionsTest
 {
 private static final String KEYSPACE1 = "BlacklistingCompactionsTest";
 private static final String CF_STANDARD1 = "Standard1";
+// seed hardcoded to one we know works:
+private static final Random random = new Random(1);
 
 @After
 public void leakDetect() throws InterruptedException
@@ -142,7 +144,7 @@ public class BlacklistingCompactionsTest
 raf = new RandomAccessFile(sstable.getFilename(), "rw");
 assertNotNull(raf);
 assertTrue(raf.length() > 20);
-raf.seek(new Random().nextInt((int)(raf.length() - 20)));
+raf.seek(random.nextInt((int)(raf.length() - 20)));
// We want to write something large enough that the corruption cannot get undetected
 // (even without compression)
 byte[] corruption = new byte[20];
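The fix in this diff replaces a throwaway new Random() per corruption pass with a single fixed-seed instance. A minimal standalone sketch (illustrative names, not from the patch) of why a fixed seed removes the flakiness: two generators seeded identically produce the same offset sequence on every run.

```java
import java.util.Random;

public class SeededRandomDemo {
    public static void main(String[] args) {
        // Two generators with the same seed yield identical sequences, so a
        // test that derives corruption offsets from a seeded Random is
        // deterministic run-to-run -- the essence of the flakiness fix.
        Random a = new Random(1);
        Random b = new Random(1);
        for (int i = 0; i < 5; i++) {
            int offsetA = a.nextInt(1000);
            int offsetB = b.nextInt(1000);
            if (offsetA != offsetB)
                throw new AssertionError("seeded streams diverged");
        }
        System.out.println("seed 1 reproduces the same offsets");
    }
}
```

With new Random() the corrupted byte range differed per run, occasionally landing where the test could not detect it; seeding pins the offsets to a known-good sequence.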





[04/10] cassandra git commit: Fix flaky BlacklistingCompactionsTest

2018-06-07 Thread marcuse
Fix flaky BlacklistingCompactionsTest

Patch by marcuse; reviewed by Jay Zhuang for CASSANDRA-14238


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/fc7a69b6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/fc7a69b6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/fc7a69b6

Branch: refs/heads/trunk
Commit: fc7a69b65399597a2bf9c6025f035f8fe26724c7
Parents: 81b6c9e
Author: Marcus Eriksson 
Authored: Thu May 31 16:27:46 2018 +0200
Committer: Marcus Eriksson 
Committed: Thu Jun 7 08:35:42 2018 -0700

--
 .../cassandra/db/compaction/BlacklistingCompactionsTest.java | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/fc7a69b6/test/unit/org/apache/cassandra/db/compaction/BlacklistingCompactionsTest.java
--
diff --git a/test/unit/org/apache/cassandra/db/compaction/BlacklistingCompactionsTest.java b/test/unit/org/apache/cassandra/db/compaction/BlacklistingCompactionsTest.java
index 2b6a62a..c482116 100644
--- a/test/unit/org/apache/cassandra/db/compaction/BlacklistingCompactionsTest.java
+++ b/test/unit/org/apache/cassandra/db/compaction/BlacklistingCompactionsTest.java
@@ -48,6 +48,8 @@ public class BlacklistingCompactionsTest
 {
 private static final String KEYSPACE1 = "BlacklistingCompactionsTest";
 private static final String CF_STANDARD1 = "Standard1";
+// seed hardcoded to one we know works:
+private static final Random random = new Random(1);
 
 @After
 public void leakDetect() throws InterruptedException
@@ -142,7 +144,7 @@ public class BlacklistingCompactionsTest
 raf = new RandomAccessFile(sstable.getFilename(), "rw");
 assertNotNull(raf);
 assertTrue(raf.length() > 20);
-raf.seek(new Random().nextInt((int)(raf.length() - 20)));
+raf.seek(random.nextInt((int)(raf.length() - 20)));
// We want to write something large enough that the corruption cannot get undetected
 // (even without compression)
 byte[] corruption = new byte[20];





[09/10] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2018-06-07 Thread marcuse
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b1748198
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b1748198
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b1748198

Branch: refs/heads/trunk
Commit: b1748198eb278f9965da0196317803e8e3eda7a7
Parents: 7bb88de cce9ab2
Author: Marcus Eriksson 
Authored: Thu Jun 7 08:40:10 2018 -0700
Committer: Marcus Eriksson 
Committed: Thu Jun 7 08:40:10 2018 -0700

--

--






[06/10] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2018-06-07 Thread marcuse
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/cce9ab23
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/cce9ab23
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/cce9ab23

Branch: refs/heads/cassandra-3.0
Commit: cce9ab2362d5c32d503eea2cbfd192685161203b
Parents: b8fb29a fc7a69b
Author: Marcus Eriksson 
Authored: Thu Jun 7 08:39:56 2018 -0700
Committer: Marcus Eriksson 
Committed: Thu Jun 7 08:39:56 2018 -0700

--

--






[10/10] cassandra git commit: Merge branch 'cassandra-3.11' into trunk

2018-06-07 Thread marcuse
Merge branch 'cassandra-3.11' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/800f0b39
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/800f0b39
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/800f0b39

Branch: refs/heads/trunk
Commit: 800f0b3948aa0a9095f78ed29d8d9cbc23a57581
Parents: 84fb7fa b174819
Author: Marcus Eriksson 
Authored: Thu Jun 7 08:40:20 2018 -0700
Committer: Marcus Eriksson 
Committed: Thu Jun 7 08:40:20 2018 -0700

--

--






[07/10] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2018-06-07 Thread marcuse
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/cce9ab23
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/cce9ab23
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/cce9ab23

Branch: refs/heads/trunk
Commit: cce9ab2362d5c32d503eea2cbfd192685161203b
Parents: b8fb29a fc7a69b
Author: Marcus Eriksson 
Authored: Thu Jun 7 08:39:56 2018 -0700
Committer: Marcus Eriksson 
Committed: Thu Jun 7 08:39:56 2018 -0700

--

--






[05/10] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2018-06-07 Thread marcuse
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/cce9ab23
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/cce9ab23
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/cce9ab23

Branch: refs/heads/cassandra-3.11
Commit: cce9ab2362d5c32d503eea2cbfd192685161203b
Parents: b8fb29a fc7a69b
Author: Marcus Eriksson 
Authored: Thu Jun 7 08:39:56 2018 -0700
Committer: Marcus Eriksson 
Committed: Thu Jun 7 08:39:56 2018 -0700

--

--






[01/10] cassandra git commit: Fix flaky BlacklistingCompactionsTest

2018-06-07 Thread marcuse
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 81b6c9e99 -> fc7a69b65
  refs/heads/cassandra-3.0 b8fb29a6b -> cce9ab236
  refs/heads/cassandra-3.11 7bb88deb4 -> b1748198e
  refs/heads/trunk 84fb7fa6f -> 800f0b394


Fix flaky BlacklistingCompactionsTest

Patch by marcuse; reviewed by Jay Zhuang for CASSANDRA-14238


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/fc7a69b6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/fc7a69b6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/fc7a69b6

Branch: refs/heads/cassandra-2.2
Commit: fc7a69b65399597a2bf9c6025f035f8fe26724c7
Parents: 81b6c9e
Author: Marcus Eriksson 
Authored: Thu May 31 16:27:46 2018 +0200
Committer: Marcus Eriksson 
Committed: Thu Jun 7 08:35:42 2018 -0700

--
 .../cassandra/db/compaction/BlacklistingCompactionsTest.java | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/fc7a69b6/test/unit/org/apache/cassandra/db/compaction/BlacklistingCompactionsTest.java
--
diff --git a/test/unit/org/apache/cassandra/db/compaction/BlacklistingCompactionsTest.java b/test/unit/org/apache/cassandra/db/compaction/BlacklistingCompactionsTest.java
index 2b6a62a..c482116 100644
--- a/test/unit/org/apache/cassandra/db/compaction/BlacklistingCompactionsTest.java
+++ b/test/unit/org/apache/cassandra/db/compaction/BlacklistingCompactionsTest.java
@@ -48,6 +48,8 @@ public class BlacklistingCompactionsTest
 {
 private static final String KEYSPACE1 = "BlacklistingCompactionsTest";
 private static final String CF_STANDARD1 = "Standard1";
+// seed hardcoded to one we know works:
+private static final Random random = new Random(1);
 
 @After
 public void leakDetect() throws InterruptedException
@@ -142,7 +144,7 @@ public class BlacklistingCompactionsTest
 raf = new RandomAccessFile(sstable.getFilename(), "rw");
 assertNotNull(raf);
 assertTrue(raf.length() > 20);
-raf.seek(new Random().nextInt((int)(raf.length() - 20)));
+raf.seek(random.nextInt((int)(raf.length() - 20)));
// We want to write something large enough that the corruption cannot get undetected
 // (even without compression)
 byte[] corruption = new byte[20];





[03/10] cassandra git commit: Fix flaky BlacklistingCompactionsTest

2018-06-07 Thread marcuse
Fix flaky BlacklistingCompactionsTest

Patch by marcuse; reviewed by Jay Zhuang for CASSANDRA-14238


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/fc7a69b6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/fc7a69b6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/fc7a69b6

Branch: refs/heads/cassandra-3.11
Commit: fc7a69b65399597a2bf9c6025f035f8fe26724c7
Parents: 81b6c9e
Author: Marcus Eriksson 
Authored: Thu May 31 16:27:46 2018 +0200
Committer: Marcus Eriksson 
Committed: Thu Jun 7 08:35:42 2018 -0700

--
 .../cassandra/db/compaction/BlacklistingCompactionsTest.java | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/fc7a69b6/test/unit/org/apache/cassandra/db/compaction/BlacklistingCompactionsTest.java
--
diff --git a/test/unit/org/apache/cassandra/db/compaction/BlacklistingCompactionsTest.java b/test/unit/org/apache/cassandra/db/compaction/BlacklistingCompactionsTest.java
index 2b6a62a..c482116 100644
--- a/test/unit/org/apache/cassandra/db/compaction/BlacklistingCompactionsTest.java
+++ b/test/unit/org/apache/cassandra/db/compaction/BlacklistingCompactionsTest.java
@@ -48,6 +48,8 @@ public class BlacklistingCompactionsTest
 {
 private static final String KEYSPACE1 = "BlacklistingCompactionsTest";
 private static final String CF_STANDARD1 = "Standard1";
+// seed hardcoded to one we know works:
+private static final Random random = new Random(1);
 
 @After
 public void leakDetect() throws InterruptedException
@@ -142,7 +144,7 @@ public class BlacklistingCompactionsTest
 raf = new RandomAccessFile(sstable.getFilename(), "rw");
 assertNotNull(raf);
 assertTrue(raf.length() > 20);
-raf.seek(new Random().nextInt((int)(raf.length() - 20)));
+raf.seek(random.nextInt((int)(raf.length() - 20)));
// We want to write something large enough that the corruption cannot get undetected
 // (even without compression)
 byte[] corruption = new byte[20];





[08/10] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2018-06-07 Thread marcuse
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b1748198
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b1748198
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b1748198

Branch: refs/heads/cassandra-3.11
Commit: b1748198eb278f9965da0196317803e8e3eda7a7
Parents: 7bb88de cce9ab2
Author: Marcus Eriksson 
Authored: Thu Jun 7 08:40:10 2018 -0700
Committer: Marcus Eriksson 
Committed: Thu Jun 7 08:40:10 2018 -0700

--

--






[jira] [Commented] (CASSANDRA-14466) Enable Direct I/O

2018-06-07 Thread Brian O'Neill (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16504784#comment-16504784
 ] 

Brian O'Neill commented on CASSANDRA-14466:
---

Although I don't know how Cassandra behaves in this case, my own experiments 
with O_DIRECT have shown that it only improves performance in very few 
applications. In particular, applications which do very little computation (low 
CPU load) and are accessing only the fastest available SSDs. Cassandra tends to 
be a bit heavy on the CPU load, and so I'm skeptical that switching to O_DIRECT 
by itself is the reason performance improved. In addition, I've only seen 
significant improvement with O_DIRECT when bypassing the file system and 
accessing the block device directly.

As was already suggested, switching off read-ahead might be the reason why 
you're seeing improved performance. Although the file system is generally able 
to adapt to usage patterns, I've found that explicitly setting 
POSIX_FADV_RANDOM really helps for random access workloads. This behavior is 
implicit with O_DIRECT. Although I see that Cassandra has utility code to call 
fadvise, only the POSIX_FADV_DONTNEED option ever appears to be used.

Considering that the test machine had 128GB of RAM, you really want the OS to 
manage the cache instead of Cassandra. How large was the JVM heap when running 
the test? For larger data sizes, caching more data in the Java heap will lead 
to GC problems.
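For reference, the JDK 10 feature this ticket builds on (JDK-8164900) exposes O_DIRECT through com.sun.nio.file.ExtendedOpenOption.DIRECT. A minimal sketch of a direct read with the block alignment O_DIRECT requires; whether the open succeeds depends on the underlying file system, so the sketch reports rather than assumes support:

```java
import com.sun.nio.file.ExtendedOpenOption;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class DirectReadDemo {
    public static void main(String[] args) throws IOException {
        Path p = Files.createTempFile("direct", ".bin");
        try {
            int block = (int) Files.getFileStore(p).getBlockSize(); // O_DIRECT alignment unit
            Files.write(p, new byte[block]);                        // one block of zeros
            try (FileChannel ch = FileChannel.open(p, StandardOpenOption.READ,
                                                   ExtendedOpenOption.DIRECT)) {
                // O_DIRECT requires the buffer address, file position and length
                // to be block-aligned; alignedSlice yields an aligned view.
                ByteBuffer buf = ByteBuffer.allocateDirect(2 * block).alignedSlice(block);
                buf.limit(block);
                System.out.println("direct-io:read=" + ch.read(buf, 0));
            } catch (IOException | UnsupportedOperationException e) {
                System.out.println("direct-io:unsupported"); // e.g. tmpfs rejects O_DIRECT
            }
        } finally {
            Files.deleteIfExists(p);
        }
    }
}
```

Note how much ceremony the alignment adds: this is one reason bypassing the page cache only pays off for specific workloads, as discussed above.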

> Enable Direct I/O 
> --
>
> Key: CASSANDRA-14466
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14466
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Local Write-Read Paths
>Reporter: Mulugeta Mammo
>Priority: Major
> Attachments: direct_io.patch
>
>
> Hi,
> JDK 10 introduced a new API for Direct IO that enables applications to bypass 
> the file system cache and potentially improve performance. Details of this 
> feature can be found at [https://bugs.openjdk.java.net/browse/JDK-8164900].
> This patch uses the JDK 10 API to enable Direct IO for the Cassandra read 
> path. By default, we have disabled this feature; but it can be enabled using 
> a  new configuration parameter, enable_direct_io_for_read_path. We have 
> conducted a Cassandra read-only stress test and measured a throughput gain of 
> up to 60% on flash drives.
> The patch requires JDK 10 Cassandra Support - 
> https://issues.apache.org/jira/browse/CASSANDRA-9608 
> Please review the patch and let us know your feedback.
> Thanks,
> [^direct_io.patch]
>  






[jira] [Commented] (CASSANDRA-13929) BTree$Builder / io.netty.util.Recycler$Stack leaking memory

2018-06-07 Thread Jason Brown (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-13929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16504733#comment-16504733
 ] 

Jason Brown commented on CASSANDRA-13929:
-

[~tsteinmaurer] if you've been running with some version of this patch (where 
recycling is not used), do you see any degradation in streaming?

> BTree$Builder / io.netty.util.Recycler$Stack leaking memory
> ---
>
> Key: CASSANDRA-13929
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13929
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Thomas Steinmaurer
>Assignee: Jay Zhuang
>Priority: Major
> Fix For: 3.11.3
>
> Attachments: cassandra_3.11.0_min_memory_utilization.jpg, 
> cassandra_3.11.1_NORECYCLE_memory_utilization.jpg, 
> cassandra_3.11.1_mat_dominator_classes.png, 
> cassandra_3.11.1_mat_dominator_classes_FIXED.png, 
> cassandra_3.11.1_snapshot_heaputilization.png, 
> cassandra_3.11.1_vs_3.11.2recyclernullingpatch.png, 
> cassandra_heapcpu_memleak_patching_test_30d.png, 
> dtest_example_80_request.png, dtest_example_80_request_fix.png, 
> dtest_example_heap.png, memleak_heapdump_recyclerstack.png
>
>
> Different to CASSANDRA-13754, there seems to be another memory leak in 
> 3.11.0+ in BTree$Builder / io.netty.util.Recycler$Stack.
> * heap utilization increase after upgrading to 3.11.0 => 
> cassandra_3.11.0_min_memory_utilization.jpg
> * No difference after upgrading to 3.11.1 (snapshot build) => 
> cassandra_3.11.1_snapshot_heaputilization.png; thus most likely after fixing 
> CASSANDRA-13754, more visible now
> * MAT shows io.netty.util.Recycler$Stack as top contributing class => 
> cassandra_3.11.1_mat_dominator_classes.png
> * With -Xmx8G (CMS) and our load pattern, we have to do a rolling restart 
> after ~ 72 hours
> Verified the following fix, namely explicitly unreferencing the 
> _recycleHandle_ member (making it non-final). In 
> _org.apache.cassandra.utils.btree.BTree.Builder.recycle()_
> {code}
> public void recycle()
> {
> if (recycleHandle != null)
> {
> this.cleanup();
> builderRecycler.recycle(this, recycleHandle);
> recycleHandle = null; // ADDED
> }
> }
> {code}
> Patched a single node in our loadtest cluster with this change and after ~ 10 
> hours uptime, no sign of the previously offending class in MAT anymore => 
> cassandra_3.11.1_mat_dominator_classes_FIXED.png
> Can't say if this has any other side effects etc., but I doubt it.






[jira] [Comment Edited] (CASSANDRA-13929) BTree$Builder / io.netty.util.Recycler$Stack leaking memory

2018-06-07 Thread Jason Brown (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-13929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16504704#comment-16504704
 ] 

Jason Brown edited comment on CASSANDRA-13929 at 6/7/18 2:03 PM:
-

On the whole, this looks pretty good. I'm running tests locally now, and will 
run in circleci shortly. 

[~jay.zhuang] There's one thing I'm wondering: in {{#reuse()}}, we do not 
explicitly null out the {{values}} array, like we used to do in {{#cleanup()}}:

{code}
Arrays.fill(values, null);
{code}

While every access of {{values}} seems safe, and uses the {{count}} to ensure 
proper sizing of any built {{Object[]}}, I'd like to be over-cautious and null 
out the array to prevent the possibility of data leak wherein a reuse of 
Builder erroneously gets extra data from the last use. This error doesn't seem 
like it's possible in the current code, but I'd like to future-proof this - it 
would be a complete nightmare to debug. 

wdyt?




was (Author: jasobrown):
On the whole, this looks pretty good. I'm running tests locally now, and will 
run in circleci shortly. 

[~jay.zhuang] There's one thing I'm wondering: in {{#reuse()}}, we do not 
explicitly null out the {{values}} array, like we used to do in {{#cleanup()}}:

{code}
Arrays.fill(values, null);
{code}

While every access of {{values}} seems safe, and uses the {{count}} to ensure 
proper sizing of any built {{Object[]}}, I'd like to be over-cautious and null 
out the array to prevent the possibility of data leak wherein a reuse of 
Builder erroneously gets extra data from the last use. This error doesn't seem 
like it's possible in the current code, but I'd like to future-proof this - I'd 
be a complete nightmare to debug. 

wdyt?
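The defensive cleanup under discussion (clearing the {{values}} array and nulling the recycler handle on recycle) can be sketched in isolation. The class below is illustrative only, not Cassandra's actual BTree builder or Netty's Recycler:

```java
import java.util.Arrays;

// Illustrative pooled builder: on recycle it clears its scratch array so a
// later reuse can never observe stale entries, and nulls the pool handle so
// the handle cannot keep the builder's old contents strongly reachable.
final class PooledBuilder {
    private Object recycleHandle;                 // handle from the object pool
    private final Object[] values = new Object[8];
    private int count;

    PooledBuilder(Object handle) { this.recycleHandle = handle; }

    void add(Object v) { values[count++] = v; }

    void recycle() {
        if (recycleHandle != null) {
            Arrays.fill(values, 0, count, null);  // drop references: no data leak on reuse
            count = 0;
            // in the real code: builderRecycler.recycle(this, recycleHandle);
            recycleHandle = null;                 // the nulling added by the patch
        }
    }

    boolean isClean() {
        for (Object v : values)
            if (v != null) return false;
        return recycleHandle == null && count == 0;
    }

    public static void main(String[] args) {
        PooledBuilder b = new PooledBuilder(new Object());
        b.add("stale-value");
        b.recycle();
        System.out.println("clean-after-recycle=" + b.isClean());
    }
}
```

The Arrays.fill pass is the "over-cautious" step: even if every read path respects count, clearing the slots makes cross-use data leakage structurally impossible.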









[jira] [Commented] (CASSANDRA-13929) BTree$Builder / io.netty.util.Recycler$Stack leaking memory

2018-06-07 Thread Jason Brown (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-13929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16504704#comment-16504704
 ] 

Jason Brown commented on CASSANDRA-13929:
-

On the whole, this looks pretty good. I'm running tests locally now, and will 
run in circleci shortly. 

[~jay.zhuang] There's one thing I'm wondering: in {{#reuse()}}, we do not 
explicitly null out the {{values}} array, like we used to do in {{#cleanup()}}:

{code}
Arrays.fill(values, null);
{code}

While every access of {{values}} seems safe, and uses the {{count}} to ensure 
proper sizing of any built {{Object[]}}, I'd like to be over-cautious and null 
out the array to prevent the possibility of data leak wherein a reuse of 
Builder erroneously gets extra data from the last use. This error doesn't seem 
like it's possible in the current code, but I'd like to future-proof this - I'd 
be a complete nightmare to debug. 

wdyt?









[jira] [Updated] (CASSANDRA-13929) BTree$Builder / io.netty.util.Recycler$Stack leaking memory

2018-06-07 Thread Jason Brown (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Brown updated CASSANDRA-13929:

Reviewer: Jason Brown







[jira] [Created] (CASSANDRA-14503) Internode connection management is race-prone

2018-06-07 Thread Sergio Bossa (JIRA)
Sergio Bossa created CASSANDRA-14503:


 Summary: Internode connection management is race-prone
 Key: CASSANDRA-14503
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14503
 Project: Cassandra
  Issue Type: Bug
  Components: Streaming and Messaging
Reporter: Sergio Bossa


Following CASSANDRA-8457, internode connection management has been rewritten to 
rely on Netty, but the new implementation in {{OutboundMessagingConnection}} 
seems quite race-prone to me, in particular in these two cases:

* {{#finishHandshake()}} racing with {{#close()}}: i.e. in such a case the former 
could run into an NPE if the latter nulls the {{channelWriter}} (but this is 
just an example, other conflicts might happen).
* Connection timeout and retry racing with state changing methods: 
{{connectionRetryFuture}} and {{connectionTimeoutFuture}} are cancelled when 
handshaking or closing, but there's no guarantee those will be actually 
cancelled (as they might be already running), so they might end up changing the 
connection state concurrently with other methods (i.e. by unexpectedly closing 
the channel or clearing the backlog).

Overall, the thread safety of {{OutboundMessagingConnection}} is very difficult 
to assess given the current implementation: I would suggest refactoring it into 
a single-threaded model, where all connection-state-changing actions are enqueued 
on a single-threaded scheduler, so that state transitions can be clearly 
defined and checked.
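The single-threaded model proposed above can be sketched with a plain single-threaded executor. All names here are hypothetical, not the actual {{OutboundMessagingConnection}} API: the point is that once every state transition runs on one thread, {{finishHandshake()}} and {{close()}} can no longer interleave:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch: every state-changing action is enqueued on one executor thread,
// so transitions are applied one at a time in submission order.
final class SerializedConnection {
    enum State { CONNECTING, READY, CLOSED }

    private final ExecutorService stateThread = Executors.newSingleThreadExecutor();
    private State state = State.CONNECTING;   // only touched on stateThread

    void finishHandshake() {
        stateThread.execute(() -> {
            // A handshake that completes after close() is simply dropped,
            // instead of racing and hitting a nulled field.
            if (state == State.CONNECTING) state = State.READY;
        });
    }

    void close() {
        stateThread.execute(() -> state = State.CLOSED);
    }

    State awaitState() throws InterruptedException {
        stateThread.shutdown();               // drain queued transitions
        stateThread.awaitTermination(5, TimeUnit.SECONDS);
        return state;
    }

    public static void main(String[] args) throws InterruptedException {
        SerializedConnection c = new SerializedConnection();
        c.close();            // close is enqueued first...
        c.finishHandshake();  // ...so the late handshake becomes a no-op
        System.out.println("final-state=" + c.awaitState());
    }
}
```

Cancellation-prone futures (retry, timeout) would likewise be scheduled onto the same thread, making "cancelled but already running" transitions impossible by construction.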






[jira] [Commented] (CASSANDRA-13698) Reinstate or get rid of unit tests with multiple compaction strategies

2018-06-07 Thread Paulo Motta (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-13698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16504537#comment-16504537
 ] 

Paulo Motta commented on CASSANDRA-13698:
-

Oops, my bad, indeed it seems I applied the 3.0 patch to 3.11 and forgot to 
verify the merge - sorry about that. Fixed with 
7bb88deb4c6387fd67114543986774c903860de9. Thanks for the heads up!

> Reinstate or get rid of unit tests with multiple compaction strategies
> --
>
> Key: CASSANDRA-13698
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13698
> Project: Cassandra
>  Issue Type: Test
>  Components: Testing
>Reporter: Paulo Motta
>Assignee: Lerh Chuan Low
>Priority: Minor
>  Labels: lhf
> Fix For: 4.0, 3.0.17, 3.11.3
>
> Attachments: 13698-3.0.txt, 13698-3.11.txt, 13698-hotfix.txt, 
> 13698-trunk.txt
>
>
> At some point there were (anti-)compaction tests with multiple compaction 
> strategy classes, but now it's only tested with {{STCS}}:
> * 
> [AnticompactionTest|https://github.com/apache/cassandra/blob/8b3a60b9a7dbefeecc06bace617279612ec7092d/test/unit/org/apache/cassandra/db/compaction/AntiCompactionTest.java#L247]
> * 
> [CompactionsTest|https://github.com/apache/cassandra/blob/8b3a60b9a7dbefeecc06bace617279612ec7092d/test/unit/org/apache/cassandra/db/compaction/CompactionsTest.java#L85]
> We should either reinstate these tests or decide they are not important and 
> remove the unused parameter.
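Reinstating that coverage amounts to running the same scenario once per strategy class instead of only STCS. A hypothetical plain-Java sketch of the loop (a real Cassandra test would recreate the table with each strategy and re-run its compaction assertions):

```java
import java.util.List;

public class PerStrategyScenario {
    // Strategy class names to exercise; the list itself is illustrative.
    static final List<String> STRATEGIES = List.of(
            "SizeTieredCompactionStrategy",
            "LeveledCompactionStrategy");

    static void runScenario(String strategy) {
        // Stand-in for: alter the table to this compaction strategy,
        // trigger (anti-)compaction, and assert on the resulting sstables.
        System.out.println("ran with " + strategy);
    }

    public static void main(String[] args) {
        // The currently unused strategy parameter would be threaded in here.
        STRATEGIES.forEach(PerStrategyScenario::runScenario);
    }
}
```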






[2/3] cassandra git commit: ninja: Fix bad CASSANDRA-13698 merge from 3.0 to 3.11

2018-06-07 Thread paulo
ninja: Fix bad CASSANDRA-13698 merge from 3.0 to 3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7bb88deb
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7bb88deb
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7bb88deb

Branch: refs/heads/trunk
Commit: 7bb88deb4c6387fd67114543986774c903860de9
Parents: 02e9ddf
Author: Paulo Motta 
Authored: Thu Jun 7 07:47:21 2018 -0300
Committer: Paulo Motta 
Committed: Thu Jun 7 07:51:09 2018 -0300

--
 .../org/apache/cassandra/db/compaction/CompactionsTest.java  | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7bb88deb/test/unit/org/apache/cassandra/db/compaction/CompactionsTest.java
--
diff --git a/test/unit/org/apache/cassandra/db/compaction/CompactionsTest.java 
b/test/unit/org/apache/cassandra/db/compaction/CompactionsTest.java
index ad138bf..c1bddd1 100644
--- a/test/unit/org/apache/cassandra/db/compaction/CompactionsTest.java
+++ b/test/unit/org/apache/cassandra/db/compaction/CompactionsTest.java
@@ -336,8 +336,8 @@ public class CompactionsTest
 {
 RowUpdateBuilder deletedRowUpdateBuilder = new RowUpdateBuilder(table, 1, Util.dk(Integer.toString(dk)));
 deletedRowUpdateBuilder.clustering("01").add("val", "a"); //Range tombstone covers this (timestamp 2 > 1)
-Clustering startClustering = new Clustering(ByteBufferUtil.bytes("0"));
-Clustering endClustering = new Clustering(ByteBufferUtil.bytes("b"));
+Clustering startClustering = Clustering.make(ByteBufferUtil.bytes("0"));
+Clustering endClustering = Clustering.make(ByteBufferUtil.bytes("b"));
 deletedRowUpdateBuilder.addRangeTombstone(new RangeTombstone(Slice.make(startClustering, endClustering), new DeletionTime(2, (int) (System.currentTimeMillis() / 1000))));
 deletedRowUpdateBuilder.build().applyUnsafe();

@@ -389,8 +389,8 @@ public class CompactionsTest
 {
 k.add(p.partitionKey());
 final SinglePartitionReadCommand command = SinglePartitionReadCommand.create(cfs.metadata, FBUtilities.nowInSeconds(), ColumnFilter.all(cfs.metadata), RowFilter.NONE, DataLimits.NONE, p.partitionKey(), new ClusteringIndexSliceFilter(Slices.ALL, false));
-try (ReadOrderGroup orderGroup = command.startOrderGroup();
- PartitionIterator iterator = command.executeInternal(orderGroup))
+try (ReadExecutionController executionController = command.executionController();
+ PartitionIterator iterator = command.executeInternal(executionController))
 {
 try (RowIterator rowIterator = iterator.next())
 {





[1/3] cassandra git commit: ninja: Fix bad CASSANDRA-13698 merge from 3.0 to 3.11

2018-06-07 Thread paulo
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.11 02e9ddfad -> 7bb88deb4
  refs/heads/trunk d3b6a67bb -> 84fb7fa6f


ninja: Fix bad CASSANDRA-13698 merge from 3.0 to 3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7bb88deb
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7bb88deb
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7bb88deb

Branch: refs/heads/cassandra-3.11
Commit: 7bb88deb4c6387fd67114543986774c903860de9
Parents: 02e9ddf
Author: Paulo Motta 
Authored: Thu Jun 7 07:47:21 2018 -0300
Committer: Paulo Motta 
Committed: Thu Jun 7 07:51:09 2018 -0300

--
 .../org/apache/cassandra/db/compaction/CompactionsTest.java  | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7bb88deb/test/unit/org/apache/cassandra/db/compaction/CompactionsTest.java
--
diff --git a/test/unit/org/apache/cassandra/db/compaction/CompactionsTest.java 
b/test/unit/org/apache/cassandra/db/compaction/CompactionsTest.java
index ad138bf..c1bddd1 100644
--- a/test/unit/org/apache/cassandra/db/compaction/CompactionsTest.java
+++ b/test/unit/org/apache/cassandra/db/compaction/CompactionsTest.java
@@ -336,8 +336,8 @@ public class CompactionsTest
 {
 RowUpdateBuilder deletedRowUpdateBuilder = new RowUpdateBuilder(table, 1, Util.dk(Integer.toString(dk)));
 deletedRowUpdateBuilder.clustering("01").add("val", "a"); //Range tombstone covers this (timestamp 2 > 1)
-Clustering startClustering = new Clustering(ByteBufferUtil.bytes("0"));
-Clustering endClustering = new Clustering(ByteBufferUtil.bytes("b"));
+Clustering startClustering = Clustering.make(ByteBufferUtil.bytes("0"));
+Clustering endClustering = Clustering.make(ByteBufferUtil.bytes("b"));
 deletedRowUpdateBuilder.addRangeTombstone(new RangeTombstone(Slice.make(startClustering, endClustering), new DeletionTime(2, (int) (System.currentTimeMillis() / 1000))));
 deletedRowUpdateBuilder.build().applyUnsafe();

@@ -389,8 +389,8 @@ public class CompactionsTest
 {
 k.add(p.partitionKey());
 final SinglePartitionReadCommand command = SinglePartitionReadCommand.create(cfs.metadata, FBUtilities.nowInSeconds(), ColumnFilter.all(cfs.metadata), RowFilter.NONE, DataLimits.NONE, p.partitionKey(), new ClusteringIndexSliceFilter(Slices.ALL, false));
-try (ReadOrderGroup orderGroup = command.startOrderGroup();
- PartitionIterator iterator = command.executeInternal(orderGroup))
+try (ReadExecutionController executionController = command.executionController();
+ PartitionIterator iterator = command.executeInternal(executionController))
 {
 try (RowIterator rowIterator = iterator.next())
 {





[3/3] cassandra git commit: Merge branch 'cassandra-3.11' into trunk

2018-06-07 Thread paulo
Merge branch 'cassandra-3.11' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/84fb7fa6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/84fb7fa6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/84fb7fa6

Branch: refs/heads/trunk
Commit: 84fb7fa6f1b2f2771ba77a64f398cddd00bac914
Parents: d3b6a67 7bb88de
Author: Paulo Motta 
Authored: Thu Jun 7 07:51:44 2018 -0300
Committer: Paulo Motta 
Committed: Thu Jun 7 07:51:44 2018 -0300

--

--






[jira] [Created] (CASSANDRA-14502) toDate() CQL function is instantiated for wrong argument type

2018-06-07 Thread Piotr Sarna (JIRA)
Piotr Sarna created CASSANDRA-14502:
---

 Summary: toDate() CQL function is instantiated for wrong argument 
type
 Key: CASSANDRA-14502
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14502
 Project: Cassandra
  Issue Type: Bug
  Components: CQL
Reporter: Piotr Sarna
 Fix For: 4.0.x


The {{toDate()}} function is instantiated to work for {{timeuuid}} and {{date}}
argument types instead of {{timeuuid}} and {{timestamp}}, as stated in this
documentation:
[http://cassandra.apache.org/doc/latest/cql/functions.html#datetime-functions]

As a result, it's possible to convert a {{date}} into a {{date}}, but not a
{{timestamp}} into a {{date}}, which is probably what was meant.
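
For illustration, the behavior described above could be reproduced with queries
like these (a sketch; the table and column names are hypothetical):

```sql
-- Works today: converting a date into a date (effectively a no-op).
SELECT toDate(date_col) FROM ks.tbl;

-- Documented to work (timestamp -> date), but rejected due to this bug.
SELECT toDate(ts_col) FROM ks.tbl;
```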






[jira] [Commented] (CASSANDRA-14498) Audit log does not include statements on some system keyspaces

2018-06-07 Thread JIRA


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16504296#comment-16504296
 ] 

Per Otterström commented on CASSANDRA-14498:


bq. are there any use cases where you would want to audit system keyspaces?

One use case would be to get audit logs on all operations from selected users.

bq. auditing these generate lot of noise as C* calls system keyspaces in many 
places

Internal calls in C* will not come through the audit logger, right? I've
observed that client drivers will emit some queries on their own, typically
when a user logs in or when there is a schema change. But that only represents
a fraction of all operations coming from a client.

The problem I see with a hard-coded filter is that it will filter out not only
queries from the driver, but also any query issued by the client application on
those keyspaces.

The decision should rest with the administrator of the cluster, and it will
still be possible to whitelist these queries through configuration. We could
add some documentation on this so that users are not surprised when they see
queries in the log that they didn't expect.
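
As a sketch, operator-controlled filtering could look something like this in
cassandra.yaml (the option names follow the audit logging work in progress for
4.0 and may differ in the final form):

```yaml
audit_logging_options:
    enabled: true
    logger: BinAuditLogger
    # Leaving this empty would mean "audit everything", including system
    # keyspaces; operators who consider them noise could opt out explicitly:
    excluded_keyspaces: system, system_schema
```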

> Audit log does not include statements on some system keyspaces
> --
>
> Key: CASSANDRA-14498
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14498
> Project: Cassandra
>  Issue Type: Bug
>  Components: Auth
>Reporter: Per Otterström
>Priority: Major
>  Labels: audit, lhf, security
> Fix For: 4.0
>
>
> Audit logs do not include statements on the "system" and "system_schema"
> keyspaces.
> It may be a common use case to whitelist queries on these keyspaces, but
> Cassandra should not make assumptions. Users who don't want these statements
> in their audit log are still able to whitelist them with configuration.


