[jira] [Commented] (CASSANDRA-15460) Fix missing call to enable RPC after native transport is started in in-jvm dtests

2019-12-19 Thread Yifan Cai (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17000572#comment-17000572
 ] 

Yifan Cai commented on CASSANDRA-15460:
---

Thanks [~drohrer]. The patch LGTM. +1

> Fix missing call to enable RPC after native transport is started in in-jvm 
> dtests
> -
>
> Key: CASSANDRA-15460
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15460
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/dtest
>Reporter: Doug Rohrer
>Assignee: Doug Rohrer
>Priority: Normal
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When starting the native transport, the original patch missed the step of 
> calling {{StorageService.instance.setRpcReady(true);}}. This appears to only 
> be required for counter columns, but without it you can't update a counter 
> value.
> We should add this call after starting up the native transport, and set it to 
> {{false}} during the shutdown sequence to mimic the production code.
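
For illustration, the intended ordering is roughly the following; apart from 
{{setRpcReady}}, the names are placeholders rather than the actual in-jvm dtest 
API:

{noformat}
import org.apache.cassandra.service.StorageService;

// Sketch of the intended ordering; apart from setRpcReady, the
// method names are placeholders, not the actual in-jvm dtest API.
class NativeTransportLifecycleSketch
{
    void startNativeTransport()
    {
        startProtocolServer();                       // placeholder start call
        StorageService.instance.setRpcReady(true);   // the missing call; required
    }                                                // for counter updates

    void shutdownNativeTransport()
    {
        StorageService.instance.setRpcReady(false);  // mirrors the production
        stopProtocolServer();                        // shutdown sequence
    }

    void startProtocolServer() { /* placeholder */ }
    void stopProtocolServer()  { /* placeholder */ }
}
{noformat}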



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15460) Fix missing call to enable RPC after native transport is started in in-jvm dtests

2019-12-19 Thread Yifan Cai (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yifan Cai updated CASSANDRA-15460:
--
Reviewers: Alex Petrov, Yifan Cai  (was: Alex Petrov)

> Fix missing call to enable RPC after native transport is started in in-jvm 
> dtests
> -
>
> Key: CASSANDRA-15460
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15460
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/dtest
>Reporter: Doug Rohrer
>Assignee: Doug Rohrer
>Priority: Normal
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When starting the native transport, the original patch missed the step of 
> calling {{StorageService.instance.setRpcReady(true);}}. This appears to only 
> be required for counter columns, but without it you can't update a counter 
> value.
> We should add this call after starting up the native transport, and set it to 
> {{false}} during the shutdown sequence to mimic the production code.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-15397) IntervalTree performance comparison with Linear Walk and Binary Search based Elimination.

2019-12-19 Thread Benedict Elliott Smith (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17000505#comment-17000505
 ] 

Benedict Elliott Smith commented on CASSANDRA-15397:


The code looks pretty simple and clean, though I will need to look more 
carefully before we consider merging. We would want to rename the class, since 
it's no longer a tree, and we would probably want to avoid the extra work of 
going via streams (which looks like it would allocate O(n) extra data).

As it stands, I would probably want to see more performance comparisons, 
particularly out to more outlandish numbers of sstables (at least one million), 
and also for excessively skewed distributions. The benefits shown in your 
graphs are modest in absolute terms, and the potential algorithmic harms of a 
linear scan carry significant downside risk, so we need to be sure we have 
properly established what that risk might be.

There are some potentially simple improvements to this approach that would make 
it more desirable: instead of maintaining two lists of interval objects, we 
could maintain four {{long[]}}: two matched pairs of {{long[]}}, each pair 
representing one of the two current sorted lists. One would be used for the 
binary search, the other for the linear scan. This should dramatically improve 
the constant factors, only needing to support post-filtering for e.g. 
{{RandomPartitioner}}, as we would only be able to filter on a prefix for those 
tokens > 8 bytes.

Although this approach would require slightly more involved modifications, and 
a bit more verification, the win should be much more pronounced.  Is this 
something you'd be willing to try?
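
For illustration, one matched pair might look like this (a minimal sketch under 
the assumptions above; the names are illustrative, and the second pair, sorted 
by interval end, would serve the symmetric case):

{noformat}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Illustrative only: one matched pair of long[] replacing a list of
// interval objects. starts is sorted ascending; ends[i] belongs to starts[i].
final class IntervalArraysSketch
{
    final long[] starts;
    final long[] ends;

    IntervalArraysSketch(long[] starts, long[] ends)
    {
        this.starts = starts;
        this.ends = ends;
    }

    // Binary search eliminates every interval starting after searchEnd,
    // then a linear scan of the matched ends finds the actual overlaps.
    List<Integer> search(long searchStart, long searchEnd)
    {
        int hi = Arrays.binarySearch(starts, searchEnd);
        if (hi < 0)
            hi = -hi - 1;                      // first start > searchEnd
        else
            while (hi < starts.length && starts[hi] <= searchEnd) hi++;
        List<Integer> matches = new ArrayList<>();
        for (int i = 0; i < hi; i++)
            if (ends[i] >= searchStart)        // overlap test on the scanned half
                matches.add(i);                // index back into the sstable list
        return matches;
    }
}
{noformat}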

> IntervalTree performance comparison with Linear Walk and Binary Search based 
> Elimination. 
> --
>
> Key: CASSANDRA-15397
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15397
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/SSTable
>Reporter: Chandrasekhar Thumuluru
>Assignee: Chandrasekhar Thumuluru
>Priority: Low
>  Labels: pull-request-available
> Attachments: 95p_1_SSTable_with_5000_Searches.png, 
> 95p_15000_SSTable_with_5000_Searches.png, 
> 95p_2_SSTable_with_5000_Searches.png, 
> 95p_25000_SSTable_with_5000_Searches.png, 
> 95p_3_SSTable_with_5000_Searches.png, 
> 95p_5000_SSTable_with_5000_Searches.png, 
> 99p_1_SSTable_with_5000_Searches.png, 
> 99p_15000_SSTable_with_5000_Searches.png, 
> 99p_2_SSTable_with_5000_Searches.png, 
> 99p_25000_SSTable_with_5000_Searches.png, 
> 99p_3_SSTable_with_5000_Searches.png, 
> 99p_5000_SSTable_with_5000_Searches.png, IntervalList.java, 
> IntervalListWithElimination.java, IntervalTreeSimplified.java, 
> Mean_1_SSTable_with_5000_Searches.png, 
> Mean_15000_SSTable_with_5000_Searches.png, 
> Mean_2_SSTable_with_5000_Searches.png, 
> Mean_25000_SSTable_with_5000_Searches.png, 
> Mean_3_SSTable_with_5000_Searches.png, 
> Mean_5000_SSTable_with_5000_Searches.png, TESTS-TestSuites.xml.lz4, 
> replace_intervaltree_with_intervallist.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Cassandra uses IntervalTrees to identify the SSTables that overlap with the 
> search interval. In Cassandra, IntervalTrees are not mutated; they are 
> recreated each time a mutation is required. This can be an issue during 
> repairs, and in fact we noticed such issues during repair. 
> Since lists are cache friendly compared to linked lists and trees, I decided 
> to compare the search performance with:
> * Linear Walk.
> * Elimination using Binary Search (the idea is to eliminate intervals using 
> the start and end points of the search interval). 
> Based on the tests I ran, I noticed Binary Search based elimination almost 
> always performs similarly to, or outperforms, IntervalTree based search. The 
> cost of IntervalTree construction is also substantial and produces a lot of 
> garbage during repairs. 
> I ran the tests using random intervals to build the tree/lists and another 
> randomly generated search interval, with 5000 iterations. I'm attaching all 
> the relevant graphs. The x-axis in the graphs is the search interval 
> coverage: 10p means the search interval covered 10% of the intervals. The 
> y-axis is the time the search took in nanos. 
> PS: 
> # For the purposes of the test, I simplified the IntervalTree by removing the 
> data portion of the interval, and converted the generic (templated) version 
> to a specialized one. 
> # I used the code from Cassandra version _3.11_.
> # Time in the graphs is in nanos. 
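
As a rough illustration of the methodology described above (an illustrative 
harness only, not the attached IntervalList.java; the linear walk is shown as 
the baseline, and the coverage approximation is an assumption):

{noformat}
import java.util.Random;

// Illustrative harness: random intervals, randomly generated search
// intervals, 5000 iterations, results in nanos.
public class IntervalSearchHarness
{
    public static void main(String[] args)
    {
        Random rnd = new Random(42);
        int n = 25_000;                                  // interval (sstable) count
        long bound = 1L << 48;                           // keep tokens in a bounded domain
        long[] starts = new long[n], ends = new long[n];
        for (int i = 0; i < n; i++)
        {
            long a = (rnd.nextLong() & Long.MAX_VALUE) % bound;
            long b = (rnd.nextLong() & Long.MAX_VALUE) % bound;
            starts[i] = Math.min(a, b);
            ends[i] = Math.max(a, b);
        }
        long elapsed = 0, hits = 0;
        for (int iter = 0; iter < 5000; iter++)
        {
            long s = (rnd.nextLong() & Long.MAX_VALUE) % bound;
            long e = Math.min(bound - 1, s + bound / 10); // rough stand-in for "10p" coverage
            long t0 = System.nanoTime();
            for (int i = 0; i < n; i++)                  // linear walk baseline
                if (starts[i] <= e && ends[i] >= s)
                    hits++;
            elapsed += System.nanoTime() - t0;
        }
        System.out.printf("mean ns/search = %d (hits=%d)%n", elapsed / 5000, hits);
    }
}
{noformat}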



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org

[jira] [Comment Edited] (CASSANDRA-15397) IntervalTree performance comparison with Linear Walk and Binary Search based Elimination.

2019-12-19 Thread Benedict Elliott Smith (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17000505#comment-17000505
 ] 

Benedict Elliott Smith edited comment on CASSANDRA-15397 at 12/20/19 12:21 AM:
---

The code looks pretty simple and clean, though I will need to look more 
carefully before we consider merging. We would want to rename the class, since 
it's no longer a tree, and we would probably want to avoid the extra work of 
going via streams (which looks like it would allocate O(n) extra data).

As it stands, I would probably want to see more performance comparisons, 
particularly out to more outlandish numbers of sstables (at least one million), 
and also for excessively skewed distributions. The benefits shown in your 
graphs are modest in absolute terms, and the potential algorithmic harms of a 
linear scan carry significant downside risk, so we need to be sure we have 
properly established what that risk might be.

There are some potentially simple improvements to this approach that would make 
it more desirable: instead of maintaining two lists of interval objects, we 
could maintain four {{long[]}}: two matched pairs of {{long[]}}, each pair 
representing one of the two current sorted lists. One would be used for the 
binary search, the other for the linear scan. This should dramatically improve 
the constant factors, only needing to support post-filtering for e.g. 
{{RandomPartitioner}}, as we would only be able to filter on a prefix for those 
tokens > 8 bytes.

Although this approach would require slightly more involved modifications, and 
a bit more verification, the win should be much more pronounced.  Is this 
something you'd be willing to try?


was (Author: benedict):
The code looks pretty simple and clean, though I will need to look more 
carefully before we consider merging. We would want to rename the class, since 
it's no longer a tree, and we would probably want to avoid the extra work of 
going via streams (which looks like it would allocate O(n) extra data).

As it stands, I would probably want to see more performance comparisons, 
particularly out to more outlandish numbers of sstables (at least one million), 
and also for excessively skewed distributions. The benefits shown in your 
graphs are modest in absolute terms, and the potential algorithmic harms of a 
linear scan carry significant downside risk, so we need to be sure we have 
properly established what that risk might be.

There are some potentially simple improvements to this approach that would make 
it more desirable: instead of maintaining two lists of interval objects, we 
could maintain four {{long[]}}: two matched pairs of {{long[]}}, each pair 
representing one of the two current sorted lists. One would be used for the 
binary search, the other for the linear scan. This should dramatically improve 
the constant factors, only needing to support post-filtering for e.g. 
{{RandomPartitioner}}, as we would only be able to filter on a prefix for those 
tokens > 8 bytes.

Although this approach would require slightly more involved modifications, and 
a bit more verification, the win should be much more pronounced.  Is this 
something you'd be willing to try?

> IntervalTree performance comparison with Linear Walk and Binary Search based 
> Elimination. 
> --
>
> Key: CASSANDRA-15397
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15397
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/SSTable
>Reporter: Chandrasekhar Thumuluru
>Assignee: Chandrasekhar Thumuluru
>Priority: Low
>  Labels: pull-request-available
> Attachments: 95p_1_SSTable_with_5000_Searches.png, 
> 95p_15000_SSTable_with_5000_Searches.png, 
> 95p_2_SSTable_with_5000_Searches.png, 
> 95p_25000_SSTable_with_5000_Searches.png, 
> 95p_3_SSTable_with_5000_Searches.png, 
> 95p_5000_SSTable_with_5000_Searches.png, 
> 99p_1_SSTable_with_5000_Searches.png, 
> 99p_15000_SSTable_with_5000_Searches.png, 
> 99p_2_SSTable_with_5000_Searches.png, 
> 99p_25000_SSTable_with_5000_Searches.png, 
> 99p_3_SSTable_with_5000_Searches.png, 
> 99p_5000_SSTable_with_5000_Searches.png, IntervalList.java, 
> IntervalListWithElimination.java, IntervalTreeSimplified.java, 
> Mean_1_SSTable_with_5000_Searches.png, 
> Mean_15000_SSTable_with_5000_Searches.png, 
> Mean_2_SSTable_with_5000_Searches.png, 
> Mean_25000_SSTable_with_5000_Searches.png, 
> Mean_3_SSTable_with_5000_Searches.png, 
> Mean_5000_SSTable_with_5000_Searches.png, TESTS-TestSuites.xml.lz4, 
> replace_intervaltree_with_intervallist.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Cassandra uses IntervalTrees to identify the SSTables that overlap 

[jira] [Created] (CASSANDRA-15463) Fix in-jvm dtest java 11 compatibility

2019-12-19 Thread Blake Eggleston (Jira)
Blake Eggleston created CASSANDRA-15463:
---

 Summary: Fix in-jvm dtest java 11 compatibility
 Key: CASSANDRA-15463
 URL: https://issues.apache.org/jira/browse/CASSANDRA-15463
 Project: Cassandra
  Issue Type: Bug
  Components: Test/dtest
Reporter: Blake Eggleston
Assignee: Blake Eggleston


The URL classloader used by the in-jvm dtests is not accessible by default in 
Java 11.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15463) Fix in-jvm dtest java 11 compatibility

2019-12-19 Thread Blake Eggleston (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston updated CASSANDRA-15463:

 Bug Category: Parent values: Code(13163)Level 1 values: Bug - Unclear 
Impact(13164)
   Complexity: Low Hanging Fruit
Discovered By: User Report
Fix Version/s: 4.0-alpha
 Severity: Low
   Status: Open  (was: Triage Needed)

> Fix in-jvm dtest java 11 compatibility
> --
>
> Key: CASSANDRA-15463
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15463
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/dtest
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
>Priority: Normal
> Fix For: 4.0-alpha
>
>
> The URL classloader used by the in-jvm dtests is not accessible by default in 
> Java 11.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14306) Single config variable to specify logs path

2019-12-19 Thread Michael Semb Wever (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Semb Wever updated CASSANDRA-14306:
---
Authors: Angelo Polo
Test and Documentation Plan: ??
 Status: Patch Available  (was: Open)

> Single config variable to specify logs path
> ---
>
> Key: CASSANDRA-14306
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14306
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Config
>Reporter: Angelo Polo
>Priority: Low
> Attachments: unified_logs_dir.patch
>
>
> Motivation: All configuration should take place in bin/cassandra.in.sh (for 
> non-Windows) and the various conf/ files. In particular, bin/cassandra should 
> not need to be modified upon installation. In many installs, $CASSANDRA_HOME 
> is not a writable location, the yaml setting 'data_file_directories' is being 
> set to a non-default location, etc. It would be good to have a single 
> variable in an explicit conf file to specify where logs should be written.
> For non-Windows installs, there are currently two places where the log 
> directory is set: in conf/cassandra-env.sh and in bin/cassandra. The defaults 
> for these are both $CASSANDRA_HOME/logs. These can be unified to a single 
> variable CASSANDRA_LOGS that is set in conf/cassandra-env.sh, with the 
> intention that it would be modified once there (if not set in the 
> environment) by a user running a custom installation. Then include a check in 
> bin/cassandra that CASSANDRA_LOGS is set in case conf/cassandra-env.sh 
> doesn't get sourced on startup, and provide a default value if not. For the 
> scenario that a user would prefer different paths for the logback logs and 
> the GC logs, they can still go into bin/cassandra to set the second path, 
> just as they would do currently. See "unified_logs_dir.patch" for a proposed 
> patch. 
> No change seems necessary for the Windows scripts. The two uses of 
> $CASSANDRA_HOME/logs are in the same script conf/cassandra-env.ps1 within 
> scrolling distance of each other (lines 278-301). I suppose they haven't been 
> combined because of the different path separators in the two usages.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14306) Single config variable to specify logs path

2019-12-19 Thread Michael Semb Wever (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Semb Wever updated CASSANDRA-14306:
---
Reviewers: Michael Semb Wever, Michael Semb Wever  (was: Michael Semb Wever)
   Michael Semb Wever, Michael Semb Wever
   Status: Review In Progress  (was: Patch Available)

> Single config variable to specify logs path
> ---
>
> Key: CASSANDRA-14306
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14306
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Config
>Reporter: Angelo Polo
>Priority: Low
> Attachments: unified_logs_dir.patch
>
>
> Motivation: All configuration should take place in bin/cassandra.in.sh (for 
> non-Windows) and the various conf/ files. In particular, bin/cassandra should 
> not need to be modified upon installation. In many installs, $CASSANDRA_HOME 
> is not a writable location, the yaml setting 'data_file_directories' is being 
> set to a non-default location, etc. It would be good to have a single 
> variable in an explicit conf file to specify where logs should be written.
> For non-Windows installs, there are currently two places where the log 
> directory is set: in conf/cassandra-env.sh and in bin/cassandra. The defaults 
> for these are both $CASSANDRA_HOME/logs. These can be unified to a single 
> variable CASSANDRA_LOGS that is set in conf/cassandra-env.sh, with the 
> intention that it would be modified once there (if not set in the 
> environment) by a user running a custom installation. Then include a check in 
> bin/cassandra that CASSANDRA_LOGS is set in case conf/cassandra-env.sh 
> doesn't get sourced on startup, and provide a default value if not. For the 
> scenario that a user would prefer different paths for the logback logs and 
> the GC logs, they can still go into bin/cassandra to set the second path, 
> just as they would do currently. See "unified_logs_dir.patch" for a proposed 
> patch. 
> No change seems necessary for the Windows scripts. The two uses of 
> $CASSANDRA_HOME/logs are in the same script conf/cassandra-env.ps1 within 
> scrolling distance of each other (lines 278-301). I suppose they haven't been 
> combined because of the different path separators in the two usages.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14688) Update protocol spec and class level doc with protocol checksumming details

2019-12-19 Thread Chris Splinter (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17000224#comment-17000224
 ] 

Chris Splinter commented on CASSANDRA-14688:


[~samt] is there anything else needed here to get this in? 

> Update protocol spec and class level doc with protocol checksumming details
> ---
>
> Key: CASSANDRA-14688
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14688
> Project: Cassandra
>  Issue Type: Task
>  Components: Legacy/Documentation and Website
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
>Priority: Normal
>  Labels: protocolv5
> Fix For: 4.0, 4.0-beta
>
>
> CASSANDRA-13304 provides an option to add checksumming to the frame body of 
> native protocol messages. The native protocol spec needs to be updated to 
> reflect this ASAP. We should also verify that the javadoc comments describing 
> the on-wire format in 
> {{o.a.c.transport.frame.checksum.ChecksummingTransformer}} are up to date.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-15397) IntervalTree performance comparison with Linear Walk and Binary Search based Elimination.

2019-12-19 Thread Chandrasekhar Thumuluru (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17000219#comment-17000219
 ] 

Chandrasekhar Thumuluru commented on CASSANDRA-15397:
-

[~benedict] — I posted the changes to my branch and created a 
[PR|https://github.com/apache/cassandra/pull/400]. Please share your comments 
when you have time. Thanks. 

> IntervalTree performance comparison with Linear Walk and Binary Search based 
> Elimination. 
> --
>
> Key: CASSANDRA-15397
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15397
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/SSTable
>Reporter: Chandrasekhar Thumuluru
>Assignee: Chandrasekhar Thumuluru
>Priority: Low
>  Labels: pull-request-available
> Attachments: 95p_1_SSTable_with_5000_Searches.png, 
> 95p_15000_SSTable_with_5000_Searches.png, 
> 95p_2_SSTable_with_5000_Searches.png, 
> 95p_25000_SSTable_with_5000_Searches.png, 
> 95p_3_SSTable_with_5000_Searches.png, 
> 95p_5000_SSTable_with_5000_Searches.png, 
> 99p_1_SSTable_with_5000_Searches.png, 
> 99p_15000_SSTable_with_5000_Searches.png, 
> 99p_2_SSTable_with_5000_Searches.png, 
> 99p_25000_SSTable_with_5000_Searches.png, 
> 99p_3_SSTable_with_5000_Searches.png, 
> 99p_5000_SSTable_with_5000_Searches.png, IntervalList.java, 
> IntervalListWithElimination.java, IntervalTreeSimplified.java, 
> Mean_1_SSTable_with_5000_Searches.png, 
> Mean_15000_SSTable_with_5000_Searches.png, 
> Mean_2_SSTable_with_5000_Searches.png, 
> Mean_25000_SSTable_with_5000_Searches.png, 
> Mean_3_SSTable_with_5000_Searches.png, 
> Mean_5000_SSTable_with_5000_Searches.png, TESTS-TestSuites.xml.lz4, 
> replace_intervaltree_with_intervallist.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Cassandra uses IntervalTrees to identify the SSTables that overlap with the 
> search interval. In Cassandra, IntervalTrees are not mutated; they are 
> recreated each time a mutation is required. This can be an issue during 
> repairs, and in fact we noticed such issues during repair. 
> Since lists are cache friendly compared to linked lists and trees, I decided 
> to compare the search performance with:
> * Linear Walk.
> * Elimination using Binary Search (the idea is to eliminate intervals using 
> the start and end points of the search interval). 
> Based on the tests I ran, I noticed Binary Search based elimination almost 
> always performs similarly to, or outperforms, IntervalTree based search. The 
> cost of IntervalTree construction is also substantial and produces a lot of 
> garbage during repairs. 
> I ran the tests using random intervals to build the tree/lists and another 
> randomly generated search interval, with 5000 iterations. I'm attaching all 
> the relevant graphs. The x-axis in the graphs is the search interval 
> coverage: 10p means the search interval covered 10% of the intervals. The 
> y-axis is the time the search took in nanos. 
> PS: 
> # For the purposes of the test, I simplified the IntervalTree by removing the 
> data portion of the interval, and converted the generic (templated) version 
> to a specialized one. 
> # I used the code from Cassandra version _3.11_.
> # Time in the graphs is in nanos. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15397) IntervalTree performance comparison with Linear Walk and Binary Search based Elimination.

2019-12-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated CASSANDRA-15397:
---
Labels: pull-request-available  (was: )

> IntervalTree performance comparison with Linear Walk and Binary Search based 
> Elimination. 
> --
>
> Key: CASSANDRA-15397
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15397
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/SSTable
>Reporter: Chandrasekhar Thumuluru
>Assignee: Chandrasekhar Thumuluru
>Priority: Low
>  Labels: pull-request-available
> Attachments: 95p_1_SSTable_with_5000_Searches.png, 
> 95p_15000_SSTable_with_5000_Searches.png, 
> 95p_2_SSTable_with_5000_Searches.png, 
> 95p_25000_SSTable_with_5000_Searches.png, 
> 95p_3_SSTable_with_5000_Searches.png, 
> 95p_5000_SSTable_with_5000_Searches.png, 
> 99p_1_SSTable_with_5000_Searches.png, 
> 99p_15000_SSTable_with_5000_Searches.png, 
> 99p_2_SSTable_with_5000_Searches.png, 
> 99p_25000_SSTable_with_5000_Searches.png, 
> 99p_3_SSTable_with_5000_Searches.png, 
> 99p_5000_SSTable_with_5000_Searches.png, IntervalList.java, 
> IntervalListWithElimination.java, IntervalTreeSimplified.java, 
> Mean_1_SSTable_with_5000_Searches.png, 
> Mean_15000_SSTable_with_5000_Searches.png, 
> Mean_2_SSTable_with_5000_Searches.png, 
> Mean_25000_SSTable_with_5000_Searches.png, 
> Mean_3_SSTable_with_5000_Searches.png, 
> Mean_5000_SSTable_with_5000_Searches.png, TESTS-TestSuites.xml.lz4, 
> replace_intervaltree_with_intervallist.patch
>
>
> Cassandra uses IntervalTrees to identify the SSTables that overlap with the 
> search interval. In Cassandra, IntervalTrees are not mutated; they are 
> recreated each time a mutation is required. This can be an issue during 
> repairs, and in fact we noticed such issues during repair. 
> Since lists are cache friendly compared to linked lists and trees, I decided 
> to compare the search performance with:
> * Linear Walk.
> * Elimination using Binary Search (the idea is to eliminate intervals using 
> the start and end points of the search interval). 
> Based on the tests I ran, I noticed Binary Search based elimination almost 
> always performs similarly to, or outperforms, IntervalTree based search. The 
> cost of IntervalTree construction is also substantial and produces a lot of 
> garbage during repairs. 
> I ran the tests using random intervals to build the tree/lists and another 
> randomly generated search interval, with 5000 iterations. I'm attaching all 
> the relevant graphs. The x-axis in the graphs is the search interval 
> coverage: 10p means the search interval covered 10% of the intervals. The 
> y-axis is the time the search took in nanos. 
> PS: 
> # For the purposes of the test, I simplified the IntervalTree by removing the 
> data portion of the interval, and converted the generic (templated) version 
> to a specialized one. 
> # I used the code from Cassandra version _3.11_.
> # Time in the graphs is in nanos. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13938) Default repair is broken, crashes other nodes participating in repair (in trunk)

2019-12-19 Thread Aleksey Yeschenko (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-13938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17000203#comment-17000203
 ] 

Aleksey Yeschenko commented on CASSANDRA-13938:
---

As multiple folks have noticed, {{current}} tracking is indeed busted. And, as 
also noticed, we don't really need to track {{current}}: it can always be 
inferred from the uncompressed offset of the current chunk plus the position 
within the current buffer. So all we need to track is the offset of the current 
buffer itself.

Tracking it is actually rather trivial, since there are two distinct cases when 
we move to the next chunk:

1. A {{reBuffer()}} call upon exhaustion of the previous buffer, while still 
not done with the current partition position range. In this case we are moving 
to the adjacent compressed chunk, and we should bump the offset by the 
uncompressed chunk length, which is a fixed value for a given session;
2. We are skipping to the next partition position range via a {{position(long 
position)}} call; it might be in the current chunk, or it might skip 1..n 
compressed chunks. In either case, the new offset and buffer position are 
derived from the new {{position}} argument only (see the sketch below).
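
A schematic of that bookkeeping, with placeholder names rather than the actual 
{{CompressedInputStream}} fields:

{noformat}
// Placeholder names; the real fields would live in CompressedInputStream.
final class ChunkOffsetSketch
{
    long chunkOffset;        // uncompressed offset of the current chunk
    int  bufferPosition;     // position within the current buffer
    final long chunkLength;  // fixed uncompressed chunk length for the session

    ChunkOffsetSketch(long chunkLength) { this.chunkLength = chunkLength; }

    // Case 1: buffer exhausted mid-range; the next chunk is adjacent.
    void onReBuffer()
    {
        chunkOffset += chunkLength;
        bufferPosition = 0;
    }

    // Case 2: skipping to a new partition position range; both values
    // derive from the requested position alone.
    void onPosition(long position)
    {
        chunkOffset = position - (position % chunkLength);
        bufferPosition = (int) (position % chunkLength);
    }

    // 'current' no longer needs explicit tracking:
    long current() { return chunkOffset + bufferPosition; }
}
{noformat}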

There is another bug in the current implementation: if the compressed buffer 
exceeds the length of {{info.parameters.chunkLength()}} (if the data was poorly 
compressible, for example, and blew up in size upon compression instead of 
shrinking), then the read code in {{Reader#runMayThrow()}} wouldn't be able to 
read that chunk fully into the temporary byte array.

Speaking of that array: the slow-path copy happens to be the one we use, and it 
involves a redundant copy into this temporary array, followed by a copy of that 
array into the destination {{ByteBuffer}}. That can be trivially eliminated by 
adding a {{readFully(ByteBuffer)}} method to {{RebufferingInputStream}}.
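
For illustration, such a method might look like this (an assumed shape, not the 
exact code in the branch; {{buffer}} and {{reBuffer()}} stand for the stream's 
internal members):

{noformat}
import java.io.EOFException;
import java.io.IOException;
import java.nio.ByteBuffer;

abstract class RebufferingInputStreamSketch
{
    protected ByteBuffer buffer;                  // the stream's internal buffer
    protected abstract void reBuffer() throws IOException;

    // Copy straight into the destination buffer, eliminating the
    // temporary byte[] hop described above.
    public void readFully(ByteBuffer dst) throws IOException
    {
        while (dst.hasRemaining())
        {
            if (!buffer.hasRemaining())
                reBuffer();                       // refill from the underlying source
            if (!buffer.hasRemaining())
                throw new EOFException();
            int transfer = Math.min(buffer.remaining(), dst.remaining());
            ByteBuffer slice = buffer.duplicate();
            slice.limit(slice.position() + transfer);
            dst.put(slice);                       // single buffer-to-buffer copy
            buffer.position(buffer.position() + transfer);
        }
    }
}
{noformat}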

I’ve also realised that, for no good reason, we have an extra thread whose only 
job is to read the chunks off the input into chunk-size byte buffers and put 
them on a queue for {{CompressedInputStream}} to later consume. There is 
absolutely no reason for that and the complexity it introduces. It’s not the 
place to handle prefetching, nor does it make the input stream non-blocking, 
nor would it be an issue if it were, given that streaming utilises a separate 
event loop group from messaging.

It’s also very much unnecessary to allocate a whole new direct {{ByteBuffer}} 
for every chunk. A single {{ByteBuffer}} for the compressed chunk, reused, is 
all we need.

Also, we’ve had no test coverage for {{min_compress_ratio}}, introduced by 
CASSANDRA-10520.

And, one last thing/nit: while looking at the write side, I spotted some 
unnecessary garbage creation in 
{{CassandraCompressedStreamWriter#getTransferSections()}} when extending the 
current section, which can and should be easily avoided. Also, the use of the 
{{SSTableReader.PartitionPositionBounds}} class in place of {{Pair}}, when we 
did that refactor recently, is semantically incorrect: 
{{PartitionPositionBounds}} as a class represents partition bounds in the 
uncompressed stream, not chunk bounds in the compressed one.

I’ve addressed all these points and a bit more in [this 
branch|https://github.com/iamaleksey/cassandra/commits/13938-4.0].

> Default repair is broken, crashes other nodes participating in repair (in 
> trunk)
> 
>
> Key: CASSANDRA-13938
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13938
> Project: Cassandra
>  Issue Type: Bug
>  Components: Consistency/Repair
>Reporter: Nate McCall
>Assignee: Aleksey Yeschenko
>Priority: Urgent
> Fix For: 4.0-alpha
>
> Attachments: 13938.yaml, test.sh
>
>
> Running through a simple scenario to test some of the new repair features, I 
> was not able to make a repair command work. Further, the exception seemed to 
> trigger a nasty failure state that basically shuts down the netty connections 
> for messaging *and* CQL on the nodes transferring back data to the node being 
> repaired. The following steps reproduce this issue consistently.
> Cassandra stress profile (probably not necessary, but this one provides a 
> really simple schema and consistent data shape):
> {noformat}
> keyspace: standard_long
> keyspace_definition: |
>   CREATE KEYSPACE standard_long WITH replication = {'class':'SimpleStrategy', 
> 'replication_factor':3};
> table: test_data
> table_definition: |
>   CREATE TABLE test_data (
>   key text,
>   ts bigint,
>   val text,
>   PRIMARY KEY (key, ts)
>   ) WITH COMPACT STORAGE AND
>   CLUSTERING ORDER BY (ts DESC) AND
>   bloom_filter_fp_chance=0.01 AND
>   caching={'keys':'ALL', 'rows_per_partition':'NONE'} AND
>   comment='' AND
>   dclocal_read_repair_chance=0.00 

[jira] [Comment Edited] (CASSANDRA-15461) Legacy counter shards can cause false positives in repaired data tracking

2019-12-19 Thread Sam Tunnicliffe (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1754#comment-1754
 ] 

Sam Tunnicliffe edited comment on CASSANDRA-15461 at 12/19/19 1:23 PM:
---

The patch abstracts the digest generation, wrapping the underlying {{Hasher}} 
with a new {{Digest}} class. Specializations of {{Digest}} can incorporate 
logic depending on their specific usage; e.g. when generating a digest for 
repaired data tracking, don't include legacy counter shards. This also enables 
a lot of {{ByteBuffer}} duplication to be removed.
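
Schematically, the shape is something like this (illustrative only; 
{{java.util.zip.CRC32}} stands in for the wrapped {{Hasher}}, and the class 
shape is an assumption, not the committed code):

{noformat}
import java.nio.ByteBuffer;
import java.util.zip.CRC32;

// Illustrative shape only: a digest wrapper whose repaired-data
// specialization decides what gets hashed. CRC32 stands in for the
// wrapped Hasher.
class DigestSketch
{
    protected final CRC32 hasher = new CRC32();

    void update(ByteBuffer bytes)
    {
        hasher.update(bytes.duplicate());     // duplicate once, internally,
    }                                         // instead of at every call site

    void updateCounterCell(ByteBuffer value, boolean hasLegacyShards)
    {
        update(value);                        // default digest: hash everything
    }

    static DigestSketch forRepairedDataTracking()
    {
        return new DigestSketch()
        {
            @Override
            void updateCounterCell(ByteBuffer value, boolean hasLegacyShards)
            {
                if (!hasLegacyShards)         // legacy shards are excluded from
                    update(value);            // the repaired-data digest
            }
        };
    }
}
{noformat}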

||branch||CI||
|[15461-4.0|https://github.com/beobal/cassandra/tree/15461-4.0]|[circle|https://circleci.com/gh/beobal/workflows/cassandra/tree/15461-4.0]|



was (Author: beobal):
||branch||CI||
|[15461-4.0|https://github.com/beobal/cassandra/tree/15461-4.0]|[circle|https://circleci.com/gh/beobal/workflows/cassandra/tree/15461-4.0]|


> Legacy counter shards can cause false positives in repaired data tracking
> -
>
> Key: CASSANDRA-15461
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15461
> Project: Cassandra
>  Issue Type: Bug
>  Components: Observability/Metrics
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
>Priority: Normal
> Fix For: 4.0-beta
>
>
> It is expected that the serialization of legacy (pre-2.1) counter cells may 
> differ across replicas due to the remote vs local designation of the shards. 
> This will cause the repaired data digests calculated at read time to differ 
> where certain legacy shards are encountered. This does not, however, indicate 
> corruption of the repaired dataset and there isn't any action that operators 
> can take in this scenario. Excluding counter cells which contain legacy 
> shards from the digest calculation will avoid false positives.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15462) Purgable tombstones can cause false positives in repaired data tracking

2019-12-19 Thread Sam Tunnicliffe (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-15462:

 Bug Category: Parent values: Correctness(12982)
   Complexity: Normal
Discovered By: Code Inspection
Fix Version/s: 4.0-beta
 Severity: Normal
   Status: Open  (was: Triage Needed)

The linked branch is based on the fix for CASSANDRA-15461. We can't actually 
purge the eligible tombstones when generating the repaired data digest, as this 
happens before the repaired and unrepaired data is merged. Instead, we supply a 
{{PurgeFunction}} to the digest generating function, which uses it to test 
which tombstones are going to be purged from the final result, excluding them 
from the digest if necessary. The main complication is that we don't know 
whether a partition will be completely purged until after it has been 
processed. To work around this, we create an overall digest for the entire 
resultset and a temporary digest of the current partition, only adding the 
temporary digest to the overall digest if the partition is not fully purged.
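
In outline, the two-digest scheme looks like this (a compilable sketch with 
placeholder types; {{CRC32}} stands in for the real digest):

{noformat}
import java.nio.ByteBuffer;
import java.util.List;
import java.util.function.Predicate;
import java.util.zip.CRC32;

// Placeholder outline: hash each partition into a temporary digest and
// fold it into the overall digest only if the partition isn't fully purged.
final class TwoDigestSketch
{
    static long digest(List<List<ByteBuffer>> partitions, Predicate<ByteBuffer> isPurgeable)
    {
        CRC32 overall = new CRC32();              // digest of the entire resultset
        for (List<ByteBuffer> partition : partitions)
        {
            CRC32 perPartition = new CRC32();     // temporary, per-partition digest
            boolean survivors = false;
            for (ByteBuffer cell : partition)
            {
                if (isPurgeable.test(cell))       // will be purged post-merge,
                    continue;                     // so keep it out of the digest
                perPartition.update(cell.duplicate());
                survivors = true;
            }
            if (survivors)                        // a fully purged partition
                overall.update(longBytes(perPartition.getValue())); // leaves no trace
        }
        return overall.getValue();
    }

    private static byte[] longBytes(long v)
    {
        return ByteBuffer.allocate(Long.BYTES).putLong(v).array();
    }
}
{noformat}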

||branch||CI||
|[15462-4.0|https://github.com/beobal/cassandra/tree/15462-4.0]|[circle|https://circleci.com/gh/beobal/workflows/cassandra/tree/15462-4.0]|

> Purgable tombstones can cause false positives in repaired data tracking
> ---
>
> Key: CASSANDRA-15462
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15462
> Project: Cassandra
>  Issue Type: Bug
>  Components: Observability/Metrics
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
>Priority: Normal
> Fix For: 4.0-beta
>
>
> Calculating the repaired data digest on the read path (for the purposes of 
> detecting mismatches in the repaired data across replicas) is done before 
> purging any tombstones due to gcgs or last repaired time. This causes false 
> positives when repaired sstables include GC-able tombstones on some replicas 
> but not others. 
> Validation compactions do purge tombstones so it's perfectly possible for 
> sstables to mismatch in this way without causing any streaming during repair.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15462) Purgable tombstones can cause false positives in repaired data tracking

2019-12-19 Thread Sam Tunnicliffe (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-15462:

Test and Documentation Plan: Updated unit and in-jvm dtests
 Status: Patch Available  (was: Open)

> Purgable tombstones can cause false positives in repaired data tracking
> ---
>
> Key: CASSANDRA-15462
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15462
> Project: Cassandra
>  Issue Type: Bug
>  Components: Observability/Metrics
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
>Priority: Normal
> Fix For: 4.0-beta
>
>
> Calculating the repaired data digest on the read path (for the purposes of 
> detecting mismatches in the repaired data across replicas) is done before 
> purging any tombstones due to gcgs or last repaired time. This causes false 
> positives when repaired sstables include GC-able tombstones on some replicas 
> but not others. 
> Validation compactions do purge tombstones so it's perfectly possible for 
> sstables to mismatch in this way without causing any streaming during repair.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15461) Legacy counter shards can cause false positives in repaired data tracking

2019-12-19 Thread Sam Tunnicliffe (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-15461:

Test and Documentation Plan: Updated unit tests
 Status: Patch Available  (was: Open)

||branch||CI||
|[15461-4.0|https://github.com/beobal/cassandra/tree/15461-4.0]|[circle|https://circleci.com/gh/beobal/workflows/cassandra/tree/15461-4.0]|


> Legacy counter shards can cause false positives in repaired data tracking
> -
>
> Key: CASSANDRA-15461
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15461
> Project: Cassandra
>  Issue Type: Bug
>  Components: Observability/Metrics
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
>Priority: Normal
> Fix For: 4.0-beta
>
>
> It is expected that the serialization of legacy (pre-2.1) counter cells may 
> differ across replicas due to the remote vs local designation of the shards. 
> This will cause the repaired data digests calculated at read time to differ 
> where certain legacy shards are encountered. This does not, however, indicate 
> corruption of the repaired dataset and there isn't any action that operators 
> can take in this scenario. Excluding counter cells which contain legacy 
> shards from the digest calculation will avoid false positives.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15461) Legacy counter shards can cause false positives in repaired data tracking

2019-12-19 Thread Sam Tunnicliffe (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-15461:

 Bug Category: Parent values: Correctness(12982)
   Complexity: Normal
Discovered By: Code Inspection
Fix Version/s: 4.0-beta
 Severity: Normal
   Status: Open  (was: Triage Needed)

> Legacy counter shards can cause false positives in repaired data tracking
> -
>
> Key: CASSANDRA-15461
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15461
> Project: Cassandra
>  Issue Type: Bug
>  Components: Observability/Metrics
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
>Priority: Normal
> Fix For: 4.0-beta
>
>
> It is expected that the serialization of legacy (pre-2.1) counter cells may 
> differ across replicas due to the remote vs local designation of the shards. 
> This will cause the repaired data digests calculated at read time to differ 
> where certain legacy shards are encountered. This does not, however, indicate 
> corruption of the repaired dataset and there isn't any action that operators 
> can take in this scenario. Excluding counter cells which contain legacy 
> shards from the digest calculation will avoid false positives.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-15462) Purgable tombstones can cause false positives in repaired data tracking

2019-12-19 Thread Sam Tunnicliffe (Jira)
Sam Tunnicliffe created CASSANDRA-15462:
---

 Summary: Purgable tombstones can cause false positives in repaired 
data tracking
 Key: CASSANDRA-15462
 URL: https://issues.apache.org/jira/browse/CASSANDRA-15462
 Project: Cassandra
  Issue Type: Bug
  Components: Observability/Metrics
Reporter: Sam Tunnicliffe
Assignee: Sam Tunnicliffe


Calculating the repaired data digest on the read path (for the purposes of 
detecting mismatches in the repaired data across replicas) is done before 
purging any tombstones due to gcgs or last repaired time. This causes false 
positives when repaired sstables include GC-able tombstones on some replicas 
but not others. 

Validation compactions do purge tombstones so it's perfectly possible for 
sstables to mismatch in this way without causing any streaming during repair.




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-15461) Legacy counter shards can cause false positives in repaired data tracking

2019-12-19 Thread Sam Tunnicliffe (Jira)
Sam Tunnicliffe created CASSANDRA-15461:
---

 Summary: Legacy counter shards can cause false positives in 
repaired data tracking
 Key: CASSANDRA-15461
 URL: https://issues.apache.org/jira/browse/CASSANDRA-15461
 Project: Cassandra
  Issue Type: Bug
  Components: Observability/Metrics
Reporter: Sam Tunnicliffe
Assignee: Sam Tunnicliffe


It is expected that the serialization of legacy (pre-2.1) counter cells may 
differ across replicas due to the remote vs local designation of the shards. 
This will cause the repaired data digests calculated at read time to differ 
where certain legacy shards are encountered. This does not, however, indicate 
corruption of the repaired dataset and there isn't any action that operators 
can take in this scenario. Excluding counter cells which contain legacy shards 
from the digest calculation will avoid false positives.




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15460) Fix missing call to enable RPC after native transport is started in in-jvm dtests

2019-12-19 Thread Doug Rohrer (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doug Rohrer updated CASSANDRA-15460:

Test and Documentation Plan: Patch in GitHub PR #399 includes a new unit test 
for counter updates.
 Status: Patch Available  (was: Open)

> Fix missing call to enable RPC after native transport is started in in-jvm 
> dtests
> -
>
> Key: CASSANDRA-15460
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15460
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/dtest
>Reporter: Doug Rohrer
>Assignee: Doug Rohrer
>Priority: Normal
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When starting the native transport, the original patch missed the step of 
> calling {{StorageService.instance.setRpcReady(true);}}. This appears to only 
> be required for counter columns, but without it you can't update a counter 
> value.
> We should add this call after starting up the native transport, and set it to 
> {{false}} during the shutdown sequence to mimic the production code.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15460) Fix missing call to enable RPC after native transport is started in in-jvm dtests

2019-12-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated CASSANDRA-15460:
---
Labels: pull-request-available  (was: )

> Fix missing call to enable RPC after native transport is started in in-jvm 
> dtests
> -
>
> Key: CASSANDRA-15460
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15460
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/dtest
>Reporter: Doug Rohrer
>Assignee: Doug Rohrer
>Priority: Normal
>  Labels: pull-request-available
>
> When starting the native transport, the original patch missed the step of 
> calling {{StorageService.instance.setRpcReady(true);}}. This appears to only 
> be required for counter columns, but without it you can't update a counter 
> value.
> We should add this call after starting up the native transport, and set it to 
> {{false}} during the shutdown sequence to mimic the production code.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15460) Fix missing call to enable RPC after native transport is started in in-jvm dtests

2019-12-19 Thread Doug Rohrer (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doug Rohrer updated CASSANDRA-15460:

 Bug Category: Parent values: Correctness(12982)Level 1 values: Test 
Failure(12990)
   Complexity: Low Hanging Fruit
Discovered By: Unit Test
Reviewers: Alex Petrov
 Severity: Normal
 Assignee: Doug Rohrer
   Status: Open  (was: Triage Needed)

> Fix missing call to enable RPC after native transport is started in in-jvm 
> dtests
> -
>
> Key: CASSANDRA-15460
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15460
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/dtest
>Reporter: Doug Rohrer
>Assignee: Doug Rohrer
>Priority: Normal
>
> When starting the native transport, the original patch missed the step of 
> calling {{StorageService.instance.setRpcReady(true);}}. This appears to only 
> be required for counter columns, but without it you can't update a counter 
> value.
> We should add this call after starting up the native transport, and set it to 
> {{false}} during the shutdown sequence to mimic the production code.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-15460) Fix missing call to enable RPC after native transport is started in in-jvm dtests

2019-12-19 Thread Doug Rohrer (Jira)
Doug Rohrer created CASSANDRA-15460:
---

 Summary: Fix missing call to enable RPC after native transport is 
started in in-jvm dtests
 Key: CASSANDRA-15460
 URL: https://issues.apache.org/jira/browse/CASSANDRA-15460
 Project: Cassandra
  Issue Type: Bug
  Components: Test/dtest
Reporter: Doug Rohrer


When starting the native transport, the original patch missed the step of 
calling {{StorageService.instance.setRpcReady(true);}}. This appears to only be 
required for counter columns, but without it you can't update a counter value.

We should add this call after starting up the native transport, and set it to 
{{false}} during the shutdown sequence to mimic the production code.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13330) /var/run/cassandra directory not created on Centos7

2019-12-19 Thread Nicolas Chauvet (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-13330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16999896#comment-16999896
 ] 

Nicolas Chauvet commented on CASSANDRA-13330:
-

For information, I've submitted a PR with a tmpfiles.d entry as part of the 
systemd service introduction:
https://issues.apache.org/jira/browse/CASSANDRA-13148

> /var/run/cassandra directory not created on Centos7
> ---
>
> Key: CASSANDRA-13330
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13330
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Config
> Environment: CentOS Linux release 7.3.1611 (Core) (x86_64)
> datastax-ddc.noarch 3.9.0-1  
> datastax-ddc-tools.noarch   3.9.0-1   
> java-1.8.0-openjdk.x86_64   1:1.8.0.121-0.b13.el7_3  
> java-1.8.0-openjdk-headless.x86_64  1:1.8.0.121-0.b13.el7_3 
>Reporter: Tobi
>Priority: Low
>
> After updating cassandra from 3.4 to 3.9 via the datastax repo, the startup 
> script /etc/init.d/cassandra was unable to stop cassandra. After checking the 
> start script we found that it tried to read the pid file from 
> /var/run/cassandra/cassandra.pid
> But as this path does not exist, the cassandra process could not be stopped. 
> As a fix we created /usr/lib/tmpfiles.d/cassandra.conf with this content
> d /var/run/cassandra  0750  cassandra cassandra   -   
> -
> After that the directory was created on boot and the pid file could be 
> written to it



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13148) Systemd support for RPM

2019-12-19 Thread Nicolas Chauvet (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-13148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16999884#comment-16999884
 ] 

Nicolas Chauvet commented on CASSANDRA-13148:
-

Hello,

I have created a pull request at [https://github.com/apache/cassandra/pull/398] 
with a basic systemd unit file, along with RPM support to install the files as 
appropriate.

This is targeted at 3.11 (instead of trunk) because most (CentOS 7) end-users 
are already affected: a recent change in systemd prevents the current 
initscript from being used. (see also 
https://issues.apache.org/jira/browse/CASSANDRA-15273 )

About using sd_notify, I agree it would be a nice improvement, but it isn't 
needed for a basic systemd unit file as a start.

Please review.
(side note, this is my first PR for the project).

> Systemd support for RPM
> ---
>
> Key: CASSANDRA-13148
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13148
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Packaging
>Reporter: Spiros Ioannou
>Priority: Normal
>
> Since CASSANDRA-12967 will provide an RPM file, it would be greatly 
> beneficial if this package included a systemd startup unit configuration 
> instead of the current traditional init-based one, which misbehaves on 
> RHEL7/CentOS7.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org