[jira] [Commented] (CASSANDRA-13885) Allow to run full repairs in 3.0 without additional cost of anti-compaction

2017-09-20 Thread Stefan Podkowinski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16172893#comment-16172893
 ] 

Stefan Podkowinski commented on CASSANDRA-13885:


If you really want to stop doing anti-compaction for full repairs, you'd also 
have to prevent users from running both full and incremental repairs during 
their repair schedules. Or at least make sure that incremental repairs - if run 
at all - will be run at least once before gc_grace.

What we need to avoid here is to end up with a tombstone in the repaired set 
and the corresponding data in unrepaired. Assuming gc_grace has passed and both 
have already been compacted on the other replicas, running an incremental repair 
would resurrect the data on those replicas as a zombie, as incremental repair 
only works on the unrepaired set, while the local tombstone is in the repaired 
set and thus won't be transferred or considered during merkle tree creation.

Really -1 on any changes to fundamental repair assumptions and paradigms in 
3.0, unless it's for really critical bug fixing.



> Allow to run full repairs in 3.0 without additional cost of anti-compaction
> ---
>
> Key: CASSANDRA-13885
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13885
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Thomas Steinmaurer
>
> This ticket is basically the result of the discussion in Cassandra user list: 
> https://www.mail-archive.com/user@cassandra.apache.org/msg53562.html
> I was asked to open a ticket by Paulo Motta to think about back-porting 
> running full repairs without the additional cost of anti-compaction.
> Basically there is no way in 3.0 to run full repairs from several nodes 
> concurrently without troubles caused by (overlapping?) anti-compactions. 
> Coming from 2.1, this is a major change from an operational POV, basically 
> breaking e.g. any cron-job-based solution that kicks off -pr based repairs on 
> several nodes concurrently.






[jira] [Updated] (CASSANDRA-13787) RangeTombstoneMarker and PartitionDeletion is not properly included in MV

2017-09-20 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-13787:
--
Status: Ready to Commit  (was: Patch Available)

> RangeTombstoneMarker and PartitionDeletion is not properly included in MV
> -
>
> Key: CASSANDRA-13787
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13787
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths, Materialized Views
>Reporter: ZhaoYang
>Assignee: ZhaoYang
>
> Found two problems related to MV tombstones. 
> 1. A RangeTombstoneMarker is ignored after shadowing the first row, so 
> subsequent base rows are not shadowed in TableViews.
> If the range tombstone was not flushed, it was used as a deleted row to 
> shadow new updates, which works correctly.
> After the range tombstone was flushed, it was used as a RangeTombstoneMarker 
> and skipped after shadowing the first update. The bound of the 
> RangeTombstoneMarker seems wrong: it contained a full clustering, but it 
> should contain a range, or there should be multiple RangeTombstoneMarkers for 
> the multiple slices (i.e. the new updates).
> -2. The partition tombstone is not used when there is no existing live data, 
> so it will resurrect deleted cells. It was found in 11500 and included in 
> that patch.- (Merged in CASSANDRA-11500)
> In order not to make the 11500 patch more complicated, I will try to fix the 
> range/partition tombstone issue here.
> {code:title=Tests to reproduce}
> @Test
> public void testExistingRangeTombstoneWithFlush() throws Throwable
> {
> testExistingRangeTombstone(true);
> }
> @Test
> public void testExistingRangeTombstoneWithoutFlush() throws Throwable
> {
> testExistingRangeTombstone(false);
> }
> public void testExistingRangeTombstone(boolean flush) throws Throwable
> {
> createTable("CREATE TABLE %s (k1 int, c1 int, c2 int, v1 int, v2 int, 
> PRIMARY KEY (k1, c1, c2))");
> execute("USE " + keyspace());
> executeNet(protocolVersion, "USE " + keyspace());
> createView("view1",
>"CREATE MATERIALIZED VIEW view1 AS SELECT * FROM %%s WHERE 
> k1 IS NOT NULL AND c1 IS NOT NULL AND c2 IS NOT NULL PRIMARY KEY (k1, c2, 
> c1)");
> updateView("DELETE FROM %s USING TIMESTAMP 10 WHERE k1 = 1 and c1=1");
> if (flush)
> 
> Keyspace.open(keyspace()).getColumnFamilyStore(currentTable()).forceBlockingFlush();
> String table = KEYSPACE + "." + currentTable();
> updateView("BEGIN BATCH " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 0, 
> 0, 0, 0) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 0, 
> 1, 0, 1) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 1, 
> 0, 1, 0) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 1, 
> 1, 1, 1) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 1, 
> 2, 1, 2) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 1, 
> 3, 1, 3) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 2, 
> 0, 2, 0) USING TIMESTAMP 5; " +
> "APPLY BATCH");
> assertRowsIgnoringOrder(execute("select * from %s"),
> row(1, 0, 0, 0, 0),
> row(1, 0, 1, 0, 1),
> row(1, 2, 0, 2, 0));
> assertRowsIgnoringOrder(execute("select k1,c1,c2,v1,v2 from view1"),
> row(1, 0, 0, 0, 0),
> row(1, 0, 1, 0, 1),
> row(1, 2, 0, 2, 0));
> }
> @Test
> public void testExistingParitionDeletionWithFlush() throws Throwable
> {
> testExistingParitionDeletion(true);
> }
> @Test
> public void testExistingParitionDeletionWithoutFlush() throws Throwable
> {
> testExistingParitionDeletion(false);
> }
> public void testExistingParitionDeletion(boolean flush) throws Throwable
> {
> // for partition range deletion, need to know that existing row is 
> shadowed instead of not existed.
> createTable("CREATE TABLE %s (a int, b int, c int, d int, PRIMARY KEY 
> (a))");
> execute("USE " + keyspace());
> executeNet(protocolVersion, "USE " + keyspace());
> createView("mv_test1",
>"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a 
> IS NOT NULL AND b IS NOT NULL PRIMARY KEY (a, b)");
> Keyspace ks = Keyspace.open(keyspace());
> ks.getColumn

[jira] [Commented] (CASSANDRA-13787) RangeTombstoneMarker and PartitionDeletion is not properly included in MV

2017-09-20 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16172920#comment-16172920
 ] 

Sylvain Lebresne commented on CASSANDRA-13787:
--

+1, nothing seems unrelated to this here.

> RangeTombstoneMarker and PartitionDeletion is not properly included in MV
> -
>
> Key: CASSANDRA-13787
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13787
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths, Materialized Views
>Reporter: ZhaoYang
>Assignee: ZhaoYang
>
> Found two problems related to MV tombstones. 
> 1. A RangeTombstoneMarker is ignored after shadowing the first row, so 
> subsequent base rows are not shadowed in TableViews.
> If the range tombstone was not flushed, it was used as a deleted row to 
> shadow new updates, which works correctly.
> After the range tombstone was flushed, it was used as a RangeTombstoneMarker 
> and skipped after shadowing the first update. The bound of the 
> RangeTombstoneMarker seems wrong: it contained a full clustering, but it 
> should contain a range, or there should be multiple RangeTombstoneMarkers for 
> the multiple slices (i.e. the new updates).
> -2. The partition tombstone is not used when there is no existing live data, 
> so it will resurrect deleted cells. It was found in 11500 and included in 
> that patch.- (Merged in CASSANDRA-11500)
> In order not to make the 11500 patch more complicated, I will try to fix the 
> range/partition tombstone issue here.
> {code:title=Tests to reproduce}
> @Test
> public void testExistingRangeTombstoneWithFlush() throws Throwable
> {
> testExistingRangeTombstone(true);
> }
> @Test
> public void testExistingRangeTombstoneWithoutFlush() throws Throwable
> {
> testExistingRangeTombstone(false);
> }
> public void testExistingRangeTombstone(boolean flush) throws Throwable
> {
> createTable("CREATE TABLE %s (k1 int, c1 int, c2 int, v1 int, v2 int, 
> PRIMARY KEY (k1, c1, c2))");
> execute("USE " + keyspace());
> executeNet(protocolVersion, "USE " + keyspace());
> createView("view1",
>"CREATE MATERIALIZED VIEW view1 AS SELECT * FROM %%s WHERE 
> k1 IS NOT NULL AND c1 IS NOT NULL AND c2 IS NOT NULL PRIMARY KEY (k1, c2, 
> c1)");
> updateView("DELETE FROM %s USING TIMESTAMP 10 WHERE k1 = 1 and c1=1");
> if (flush)
> 
> Keyspace.open(keyspace()).getColumnFamilyStore(currentTable()).forceBlockingFlush();
> String table = KEYSPACE + "." + currentTable();
> updateView("BEGIN BATCH " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 0, 
> 0, 0, 0) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 0, 
> 1, 0, 1) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 1, 
> 0, 1, 0) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 1, 
> 1, 1, 1) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 1, 
> 2, 1, 2) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 1, 
> 3, 1, 3) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 2, 
> 0, 2, 0) USING TIMESTAMP 5; " +
> "APPLY BATCH");
> assertRowsIgnoringOrder(execute("select * from %s"),
> row(1, 0, 0, 0, 0),
> row(1, 0, 1, 0, 1),
> row(1, 2, 0, 2, 0));
> assertRowsIgnoringOrder(execute("select k1,c1,c2,v1,v2 from view1"),
> row(1, 0, 0, 0, 0),
> row(1, 0, 1, 0, 1),
> row(1, 2, 0, 2, 0));
> }
> @Test
> public void testExistingParitionDeletionWithFlush() throws Throwable
> {
> testExistingParitionDeletion(true);
> }
> @Test
> public void testExistingParitionDeletionWithoutFlush() throws Throwable
> {
> testExistingParitionDeletion(false);
> }
> public void testExistingParitionDeletion(boolean flush) throws Throwable
> {
> // for partition range deletion, need to know that existing row is 
> shadowed instead of not existed.
> createTable("CREATE TABLE %s (a int, b int, c int, d int, PRIMARY KEY 
> (a))");
> execute("USE " + keyspace());
> executeNet(protocolVersion, "USE " + keyspace());
> createView("mv_test1",
>"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a 
> IS NOT NULL AND b IS NOT NULL PRIMARY KEY (a, b)");
> K

[jira] [Created] (CASSANDRA-13886) OOM put node in limbo

2017-09-20 Thread Marcus Olsson (JIRA)
Marcus Olsson created CASSANDRA-13886:
-

 Summary: OOM put node in limbo
 Key: CASSANDRA-13886
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13886
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra version 2.2.10
Reporter: Marcus Olsson
Priority: Minor


In one of our test clusters we have had some issues with OOM. While working on 
fixing this, it was discovered that one of the nodes that hit an OOM actually 
wasn't shut down properly. Instead it went into a half-up state where the 
affected node considered itself up while all other nodes considered it down.

The following stacktrace was observed which seems to be the cause of this:
{noformat}
java.lang.NoClassDefFoundError: Could not initialize class java.lang.UNIXProcess
at java.lang.ProcessImpl.start(ProcessImpl.java:130) ~[na:1.8.0_131]
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1029) 
~[na:1.8.0_131]
at java.lang.Runtime.exec(Runtime.java:620) ~[na:1.8.0_131]
at java.lang.Runtime.exec(Runtime.java:485) ~[na:1.8.0_131]
at 
org.apache.cassandra.utils.HeapUtils.generateHeapDump(HeapUtils.java:88) 
~[apache-cassandra-2.2.10.jar:2.2.10]
at 
org.apache.cassandra.utils.JVMStabilityInspector.inspectThrowable(JVMStabilityInspector.java:56)
 ~[apache-cassandra-2.2.10.jar:2.2.10]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:168)
 ~[apache-cassandra-2.2.10.jar:2.2.10]
at 
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
 ~[apache-cassandra-2.2.10.jar:2.2.10]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
~[apache-cassandra-2.2.10.jar:2.2.10]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_131]
{noformat}

It seems that if an unexpected exception/error is thrown inside 
JVMStabilityInspector.inspectThrowable, the JVM is not actually shut down but 
instead keeps on running. My expectation is that the JVM should shut down when 
an OOM is thrown.

A potential workaround is to add:
{noformat}
JVM_OPTS="$JVM_OPTS -XX:+ExitOnOutOfMemoryError"
{noformat}
to cassandra-env.sh.
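
For illustration, a minimal sketch of the defensive pattern this implies - make 
sure the OOM handling always terminates the JVM even if best-effort diagnostics 
fail - using hypothetical names rather than the actual JVMStabilityInspector code:
{code:title=Hedged sketch (hypothetical names, not the actual JVMStabilityInspector)}
public final class OomGuard
{
    public static void inspect(Throwable t)
    {
        if (t instanceof OutOfMemoryError)
        {
            try
            {
                generateHeapDumpBestEffort(); // the step that failed in the stack trace above
            }
            catch (Throwable suppressed)
            {
                // diagnostics must never prevent the shutdown below
            }
            finally
            {
                // halt() skips shutdown hooks; a half-dead node is worse than a dead one
                Runtime.getRuntime().halt(1);
            }
        }
    }

    private static void generateHeapDumpBestEffort()
    {
        // placeholder for the real heap-dump logic (e.g. forking an external tool)
    }
}
{code}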






[jira] [Commented] (CASSANDRA-13885) Allow to run full repairs in 3.0 without additional cost of anti-compaction

2017-09-20 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16172945#comment-16172945
 ] 

Paulo Motta commented on CASSANDRA-13885:
-

bq. What we need to avoid here is to end up with a tombstone in the repaired 
set and the corresponding data in unrepaired.

Given that anti-compaction is non-deterministic on 3.0 due to CASSANDRA-9143, 
you can't guarantee that both the data and the tombstone will be marked as 
repaired after incremental repair, so this will always be a potential problem 
whether or not you run anti-compaction after full repairs. I don't see how 
running anti-compaction after full repairs can improve this, since it's still 
subject to the same limitations. Since I might be missing some edge case here, 
would you mind giving an example where skipping anti-compaction after full 
repair could be a problem when mixing with incremental repairs?

bq. Or at least make sure that incremental repairs - if run at all - will be 
run at least once before gc_grace.

This is a basic requirement of repair, so if you don't do that you're basically 
accepting the risk of data resurrection - whether or not anti-compaction is run 
after full repairs.

bq. Really -1 on any changes to fundamental repair assumptions and paradigms in 
3.0, unless it's for really critical bug fixing.

I'd agree with that if we had reliable incremental repairs, which is not the 
case on 3.0, and we only became fully conscious of its limitations quite late 
in the 3.0 line. But some users are just starting to adopt 3.0, so it's fair to 
give them an option to stick with non-incremental repairs if they prefer so for 
operational reasons. Perhaps we could just add a {{\-\-skip-anticompaction}} 
flag which can be used together with {{--full}} to skip anti-compactions?

> Allow to run full repairs in 3.0 without additional cost of anti-compaction
> ---
>
> Key: CASSANDRA-13885
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13885
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Thomas Steinmaurer
>
> This ticket is basically the result of the discussion in Cassandra user list: 
> https://www.mail-archive.com/user@cassandra.apache.org/msg53562.html
> I was asked to open a ticket by Paulo Motta to think about back-porting 
> running full repairs without the additional cost of anti-compaction.
> Basically there is no way in 3.0 to run full repairs from several nodes 
> concurrently without troubles caused by (overlapping?) anti-compactions. 
> Coming from 2.1, this is a major change from an operational POV, basically 
> breaking e.g. any cron-job-based solution that kicks off -pr based repairs on 
> several nodes concurrently.






[jira] [Comment Edited] (CASSANDRA-13883) StrictLiveness for view row is not handled in AbstractRow

2017-09-20 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16172606#comment-16172606
 ] 

ZhaoYang edited comment on CASSANDRA-13883 at 9/20/17 9:59 AM:
---

| source | unit | dtest |
| 
[trunk|https://github.com/apache/cassandra/compare/trunk...jasonstack:CASSANDRA-13883-trunk?expand=1]|
 [passed|https://circleci.com/gh/jasonstack/cassandra/627] |   
repair_tests.repair_test.TestRepair.dc_parallel_repair_test
repair_tests.repair_test.TestRepair.dc_repair_test
repair_tests.repair_test.TestRepair.local_dc_repair_test
repair_tests.repair_test.TestRepair.simple_parallel_repair_test
repair_tests.repair_test.TestRepair.thread_count_repair_test
upgrade_internal_auth_test.TestAuthUpgrade.upgrade_to_22_test
upgrade_internal_auth_test.TestAuthUpgrade.upgrade_to_30_test
auth_test.TestAuth.system_auth_ks_is_alterable_test
disk_balance_test.TestDiskBalance.disk_balance_decommission_test
cdc_test.TestCDC.test_insertion_and_commitlog_behavior_after_reaching_cdc_total_space
cdc_test.TestCDC.test_insertion_and_commitlog_behavior_after_reaching_cdc_total_space|
| 
[3.11|https://github.com/apache/cassandra/compare/cassandra-3.11...jasonstack:CASSANDRA-13883-3.11?expand=1]
 | [passed|https://circleci.com/gh/jasonstack/cassandra/625] | 
upgrade_internal_auth_test.TestAuthUpgrade.upgrade_to_22_test
upgrade_internal_auth_test.TestAuthUpgrade.upgrade_to_30_test
auth_test.TestAuth.system_auth_ks_is_alterable_test |
| 
[3.0|https://github.com/apache/cassandra/compare/cassandra-3.0...jasonstack:CASSANDRA-13883-3.0?expand=1]
 | [passed|https://circleci.com/gh/jasonstack/cassandra/628]| 
global_row_key_cache_test.TestGlobalRowKeyCache.functional_test
repair_tests.incremental_repair_test.TestIncRepair.multiple_repair_test
upgrade_internal_auth_test.TestAuthUpgrade.upgrade_to_22_test
upgrade_internal_auth_test.TestAuthUpgrade.upgrade_to_30_test |
| 
[dtest|https://github.com/apache/cassandra-dtest/compare/master...jasonstack:CASSANDRA-13883?expand=1]
 |

CI looks good, I will restart a few flaky ones.

{code}
Changes:
1. Change {{AbstractRow.hasLiveData}} to check {{enforceStrictLiveness}}: 
if livenessInfo is not live and enforceStrictLiveness is set, then there is no 
live data.
2. For SPRC.group, use the first command to get {{enforceStrictLiveness}}, 
since each command should be the same except for the key.
{code}
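
As an illustration of the rule described in point 1, here is a self-contained 
model with hypothetical names - a sketch of the intended behaviour, not the 
actual patch:
{code:title=Hedged illustration of the strict-liveness rule (hypothetical names)}
final class StrictLivenessExample
{
    // A row counts as live under strict liveness only if its primary-key liveness
    // info is live; otherwise any live cell is enough (regular tables).
    static boolean hasLiveData(boolean pkLivenessIsLive,
                               boolean enforceStrictLiveness,
                               boolean anyCellLive)
    {
        if (pkLivenessIsLive)
            return true;
        if (enforceStrictLiveness)
            return false;       // strict liveness: dead liveness info means no live data at all
        return anyCellLive;     // regular tables: any live cell keeps the row alive
    }

    public static void main(String[] args)
    {
        // view row with dead liveness info but a lingering live cell: must NOT count as live
        System.out.println(hasLiveData(false, true, true));   // false
        // the same state on a regular table: still counted as live
        System.out.println(hasLiveData(false, false, true));  // true
    }
}
{code}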


was (Author: jasonstack):
| source | unit | dtest |
| 
[trunk|https://github.com/apache/cassandra/compare/trunk...jasonstack:CASSANDRA-13883-trunk?expand=1]|
 [passed|https://circleci.com/gh/jasonstack/cassandra/627] |   
repair_tests.repair_test.TestRepair.dc_parallel_repair_test
repair_tests.repair_test.TestRepair.dc_repair_test
repair_tests.repair_test.TestRepair.local_dc_repair_test
repair_tests.repair_test.TestRepair.simple_parallel_repair_test
repair_tests.repair_test.TestRepair.thread_count_repair_test
upgrade_internal_auth_test.TestAuthUpgrade.upgrade_to_22_test
upgrade_internal_auth_test.TestAuthUpgrade.upgrade_to_30_test
auth_test.TestAuth.system_auth_ks_is_alterable_test
disk_balance_test.TestDiskBalance.disk_balance_decommission_test
cdc_test.TestCDC.test_insertion_and_commitlog_behavior_after_reaching_cdc_total_space
cdc_test.TestCDC.test_insertion_and_commitlog_behavior_after_reaching_cdc_total_space|
| 
[3.11|https://github.com/apache/cassandra/compare/cassandra-3.11...jasonstack:CASSANDRA-13883-3.11?expand=1]
 | [passed|https://circleci.com/gh/jasonstack/cassandra/625] | 
upgrade_internal_auth_test.TestAuthUpgrade.upgrade_to_22_test
upgrade_internal_auth_test.TestAuthUpgrade.upgrade_to_30_test
auth_test.TestAuth.system_auth_ks_is_alterable_test |
| 
[3.0|https://github.com/apache/cassandra/compare/cassandra-3.0...jasonstack:CASSANDRA-13883-3.0?expand=1]
 | [running|https://circleci.com/gh/jasonstack/cassandra/628]| 
global_row_key_cache_test.TestGlobalRowKeyCache.functional_test
repair_tests.incremental_repair_test.TestIncRepair.multiple_repair_test
upgrade_internal_auth_test.TestAuthUpgrade.upgrade_to_22_test
upgrade_internal_auth_test.TestAuthUpgrade.upgrade_to_30_test |
| 
[dtest|https://github.com/apache/cassandra-dtest/compare/master...jasonstack:CASSANDRA-13883?expand=1]
 |

CI looks good, I will restart a few flaky ones.

{code}
Changes:
1. Change {{AbstractRow.hasLiveData}} to check {{enforceStrictLiveness}}: 
if livenessInfo is not live and enforceStrictLiveness is set, then there is no 
live data.
2. For SPRC.group, use the first command to get {{enforceStrictLiveness}}, 
since each command should be the same except for the key.
{code}

> StrictLiveness for view row is not handled in AbstractRow
> -
>
> Key: CASSANDRA-13883
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13883
> Project: Cassandra
>  Issue Type: Bug
>  Components

[jira] [Comment Edited] (CASSANDRA-13883) StrictLiveness for view row is not handled in AbstractRow

2017-09-20 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16172606#comment-16172606
 ] 

ZhaoYang edited comment on CASSANDRA-13883 at 9/20/17 10:00 AM:


| source | unit | dtest |
| 
[trunk|https://github.com/apache/cassandra/compare/trunk...jasonstack:CASSANDRA-13883-trunk?expand=1]|
 [passed|https://circleci.com/gh/jasonstack/cassandra/627] |   
repair_tests.repair_test.TestRepair.dc_parallel_repair_test
repair_tests.repair_test.TestRepair.dc_repair_test
repair_tests.repair_test.TestRepair.local_dc_repair_test
repair_tests.repair_test.TestRepair.simple_parallel_repair_test
repair_tests.repair_test.TestRepair.thread_count_repair_test
upgrade_internal_auth_test.TestAuthUpgrade.upgrade_to_22_test
upgrade_internal_auth_test.TestAuthUpgrade.upgrade_to_30_test
auth_test.TestAuth.system_auth_ks_is_alterable_test
disk_balance_test.TestDiskBalance.disk_balance_decommission_test
cdc_test.TestCDC.test_insertion_and_commitlog_behavior_after_reaching_cdc_total_space
cdc_test.TestCDC.test_insertion_and_commitlog_behavior_after_reaching_cdc_total_space|
| 
[3.11|https://github.com/apache/cassandra/compare/cassandra-3.11...jasonstack:CASSANDRA-13883-3.11?expand=1]
 | [passed|https://circleci.com/gh/jasonstack/cassandra/625] | 
upgrade_internal_auth_test.TestAuthUpgrade.upgrade_to_22_test
upgrade_internal_auth_test.TestAuthUpgrade.upgrade_to_30_test
auth_test.TestAuth.system_auth_ks_is_alterable_test |
| 
[3.0|https://github.com/apache/cassandra/compare/cassandra-3.0...jasonstack:CASSANDRA-13883-3.0?expand=1]
 | [passed|https://circleci.com/gh/jasonstack/cassandra/628]| 
global_row_key_cache_test.TestGlobalRowKeyCache.functional_test
repair_tests.incremental_repair_test.TestIncRepair.multiple_repair_test
upgrade_internal_auth_test.TestAuthUpgrade.upgrade_to_22_test
upgrade_internal_auth_test.TestAuthUpgrade.upgrade_to_30_test |
| 
[dtest|https://github.com/apache/cassandra-dtest/compare/master...jasonstack:CASSANDRA-13883?expand=1]
 |

CI looks good, failure seems unrelated.

{code}
Changes:
1. Change {{AbstractRow.hasLiveData}} to check {{enforceStrictLiveness}}: 
if livenessInfo is not live and enforceStrictLiveness is set, then there is no 
live data.
2. For SPRC.group, use the first command to get {{enforceStrictLiveness}}, 
since each command should be the same except for the key.
{code}


was (Author: jasonstack):
| source | unit | dtest |
| 
[trunk|https://github.com/apache/cassandra/compare/trunk...jasonstack:CASSANDRA-13883-trunk?expand=1]|
 [passed|https://circleci.com/gh/jasonstack/cassandra/627] |   
repair_tests.repair_test.TestRepair.dc_parallel_repair_test
repair_tests.repair_test.TestRepair.dc_repair_test
repair_tests.repair_test.TestRepair.local_dc_repair_test
repair_tests.repair_test.TestRepair.simple_parallel_repair_test
repair_tests.repair_test.TestRepair.thread_count_repair_test
upgrade_internal_auth_test.TestAuthUpgrade.upgrade_to_22_test
upgrade_internal_auth_test.TestAuthUpgrade.upgrade_to_30_test
auth_test.TestAuth.system_auth_ks_is_alterable_test
disk_balance_test.TestDiskBalance.disk_balance_decommission_test
cdc_test.TestCDC.test_insertion_and_commitlog_behavior_after_reaching_cdc_total_space
cdc_test.TestCDC.test_insertion_and_commitlog_behavior_after_reaching_cdc_total_space|
| 
[3.11|https://github.com/apache/cassandra/compare/cassandra-3.11...jasonstack:CASSANDRA-13883-3.11?expand=1]
 | [passed|https://circleci.com/gh/jasonstack/cassandra/625] | 
upgrade_internal_auth_test.TestAuthUpgrade.upgrade_to_22_test
upgrade_internal_auth_test.TestAuthUpgrade.upgrade_to_30_test
auth_test.TestAuth.system_auth_ks_is_alterable_test |
| 
[3.0|https://github.com/apache/cassandra/compare/cassandra-3.0...jasonstack:CASSANDRA-13883-3.0?expand=1]
 | [passed|https://circleci.com/gh/jasonstack/cassandra/628]| 
global_row_key_cache_test.TestGlobalRowKeyCache.functional_test
repair_tests.incremental_repair_test.TestIncRepair.multiple_repair_test
upgrade_internal_auth_test.TestAuthUpgrade.upgrade_to_22_test
upgrade_internal_auth_test.TestAuthUpgrade.upgrade_to_30_test |
| 
[dtest|https://github.com/apache/cassandra-dtest/compare/master...jasonstack:CASSANDRA-13883?expand=1]
 |

CI looks good, I will restart a few flaky ones.

{code}
Changes:
1. Change {{AbstractRow.hasLiveData}} to check {{enforceStrictLiveness}}: 
if livenessInfo is not live and enforceStrictLiveness is set, then there is no 
live data.
2. For SPRC.group, use the first command to get {{enforceStrictLiveness}}, 
since each command should be the same except for the key.
{code}

> StrictLiveness for view row is not handled in AbstractRow
> -
>
> Key: CASSANDRA-13883
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13883
> Project: Cassandra
>  Issue Type: Bug
>  Components: Mater

[jira] [Commented] (CASSANDRA-12373) 3.0 breaks CQL compatibility with super columns families

2017-09-20 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16172976#comment-16172976
 ] 

Sylvain Lebresne commented on CASSANDRA-12373:
--

Thanks for the changes and your patience on this. My main remaining remark is 
that I don't think we should include the SC key and value columns in 
{{partitionColumns}} (in {{CFMetaData.rebuild}}).

{{PartitionColumns}} is meant and used for the columns of the internal storage 
engine, but the SC key and value columns are "fake" columns used for the CQL 
translation; they will never have values internally and so should never reach 
deep into the storage engine, which means they shouldn't be in 
{{PartitionColumns}}. In fact, I suspect that's why you needed to have special 
code in {{Columns}} and {{SerializationHeader}}, which feels wrong because you 
shouldn't ever encounter those definitions that deep in the storage engine.

Don't get me wrong, I'm sure there may be a few places in the CQL layers where 
we rely on {{CFMetaData.partitionColumns()}} and need those columns, and that's 
probably why you did that, but we imo need to identify those places and special 
case them.
 
Related to this (in fact caused by it), I think the change in 
{{ColumnFamilyStoreCQLHelperTest}} is incorrect: it would be appropriate for 
{{ColumnFamilyStoreCQLHelper}} to either display the storage schema (so no 
"column2" nor "value"), or the CQL one (so no SCF empty-named map), but 
something in between is not consistent. Anyway, I'm mainly pointing out that we 
really need to remove those columns from {{partitionColumns}} and revert the 
change in {{ColumnFamilyStoreCQLHelperTest}}.


Other than that, only a few minor remarks:
* In {{CFMetaData.renameColumn}}, in the case of updating the SC key or value 
column, I believe we should be updating {{columnMetadata}} as well since those 
columns are listed in it, but that doesn't seem to be the case (not sure how 
important it is, it might be that a following call to {{rebuild}} fixes that in 
practice, but since the method doesn't call {{rebuild}} itself, it's probably 
better to make sure we handle it).
* In {{CFMetaData.makeLegacyDefaultValidator}}, compact tables with counters 
will now return {{BytesType}} instead of {{CounterColumnType}}, which is kind 
of technically incorrect. To be entirely honest, this doesn't matter currently 
because that method isn't ever called for non-compact tables (and at this 
point, probably never will be), but if we're going to rely on this, I'd rather 
make it an assertion than return something somewhat wrong. Personally, I'd 
just keep the counter special case and move on, as this has nothing to do with 
this ticket, but if you prefer transforming it into an {{assert 
!isCompactTable()}}, no complaint.
* Nit: in {{CFMetaData.renameColumn}}, the comment "SuperColumn tables allow 
renaming all columns" doesn't match the code entirely anymore. 
* Nit: in {{CassandraServer.makeColumnFilter}}, it would be more readable to 
just cut the method short if {{metadata.isDense()}} before the loop, with maybe 
a comment explaining why it's ok to do so ("Dense tables only have dynamic 
columns").
* Nit: in {{SuperColumnCompatibility.getSuperCfKeyColumn}}, I don't think the 
"3.x created supercolumn family" comment is accurate anymore since in 
{{ThriftConversion}} you now add the 2nd clustering column (which, in itself, 
lgtm). It might be that we need to preserve that branch in 
{{SuperColumnCompatibility.getSuperCfKeyColumn}} for some upgrade path, and I'm 
happy to do so, but we should update the comment.


> 3.0 breaks CQL compatibility with super columns families
> 
>
> Key: CASSANDRA-12373
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12373
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Sylvain Lebresne
>Assignee: Alex Petrov
> Fix For: 3.0.x, 3.11.x
>
>
> This is a follow-up to CASSANDRA-12335 to fix the CQL side of super column 
> compatibility.
> The details and a proposed solution can be found in the comments of 
> CASSANDRA-12335, but the crux of the issue is that super column families show 
> up differently in CQL in 3.0.x/3.x compared to 2.x, hence breaking backward 
> compatibility.






[jira] [Commented] (CASSANDRA-12373) 3.0 breaks CQL compatibility with super columns families

2017-09-20 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16173016#comment-16173016
 ] 

Alex Petrov commented on CASSANDRA-12373:
-

bq. Thanks for the changes and your patience on this. My main remaining remark 
is that I don't think we should include the SC key and value columns in 
partitionColumns (in CFMetaData.rebuild).

This was surprisingly simple to do.

bq. In CFMetaData.renameColumn, in the case of updating the SC key or value 
column, I believe we should be updating columnMetadata as well since those 
columns are listed in it, but that doesn't seem to be the case (not sure how 
important it is, it might be that a following call to rebuild fixes that in 
practice, but since the method doesn't call rebuild itself, it's probably better 
to make sure we handle it).

I can't see how this can be helpful because of the subsequent {{rebuild}} call, 
but this also doesn't break anything, so I went ahead and changed it.

bq. In CFMetaData.makeLegacyDefaultValidator, compact tables with counters will 
now return BytesType instead of CounterColumnType, which is kind of technically 
incorrect. To be entirely honest, this doesn't matter currently because that 
method isn't ever called for non-compact tables (and at this point, probably 
never will be), but if we're going to rely on this, I'd rather make it an 
assertion than return something somewhat wrong. Personally, I'd just keep 
the counter special case and move on, as this has nothing to do with this 
ticket, but if you prefer transforming it into an assert !isCompactTable(), no 
complaint.

I've added the {{isCounter}} special case back; no strong opinion here either.

bq. Nit: in CFMetaData.renameColumn, the comment "SuperColumn tables allow 
renaming all columns" doesn't match the code entirely anymore.

Yeah, I was implying dense ones, but I don't think this comment is of much use 
here anyway.

bq. Nit: in SuperColumnCompatibility.getSuperCfKeyColumn, I don't think the 
"3.x created supercolumn family" comment is accurate anymore since in 
ThriftConversion you now add the 2nd clustering column (which, in itself, lgtm).

It's still true for pre-12373 3.x thrift-created supercolumn family tables. 
We discussed this briefly offline: there was no good way to force a table 
update to make all the tables look completely the same, so this is the only 
place we still have to special-case. I've added the {{pre 12373}} remark and 
hope it's clearer now.

I've committed the change only to 
[3.0|https://github.com/apache/cassandra/compare/cassandra-3.0...ifesdjeen:12373-3.0]
 for now, will rebase and update the rest of the branches later today.

> 3.0 breaks CQL compatibility with super columns families
> 
>
> Key: CASSANDRA-12373
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12373
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Sylvain Lebresne
>Assignee: Alex Petrov
> Fix For: 3.0.x, 3.11.x
>
>
> This is a follow-up to CASSANDRA-12335 to fix the CQL side of super column 
> compatibility.
> The details and a proposed solution can be found in the comments of 
> CASSANDRA-12335, but the crux of the issue is that super column families show 
> up differently in CQL in 3.0.x/3.x compared to 2.x, hence breaking backward 
> compatibility.






[jira] [Commented] (CASSANDRA-13215) Cassandra nodes startup time 20x more after upgarding to 3.x

2017-09-20 Thread Corentin Chary (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16173029#comment-16173029
 ] 

Corentin Chary commented on CASSANDRA-13215:


Cool, I will be happy to test it and report performance improvements (mostly 
during startup).

> Cassandra nodes startup time 20x more after upgarding to 3.x
> 
>
> Key: CASSANDRA-13215
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13215
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
> Environment: Cluster setup: two datacenters (dc-main, dc-backup).
> dc-main - 9 servers, no vnodes
> dc-backup - 6 servers, vnodes
>Reporter: Viktor Kuzmin
>Assignee: Marcus Eriksson
> Attachments: simple-cache.patch
>
>
> CompactionStrategyManager.getCompactionStrategyIndex is called on each sstable 
> at startup. And this function calls StorageService.getDiskBoundaries. And 
> getDiskBoundaries calls AbstractReplicationStrategy.getAddressRanges.
> It appears that the last function can be really slow. In our environment we 
> have 1545 tokens, and with NetworkTopologyStrategy it can make 1545*1545 
> computations in the worst case (maybe I'm wrong, but it really takes lots of 
> CPU).
> Also, this function can affect runtime later, because it is called not only 
> during startup.
> I've tried to implement a simple cache for getDiskBoundaries results and now 
> startup time is about one minute instead of 25m, but I'm not sure if it's a 
> good solution.
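
A minimal sketch of the caching idea described above (hypothetical class, not 
the attached simple-cache.patch): memoize the expensive per-table computation 
and clear the cache whenever the ring or disk layout changes.
{code:title=Hedged sketch of the caching idea (hypothetical, not the attached patch)}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

final class DiskBoundaryCache<K, V>
{
    private final Map<K, V> cache = new ConcurrentHashMap<>();

    // Compute the boundaries once per table and reuse them for every sstable.
    V get(K table, Function<K, V> expensiveCompute)
    {
        return cache.computeIfAbsent(table, expensiveCompute);
    }

    // Must be called on token/ring or disk configuration changes so that
    // stale boundaries are never reused.
    void invalidateAll()
    {
        cache.clear();
    }
}
{code}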






[jira] [Commented] (CASSANDRA-13885) Allow to run full repairs in 3.0 without additional cost of anti-compaction

2017-09-20 Thread Stefan Podkowinski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16173034#comment-16173034
 ] 

Stefan Podkowinski commented on CASSANDRA-13885:


It's always a potential problem before CASSANDRA-9143, yes. But since only 
unrepaired data is affected, running incremental repairs often enough before 
gc_grace will minimize the chance that an sstable is skipped by 
anti-compaction and remains in the unrepaired set afterwards. And that's what 
incremental repairs are designed for anyway: to be run regularly on new data. 
The important thing is that at the end, all data needs to be successfully 
promoted to the repaired set before gc_grace. Why is that important? Because 
after gc_grace, deleted data may be compacted away on replicas. But this will 
not happen if the tombstone and the corresponding data are in different 
repaired/unrepaired sets, as those will not be compacted together. Also 
remember that incremental repair will only validate sstables in the unrepaired 
set. As a consequence, after the next incremental repair, the data from the 
unrepaired set (but not the tombstone from the repaired set) will be 
transferred to the other replicas, where the data had already been compacted 
away before. 

So how would this situation change if we'd not run anti-compaction (promotion to 
repaired) after full repairs at all? In this case we'd just let the unrepaired 
set grow, which should not be a problem on its own. But the operator would be 
responsible for scheduling incremental repairs often enough to make sure the 
promotion process happens before gc_grace, to avoid the potential data 
inconsistency issues described above. The only other way to avoid these would be 
not to run incremental repairs at all anymore, which would be fine, too. So 
yes, I guess we could agree in this ticket under which situations it would be 
acceptable to run full repairs with a --skip-anticompaction flag, but I'd also 
like to hear how to communicate the correct scheduling to users, without just 
handing them a loaded gun. Because currently you can't go wrong by mixing full 
and incremental (as far as I can tell), and we can get away with telling people 
to run any kind of repair at least once before gc_grace, e.g. weekly incremental 
with every n-th as a full repair.

Exclusively running full repairs, even with the anti-compaction at the 
end, is btw not as broken as you may think. In that situation you simply don't 
care about the unrepaired set. The anti-compaction at the end of the repair is 
a waste, yes, but it's not so bad (performance-wise), as we only have to 
anti-compact the new unrepaired data since the last repair. Not being able to 
perform parallel -pr repairs is an unfortunate side-effect of this, but I'd 
still prefer to recommend avoiding -pr in parallel and falling back to 
range-based repairs if the cluster size doesn't allow this. Doing subrange 
repairs would actually cause the same problems as -pr, but with CASSANDRA-10422 
it was decided to skip anti-compaction for them, so all the caveats described 
above apply there, although I'd not expect users to mix subrange repairs with 
incremental repairs.

> Allow to run full repairs in 3.0 without additional cost of anti-compaction
> ---
>
> Key: CASSANDRA-13885
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13885
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Thomas Steinmaurer
>
> This ticket is basically the result of the discussion in Cassandra user list: 
> https://www.mail-archive.com/user@cassandra.apache.org/msg53562.html
> I was asked to open a ticket by Paulo Motta to think about back-porting 
> running full repairs without the additional cost of anti-compaction.
> Basically there is no way in 3.0 to run full repairs from several nodes 
> concurrently without troubles caused by (overlapping?) anti-compactions. 
> Coming from 2.1, this is a major change from an operational POV, basically 
> breaking e.g. any cron-job-based solution that kicks off -pr based repairs on 
> several nodes concurrently.






[jira] [Commented] (CASSANDRA-13885) Allow to run full repairs in 3.0 without additional cost of anti-compaction

2017-09-20 Thread Thomas Steinmaurer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16173049#comment-16173049
 ] 

Thomas Steinmaurer commented on CASSANDRA-13885:


It is about easing the operational side: 2.2+ is a major shift towards 
behaving differently and being much more complex, when I simply want to run a 
full repair across my 9-node cluster on 2 small-volume CFs on a daily basis 
(grace period = 72h), which I used to do on 2.1 by running the following, 
kicked off in parallel on all nodes:
{code}
nodetool repair -pr mykeyspace mycf1 mycf2
{code}
Ok, I learned that incremental repair is the default since 2.2+, so I need to 
additionally apply the -full option. Not a big deal, but when running the 
following with 3.0.14, again kicked off in parallel on all nodes:
{code}
nodetool repair -full -pr mykeyspace mycf1 mycf2
{code}
I start to see basically the following nodetool output:
{code}
...
[2017-09-20 11:34:49,968] Some repair failed
[2017-09-20 11:34:49,968] Repair command #8 finished in 0 seconds
error: Repair job has failed with the error message: [2017-09-20 11:34:49,968] 
Some repair failed
-- StackTrace --
java.lang.RuntimeException: Repair job has failed with the error message: 
[2017-09-20 11:34:49,968] Some repair failed
at 
org.apache.cassandra.tools.RepairRunner.progress(RepairRunner.java:115)
at 
org.apache.cassandra.utils.progress.jmx.JMXNotificationProgressListener.handleNotification(JMXNotificationProgressListener.java:77)
at 
com.sun.jmx.remote.internal.ClientNotifForwarder$NotifFetcher.dispatchNotification(ClientNotifForwarder.java:583)
at 
com.sun.jmx.remote.internal.ClientNotifForwarder$NotifFetcher.doRun(ClientNotifForwarder.java:533)
at 
com.sun.jmx.remote.internal.ClientNotifForwarder$NotifFetcher.run(ClientNotifForwarder.java:452)
at 
com.sun.jmx.remote.internal.ClientNotifForwarder$LinearExecutor$1.run(ClientNotifForwarder.java:108)
{code}


> Allow to run full repairs in 3.0 without additional cost of anti-compaction
> ---
>
> Key: CASSANDRA-13885
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13885
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Thomas Steinmaurer
>
> This ticket is basically the result of the discussion in Cassandra user list: 
> https://www.mail-archive.com/user@cassandra.apache.org/msg53562.html
> I was asked to open a ticket by Paulo Motta to think about back-porting 
> running full repairs without the additional cost of anti-compaction.
> Basically there is no way in 3.0 to run full repairs from several nodes 
> concurrently without troubles caused by (overlapping?) anti-compactions. 
> Coming from 2.1, this is a major change from an operational POV, basically 
> breaking e.g. any cron-job-based solution that kicks off -pr based repairs on 
> several nodes concurrently.






[jira] [Comment Edited] (CASSANDRA-13885) Allow to run full repairs in 3.0 without additional cost of anti-compaction

2017-09-20 Thread Thomas Steinmaurer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16173049#comment-16173049
 ] 

Thomas Steinmaurer edited comment on CASSANDRA-13885 at 9/20/17 11:48 AM:
--

It is about easing the operational side: 2.2+ is a major shift towards 
behaving differently and being much more complex, when I simply want to run a 
full repair across my 9-node cluster on 2 small-volume CFs on a daily basis 
(grace period = 72h), which I used to do on 2.1 by running the following, 
kicked off in parallel on all nodes:
{code}
nodetool repair -pr mykeyspace mycf1 mycf2
{code}
Ok, I learned that incremental repair is the default since 2.2+, so I need to 
additionally apply the -full option. Not a big deal, but when running the 
following with 3.0.14, again kicked off in parallel on all nodes:
{code}
nodetool repair -full -pr mykeyspace mycf1 mycf2
{code}
I start to see basically the following nodetool output:
{code}
...
[2017-09-20 11:34:49,968] Some repair failed
[2017-09-20 11:34:49,968] Repair command #8 finished in 0 seconds
error: Repair job has failed with the error message: [2017-09-20 11:34:49,968] 
Some repair failed
-- StackTrace --
java.lang.RuntimeException: Repair job has failed with the error message: 
[2017-09-20 11:34:49,968] Some repair failed
at 
org.apache.cassandra.tools.RepairRunner.progress(RepairRunner.java:115)
at 
org.apache.cassandra.utils.progress.jmx.JMXNotificationProgressListener.handleNotification(JMXNotificationProgressListener.java:77)
at 
com.sun.jmx.remote.internal.ClientNotifForwarder$NotifFetcher.dispatchNotification(ClientNotifForwarder.java:583)
at 
com.sun.jmx.remote.internal.ClientNotifForwarder$NotifFetcher.doRun(ClientNotifForwarder.java:533)
at 
com.sun.jmx.remote.internal.ClientNotifForwarder$NotifFetcher.run(ClientNotifForwarder.java:452)
at 
com.sun.jmx.remote.internal.ClientNotifForwarder$LinearExecutor$1.run(ClientNotifForwarder.java:108)
{code}
With corresponding entries in the Cassandra log:
{noformat}
...
6084592,-2610211481793768452], (280506507907773715,302389115279520703], 
(-5974981857606828384,-5962141498717352776], 
(6642604399479339844,6664596384716805222], 
(3176178340546590823,3182242320217954219], 
(6534347373256357699,6534785652363368819], 
(-3756238465673315474,-3752190783358815211], 
(7139677986395944961,7145455101208653220], 
(-3297144043975661711,-3274612177648431803], 
(5273980670821159743,5281982202791896119], 
(-6128989336346960670,-6080468590993099589], 
(-2173810736498649004,-2131529908597487459], 
(7439773636855937356,7476905072738807852]]] Validation failed in /10.176.38.128
at 
org.apache.cassandra.repair.ValidationTask.treesReceived(ValidationTask.java:68)
 ~[apache-cassandra-3.0.14.jar:3.0.14]
at 
org.apache.cassandra.repair.RepairSession.validationComplete(RepairSession.java:178)
 ~[apache-cassandra-3.0.14.jar:3.0.14]
at 
org.apache.cassandra.service.ActiveRepairService.handleMessage(ActiveRepairService.java:486)
 ~[apache-cassandra-3.0.14.jar:3.0.14]
at 
org.apache.cassandra.repair.RepairMessageVerbHandler.doVerb(RepairMessageVerbHandler.java:164)
 ~[apache-cassandra-3.0.14.jar:3.0.14]
at 
org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:67) 
~[apache-cassandra-3.0.14.jar:3.0.14]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_102]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
~[na:1.8.0_102]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[na:1.8.0_102]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[na:1.8.0_102]
at 
org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79)
 [apache-cassandra-3.0.14.jar:3.0.14]
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_102]
INFO  [InternalResponseStage:32] 2017-09-20 11:41:58,054 
RepairRunnable.java:337 - Repair command #11 finished in 0 seconds
ERROR [ValidationExecutor:29] 2017-09-20 11:41:58,056 Validator.java:268 - 
Failed creating a merkle tree for [repair #b53b44a0-9df8-11e7-916c-a5c15f10854d 
on ruxitdb/Me2Data, [(-9036672081060178828,-9030154922268771156], 
(1469740174912727009,1543926123757478678], 
(8863036841963129257,8867114458641555677], 
(-2610211481793768452,-2603133469451342452], 
(-5434810958758711978,-5401236033897257975], 
(5446456273884963354,5512385756828046297], 
(-5733849916893192315,-5651354489457211297], 
(5579261856873396905,5629665914232130557], 
(-3661618321040339655,-3653143301436649195], 
(-3344525143879048394,-3314190367243835481], 
(2113416595214497156,2140252649319845130], 
(-186804760253388038,-136455684914788326], 
(130823363710141924,188931062065209030], 
(229

[jira] [Commented] (CASSANDRA-13404) Hostname verification for client-to-node encryption

2017-09-20 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16173059#comment-16173059
 ] 

Per Otterström commented on CASSANDRA-13404:


Let me elaborate further on the context where I think this makes sense. We 
embed Cassandra in our products, which are deployed on customer premises around 
the world, including regions where corruption is a common problem. Internal 
fraud is often, by far, the biggest security issue for these customers. History 
has taught us that people have both the skill and the imagination to try to 
trick our systems when there is an incentive: money. We can advise customers to 
hire staff they trust, but as you can imagine it is not that easy.

Without hostname verification on the server side it is easy for someone with 
access to copy certificates from one of the client application hosts to another 
host on the network and connect to the server. Or, a person with access to the 
datacenter can replace a "broken" disk and then extract certificates from it. 
Or, you can shut down a VM, mount the disk image in another VM and extract the 
certificates.

Hostname verification on the server side is not going to be the thing that 
finally makes it impossible to manipulate data in Cassandra, but it is yet 
another barrier that will limit the options available to an attacker.

bq. By the time an attacker can copy a cert, can't they also spoof an IP 
address, as well?

To properly spoof an IP address and carry out a handshake you would have to 
implement ARP poisoning. There are ways to create barriers for this as well, 
making it harder for an attacker. I don't think we should assume that an 
attacker can do IP spoofing just because he can steal a certificate. The 
security level of a system is not going to come down to one single barrier, but 
to all barriers together.

bq. I'll be honest, I'm uncomfortable with the patch - taking the incoming IP 
address and passing that directly into the SslHandler just seems wrong.

I fail to see the harm. The change is small and contained. If you have the 
time, can you elaborate a bit? Or do you think there is some set of tests 
that would bring confidence in this?

Btw, I'd be happy to rebase the patch and work on dtests, depending on the 
conclusions here of course.
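
For context, a minimal JSSE-level sketch of what endpoint identification 
against the connecting peer could look like (illustration only, with assumed 
wiring - not the attached patch):
{code:title=Illustration only (plain JSSE, not the attached patch)}
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;
import javax.net.ssl.SSLParameters;

final class EndpointVerificationExample
{
    // Build a server-mode engine that, given the connecting peer's address, asks the
    // trust manager to match the presented client certificate against that address
    // during the handshake.
    static SSLEngine newServerEngine(SSLContext ctx, String peerAddress, int peerPort)
    {
        SSLEngine engine = ctx.createSSLEngine(peerAddress, peerPort);
        engine.setUseClientMode(false);
        engine.setNeedClientAuth(true);                     // client must present a certificate
        SSLParameters params = engine.getSSLParameters();
        params.setEndpointIdentificationAlgorithm("HTTPS"); // verify cert SAN/CN against the peer
        engine.setSSLParameters(params);
        return engine;
    }
}
{code}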


> Hostname verification for client-to-node encryption
> ---
>
> Key: CASSANDRA-13404
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13404
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Jan Karlsson
>Assignee: Jan Karlsson
> Fix For: 4.x
>
> Attachments: 13404-trunk.txt
>
>
> Similarly to CASSANDRA-9220, Cassandra should support hostname verification 
> for client-node connections.






[jira] [Commented] (CASSANDRA-13404) Hostname verification for client-to-node encryption

2017-09-20 Thread Stefan Podkowinski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16173099#comment-16173099
 ] 

Stefan Podkowinski commented on CASSANDRA-13404:


Using client certificates sounds like a very unconventional way to control 
access at the network level. Why don't you just lock down cluster network 
access to addresses and subnets on the firewall, instead of authorizing hosts 
via your key management service by creating host-specific client certificates 
that you depend on being refused if not used from a matching IP? As already 
mentioned, the hostname verification process has been designed to prevent 
man-in-the-middle attacks, which doesn't really apply here.

Although this is a small patch, I'm not convinced it would benefit the 
broader user community without adding more complexity and security options 
that can be misunderstood and misconfigured.

> Hostname verification for client-to-node encryption
> ---
>
> Key: CASSANDRA-13404
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13404
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Jan Karlsson
>Assignee: Jan Karlsson
> Fix For: 4.x
>
> Attachments: 13404-trunk.txt
>
>
> Similarly to CASSANDRA-9220, Cassandra should support hostname verification 
> for client-node connections.






[6/6] cassandra git commit: Merge commit '66115139addfb2bb6e26fa85e4225a1178d2e99c' into trunk

2017-09-20 Thread slebresne
Merge commit '66115139addfb2bb6e26fa85e4225a1178d2e99c' into trunk

* commit '66115139addfb2bb6e26fa85e4225a1178d2e99c':
  Fix sstable reader to support range-tombstone-marker for multi-slices


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9a624748
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9a624748
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9a624748

Branch: refs/heads/trunk
Commit: 9a62474822149bca358d7535e6dd7210ca17277e
Parents: eb76692 6611513
Author: Sylvain Lebresne 
Authored: Wed Sep 20 15:16:53 2017 +0200
Committer: Sylvain Lebresne 
Committed: Wed Sep 20 15:17:32 2017 +0200

--
 CHANGES.txt |   1 +
 .../columniterator/AbstractSSTableIterator.java |   7 -
 .../db/columniterator/SSTableIterator.java  |   8 +-
 .../columniterator/SSTableReversedIterator.java |   2 +-
 .../org/apache/cassandra/cql3/ViewTest.java |  49 +++
 .../db/SinglePartitionSliceCommandTest.java | 143 ++-
 6 files changed, 195 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/9a624748/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9a624748/src/java/org/apache/cassandra/db/columniterator/AbstractSSTableIterator.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9a624748/src/java/org/apache/cassandra/db/columniterator/SSTableIterator.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9a624748/src/java/org/apache/cassandra/db/columniterator/SSTableReversedIterator.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9a624748/test/unit/org/apache/cassandra/cql3/ViewTest.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9a624748/test/unit/org/apache/cassandra/db/SinglePartitionSliceCommandTest.java
--
diff --cc test/unit/org/apache/cassandra/db/SinglePartitionSliceCommandTest.java
index f79066b,7ad6198..d03d3bc
--- a/test/unit/org/apache/cassandra/db/SinglePartitionSliceCommandTest.java
+++ b/test/unit/org/apache/cassandra/db/SinglePartitionSliceCommandTest.java
@@@ -20,9 -20,12 +20,10 @@@
   */
  package org.apache.cassandra.db;
  
+ import static org.junit.Assert.*;
+ 
  import java.io.IOException;
 -import java.nio.ByteBuffer;
 -import java.util.Collections;
  import java.util.Iterator;
- 
  import org.junit.Assert;
  import org.junit.Before;
  import org.junit.BeforeClass;
@@@ -30,14 -33,17 +31,17 @@@ import org.junit.Test
  
  import org.slf4j.Logger;
  import org.slf4j.LoggerFactory;
- 
  import org.apache.cassandra.SchemaLoader;
 -import org.apache.cassandra.Util;
 -import org.apache.cassandra.config.CFMetaData;
 -import org.apache.cassandra.config.ColumnDefinition;
 +import org.apache.cassandra.schema.ColumnMetadata;
 +import org.apache.cassandra.schema.TableMetadata;
  import org.apache.cassandra.config.DatabaseDescriptor;
 -import org.apache.cassandra.config.Schema;
 +import org.apache.cassandra.schema.Schema;
  import org.apache.cassandra.cql3.ColumnIdentifier;
++import org.apache.cassandra.Util;
  import org.apache.cassandra.cql3.QueryProcessor;
+ import org.apache.cassandra.cql3.UntypedResultSet;
+ import org.apache.cassandra.db.filter.AbstractClusteringIndexFilter;
+ import org.apache.cassandra.db.filter.ClusteringIndexNamesFilter;
  import org.apache.cassandra.db.filter.ClusteringIndexSliceFilter;
  import org.apache.cassandra.db.filter.ColumnFilter;
  import org.apache.cassandra.db.filter.DataLimits;
@@@ -64,33 -74,221 +72,162 @@@ public class SinglePartitionSliceComman
  private static final String KEYSPACE = "ks";
  private static final String TABLE = "tbl";
  
 -private static CFMetaData cfm;
 -private static ColumnDefinition v;
 -private static ColumnDefinition s;
 +private static TableMetadata metadata;
 +private static ColumnMetadata v;
 +private static ColumnMetadata s;
  
+ private static final String TABLE_SCLICES = "tbl_slices";
 -private static CFMetaData CFM_SLICES;
++private static TableMetadata CFM_SLICES;
+ 
  @BeforeClass
  public static void defineSchema() throws ConfigurationException
  {
  DatabaseDescriptor.daemonInitialization();
  
 -cfm = CFMetaData.Builder.create(KEYSPACE, TABLE)
 -.addPartitionKey("k", UTF8Type.instance)
 -.addStaticColumn

[4/6] cassandra git commit: Merge commit '975c3d81b67e9c1e1dcefdda3f90e8edf6be5efa' into cassandra-3.11

2017-09-20 Thread slebresne
Merge commit '975c3d81b67e9c1e1dcefdda3f90e8edf6be5efa' into cassandra-3.11

* commit '975c3d81b67e9c1e1dcefdda3f90e8edf6be5efa':
  Fix sstable reader to support range-tombstone-marker for multi-slices


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/66115139
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/66115139
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/66115139

Branch: refs/heads/trunk
Commit: 66115139addfb2bb6e26fa85e4225a1178d2e99c
Parents: 594f1c1 975c3d8
Author: Sylvain Lebresne 
Authored: Wed Sep 20 15:11:58 2017 +0200
Committer: Sylvain Lebresne 
Committed: Wed Sep 20 15:12:40 2017 +0200

--
 CHANGES.txt |   1 +
 .../columniterator/AbstractSSTableIterator.java |   7 -
 .../db/columniterator/SSTableIterator.java  |   8 +-
 .../columniterator/SSTableReversedIterator.java |   2 +-
 .../org/apache/cassandra/cql3/ViewTest.java |  49 +++
 .../db/SinglePartitionSliceCommandTest.java | 145 ++-
 6 files changed, 197 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/66115139/CHANGES.txt
--
diff --cc CHANGES.txt
index 01955f6,2d11a3e..39270e5
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,15 -1,5 +1,16 @@@
 -3.0.15
 +3.11.1
 + * AbstractTokenTreeBuilder#serializedSize returns wrong value when there is 
a single leaf and overflow collisions (CASSANDRA-13869)
 + * Add a compaction option to TWCS to ignore sstables overlapping checks 
(CASSANDRA-13418)
 + * BTree.Builder memory leak (CASSANDRA-13754)
 + * Revert CASSANDRA-10368 of supporting non-pk column filtering due to 
correctness (CASSANDRA-13798)
 + * Fix cassandra-stress hang issues when an error during cluster connection 
happens (CASSANDRA-12938)
 + * Better bootstrap failure message when blocked by (potential) range 
movement (CASSANDRA-13744)
 + * "ignore" option is ignored in sstableloader (CASSANDRA-13721)
 + * Deadlock in AbstractCommitLogSegmentManager (CASSANDRA-13652)
 + * Duplicate the buffer before passing it to analyser in SASI operation 
(CASSANDRA-13512)
 + * Properly evict pstmts from prepared statements cache (CASSANDRA-13641)
 +Merged from 3.0:
+  * Fix sstable reader to support range-tombstone-marker for multi-slices 
(CASSANDRA-13787)
   * Fix short read protection for tables with no clustering columns 
(CASSANDRA-13880)
   * Make isBuilt volatile in PartitionUpdate (CASSANDRA-13619)
   * Prevent integer overflow of timestamps in CellTest and RowsTest 
(CASSANDRA-13866)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/66115139/src/java/org/apache/cassandra/db/columniterator/AbstractSSTableIterator.java
--
diff --cc 
src/java/org/apache/cassandra/db/columniterator/AbstractSSTableIterator.java
index b6c60fe,f9e6545..c15416f
--- 
a/src/java/org/apache/cassandra/db/columniterator/AbstractSSTableIterator.java
+++ 
b/src/java/org/apache/cassandra/db/columniterator/AbstractSSTableIterator.java
@@@ -371,14 -329,7 +371,7 @@@ public abstract class AbstractSSTableIt
  openMarker = marker.isOpen(false) ? 
marker.openDeletionTime(false) : null;
  }
  
- protected DeletionTime getAndClearOpenMarker()
- {
- DeletionTime toReturn = openMarker;
- openMarker = null;
- return toReturn;
- }
- 
 -public boolean hasNext() 
 +public boolean hasNext()
  {
  try
  {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/66115139/src/java/org/apache/cassandra/db/columniterator/SSTableIterator.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/66115139/src/java/org/apache/cassandra/db/columniterator/SSTableReversedIterator.java
--
diff --cc 
src/java/org/apache/cassandra/db/columniterator/SSTableReversedIterator.java
index 88d415b,76d8c4d..cf8798d
--- 
a/src/java/org/apache/cassandra/db/columniterator/SSTableReversedIterator.java
+++ 
b/src/java/org/apache/cassandra/db/columniterator/SSTableReversedIterator.java
@@@ -249,8 -239,8 +249,8 @@@ public class SSTableReversedIterator ex
  // not breaking ImmutableBTreePartition, we should skip it 
when returning from the iterator, hence the
  // skipFirstIteratedItem (this is the last item of the block, 
but we're iterating in reverse order so it will
  // be the first returned by the iterator).
 -RangeTombstone.Bound markerEnd = end == null ? 
RangeTombstone.Bound.TOP : RangeTombstone.Bound.fromSlic

[3/6] cassandra git commit: Fix sstable reader to support range-tombstone-marker for multi-slices

2017-09-20 Thread slebresne
Fix sstable reader to support range-tombstone-marker for multi-slices

patch by Zhao Yang; reviewed by Sylvain Lebresne for CASSANDRA-13787


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/975c3d81
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/975c3d81
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/975c3d81

Branch: refs/heads/trunk
Commit: 975c3d81b67e9c1e1dcefdda3f90e8edf6be5efa
Parents: 35e32f2
Author: Zhao Yang 
Authored: Wed Aug 23 16:15:25 2017 +0800
Committer: Sylvain Lebresne 
Committed: Wed Sep 20 15:09:49 2017 +0200

--
 CHANGES.txt |   1 +
 .../columniterator/AbstractSSTableIterator.java |   7 -
 .../db/columniterator/SSTableIterator.java  |   8 +-
 .../columniterator/SSTableReversedIterator.java |   2 +-
 .../org/apache/cassandra/cql3/ViewTest.java |  49 +++
 .../db/SinglePartitionSliceCommandTest.java | 145 ++-
 6 files changed, 197 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/975c3d81/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 74e70e1..2d11a3e 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.15
+ * Fix sstable reader to support range-tombstone-marker for multi-slices 
(CASSANDRA-13787)
  * Fix short read protection for tables with no clustering columns 
(CASSANDRA-13880)
  * Make isBuilt volatile in PartitionUpdate (CASSANDRA-13619)
  * Prevent integer overflow of timestamps in CellTest and RowsTest 
(CASSANDRA-13866)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/975c3d81/src/java/org/apache/cassandra/db/columniterator/AbstractSSTableIterator.java
--
diff --git 
a/src/java/org/apache/cassandra/db/columniterator/AbstractSSTableIterator.java 
b/src/java/org/apache/cassandra/db/columniterator/AbstractSSTableIterator.java
index c61b6aa..f9e6545 100644
--- 
a/src/java/org/apache/cassandra/db/columniterator/AbstractSSTableIterator.java
+++ 
b/src/java/org/apache/cassandra/db/columniterator/AbstractSSTableIterator.java
@@ -329,13 +329,6 @@ abstract class AbstractSSTableIterator implements 
SliceableUnfilteredRowIterator
 openMarker = marker.isOpen(false) ? marker.openDeletionTime(false) 
: null;
 }
 
-protected DeletionTime getAndClearOpenMarker()
-{
-DeletionTime toReturn = openMarker;
-openMarker = null;
-return toReturn;
-}
-
 public boolean hasNext() 
 {
 try

http://git-wip-us.apache.org/repos/asf/cassandra/blob/975c3d81/src/java/org/apache/cassandra/db/columniterator/SSTableIterator.java
--
diff --git 
a/src/java/org/apache/cassandra/db/columniterator/SSTableIterator.java 
b/src/java/org/apache/cassandra/db/columniterator/SSTableIterator.java
index ff91871..47f85ac 100644
--- a/src/java/org/apache/cassandra/db/columniterator/SSTableIterator.java
+++ b/src/java/org/apache/cassandra/db/columniterator/SSTableIterator.java
@@ -173,14 +173,14 @@ public class SSTableIterator extends 
AbstractSSTableIterator
 if (next != null)
 return true;
 
-// If we have an open marker, we should close it before finishing
+// for current slice, no data read from deserialization
+sliceDone = true;
+// If we have an open marker, we should not close it, there could 
be more slices
 if (openMarker != null)
 {
-next = new RangeTombstoneBoundMarker(end, 
getAndClearOpenMarker());
+next = new RangeTombstoneBoundMarker(end, openMarker);
 return true;
 }
-
-sliceDone = true; // not absolutely necessary but accurate and 
cheap
 return false;
 }
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/975c3d81/src/java/org/apache/cassandra/db/columniterator/SSTableReversedIterator.java
--
diff --git 
a/src/java/org/apache/cassandra/db/columniterator/SSTableReversedIterator.java 
b/src/java/org/apache/cassandra/db/columniterator/SSTableReversedIterator.java
index b12ed67..76d8c4d 100644
--- 
a/src/java/org/apache/cassandra/db/columniterator/SSTableReversedIterator.java
+++ 
b/src/java/org/apache/cassandra/db/columniterator/SSTableReversedIterator.java
@@ -240,7 +240,7 @@ public class SSTableReversedIterator extends 
AbstractSSTableIterator
 // skipFirstIteratedItem (this is the last item of the block, 
but we're iterating in reverse order so it will
  

[5/6] cassandra git commit: Merge commit '975c3d81b67e9c1e1dcefdda3f90e8edf6be5efa' into cassandra-3.11

2017-09-20 Thread slebresne
Merge commit '975c3d81b67e9c1e1dcefdda3f90e8edf6be5efa' into cassandra-3.11

* commit '975c3d81b67e9c1e1dcefdda3f90e8edf6be5efa':
  Fix sstable reader to support range-tombstone-marker for multi-slices


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/66115139
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/66115139
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/66115139

Branch: refs/heads/cassandra-3.11
Commit: 66115139addfb2bb6e26fa85e4225a1178d2e99c
Parents: 594f1c1 975c3d8
Author: Sylvain Lebresne 
Authored: Wed Sep 20 15:11:58 2017 +0200
Committer: Sylvain Lebresne 
Committed: Wed Sep 20 15:12:40 2017 +0200

--
 CHANGES.txt |   1 +
 .../columniterator/AbstractSSTableIterator.java |   7 -
 .../db/columniterator/SSTableIterator.java  |   8 +-
 .../columniterator/SSTableReversedIterator.java |   2 +-
 .../org/apache/cassandra/cql3/ViewTest.java |  49 +++
 .../db/SinglePartitionSliceCommandTest.java | 145 ++-
 6 files changed, 197 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/66115139/CHANGES.txt
--
diff --cc CHANGES.txt
index 01955f6,2d11a3e..39270e5
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,15 -1,5 +1,16 @@@
 -3.0.15
 +3.11.1
 + * AbstractTokenTreeBuilder#serializedSize returns wrong value when there is 
a single leaf and overflow collisions (CASSANDRA-13869)
 + * Add a compaction option to TWCS to ignore sstables overlapping checks 
(CASSANDRA-13418)
 + * BTree.Builder memory leak (CASSANDRA-13754)
 + * Revert CASSANDRA-10368 of supporting non-pk column filtering due to 
correctness (CASSANDRA-13798)
 + * Fix cassandra-stress hang issues when an error during cluster connection 
happens (CASSANDRA-12938)
 + * Better bootstrap failure message when blocked by (potential) range 
movement (CASSANDRA-13744)
 + * "ignore" option is ignored in sstableloader (CASSANDRA-13721)
 + * Deadlock in AbstractCommitLogSegmentManager (CASSANDRA-13652)
 + * Duplicate the buffer before passing it to analyser in SASI operation 
(CASSANDRA-13512)
 + * Properly evict pstmts from prepared statements cache (CASSANDRA-13641)
 +Merged from 3.0:
+  * Fix sstable reader to support range-tombstone-marker for multi-slices 
(CASSANDRA-13787)
   * Fix short read protection for tables with no clustering columns 
(CASSANDRA-13880)
   * Make isBuilt volatile in PartitionUpdate (CASSANDRA-13619)
   * Prevent integer overflow of timestamps in CellTest and RowsTest 
(CASSANDRA-13866)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/66115139/src/java/org/apache/cassandra/db/columniterator/AbstractSSTableIterator.java
--
diff --cc 
src/java/org/apache/cassandra/db/columniterator/AbstractSSTableIterator.java
index b6c60fe,f9e6545..c15416f
--- 
a/src/java/org/apache/cassandra/db/columniterator/AbstractSSTableIterator.java
+++ 
b/src/java/org/apache/cassandra/db/columniterator/AbstractSSTableIterator.java
@@@ -371,14 -329,7 +371,7 @@@ public abstract class AbstractSSTableIt
  openMarker = marker.isOpen(false) ? 
marker.openDeletionTime(false) : null;
  }
  
- protected DeletionTime getAndClearOpenMarker()
- {
- DeletionTime toReturn = openMarker;
- openMarker = null;
- return toReturn;
- }
- 
 -public boolean hasNext() 
 +public boolean hasNext()
  {
  try
  {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/66115139/src/java/org/apache/cassandra/db/columniterator/SSTableIterator.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/66115139/src/java/org/apache/cassandra/db/columniterator/SSTableReversedIterator.java
--
diff --cc 
src/java/org/apache/cassandra/db/columniterator/SSTableReversedIterator.java
index 88d415b,76d8c4d..cf8798d
--- 
a/src/java/org/apache/cassandra/db/columniterator/SSTableReversedIterator.java
+++ 
b/src/java/org/apache/cassandra/db/columniterator/SSTableReversedIterator.java
@@@ -249,8 -239,8 +249,8 @@@ public class SSTableReversedIterator ex
  // not breaking ImmutableBTreePartition, we should skip it 
when returning from the iterator, hence the
  // skipFirstIteratedItem (this is the last item of the block, 
but we're iterating in reverse order so it will
  // be the first returned by the iterator).
 -RangeTombstone.Bound markerEnd = end == null ? 
RangeTombstone.Bound.TOP : RangeTombstone.Bound

[2/6] cassandra git commit: Fix sstable reader to support range-tombstone-marker for multi-slices

2017-09-20 Thread slebresne
Fix sstable reader to support range-tombstone-marker for multi-slices

patch by Zhao Yang; reviewed by Sylvain Lebresne for CASSANDRA-13787


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/975c3d81
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/975c3d81
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/975c3d81

Branch: refs/heads/cassandra-3.11
Commit: 975c3d81b67e9c1e1dcefdda3f90e8edf6be5efa
Parents: 35e32f2
Author: Zhao Yang 
Authored: Wed Aug 23 16:15:25 2017 +0800
Committer: Sylvain Lebresne 
Committed: Wed Sep 20 15:09:49 2017 +0200

--
 CHANGES.txt |   1 +
 .../columniterator/AbstractSSTableIterator.java |   7 -
 .../db/columniterator/SSTableIterator.java  |   8 +-
 .../columniterator/SSTableReversedIterator.java |   2 +-
 .../org/apache/cassandra/cql3/ViewTest.java |  49 +++
 .../db/SinglePartitionSliceCommandTest.java | 145 ++-
 6 files changed, 197 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/975c3d81/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 74e70e1..2d11a3e 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.15
+ * Fix sstable reader to support range-tombstone-marker for multi-slices 
(CASSANDRA-13787)
  * Fix short read protection for tables with no clustering columns 
(CASSANDRA-13880)
  * Make isBuilt volatile in PartitionUpdate (CASSANDRA-13619)
  * Prevent integer overflow of timestamps in CellTest and RowsTest 
(CASSANDRA-13866)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/975c3d81/src/java/org/apache/cassandra/db/columniterator/AbstractSSTableIterator.java
--
diff --git 
a/src/java/org/apache/cassandra/db/columniterator/AbstractSSTableIterator.java 
b/src/java/org/apache/cassandra/db/columniterator/AbstractSSTableIterator.java
index c61b6aa..f9e6545 100644
--- 
a/src/java/org/apache/cassandra/db/columniterator/AbstractSSTableIterator.java
+++ 
b/src/java/org/apache/cassandra/db/columniterator/AbstractSSTableIterator.java
@@ -329,13 +329,6 @@ abstract class AbstractSSTableIterator implements 
SliceableUnfilteredRowIterator
 openMarker = marker.isOpen(false) ? marker.openDeletionTime(false) 
: null;
 }
 
-protected DeletionTime getAndClearOpenMarker()
-{
-DeletionTime toReturn = openMarker;
-openMarker = null;
-return toReturn;
-}
-
 public boolean hasNext() 
 {
 try

http://git-wip-us.apache.org/repos/asf/cassandra/blob/975c3d81/src/java/org/apache/cassandra/db/columniterator/SSTableIterator.java
--
diff --git 
a/src/java/org/apache/cassandra/db/columniterator/SSTableIterator.java 
b/src/java/org/apache/cassandra/db/columniterator/SSTableIterator.java
index ff91871..47f85ac 100644
--- a/src/java/org/apache/cassandra/db/columniterator/SSTableIterator.java
+++ b/src/java/org/apache/cassandra/db/columniterator/SSTableIterator.java
@@ -173,14 +173,14 @@ public class SSTableIterator extends 
AbstractSSTableIterator
 if (next != null)
 return true;
 
-// If we have an open marker, we should close it before finishing
+// for current slice, no data read from deserialization
+sliceDone = true;
+// If we have an open marker, we should not close it, there could 
be more slices
 if (openMarker != null)
 {
-next = new RangeTombstoneBoundMarker(end, 
getAndClearOpenMarker());
+next = new RangeTombstoneBoundMarker(end, openMarker);
 return true;
 }
-
-sliceDone = true; // not absolutely necessary but accurate and 
cheap
 return false;
 }
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/975c3d81/src/java/org/apache/cassandra/db/columniterator/SSTableReversedIterator.java
--
diff --git 
a/src/java/org/apache/cassandra/db/columniterator/SSTableReversedIterator.java 
b/src/java/org/apache/cassandra/db/columniterator/SSTableReversedIterator.java
index b12ed67..76d8c4d 100644
--- 
a/src/java/org/apache/cassandra/db/columniterator/SSTableReversedIterator.java
+++ 
b/src/java/org/apache/cassandra/db/columniterator/SSTableReversedIterator.java
@@ -240,7 +240,7 @@ public class SSTableReversedIterator extends 
AbstractSSTableIterator
 // skipFirstIteratedItem (this is the last item of the block, 
but we're iterating in reverse order so it will
 

[1/6] cassandra git commit: Fix sstable reader to support range-tombstone-marker for multi-slices

2017-09-20 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 35e32f20b -> 975c3d81b
  refs/heads/cassandra-3.11 594f1c1df -> 66115139a
  refs/heads/trunk eb7669215 -> 9a6247482


Fix sstable reader to support range-tombstone-marker for multi-slices

patch by Zhao Yang; reviewed by Sylvain Lebresne for CASSANDRA-13787


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/975c3d81
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/975c3d81
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/975c3d81

Branch: refs/heads/cassandra-3.0
Commit: 975c3d81b67e9c1e1dcefdda3f90e8edf6be5efa
Parents: 35e32f2
Author: Zhao Yang 
Authored: Wed Aug 23 16:15:25 2017 +0800
Committer: Sylvain Lebresne 
Committed: Wed Sep 20 15:09:49 2017 +0200

--
 CHANGES.txt |   1 +
 .../columniterator/AbstractSSTableIterator.java |   7 -
 .../db/columniterator/SSTableIterator.java  |   8 +-
 .../columniterator/SSTableReversedIterator.java |   2 +-
 .../org/apache/cassandra/cql3/ViewTest.java |  49 +++
 .../db/SinglePartitionSliceCommandTest.java | 145 ++-
 6 files changed, 197 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/975c3d81/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 74e70e1..2d11a3e 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.15
+ * Fix sstable reader to support range-tombstone-marker for multi-slices 
(CASSANDRA-13787)
  * Fix short read protection for tables with no clustering columns 
(CASSANDRA-13880)
  * Make isBuilt volatile in PartitionUpdate (CASSANDRA-13619)
  * Prevent integer overflow of timestamps in CellTest and RowsTest 
(CASSANDRA-13866)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/975c3d81/src/java/org/apache/cassandra/db/columniterator/AbstractSSTableIterator.java
--
diff --git 
a/src/java/org/apache/cassandra/db/columniterator/AbstractSSTableIterator.java 
b/src/java/org/apache/cassandra/db/columniterator/AbstractSSTableIterator.java
index c61b6aa..f9e6545 100644
--- 
a/src/java/org/apache/cassandra/db/columniterator/AbstractSSTableIterator.java
+++ 
b/src/java/org/apache/cassandra/db/columniterator/AbstractSSTableIterator.java
@@ -329,13 +329,6 @@ abstract class AbstractSSTableIterator implements 
SliceableUnfilteredRowIterator
 openMarker = marker.isOpen(false) ? marker.openDeletionTime(false) 
: null;
 }
 
-protected DeletionTime getAndClearOpenMarker()
-{
-DeletionTime toReturn = openMarker;
-openMarker = null;
-return toReturn;
-}
-
 public boolean hasNext() 
 {
 try

http://git-wip-us.apache.org/repos/asf/cassandra/blob/975c3d81/src/java/org/apache/cassandra/db/columniterator/SSTableIterator.java
--
diff --git 
a/src/java/org/apache/cassandra/db/columniterator/SSTableIterator.java 
b/src/java/org/apache/cassandra/db/columniterator/SSTableIterator.java
index ff91871..47f85ac 100644
--- a/src/java/org/apache/cassandra/db/columniterator/SSTableIterator.java
+++ b/src/java/org/apache/cassandra/db/columniterator/SSTableIterator.java
@@ -173,14 +173,14 @@ public class SSTableIterator extends 
AbstractSSTableIterator
 if (next != null)
 return true;
 
-// If we have an open marker, we should close it before finishing
+// for current slice, no data read from deserialization
+sliceDone = true;
+// If we have an open marker, we should not close it, there could 
be more slices
 if (openMarker != null)
 {
-next = new RangeTombstoneBoundMarker(end, 
getAndClearOpenMarker());
+next = new RangeTombstoneBoundMarker(end, openMarker);
 return true;
 }
-
-sliceDone = true; // not absolutely necessary but accurate and 
cheap
 return false;
 }
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/975c3d81/src/java/org/apache/cassandra/db/columniterator/SSTableReversedIterator.java
--
diff --git 
a/src/java/org/apache/cassandra/db/columniterator/SSTableReversedIterator.java 
b/src/java/org/apache/cassandra/db/columniterator/SSTableReversedIterator.java
index b12ed67..76d8c4d 100644
--- 
a/src/java/org/apache/cassandra/db/columniterator/SSTableReversedIterator.java
+++ 
b/src/java/org/apache/cassandra/db/columniterator/SSTableReversedIterator.java
@@ -240,7 +240,7 @@ public class SS

[jira] [Updated] (CASSANDRA-13787) RangeTombstoneMarker and PartitionDeletion is not properly included in MV

2017-09-20 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-13787:
-
   Resolution: Fixed
Fix Version/s: 3.11.1
   3.0.15
   Status: Resolved  (was: Ready to Commit)

Alright, committed, thanks.

> RangeTombstoneMarker and PartitionDeletion is not properly included in MV
> -
>
> Key: CASSANDRA-13787
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13787
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths, Materialized Views
>Reporter: ZhaoYang
>Assignee: ZhaoYang
> Fix For: 3.0.15, 3.11.1
>
>
> Found two problems related to MV tombstone. 
> 1. Range-tombstone-Marker being ignored after shadowing first row, subsequent 
> base rows are not shadowed in TableViews.
> If the range tombstone was not flushed, it was used as deleted row to 
> shadow new updates. It works correctly.
> After range tombstone was flushed, it was used as RangeTombstoneMarker 
> and being skipped after shadowing first update. The bound of 
> RangeTombstoneMarker seems wrong, it contained full clustering, but it should 
> contain range or it should be multiple RangeTombstoneMarkers for multiple 
> slices (a.k.a. the new updates).
> -2. Partition tombstone is not applied when there is no existing live data, so it 
> will resurrect deleted cells. It was found in CASSANDRA-11500 and included in that patch.- 
> (Merged in CASSANDRA-11500)
> In order not to make the CASSANDRA-11500 patch more complicated, I will try to fix the 
> range/partition tombstone issue here.
> {code:title=Tests to reproduce}
> @Test
> public void testExistingRangeTombstoneWithFlush() throws Throwable
> {
> testExistingRangeTombstone(true);
> }
> @Test
> public void testExistingRangeTombstoneWithoutFlush() throws Throwable
> {
> testExistingRangeTombstone(false);
> }
> public void testExistingRangeTombstone(boolean flush) throws Throwable
> {
> createTable("CREATE TABLE %s (k1 int, c1 int, c2 int, v1 int, v2 int, 
> PRIMARY KEY (k1, c1, c2))");
> execute("USE " + keyspace());
> executeNet(protocolVersion, "USE " + keyspace());
> createView("view1",
>"CREATE MATERIALIZED VIEW view1 AS SELECT * FROM %%s WHERE 
> k1 IS NOT NULL AND c1 IS NOT NULL AND c2 IS NOT NULL PRIMARY KEY (k1, c2, 
> c1)");
> updateView("DELETE FROM %s USING TIMESTAMP 10 WHERE k1 = 1 and c1=1");
> if (flush)
> 
> Keyspace.open(keyspace()).getColumnFamilyStore(currentTable()).forceBlockingFlush();
> String table = KEYSPACE + "." + currentTable();
> updateView("BEGIN BATCH " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 0, 
> 0, 0, 0) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 0, 
> 1, 0, 1) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 1, 
> 0, 1, 0) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 1, 
> 1, 1, 1) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 1, 
> 2, 1, 2) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 1, 
> 3, 1, 3) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 2, 
> 0, 2, 0) USING TIMESTAMP 5; " +
> "APPLY BATCH");
> assertRowsIgnoringOrder(execute("select * from %s"),
> row(1, 0, 0, 0, 0),
> row(1, 0, 1, 0, 1),
> row(1, 2, 0, 2, 0));
> assertRowsIgnoringOrder(execute("select k1,c1,c2,v1,v2 from view1"),
> row(1, 0, 0, 0, 0),
> row(1, 0, 1, 0, 1),
> row(1, 2, 0, 2, 0));
> }
> @Test
> public void testExistingParitionDeletionWithFlush() throws Throwable
> {
> testExistingParitionDeletion(true);
> }
> @Test
> public void testExistingParitionDeletionWithoutFlush() throws Throwable
> {
> testExistingParitionDeletion(false);
> }
> public void testExistingParitionDeletion(boolean flush) throws Throwable
> {
> // for partition range deletion, we need to know that the existing row is 
> // shadowed rather than non-existent.
> createTable("CREATE TABLE %s (a int, b int, c int, d int, PRIMARY KEY 
> (a))");
> execute("USE " + keyspace());
> executeNet(protocolVersion, "USE " + keyspace());
> createView("mv_test1",
>"CREATE MATERIALIZED VIEW 

[jira] [Updated] (CASSANDRA-13006) Disable automatic heap dumps on OOM error

2017-09-20 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-13006:
---
Fix Version/s: (was: 3.0.9)
   3.0.15

> Disable automatic heap dumps on OOM error
> -
>
> Key: CASSANDRA-13006
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13006
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
>Reporter: anmols
>Assignee: Benjamin Lerer
>Priority: Minor
> Fix For: 3.0.15
>
> Attachments: 13006-3.0.9.txt
>
>
> With CASSANDRA-9861, a change was added to enable collecting heap dumps by 
> default if the process encountered an OOM error. These heap dumps are stored 
> in the Apache Cassandra home directory unless configured otherwise (see 
> [Cassandra Support 
> Document|https://support.datastax.com/hc/en-us/articles/204225959-Generating-and-Analyzing-Heap-Dumps]
>  for this feature).
>  
> The creation and storage of heap dumps aids debugging and investigative 
> workflows, but is not always desirable in a production environment, where these 
> heap dumps may occupy a large amount of disk space and require manual 
> intervention to clean up. 
>  
> Managing heap dumps on out-of-memory errors and configuring the paths for 
> these heap dumps are already available as standard JVM options. The current 
> behavior conflicts with the boolean JVM flag HeapDumpOnOutOfMemoryError. 
>  
> A patch is proposed here that makes the heap dump on OOM error honor 
> the HeapDumpOnOutOfMemoryError flag. Users who still want to generate 
> heap dumps on OOM errors can set the -XX:+HeapDumpOnOutOfMemoryError JVM 
> option.
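
For illustration only (this is not part of the ticket or the attached patch): with such a change, operators who still want heap dumps on OOM could opt back in through the standard HotSpot flags, e.g. via cassandra-env.sh. The dump path below is a placeholder.
{code}
# Hypothetical opt-in, appended to cassandra-env.sh; the path is an example only.
JVM_OPTS="$JVM_OPTS -XX:+HeapDumpOnOutOfMemoryError"
JVM_OPTS="$JVM_OPTS -XX:HeapDumpPath=/var/lib/cassandra/heapdumps"
{code}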






[13/15] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2017-09-20 Thread jasobrown
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c1efaf3a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c1efaf3a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c1efaf3a

Branch: refs/heads/cassandra-3.11
Commit: c1efaf3a72a1efae983b66599416081719dbf5ff
Parents: 6611513 ab5084a
Author: Jason Brown 
Authored: Wed Sep 20 07:06:02 2017 -0700
Committer: Jason Brown 
Committed: Wed Sep 20 07:06:02 2017 -0700

--

--






[10/15] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2017-09-20 Thread jasobrown
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ab5084a5
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ab5084a5
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ab5084a5

Branch: refs/heads/trunk
Commit: ab5084a529254b3d7494409ce5a77c35ebcbccf9
Parents: 975c3d8 405ad00
Author: Jason Brown 
Authored: Wed Sep 20 07:05:37 2017 -0700
Committer: Jason Brown 
Committed: Wed Sep 20 07:05:37 2017 -0700

--

--






[12/15] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2017-09-20 Thread jasobrown
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ab5084a5
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ab5084a5
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ab5084a5

Branch: refs/heads/cassandra-3.0
Commit: ab5084a529254b3d7494409ce5a77c35ebcbccf9
Parents: 975c3d8 405ad00
Author: Jason Brown 
Authored: Wed Sep 20 07:05:37 2017 -0700
Committer: Jason Brown 
Committed: Wed Sep 20 07:05:37 2017 -0700

--

--






[04/15] cassandra git commit: Add storage port options to sstableloader

2017-09-20 Thread jasobrown
Add storage port options to sstableloader

patch by Eduard Tudenhoefner; reviewed by Alex Petrov for CASSANDRA-13844


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/428eaa3e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/428eaa3e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/428eaa3e

Branch: refs/heads/cassandra-3.0
Commit: 428eaa3e37cab7227c81fdf124d29dfc1db4257c
Parents: 665f693
Author: Eduard Tudenhoefner 
Authored: Tue Sep 5 09:01:32 2017 -0700
Committer: Jason Brown 
Committed: Wed Sep 20 06:57:35 2017 -0700

--
 CHANGES.txt |  1 +
 .../org/apache/cassandra/tools/BulkLoader.java  | 38 
 2 files changed, 33 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/428eaa3e/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 4f8f65f..848628b 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.19
+ * Add storage port options to sstableloader (CASSANDRA-13844)
  * Remove stress-test target in CircleCI as it's not existing 
(CASSANDRA-13775) 
  * Clone HeartBeatState when building gossip messages. Make its 
generation/version volatile (CASSANDRA-13700)
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/428eaa3e/src/java/org/apache/cassandra/tools/BulkLoader.java
--
diff --git a/src/java/org/apache/cassandra/tools/BulkLoader.java 
b/src/java/org/apache/cassandra/tools/BulkLoader.java
index 0b1a1d4..52f9467 100644
--- a/src/java/org/apache/cassandra/tools/BulkLoader.java
+++ b/src/java/org/apache/cassandra/tools/BulkLoader.java
@@ -56,6 +56,8 @@ public class BulkLoader
 private static final String IGNORE_NODES_OPTION  = "ignore";
 private static final String INITIAL_HOST_ADDRESS_OPTION = "nodes";
 private static final String RPC_PORT_OPTION = "port";
+private static final String STORAGE_PORT_OPTION = "storage-port";
+private static final String SSL_STORAGE_PORT_OPTION = "ssl-storage-port";
 private static final String USER_OPTION = "username";
 private static final String PASSWD_OPTION = "password";
 private static final String THROTTLE_MBITS = "throttle";
@@ -399,7 +401,7 @@ public class BulkLoader
 public boolean debug;
 public boolean verbose;
 public boolean noProgress;
-public int rpcPort = 9160;
+public int rpcPort;
 public String user;
 public String passwd;
 public int throttle = 0;
@@ -462,9 +464,6 @@ public class BulkLoader
 opts.verbose = cmd.hasOption(VERBOSE_OPTION);
 opts.noProgress = cmd.hasOption(NOPROGRESS_OPTION);
 
-if (cmd.hasOption(RPC_PORT_OPTION))
-opts.rpcPort = 
Integer.parseInt(cmd.getOptionValue(RPC_PORT_OPTION));
-
 if (cmd.hasOption(USER_OPTION))
 opts.user = cmd.getOptionValue(USER_OPTION);
 
@@ -532,13 +531,38 @@ public class BulkLoader
 config.stream_throughput_outbound_megabits_per_sec = 0;
 
config.inter_dc_stream_throughput_outbound_megabits_per_sec = 0;
 }
-opts.storagePort = config.storage_port;
-opts.sslStoragePort = config.ssl_storage_port;
 opts.throttle = 
config.stream_throughput_outbound_megabits_per_sec;
 opts.interDcThrottle = 
config.inter_dc_stream_throughput_outbound_megabits_per_sec;
 opts.encOptions = config.client_encryption_options;
 opts.serverEncOptions = config.server_encryption_options;
 
+if (cmd.hasOption(RPC_PORT_OPTION))
+{
+opts.rpcPort = 
Integer.parseInt(cmd.getOptionValue(RPC_PORT_OPTION));
+}
+else
+{
+opts.rpcPort = config.rpc_port;
+}
+
+if (cmd.hasOption(STORAGE_PORT_OPTION))
+{
+opts.storagePort = 
Integer.parseInt(cmd.getOptionValue(STORAGE_PORT_OPTION));
+}
+else
+{
+opts.storagePort = config.storage_port;
+}
+
+if (cmd.hasOption(SSL_STORAGE_PORT_OPTION))
+{
+opts.sslStoragePort = 
Integer.parseInt(cmd.getOptionValue(SSL_STORAGE_PORT_OPTION));
+}
+else
+{
+opts.sslStoragePort = config.ssl_storage_port;
+}
+
 if (cmd.hasOption(THROTTLE_MBITS))
 {
 opts.thrott
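
As a usage illustration (not part of the commit): assuming the two new constants are exposed as long-form command line flags, overriding the ports at load time might look like the sketch below; the host, port numbers and sstable directory are placeholders.
{code}
# Hypothetical invocation; --storage-port/--ssl-storage-port mirror the option
# names added by the patch, all other values are examples only.
sstableloader --nodes 10.0.0.1 --storage-port 7000 --ssl-storage-port 7001 \
    /path/to/ks/table
{code}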

[02/15] cassandra git commit: Add storage port options to sstableloader

2017-09-20 Thread jasobrown
Add storage port options to sstableloader

patch by Eduard Tudenhoefner; reviewed by Alex Petrov for CASSANDRA-13844


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/428eaa3e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/428eaa3e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/428eaa3e

Branch: refs/heads/cassandra-2.2
Commit: 428eaa3e37cab7227c81fdf124d29dfc1db4257c
Parents: 665f693
Author: Eduard Tudenhoefner 
Authored: Tue Sep 5 09:01:32 2017 -0700
Committer: Jason Brown 
Committed: Wed Sep 20 06:57:35 2017 -0700

--
 CHANGES.txt |  1 +
 .../org/apache/cassandra/tools/BulkLoader.java  | 38 
 2 files changed, 33 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/428eaa3e/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 4f8f65f..848628b 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.19
+ * Add storage port options to sstableloader (CASSANDRA-13844)
  * Remove stress-test target in CircleCI as it's not existing 
(CASSANDRA-13775) 
  * Clone HeartBeatState when building gossip messages. Make its 
generation/version volatile (CASSANDRA-13700)
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/428eaa3e/src/java/org/apache/cassandra/tools/BulkLoader.java
--
diff --git a/src/java/org/apache/cassandra/tools/BulkLoader.java 
b/src/java/org/apache/cassandra/tools/BulkLoader.java
index 0b1a1d4..52f9467 100644
--- a/src/java/org/apache/cassandra/tools/BulkLoader.java
+++ b/src/java/org/apache/cassandra/tools/BulkLoader.java
@@ -56,6 +56,8 @@ public class BulkLoader
 private static final String IGNORE_NODES_OPTION  = "ignore";
 private static final String INITIAL_HOST_ADDRESS_OPTION = "nodes";
 private static final String RPC_PORT_OPTION = "port";
+private static final String STORAGE_PORT_OPTION = "storage-port";
+private static final String SSL_STORAGE_PORT_OPTION = "ssl-storage-port";
 private static final String USER_OPTION = "username";
 private static final String PASSWD_OPTION = "password";
 private static final String THROTTLE_MBITS = "throttle";
@@ -399,7 +401,7 @@ public class BulkLoader
 public boolean debug;
 public boolean verbose;
 public boolean noProgress;
-public int rpcPort = 9160;
+public int rpcPort;
 public String user;
 public String passwd;
 public int throttle = 0;
@@ -462,9 +464,6 @@ public class BulkLoader
 opts.verbose = cmd.hasOption(VERBOSE_OPTION);
 opts.noProgress = cmd.hasOption(NOPROGRESS_OPTION);
 
-if (cmd.hasOption(RPC_PORT_OPTION))
-opts.rpcPort = 
Integer.parseInt(cmd.getOptionValue(RPC_PORT_OPTION));
-
 if (cmd.hasOption(USER_OPTION))
 opts.user = cmd.getOptionValue(USER_OPTION);
 
@@ -532,13 +531,38 @@ public class BulkLoader
 config.stream_throughput_outbound_megabits_per_sec = 0;
 
config.inter_dc_stream_throughput_outbound_megabits_per_sec = 0;
 }
-opts.storagePort = config.storage_port;
-opts.sslStoragePort = config.ssl_storage_port;
 opts.throttle = 
config.stream_throughput_outbound_megabits_per_sec;
 opts.interDcThrottle = 
config.inter_dc_stream_throughput_outbound_megabits_per_sec;
 opts.encOptions = config.client_encryption_options;
 opts.serverEncOptions = config.server_encryption_options;
 
+if (cmd.hasOption(RPC_PORT_OPTION))
+{
+opts.rpcPort = 
Integer.parseInt(cmd.getOptionValue(RPC_PORT_OPTION));
+}
+else
+{
+opts.rpcPort = config.rpc_port;
+}
+
+if (cmd.hasOption(STORAGE_PORT_OPTION))
+{
+opts.storagePort = 
Integer.parseInt(cmd.getOptionValue(STORAGE_PORT_OPTION));
+}
+else
+{
+opts.storagePort = config.storage_port;
+}
+
+if (cmd.hasOption(SSL_STORAGE_PORT_OPTION))
+{
+opts.sslStoragePort = 
Integer.parseInt(cmd.getOptionValue(SSL_STORAGE_PORT_OPTION));
+}
+else
+{
+opts.sslStoragePort = config.ssl_storage_port;
+}
+
 if (cmd.hasOption(THROTTLE_MBITS))
 {
 opts.thrott

[05/15] cassandra git commit: Add storage port options to sstableloader

2017-09-20 Thread jasobrown
Add storage port options to sstableloader

patch by Eduard Tudenhoefner; reviewed by Alex Petrov for CASSANDRA-13844


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/428eaa3e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/428eaa3e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/428eaa3e

Branch: refs/heads/cassandra-3.11
Commit: 428eaa3e37cab7227c81fdf124d29dfc1db4257c
Parents: 665f693
Author: Eduard Tudenhoefner 
Authored: Tue Sep 5 09:01:32 2017 -0700
Committer: Jason Brown 
Committed: Wed Sep 20 06:57:35 2017 -0700

--
 CHANGES.txt |  1 +
 .../org/apache/cassandra/tools/BulkLoader.java  | 38 
 2 files changed, 33 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/428eaa3e/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 4f8f65f..848628b 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.19
+ * Add storage port options to sstableloader (CASSANDRA-13844)
  * Remove stress-test target in CircleCI as it's not existing 
(CASSANDRA-13775) 
  * Clone HeartBeatState when building gossip messages. Make its 
generation/version volatile (CASSANDRA-13700)
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/428eaa3e/src/java/org/apache/cassandra/tools/BulkLoader.java
--
diff --git a/src/java/org/apache/cassandra/tools/BulkLoader.java 
b/src/java/org/apache/cassandra/tools/BulkLoader.java
index 0b1a1d4..52f9467 100644
--- a/src/java/org/apache/cassandra/tools/BulkLoader.java
+++ b/src/java/org/apache/cassandra/tools/BulkLoader.java
@@ -56,6 +56,8 @@ public class BulkLoader
 private static final String IGNORE_NODES_OPTION  = "ignore";
 private static final String INITIAL_HOST_ADDRESS_OPTION = "nodes";
 private static final String RPC_PORT_OPTION = "port";
+private static final String STORAGE_PORT_OPTION = "storage-port";
+private static final String SSL_STORAGE_PORT_OPTION = "ssl-storage-port";
 private static final String USER_OPTION = "username";
 private static final String PASSWD_OPTION = "password";
 private static final String THROTTLE_MBITS = "throttle";
@@ -399,7 +401,7 @@ public class BulkLoader
 public boolean debug;
 public boolean verbose;
 public boolean noProgress;
-public int rpcPort = 9160;
+public int rpcPort;
 public String user;
 public String passwd;
 public int throttle = 0;
@@ -462,9 +464,6 @@ public class BulkLoader
 opts.verbose = cmd.hasOption(VERBOSE_OPTION);
 opts.noProgress = cmd.hasOption(NOPROGRESS_OPTION);
 
-if (cmd.hasOption(RPC_PORT_OPTION))
-opts.rpcPort = 
Integer.parseInt(cmd.getOptionValue(RPC_PORT_OPTION));
-
 if (cmd.hasOption(USER_OPTION))
 opts.user = cmd.getOptionValue(USER_OPTION);
 
@@ -532,13 +531,38 @@ public class BulkLoader
 config.stream_throughput_outbound_megabits_per_sec = 0;
 
config.inter_dc_stream_throughput_outbound_megabits_per_sec = 0;
 }
-opts.storagePort = config.storage_port;
-opts.sslStoragePort = config.ssl_storage_port;
 opts.throttle = 
config.stream_throughput_outbound_megabits_per_sec;
 opts.interDcThrottle = 
config.inter_dc_stream_throughput_outbound_megabits_per_sec;
 opts.encOptions = config.client_encryption_options;
 opts.serverEncOptions = config.server_encryption_options;
 
+if (cmd.hasOption(RPC_PORT_OPTION))
+{
+opts.rpcPort = 
Integer.parseInt(cmd.getOptionValue(RPC_PORT_OPTION));
+}
+else
+{
+opts.rpcPort = config.rpc_port;
+}
+
+if (cmd.hasOption(STORAGE_PORT_OPTION))
+{
+opts.storagePort = 
Integer.parseInt(cmd.getOptionValue(STORAGE_PORT_OPTION));
+}
+else
+{
+opts.storagePort = config.storage_port;
+}
+
+if (cmd.hasOption(SSL_STORAGE_PORT_OPTION))
+{
+opts.sslStoragePort = 
Integer.parseInt(cmd.getOptionValue(SSL_STORAGE_PORT_OPTION));
+}
+else
+{
+opts.sslStoragePort = config.ssl_storage_port;
+}
+
 if (cmd.hasOption(THROTTLE_MBITS))
 {
 opts.throt

[15/15] cassandra git commit: Merge branch 'cassandra-3.11' into trunk

2017-09-20 Thread jasobrown
Merge branch 'cassandra-3.11' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4809f427
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4809f427
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4809f427

Branch: refs/heads/trunk
Commit: 4809f4275c3ed0e53eebc5e9a9304f60f07e8c38
Parents: 9a62474 c1efaf3
Author: Jason Brown 
Authored: Wed Sep 20 07:06:20 2017 -0700
Committer: Jason Brown 
Committed: Wed Sep 20 07:06:20 2017 -0700

--

--






[01/15] cassandra git commit: Add storage port options to sstableloader

2017-09-20 Thread jasobrown
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 665f69370 -> 428eaa3e3
  refs/heads/cassandra-2.2 a8e2dc524 -> 405ad0099
  refs/heads/cassandra-3.0 975c3d81b -> ab5084a52
  refs/heads/cassandra-3.11 66115139a -> c1efaf3a7
  refs/heads/trunk 9a6247482 -> 4809f4275


Add storage port options to sstableloader

patch by Eduard Tudenhoefner; reviewed by Alex Petrov for CASSANDRA-13844


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/428eaa3e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/428eaa3e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/428eaa3e

Branch: refs/heads/cassandra-2.1
Commit: 428eaa3e37cab7227c81fdf124d29dfc1db4257c
Parents: 665f693
Author: Eduard Tudenhoefner 
Authored: Tue Sep 5 09:01:32 2017 -0700
Committer: Jason Brown 
Committed: Wed Sep 20 06:57:35 2017 -0700

--
 CHANGES.txt |  1 +
 .../org/apache/cassandra/tools/BulkLoader.java  | 38 
 2 files changed, 33 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/428eaa3e/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 4f8f65f..848628b 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.19
+ * Add storage port options to sstableloader (CASSANDRA-13844)
  * Remove stress-test target in CircleCI as it's not existing 
(CASSANDRA-13775) 
  * Clone HeartBeatState when building gossip messages. Make its 
generation/version volatile (CASSANDRA-13700)
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/428eaa3e/src/java/org/apache/cassandra/tools/BulkLoader.java
--
diff --git a/src/java/org/apache/cassandra/tools/BulkLoader.java 
b/src/java/org/apache/cassandra/tools/BulkLoader.java
index 0b1a1d4..52f9467 100644
--- a/src/java/org/apache/cassandra/tools/BulkLoader.java
+++ b/src/java/org/apache/cassandra/tools/BulkLoader.java
@@ -56,6 +56,8 @@ public class BulkLoader
 private static final String IGNORE_NODES_OPTION  = "ignore";
 private static final String INITIAL_HOST_ADDRESS_OPTION = "nodes";
 private static final String RPC_PORT_OPTION = "port";
+private static final String STORAGE_PORT_OPTION = "storage-port";
+private static final String SSL_STORAGE_PORT_OPTION = "ssl-storage-port";
 private static final String USER_OPTION = "username";
 private static final String PASSWD_OPTION = "password";
 private static final String THROTTLE_MBITS = "throttle";
@@ -399,7 +401,7 @@ public class BulkLoader
 public boolean debug;
 public boolean verbose;
 public boolean noProgress;
-public int rpcPort = 9160;
+public int rpcPort;
 public String user;
 public String passwd;
 public int throttle = 0;
@@ -462,9 +464,6 @@ public class BulkLoader
 opts.verbose = cmd.hasOption(VERBOSE_OPTION);
 opts.noProgress = cmd.hasOption(NOPROGRESS_OPTION);
 
-if (cmd.hasOption(RPC_PORT_OPTION))
-opts.rpcPort = 
Integer.parseInt(cmd.getOptionValue(RPC_PORT_OPTION));
-
 if (cmd.hasOption(USER_OPTION))
 opts.user = cmd.getOptionValue(USER_OPTION);
 
@@ -532,13 +531,38 @@ public class BulkLoader
 config.stream_throughput_outbound_megabits_per_sec = 0;
 
config.inter_dc_stream_throughput_outbound_megabits_per_sec = 0;
 }
-opts.storagePort = config.storage_port;
-opts.sslStoragePort = config.ssl_storage_port;
 opts.throttle = 
config.stream_throughput_outbound_megabits_per_sec;
 opts.interDcThrottle = 
config.inter_dc_stream_throughput_outbound_megabits_per_sec;
 opts.encOptions = config.client_encryption_options;
 opts.serverEncOptions = config.server_encryption_options;
 
+if (cmd.hasOption(RPC_PORT_OPTION))
+{
+opts.rpcPort = 
Integer.parseInt(cmd.getOptionValue(RPC_PORT_OPTION));
+}
+else
+{
+opts.rpcPort = config.rpc_port;
+}
+
+if (cmd.hasOption(STORAGE_PORT_OPTION))
+{
+opts.storagePort = 
Integer.parseInt(cmd.getOptionValue(STORAGE_PORT_OPTION));
+}
+else
+{
+opts.storagePort = config.storage_port;
+}
+
+if (cmd.hasOption(SSL_STORAGE_PORT_OPTION))
+{
+opts.sslStoragePort = 
Integer.parseInt(cmd.getOption

[09/15] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2017-09-20 Thread jasobrown
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/405ad009
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/405ad009
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/405ad009

Branch: refs/heads/cassandra-3.11
Commit: 405ad0099bf6318fa47072b32c3d6ad2cbc68c41
Parents: a8e2dc5 428eaa3
Author: Jason Brown 
Authored: Wed Sep 20 06:58:02 2017 -0700
Committer: Jason Brown 
Committed: Wed Sep 20 07:05:13 2017 -0700

--
 CHANGES.txt |  1 +
 .../org/apache/cassandra/tools/BulkLoader.java  | 38 
 2 files changed, 33 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/405ad009/CHANGES.txt
--
diff --cc CHANGES.txt
index 0af156f,848628b..0b3421f
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,18 -1,6 +1,19 @@@
 -2.1.19
 +2.2.11
 + * Safely handle empty buffers when outputting to JSON (CASSANDRA-13868)
 + * Copy session properties on cqlsh.py do_login (CASSANDRA-13847)
 + * Fix load over calculated issue in IndexSummaryRedistribution 
(CASSANDRA-13738)
 + * Fix compaction and flush exception not captured (CASSANDRA-13833)
 + * Make BatchlogManagerMBean.forceBatchlogReplay() blocking (CASSANDRA-13809)
 + * Uncaught exceptions in Netty pipeline (CASSANDRA-13649)
 + * Prevent integer overflow on exabyte filesystems (CASSANDRA-13067) 
 + * Fix queries with LIMIT and filtering on clustering columns 
(CASSANDRA-11223)
 + * Fix potential NPE when resume bootstrap fails (CASSANDRA-13272)
 + * Fix toJSONString for the UDT, tuple and collection types (CASSANDRA-13592)
 + * Fix nested Tuples/UDTs validation (CASSANDRA-13646)
 + * Remove unused max_value_size_in_mb config setting from yaml 
(CASSANDRA-13625
 +Merged from 2.1:
+  * Add storage port options to sstableloader (CASSANDRA-13844)
 - * Remove stress-test target in CircleCI as it's not existing 
(CASSANDRA-13775) 
 + * Remove stress-test target in CircleCI as it's not existing 
(CASSANDRA-13775)
   * Clone HeartBeatState when building gossip messages. Make its 
generation/version volatile (CASSANDRA-13700)
  
  

http://git-wip-us.apache.org/repos/asf/cassandra/blob/405ad009/src/java/org/apache/cassandra/tools/BulkLoader.java
--
diff --cc src/java/org/apache/cassandra/tools/BulkLoader.java
index 7d0fdc8,52f9467..093a063
--- a/src/java/org/apache/cassandra/tools/BulkLoader.java
+++ b/src/java/org/apache/cassandra/tools/BulkLoader.java
@@@ -47,7 -55,9 +47,9 @@@ public class BulkLoade
  private static final String NOPROGRESS_OPTION  = "no-progress";
  private static final String IGNORE_NODES_OPTION  = "ignore";
  private static final String INITIAL_HOST_ADDRESS_OPTION = "nodes";
 -private static final String RPC_PORT_OPTION = "port";
 +private static final String NATIVE_PORT_OPTION = "port";
+ private static final String STORAGE_PORT_OPTION = "storage-port";
+ private static final String SSL_STORAGE_PORT_OPTION = "ssl-storage-port";
  private static final String USER_OPTION = "username";
  private static final String PASSWD_OPTION = "password";
  private static final String THROTTLE_MBITS = "throttle";
@@@ -306,7 -401,7 +308,7 @@@
  public boolean debug;
  public boolean verbose;
  public boolean noProgress;
- public int nativePort = 9042;
 -public int rpcPort;
++public int nativePort;
  public String user;
  public String passwd;
  public int throttle = 0;
@@@ -438,13 -531,38 +437,38 @@@
  config.stream_throughput_outbound_megabits_per_sec = 0;
  
config.inter_dc_stream_throughput_outbound_megabits_per_sec = 0;
  }
- opts.storagePort = config.storage_port;
- opts.sslStoragePort = config.ssl_storage_port;
  opts.throttle = 
config.stream_throughput_outbound_megabits_per_sec;
  opts.interDcThrottle = 
config.inter_dc_stream_throughput_outbound_megabits_per_sec;
 -opts.encOptions = config.client_encryption_options;
 +opts.clientEncOptions = config.client_encryption_options;
  opts.serverEncOptions = config.server_encryption_options;
  
 -if (cmd.hasOption(RPC_PORT_OPTION))
++if (cmd.hasOption(NATIVE_PORT_OPTION))
+ {
 -opts.rpcPort = 
Integer.parseInt(cmd.getOptionValue(RPC_PORT_OPTION));
++opts.nativePort = 
Integer.parseInt(cmd.getOptionValue(NATIVE_PORT_OPTION));
+ }
+ else
+ 

[03/15] cassandra git commit: Add storage port options to sstableloader

2017-09-20 Thread jasobrown
Add storage port options to sstableloader

patch by Eduard Tudenhoefner; reviewed by Alex Petrov for CASSANDRA-13844


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/428eaa3e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/428eaa3e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/428eaa3e

Branch: refs/heads/trunk
Commit: 428eaa3e37cab7227c81fdf124d29dfc1db4257c
Parents: 665f693
Author: Eduard Tudenhoefner 
Authored: Tue Sep 5 09:01:32 2017 -0700
Committer: Jason Brown 
Committed: Wed Sep 20 06:57:35 2017 -0700

--
 CHANGES.txt |  1 +
 .../org/apache/cassandra/tools/BulkLoader.java  | 38 
 2 files changed, 33 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/428eaa3e/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 4f8f65f..848628b 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.19
+ * Add storage port options to sstableloader (CASSANDRA-13844)
  * Remove stress-test target in CircleCI as it's not existing 
(CASSANDRA-13775) 
  * Clone HeartBeatState when building gossip messages. Make its 
generation/version volatile (CASSANDRA-13700)
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/428eaa3e/src/java/org/apache/cassandra/tools/BulkLoader.java
--
diff --git a/src/java/org/apache/cassandra/tools/BulkLoader.java 
b/src/java/org/apache/cassandra/tools/BulkLoader.java
index 0b1a1d4..52f9467 100644
--- a/src/java/org/apache/cassandra/tools/BulkLoader.java
+++ b/src/java/org/apache/cassandra/tools/BulkLoader.java
@@ -56,6 +56,8 @@ public class BulkLoader
 private static final String IGNORE_NODES_OPTION  = "ignore";
 private static final String INITIAL_HOST_ADDRESS_OPTION = "nodes";
 private static final String RPC_PORT_OPTION = "port";
+private static final String STORAGE_PORT_OPTION = "storage-port";
+private static final String SSL_STORAGE_PORT_OPTION = "ssl-storage-port";
 private static final String USER_OPTION = "username";
 private static final String PASSWD_OPTION = "password";
 private static final String THROTTLE_MBITS = "throttle";
@@ -399,7 +401,7 @@ public class BulkLoader
 public boolean debug;
 public boolean verbose;
 public boolean noProgress;
-public int rpcPort = 9160;
+public int rpcPort;
 public String user;
 public String passwd;
 public int throttle = 0;
@@ -462,9 +464,6 @@ public class BulkLoader
 opts.verbose = cmd.hasOption(VERBOSE_OPTION);
 opts.noProgress = cmd.hasOption(NOPROGRESS_OPTION);
 
-if (cmd.hasOption(RPC_PORT_OPTION))
-opts.rpcPort = 
Integer.parseInt(cmd.getOptionValue(RPC_PORT_OPTION));
-
 if (cmd.hasOption(USER_OPTION))
 opts.user = cmd.getOptionValue(USER_OPTION);
 
@@ -532,13 +531,38 @@ public class BulkLoader
 config.stream_throughput_outbound_megabits_per_sec = 0;
 
config.inter_dc_stream_throughput_outbound_megabits_per_sec = 0;
 }
-opts.storagePort = config.storage_port;
-opts.sslStoragePort = config.ssl_storage_port;
 opts.throttle = 
config.stream_throughput_outbound_megabits_per_sec;
 opts.interDcThrottle = 
config.inter_dc_stream_throughput_outbound_megabits_per_sec;
 opts.encOptions = config.client_encryption_options;
 opts.serverEncOptions = config.server_encryption_options;
 
+if (cmd.hasOption(RPC_PORT_OPTION))
+{
+opts.rpcPort = 
Integer.parseInt(cmd.getOptionValue(RPC_PORT_OPTION));
+}
+else
+{
+opts.rpcPort = config.rpc_port;
+}
+
+if (cmd.hasOption(STORAGE_PORT_OPTION))
+{
+opts.storagePort = 
Integer.parseInt(cmd.getOptionValue(STORAGE_PORT_OPTION));
+}
+else
+{
+opts.storagePort = config.storage_port;
+}
+
+if (cmd.hasOption(SSL_STORAGE_PORT_OPTION))
+{
+opts.sslStoragePort = 
Integer.parseInt(cmd.getOptionValue(SSL_STORAGE_PORT_OPTION));
+}
+else
+{
+opts.sslStoragePort = config.ssl_storage_port;
+}
+
 if (cmd.hasOption(THROTTLE_MBITS))
 {
 opts.throttle = 
In

[08/15] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2017-09-20 Thread jasobrown
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/405ad009
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/405ad009
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/405ad009

Branch: refs/heads/cassandra-3.0
Commit: 405ad0099bf6318fa47072b32c3d6ad2cbc68c41
Parents: a8e2dc5 428eaa3
Author: Jason Brown 
Authored: Wed Sep 20 06:58:02 2017 -0700
Committer: Jason Brown 
Committed: Wed Sep 20 07:05:13 2017 -0700

--
 CHANGES.txt |  1 +
 .../org/apache/cassandra/tools/BulkLoader.java  | 38 
 2 files changed, 33 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/405ad009/CHANGES.txt
--
diff --cc CHANGES.txt
index 0af156f,848628b..0b3421f
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,18 -1,6 +1,19 @@@
 -2.1.19
 +2.2.11
 + * Safely handle empty buffers when outputting to JSON (CASSANDRA-13868)
 + * Copy session properties on cqlsh.py do_login (CASSANDRA-13847)
 + * Fix load over calculated issue in IndexSummaryRedistribution 
(CASSANDRA-13738)
 + * Fix compaction and flush exception not captured (CASSANDRA-13833)
 + * Make BatchlogManagerMBean.forceBatchlogReplay() blocking (CASSANDRA-13809)
 + * Uncaught exceptions in Netty pipeline (CASSANDRA-13649)
 + * Prevent integer overflow on exabyte filesystems (CASSANDRA-13067) 
 + * Fix queries with LIMIT and filtering on clustering columns 
(CASSANDRA-11223)
 + * Fix potential NPE when resume bootstrap fails (CASSANDRA-13272)
 + * Fix toJSONString for the UDT, tuple and collection types (CASSANDRA-13592)
 + * Fix nested Tuples/UDTs validation (CASSANDRA-13646)
 + * Remove unused max_value_size_in_mb config setting from yaml 
(CASSANDRA-13625
 +Merged from 2.1:
+  * Add storage port options to sstableloader (CASSANDRA-13844)
 - * Remove stress-test target in CircleCI as it's not existing 
(CASSANDRA-13775) 
 + * Remove stress-test target in CircleCI as it's not existing 
(CASSANDRA-13775)
   * Clone HeartBeatState when building gossip messages. Make its 
generation/version volatile (CASSANDRA-13700)
  
  

http://git-wip-us.apache.org/repos/asf/cassandra/blob/405ad009/src/java/org/apache/cassandra/tools/BulkLoader.java
--
diff --cc src/java/org/apache/cassandra/tools/BulkLoader.java
index 7d0fdc8,52f9467..093a063
--- a/src/java/org/apache/cassandra/tools/BulkLoader.java
+++ b/src/java/org/apache/cassandra/tools/BulkLoader.java
@@@ -47,7 -55,9 +47,9 @@@ public class BulkLoade
  private static final String NOPROGRESS_OPTION  = "no-progress";
  private static final String IGNORE_NODES_OPTION  = "ignore";
  private static final String INITIAL_HOST_ADDRESS_OPTION = "nodes";
 -private static final String RPC_PORT_OPTION = "port";
 +private static final String NATIVE_PORT_OPTION = "port";
+ private static final String STORAGE_PORT_OPTION = "storage-port";
+ private static final String SSL_STORAGE_PORT_OPTION = "ssl-storage-port";
  private static final String USER_OPTION = "username";
  private static final String PASSWD_OPTION = "password";
  private static final String THROTTLE_MBITS = "throttle";
@@@ -306,7 -401,7 +308,7 @@@
  public boolean debug;
  public boolean verbose;
  public boolean noProgress;
- public int nativePort = 9042;
 -public int rpcPort;
++public int nativePort;
  public String user;
  public String passwd;
  public int throttle = 0;
@@@ -438,13 -531,38 +437,38 @@@
  config.stream_throughput_outbound_megabits_per_sec = 0;
  
config.inter_dc_stream_throughput_outbound_megabits_per_sec = 0;
  }
- opts.storagePort = config.storage_port;
- opts.sslStoragePort = config.ssl_storage_port;
  opts.throttle = config.stream_throughput_outbound_megabits_per_sec;
  opts.interDcThrottle = config.inter_dc_stream_throughput_outbound_megabits_per_sec;
 -opts.encOptions = config.client_encryption_options;
 +opts.clientEncOptions = config.client_encryption_options;
  opts.serverEncOptions = config.server_encryption_options;
  
 -if (cmd.hasOption(RPC_PORT_OPTION))
++if (cmd.hasOption(NATIVE_PORT_OPTION))
+ {
 -opts.rpcPort = Integer.parseInt(cmd.getOptionValue(RPC_PORT_OPTION));
++opts.nativePort = Integer.parseInt(cmd.getOptionValue(NATIVE_PORT_OPTION));
+ }
+ else
+  

[14/15] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2017-09-20 Thread jasobrown
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c1efaf3a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c1efaf3a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c1efaf3a

Branch: refs/heads/trunk
Commit: c1efaf3a72a1efae983b66599416081719dbf5ff
Parents: 6611513 ab5084a
Author: Jason Brown 
Authored: Wed Sep 20 07:06:02 2017 -0700
Committer: Jason Brown 
Committed: Wed Sep 20 07:06:02 2017 -0700

--

--






[07/15] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2017-09-20 Thread jasobrown
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/405ad009
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/405ad009
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/405ad009

Branch: refs/heads/cassandra-2.2
Commit: 405ad0099bf6318fa47072b32c3d6ad2cbc68c41
Parents: a8e2dc5 428eaa3
Author: Jason Brown 
Authored: Wed Sep 20 06:58:02 2017 -0700
Committer: Jason Brown 
Committed: Wed Sep 20 07:05:13 2017 -0700

--
 CHANGES.txt |  1 +
 .../org/apache/cassandra/tools/BulkLoader.java  | 38 
 2 files changed, 33 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/405ad009/CHANGES.txt
--
diff --cc CHANGES.txt
index 0af156f,848628b..0b3421f
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,18 -1,6 +1,19 @@@
 -2.1.19
 +2.2.11
 + * Safely handle empty buffers when outputting to JSON (CASSANDRA-13868)
 + * Copy session properties on cqlsh.py do_login (CASSANDRA-13847)
 + * Fix load over calculated issue in IndexSummaryRedistribution 
(CASSANDRA-13738)
 + * Fix compaction and flush exception not captured (CASSANDRA-13833)
 + * Make BatchlogManagerMBean.forceBatchlogReplay() blocking (CASSANDRA-13809)
 + * Uncaught exceptions in Netty pipeline (CASSANDRA-13649)
 + * Prevent integer overflow on exabyte filesystems (CASSANDRA-13067) 
 + * Fix queries with LIMIT and filtering on clustering columns 
(CASSANDRA-11223)
 + * Fix potential NPE when resume bootstrap fails (CASSANDRA-13272)
 + * Fix toJSONString for the UDT, tuple and collection types (CASSANDRA-13592)
 + * Fix nested Tuples/UDTs validation (CASSANDRA-13646)
 + * Remove unused max_value_size_in_mb config setting from yaml 
(CASSANDRA-13625
 +Merged from 2.1:
+  * Add storage port options to sstableloader (CASSANDRA-13844)
 - * Remove stress-test target in CircleCI as it's not existing 
(CASSANDRA-13775) 
 + * Remove stress-test target in CircleCI as it's not existing 
(CASSANDRA-13775)
   * Clone HeartBeatState when building gossip messages. Make its 
generation/version volatile (CASSANDRA-13700)
  
  

http://git-wip-us.apache.org/repos/asf/cassandra/blob/405ad009/src/java/org/apache/cassandra/tools/BulkLoader.java
--
diff --cc src/java/org/apache/cassandra/tools/BulkLoader.java
index 7d0fdc8,52f9467..093a063
--- a/src/java/org/apache/cassandra/tools/BulkLoader.java
+++ b/src/java/org/apache/cassandra/tools/BulkLoader.java
@@@ -47,7 -55,9 +47,9 @@@ public class BulkLoade
  private static final String NOPROGRESS_OPTION  = "no-progress";
  private static final String IGNORE_NODES_OPTION  = "ignore";
  private static final String INITIAL_HOST_ADDRESS_OPTION = "nodes";
 -private static final String RPC_PORT_OPTION = "port";
 +private static final String NATIVE_PORT_OPTION = "port";
+ private static final String STORAGE_PORT_OPTION = "storage-port";
+ private static final String SSL_STORAGE_PORT_OPTION = "ssl-storage-port";
  private static final String USER_OPTION = "username";
  private static final String PASSWD_OPTION = "password";
  private static final String THROTTLE_MBITS = "throttle";
@@@ -306,7 -401,7 +308,7 @@@
  public boolean debug;
  public boolean verbose;
  public boolean noProgress;
- public int nativePort = 9042;
 -public int rpcPort;
++public int nativePort;
  public String user;
  public String passwd;
  public int throttle = 0;
@@@ -438,13 -531,38 +437,38 @@@
  config.stream_throughput_outbound_megabits_per_sec = 0;
  
config.inter_dc_stream_throughput_outbound_megabits_per_sec = 0;
  }
- opts.storagePort = config.storage_port;
- opts.sslStoragePort = config.ssl_storage_port;
  opts.throttle = config.stream_throughput_outbound_megabits_per_sec;
  opts.interDcThrottle = config.inter_dc_stream_throughput_outbound_megabits_per_sec;
 -opts.encOptions = config.client_encryption_options;
 +opts.clientEncOptions = config.client_encryption_options;
  opts.serverEncOptions = config.server_encryption_options;
  
 -if (cmd.hasOption(RPC_PORT_OPTION))
++if (cmd.hasOption(NATIVE_PORT_OPTION))
+ {
 -opts.rpcPort = Integer.parseInt(cmd.getOptionValue(RPC_PORT_OPTION));
++opts.nativePort = Integer.parseInt(cmd.getOptionValue(NATIVE_PORT_OPTION));
+ }
+ else
+  

[06/15] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2017-09-20 Thread jasobrown
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/405ad009
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/405ad009
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/405ad009

Branch: refs/heads/trunk
Commit: 405ad0099bf6318fa47072b32c3d6ad2cbc68c41
Parents: a8e2dc5 428eaa3
Author: Jason Brown 
Authored: Wed Sep 20 06:58:02 2017 -0700
Committer: Jason Brown 
Committed: Wed Sep 20 07:05:13 2017 -0700

--
 CHANGES.txt |  1 +
 .../org/apache/cassandra/tools/BulkLoader.java  | 38 
 2 files changed, 33 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/405ad009/CHANGES.txt
--
diff --cc CHANGES.txt
index 0af156f,848628b..0b3421f
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,18 -1,6 +1,19 @@@
 -2.1.19
 +2.2.11
 + * Safely handle empty buffers when outputting to JSON (CASSANDRA-13868)
 + * Copy session properties on cqlsh.py do_login (CASSANDRA-13847)
 + * Fix load over calculated issue in IndexSummaryRedistribution 
(CASSANDRA-13738)
 + * Fix compaction and flush exception not captured (CASSANDRA-13833)
 + * Make BatchlogManagerMBean.forceBatchlogReplay() blocking (CASSANDRA-13809)
 + * Uncaught exceptions in Netty pipeline (CASSANDRA-13649)
 + * Prevent integer overflow on exabyte filesystems (CASSANDRA-13067) 
 + * Fix queries with LIMIT and filtering on clustering columns 
(CASSANDRA-11223)
 + * Fix potential NPE when resume bootstrap fails (CASSANDRA-13272)
 + * Fix toJSONString for the UDT, tuple and collection types (CASSANDRA-13592)
 + * Fix nested Tuples/UDTs validation (CASSANDRA-13646)
 + * Remove unused max_value_size_in_mb config setting from yaml 
(CASSANDRA-13625
 +Merged from 2.1:
+  * Add storage port options to sstableloader (CASSANDRA-13844)
 - * Remove stress-test target in CircleCI as it's not existing 
(CASSANDRA-13775) 
 + * Remove stress-test target in CircleCI as it's not existing 
(CASSANDRA-13775)
   * Clone HeartBeatState when building gossip messages. Make its 
generation/version volatile (CASSANDRA-13700)
  
  

http://git-wip-us.apache.org/repos/asf/cassandra/blob/405ad009/src/java/org/apache/cassandra/tools/BulkLoader.java
--
diff --cc src/java/org/apache/cassandra/tools/BulkLoader.java
index 7d0fdc8,52f9467..093a063
--- a/src/java/org/apache/cassandra/tools/BulkLoader.java
+++ b/src/java/org/apache/cassandra/tools/BulkLoader.java
@@@ -47,7 -55,9 +47,9 @@@ public class BulkLoade
  private static final String NOPROGRESS_OPTION  = "no-progress";
  private static final String IGNORE_NODES_OPTION  = "ignore";
  private static final String INITIAL_HOST_ADDRESS_OPTION = "nodes";
 -private static final String RPC_PORT_OPTION = "port";
 +private static final String NATIVE_PORT_OPTION = "port";
+ private static final String STORAGE_PORT_OPTION = "storage-port";
+ private static final String SSL_STORAGE_PORT_OPTION = "ssl-storage-port";
  private static final String USER_OPTION = "username";
  private static final String PASSWD_OPTION = "password";
  private static final String THROTTLE_MBITS = "throttle";
@@@ -306,7 -401,7 +308,7 @@@
  public boolean debug;
  public boolean verbose;
  public boolean noProgress;
- public int nativePort = 9042;
 -public int rpcPort;
++public int nativePort;
  public String user;
  public String passwd;
  public int throttle = 0;
@@@ -438,13 -531,38 +437,38 @@@
  config.stream_throughput_outbound_megabits_per_sec = 0;
  
config.inter_dc_stream_throughput_outbound_megabits_per_sec = 0;
  }
- opts.storagePort = config.storage_port;
- opts.sslStoragePort = config.ssl_storage_port;
  opts.throttle = config.stream_throughput_outbound_megabits_per_sec;
  opts.interDcThrottle = config.inter_dc_stream_throughput_outbound_megabits_per_sec;
 -opts.encOptions = config.client_encryption_options;
 +opts.clientEncOptions = config.client_encryption_options;
  opts.serverEncOptions = config.server_encryption_options;
  
 -if (cmd.hasOption(RPC_PORT_OPTION))
++if (cmd.hasOption(NATIVE_PORT_OPTION))
+ {
 -opts.rpcPort = Integer.parseInt(cmd.getOptionValue(RPC_PORT_OPTION));
++opts.nativePort = Integer.parseInt(cmd.getOptionValue(NATIVE_PORT_OPTION));
+ }
+ else
+ {

[11/15] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2017-09-20 Thread jasobrown
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ab5084a5
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ab5084a5
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ab5084a5

Branch: refs/heads/cassandra-3.11
Commit: ab5084a529254b3d7494409ce5a77c35ebcbccf9
Parents: 975c3d8 405ad00
Author: Jason Brown 
Authored: Wed Sep 20 07:05:37 2017 -0700
Committer: Jason Brown 
Committed: Wed Sep 20 07:05:37 2017 -0700

--

--






[jira] [Updated] (CASSANDRA-13844) sstableloader doesn't support non default storage_port and ssl_storage_port

2017-09-20 Thread Jason Brown (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Brown updated CASSANDRA-13844:

   Resolution: Fixed
Reproduced In: 2.2.10, 2.1.18  (was: 2.1.18, 2.2.10)
   Status: Resolved  (was: Patch Available)

I took the liberty of committing to unblock the next round of releases. The 
commit sha is {{428eaa3e37cab7227c81fdf124d29dfc1db4257c}}.

Thanks!

> sstableloader doesn't support non default storage_port and ssl_storage_port
> ---
>
> Key: CASSANDRA-13844
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13844
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Eduard Tudenhoefner
>Assignee: Eduard Tudenhoefner
> Fix For: 2.1.19, 2.2.11
>
>
> Currently *storage_port* and *ssl_storage_port* are hardcoded to the 
> defaults. The problem was already fixed in CASSANDRA-13518 for C* 3.0+, so 
> this here is just backporting it to C* 2.1/2.2.
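
(Editorial illustration, not part of the patch: the change boils down to "an explicit command-line flag wins, otherwise fall back to the value read from cassandra.yaml". A minimal, self-contained commons-cli sketch of that pattern follows; the class name, option descriptions, and the hard-coded 7000/7001 fallbacks are stand-ins for this example only and are not the actual BulkLoader code.)

{noformat}
import org.apache.commons.cli.CommandLine;
import org.apache.commons.cli.GnuParser;
import org.apache.commons.cli.Options;

public class PortOptionFallbackSketch
{
    public static void main(String[] args) throws Exception
    {
        // Long-only options, mirroring the names added by the patch
        Options options = new Options();
        options.addOption(null, "storage-port", true, "internode communication port");
        options.addOption(null, "ssl-storage-port", true, "encrypted internode communication port");

        CommandLine cmd = new GnuParser().parse(options, args);

        // Flag wins if present; otherwise fall back to a default that stands in
        // for config.storage_port / config.ssl_storage_port from cassandra.yaml
        int storagePort = cmd.hasOption("storage-port")
                          ? Integer.parseInt(cmd.getOptionValue("storage-port"))
                          : 7000;
        int sslStoragePort = cmd.hasOption("ssl-storage-port")
                             ? Integer.parseInt(cmd.getOptionValue("ssl-storage-port"))
                             : 7001;

        System.out.println("storage_port=" + storagePort + ", ssl_storage_port=" + sslStoragePort);
    }
}
{noformat}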






[jira] [Resolved] (CASSANDRA-13362) Cassandra 2.1.15 main thread stuck in logback stack trace upon joining existing cluster

2017-09-20 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa resolved CASSANDRA-13362.

Resolution: Incomplete

> Cassandra 2.1.15 main thread stuck in logback stack trace upon joining 
> existing cluster
> ---
>
> Key: CASSANDRA-13362
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13362
> Project: Cassandra
>  Issue Type: Bug
> Environment: 
>Reporter: Thomas Steinmaurer
> Attachments: td___2017-03-21-21-30-09.tdump, 
> td___2017-03-21-23-09-59.tdump
>
>
> Switching from Cassandra 2.0.17 to Cassandra 2.1.15 (DSC edition: 
> dsc-cassandra-2.1.15-bin.tar.gz) in a local VM based Linux environment for 
> installer verification tests.
> {noformat}
> [root@localhost jdk1.8.0_102]# lsb_release -d
> Description:  CentOS release 6.7 (Final)
> You have new mail in /var/spool/mail/root
> [root@localhost jdk1.8.0_102]# uname -a
> Linux localhost 2.6.32-573.el6.x86_64 #1 SMP Thu Jul 23 15:44:03 UTC 2015 
> x86_64 x86_64 x86_64 GNU/Linux
> {noformat}
> The test environment is started from scratch, thus in the following scenario 
> not an upgrade from 2.0 to 2.1, but a fresh 2.1 installation.
> The first node started up fine, but when extending the cluster with a second 
> node, the second node hangs in the following Cassandra log output while 
> starting up, joining the existing node:
> {noformat}
> INFO  [InternalResponseStage:1] 2017-03-21 21:10:43,864 DefsTables.java:373 - 
> Loading 
> org.apache.cassandra.config.CFMetaData@1c3daf27[cfId=a8cb1eb0-0e61-11e7-9a56-b20ca863,ksName=ruxitdb,cfName=EventQueue,cf$
> INFO  [main] 2017-03-21 21:11:11,404 StorageService.java:1138 - JOINING: 
> schema complete, ready to bootstrap
> ...
> INFO  [main] 2017-03-22 03:13:36,148 StorageService.java:1138 - JOINING: 
> waiting for pending range calculation
> INFO  [main] 2017-03-22 03:13:36,149 StorageService.java:1138 - JOINING: 
> calculation complete, ready to bootstrap
> INFO  [main] 2017-03-22 03:13:36,156 StorageService.java:1138 - JOINING: 
> getting bootstrap token
> ...
> {noformat}
> So, basically it was stuck on 2017-03-21 21:11:11,404 and the main thread 
> somehow continued on  2017-03-22 03:13:36,148, ~ 6 hours later.
> I have two thread dumps. The first from 21:30:
> [^td___2017-03-21-21-30-09.tdump]
> and a second one ~ 100min later:
> [^td___2017-03-21-23-09-59.tdump]
> Both thread dumps have in common, that the main thread is stuck in some 
> logback code:
> {noformat}
> "main" #1 prio=5 os_prio=0 tid=0x7fe93821a800 nid=0x4d4e waiting on 
> condition [0x7fe93c813000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0xc861bb88> (a 
> java.util.concurrent.locks.ReentrantLock$FairSync)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199)
>   at 
> java.util.concurrent.locks.ReentrantLock$FairSync.lock(ReentrantLock.java:224)
>   at java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:285)
>   at 
> ch.qos.logback.core.OutputStreamAppender.subAppend(OutputStreamAppender.java:217)
>   at 
> ch.qos.logback.core.OutputStreamAppender.append(OutputStreamAppender.java:103)
>   at 
> ch.qos.logback.core.UnsynchronizedAppenderBase.doAppend(UnsynchronizedAppenderBase.java:88)
>   at 
> ch.qos.logback.core.spi.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:48)
>   at ch.qos.logback.classic.Logger.appendLoopOnAppenders(Logger.java:273)
>   at ch.qos.logback.classic.Logger.callAppenders(Logger.java:260)
>   at 
> ch.qos.logback.classic.Logger.buildLoggingEventAndAppend(Logger.java:442)
>   at ch.qos.logback.classic.Logger.filterAndLog_0_Or3Plus(Logger.java:396)
>   at ch.qos.logback.classic.Logger.info(Logger.java:600)
>   at 
> org.apache.cassandra.service.StorageService.setMode(StorageService.java:1138)
>   at 
> org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:870)
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:740)
>   - locked <0xc85d37d8> (a 
> org.apache.cassandra.service.StorageService)
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:617)
>   - locked <0xc85d37d8> (a 
> org.apache.cassandra.service.StorageService)
>   at 
> org.apache.ca

[jira] [Comment Edited] (CASSANDRA-12373) 3.0 breaks CQL compatibility with super columns families

2017-09-20 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16173016#comment-16173016
 ] 

Alex Petrov edited comment on CASSANDRA-12373 at 9/20/17 2:50 PM:
--

bq. Thanks for the changes and your patience on this. My main remaining remark 
is that I don't think we should include the SC key and value columns in 
partitionColumns (in CFMetaData.rebuild).

This was surprisingly simple to do.

bq. In CFMetaData.renameColumn, in the case of updating the SC key or value 
column, I believe we should be updating columnMetadata as well since those 
columns are listed in it, but that doesn't seem to be the case (not sure how 
important it is, as it might be that a following call to rebuild fixes that in 
practice, but since the method doesn't call rebuild itself, it's probably better 
to make sure we handle it).

I can't see how this can be helpful because of the subsequent {{rebuild}} call, 
but this also doesn't break anything, so I went ahead and changed it.

bq. In CFMetaData.makeLegacyDefaultValidator, compact tables with counters will 
now return BytesType instead of CounterColumnType, which is technically 
incorrect. To be entirely honest, this doesn't matter currently because that 
method isn't ever called for non-compact tables (and at this point, probably 
never will be), but if we're going to rely on this, I'd rather make it an 
assertion than return something somewhat wrong. Personally, I'd just keep 
the counter special case and move on, as this has nothing to do with this 
ticket, but if you prefer transforming it to an assert !isCompactTable(), no 
complaint.

I've added the {{isCounter}} back, no strong opinion here, too.

bq. Nit: in CFMetaData.renameColumn, the comment "SuperColumn tables allow 
renaming all columns" doesn't match the code entirely anymore.

Yeah, I was implying dense ones, but I don't think this comment is of much use 
here anyways.

bq. Nit: in SuperColumnCompatibility.getSuperCfKeyColumn, I don't think the 
"3.x created supercolumn family" comment is accurate anymore since in 
ThriftConversion you now add the 2nd clustering column (which, in itself, lgtm).

It's still true for pre-12373 3.x thrift-created supercolumn family tables. 
We discussed this briefly offline: there was no good way to force a table 
update that would make all the tables look completely the same, so this is the 
only place we still have to special-case. I've added the {{pre 12373}} remark 
and hope it's clearer now.

|[3.0 
patch|https://github.com/apache/cassandra/compare/cassandra-3.0...ifesdjeen:12373-3.0]|[3.11
 
patch|https://github.com/apache/cassandra/compare/cassandra-3.11...ifesdjeen:12373-3.11]|[dtests|https://github.com/apache/cassandra-dtest/compare/master...ifesdjeen:12373]|[2.2
 
patch|https://github.com/apache/cassandra/compare/cassandra-2.2...ifesdjeen:12373-2.2]|

UPDATE: updated and rebased both branches, CI looks good.



[jira] [Created] (CASSANDRA-13887) Add SASI metrics to JMX

2017-09-20 Thread James Howe (JIRA)
James Howe created CASSANDRA-13887:
--

 Summary: Add SASI metrics to JMX
 Key: CASSANDRA-13887
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13887
 Project: Cassandra
  Issue Type: Improvement
  Components: sasi
Reporter: James Howe
Priority: Minor


Currently there are MBeans for secondary index metrics 
{{org.apache.cassandra.metrics:type=IndexTable}} but I cannot see SASI metrics 
anywhere.
The only place they're even mentioned is in the table's {{BuiltIndexes}} list.
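
(Editorial aside, not from the ticket: the index-metric MBeans that do exist today can be listed with a plain JMX client, which is useful as a point of comparison for whatever SASI would eventually expose. A minimal sketch, assuming a local node with the default JMX port 7199 and no JMX authentication:)

{noformat}
import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ListIndexMetricMBeans
{
    public static void main(String[] args) throws Exception
    {
        // Assumes the node's JMX endpoint is reachable locally on 7199 without auth
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://127.0.0.1:7199/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url))
        {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            // Wildcard query over the metrics domain mentioned in the ticket
            Set<ObjectName> names = mbs.queryNames(new ObjectName("org.apache.cassandra.metrics:type=IndexTable,*"), null);
            for (ObjectName name : names)
                System.out.println(name);
        }
    }
}
{noformat}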






[jira] [Created] (CASSANDRA-13888) Support GROUP BY on indexed columns

2017-09-20 Thread James Howe (JIRA)
James Howe created CASSANDRA-13888:
--

 Summary: Support GROUP BY on indexed columns
 Key: CASSANDRA-13888
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13888
 Project: Cassandra
  Issue Type: New Feature
  Components: CQL
Reporter: James Howe









[jira] [Reopened] (CASSANDRA-13362) Cassandra 2.1.15 main thread stuck in logback stack trace upon joining existing cluster

2017-09-20 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa reopened CASSANDRA-13362:


Oops, closed as incomplete instead of invalid.


> Cassandra 2.1.15 main thread stuck in logback stack trace upon joining 
> existing cluster
> ---
>
> Key: CASSANDRA-13362
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13362
> Project: Cassandra
>  Issue Type: Bug
> Environment: 
>Reporter: Thomas Steinmaurer
> Attachments: td___2017-03-21-21-30-09.tdump, 
> td___2017-03-21-23-09-59.tdump
>
>
> Switching from Cassandra 2.0.17 to Cassandra 2.1.15 (DSC edition: 
> dsc-cassandra-2.1.15-bin.tar.gz) in a local VM based Linux environment for 
> installer verification tests.
> {noformat}
> [root@localhost jdk1.8.0_102]# lsb_release -d
> Description:  CentOS release 6.7 (Final)
> You have new mail in /var/spool/mail/root
> [root@localhost jdk1.8.0_102]# uname -a
> Linux localhost 2.6.32-573.el6.x86_64 #1 SMP Thu Jul 23 15:44:03 UTC 2015 
> x86_64 x86_64 x86_64 GNU/Linux
> {noformat}
> The test environment is started from scratch, thus in the following scenario 
> not an upgrade from 2.0 to 2.1, but a fresh 2.1 installation.
> The first node started up fine, but when extending the cluster with a second 
> node, the second node hangs in the following Cassandra log output while 
> starting up, joining the existing node:
> {noformat}
> INFO  [InternalResponseStage:1] 2017-03-21 21:10:43,864 DefsTables.java:373 - 
> Loading 
> org.apache.cassandra.config.CFMetaData@1c3daf27[cfId=a8cb1eb0-0e61-11e7-9a56-b20ca863,ksName=ruxitdb,cfName=EventQueue,cf$
> INFO  [main] 2017-03-21 21:11:11,404 StorageService.java:1138 - JOINING: 
> schema complete, ready to bootstrap
> ...
> INFO  [main] 2017-03-22 03:13:36,148 StorageService.java:1138 - JOINING: 
> waiting for pending range calculation
> INFO  [main] 2017-03-22 03:13:36,149 StorageService.java:1138 - JOINING: 
> calculation complete, ready to bootstrap
> INFO  [main] 2017-03-22 03:13:36,156 StorageService.java:1138 - JOINING: 
> getting bootstrap token
> ...
> {noformat}
> So, basically it was stuck on 2017-03-21 21:11:11,404 and the main thread 
> somehow continued on  2017-03-22 03:13:36,148, ~ 6 hours later.
> I have two thread dumps. The first from 21:30:
> [^td___2017-03-21-21-30-09.tdump]
> and a second one ~ 100min later:
> [^td___2017-03-21-23-09-59.tdump]
> Both thread dumps have in common, that the main thread is stuck in some 
> logback code:
> {noformat}
> "main" #1 prio=5 os_prio=0 tid=0x7fe93821a800 nid=0x4d4e waiting on 
> condition [0x7fe93c813000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0xc861bb88> (a 
> java.util.concurrent.locks.ReentrantLock$FairSync)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199)
>   at 
> java.util.concurrent.locks.ReentrantLock$FairSync.lock(ReentrantLock.java:224)
>   at java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:285)
>   at 
> ch.qos.logback.core.OutputStreamAppender.subAppend(OutputStreamAppender.java:217)
>   at 
> ch.qos.logback.core.OutputStreamAppender.append(OutputStreamAppender.java:103)
>   at 
> ch.qos.logback.core.UnsynchronizedAppenderBase.doAppend(UnsynchronizedAppenderBase.java:88)
>   at 
> ch.qos.logback.core.spi.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:48)
>   at ch.qos.logback.classic.Logger.appendLoopOnAppenders(Logger.java:273)
>   at ch.qos.logback.classic.Logger.callAppenders(Logger.java:260)
>   at 
> ch.qos.logback.classic.Logger.buildLoggingEventAndAppend(Logger.java:442)
>   at ch.qos.logback.classic.Logger.filterAndLog_0_Or3Plus(Logger.java:396)
>   at ch.qos.logback.classic.Logger.info(Logger.java:600)
>   at 
> org.apache.cassandra.service.StorageService.setMode(StorageService.java:1138)
>   at 
> org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:870)
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:740)
>   - locked <0xc85d37d8> (a 
> org.apache.cassandra.service.StorageService)
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:617)
>   - locked <0xc85d37d8> (a 
> org.apache.cassandra.service.StorageService)
>

[jira] [Resolved] (CASSANDRA-13362) Cassandra 2.1.15 main thread stuck in logback stack trace upon joining existing cluster

2017-09-20 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa resolved CASSANDRA-13362.

Resolution: Invalid

> Cassandra 2.1.15 main thread stuck in logback stack trace upon joining 
> existing cluster
> ---
>
> Key: CASSANDRA-13362
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13362
> Project: Cassandra
>  Issue Type: Bug
> Environment: 
>Reporter: Thomas Steinmaurer
> Attachments: td___2017-03-21-21-30-09.tdump, 
> td___2017-03-21-23-09-59.tdump
>
>
> Switching from Cassandra 2.0.17 to Cassandra 2.1.15 (DSC edition: 
> dsc-cassandra-2.1.15-bin.tar.gz) in a local VM based Linux environment for 
> installer verification tests.
> {noformat}
> [root@localhost jdk1.8.0_102]# lsb_release -d
> Description:  CentOS release 6.7 (Final)
> You have new mail in /var/spool/mail/root
> [root@localhost jdk1.8.0_102]# uname -a
> Linux localhost 2.6.32-573.el6.x86_64 #1 SMP Thu Jul 23 15:44:03 UTC 2015 
> x86_64 x86_64 x86_64 GNU/Linux
> {noformat}
> The test environment is started from scratch, thus in the following scenario 
> not an upgrade from 2.0 to 2.1, but a fresh 2.1 installation.
> The first node started up fine, but when extending the cluster with a second 
> node, the second node hangs in the following Cassandra log output while 
> starting up, joining the existing node:
> {noformat}
> INFO  [InternalResponseStage:1] 2017-03-21 21:10:43,864 DefsTables.java:373 - 
> Loading 
> org.apache.cassandra.config.CFMetaData@1c3daf27[cfId=a8cb1eb0-0e61-11e7-9a56-b20ca863,ksName=ruxitdb,cfName=EventQueue,cf$
> INFO  [main] 2017-03-21 21:11:11,404 StorageService.java:1138 - JOINING: 
> schema complete, ready to bootstrap
> ...
> INFO  [main] 2017-03-22 03:13:36,148 StorageService.java:1138 - JOINING: 
> waiting for pending range calculation
> INFO  [main] 2017-03-22 03:13:36,149 StorageService.java:1138 - JOINING: 
> calculation complete, ready to bootstrap
> INFO  [main] 2017-03-22 03:13:36,156 StorageService.java:1138 - JOINING: 
> getting bootstrap token
> ...
> {noformat}
> So, basically it was stuck on 2017-03-21 21:11:11,404 and the main thread 
> somehow continued on  2017-03-22 03:13:36,148, ~ 6 hours later.
> I have two thread dumps. The first from 21:30:
> [^td___2017-03-21-21-30-09.tdump]
> and a second one ~ 100min later:
> [^td___2017-03-21-23-09-59.tdump]
> Both thread dumps have in common, that the main thread is stuck in some 
> logback code:
> {noformat}
> "main" #1 prio=5 os_prio=0 tid=0x7fe93821a800 nid=0x4d4e waiting on 
> condition [0x7fe93c813000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0xc861bb88> (a 
> java.util.concurrent.locks.ReentrantLock$FairSync)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199)
>   at 
> java.util.concurrent.locks.ReentrantLock$FairSync.lock(ReentrantLock.java:224)
>   at java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:285)
>   at 
> ch.qos.logback.core.OutputStreamAppender.subAppend(OutputStreamAppender.java:217)
>   at 
> ch.qos.logback.core.OutputStreamAppender.append(OutputStreamAppender.java:103)
>   at 
> ch.qos.logback.core.UnsynchronizedAppenderBase.doAppend(UnsynchronizedAppenderBase.java:88)
>   at 
> ch.qos.logback.core.spi.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:48)
>   at ch.qos.logback.classic.Logger.appendLoopOnAppenders(Logger.java:273)
>   at ch.qos.logback.classic.Logger.callAppenders(Logger.java:260)
>   at 
> ch.qos.logback.classic.Logger.buildLoggingEventAndAppend(Logger.java:442)
>   at ch.qos.logback.classic.Logger.filterAndLog_0_Or3Plus(Logger.java:396)
>   at ch.qos.logback.classic.Logger.info(Logger.java:600)
>   at 
> org.apache.cassandra.service.StorageService.setMode(StorageService.java:1138)
>   at 
> org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:870)
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:740)
>   - locked <0xc85d37d8> (a 
> org.apache.cassandra.service.StorageService)
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:617)
>   - locked <0xc85d37d8> (a 
> org.apache.cassandra.service.StorageService)
>   at 
> org.apache.cassa

[jira] [Commented] (CASSANDRA-13404) Hostname verification for client-to-node encryption

2017-09-20 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16173391#comment-16173391
 ] 

Per Otterström commented on CASSANDRA-13404:


OK, I can see where this is going. And I appreciate the time you all spent on 
this topic!

Can't say I'm happy with the conclusion, but then again I understand that this 
feature has limited value to the average user.

My final attempt to convince you all. Please have a quick look at the following 
references which I got from my security team:
https://www.owasp.org/index.php/Transport_Layer_Protection_Cheat_Sheet#Rule_-_Use_a_Certificate_That_Supports_Required_Domain_Names
https://www.owasp.org/index.php/Transport_Layer_Protection_Cheat_Sheet#Client-Side_Certificates

Let me know if it changes your mind. Otherwise I'll drop it. For this time. 
;-)


> Hostname verification for client-to-node encryption
> ---
>
> Key: CASSANDRA-13404
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13404
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Jan Karlsson
>Assignee: Jan Karlsson
> Fix For: 4.x
>
> Attachments: 13404-trunk.txt
>
>
> Similarily to CASSANDRA-9220, Cassandra should support hostname verification 
> for client-node connections.






[jira] [Commented] (CASSANDRA-13404) Hostname verification for client-to-node encryption

2017-09-20 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16173429#comment-16173429
 ] 

Jeff Jirsa commented on CASSANDRA-13404:


Is there another way to implement the IP restrictions here that would work? 
Could we extend the auth APIs to provide something like the 
{{IInternodeAuthenticator}} for client connections, so that people with 
atypical-but-valid requirements like this could implement their own pluggable 
client auth library? 
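
(Purely as an illustration of the idea above, not anything that exists in Cassandra today: a hypothetical shape such a pluggable client-connection hook could take. Every name below is invented for the example.)

{noformat}
import java.net.InetAddress;
import javax.net.ssl.SSLSession;

// Hypothetical analogue of IInternodeAuthenticator for native-protocol clients.
// Nothing here is part of Cassandra's actual API.
public interface IClientConnectionAuthenticator
{
    /**
     * Called once per new client connection, before SASL/credential authentication.
     * Implementations could enforce IP allow-lists, verify the peer certificate's
     * hostname against the client address, etc.
     */
    boolean authenticateConnection(InetAddress clientAddress, SSLSession session);
}
{noformat}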



> Hostname verification for client-to-node encryption
> ---
>
> Key: CASSANDRA-13404
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13404
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Jan Karlsson
>Assignee: Jan Karlsson
> Fix For: 4.x
>
> Attachments: 13404-trunk.txt
>
>
> Similarily to CASSANDRA-9220, Cassandra should support hostname verification 
> for client-node connections.






[jira] [Assigned] (CASSANDRA-13873) Ref bug in Scrub

2017-09-20 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani reassigned CASSANDRA-13873:
--

Assignee: Joel Knighton

> Ref bug in Scrub
> 
>
> Key: CASSANDRA-13873
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13873
> Project: Cassandra
>  Issue Type: Bug
>Reporter: T Jake Luciani
>Assignee: Joel Knighton
>Priority: Critical
>
> I'm hitting a Ref bug when many scrubs run against a node. This doesn't 
> happen on 3.0.X. I'm not sure whether this happens with compactions too, 
> but I suspect it does.
> I'm not seeing any Ref leaks or double frees.
> To Reproduce:
> {quote}
> ./tools/bin/cassandra-stress write n=10m -rate threads=100
> ./bin/nodetool scrub
> #Ctrl-C
> ./bin/nodetool scrub
> #Ctrl-C
> ./bin/nodetool scrub
> #Ctrl-C
> ./bin/nodetool scrub
> {quote}
> Eventually in the logs you get:
> WARN  [RMI TCP Connection(4)-127.0.0.1] 2017-09-14 15:51:26,722 
> NoSpamLogger.java:97 - Spinning trying to capture readers 
> [BigTableReader(path='/home/jake/workspace/cassandra2/data/data/keyspace1/standard1-2eb5c780998311e79e09311efffdcd17/mc-5-big-Data.db'),
>  
> BigTableReader(path='/home/jake/workspace/cassandra2/data/data/keyspace1/standard1-2eb5c780998311e79e09311efffdcd17/mc-32-big-Data.db'),
>  
> BigTableReader(path='/home/jake/workspace/cassandra2/data/data/keyspace1/standard1-2eb5c780998311e79e09311efffdcd17/mc-31-big-Data.db'),
>  
> BigTableReader(path='/home/jake/workspace/cassandra2/data/data/keyspace1/standard1-2eb5c780998311e79e09311efffdcd17/mc-29-big-Data.db'),
>  
> BigTableReader(path='/home/jake/workspace/cassandra2/data/data/keyspace1/standard1-2eb5c780998311e79e09311efffdcd17/mc-27-big-Data.db'),
>  
> BigTableReader(path='/home/jake/workspace/cassandra2/data/data/keyspace1/standard1-2eb5c780998311e79e09311efffdcd17/mc-26-big-Data.db'),
>  
> BigTableReader(path='/home/jake/workspace/cassandra2/data/data/keyspace1/standard1-2eb5c780998311e79e09311efffdcd17/mc-20-big-Data.db')],
> *released: 
> [BigTableReader(path='/home/jake/workspace/cassandra2/data/data/keyspace1/standard1-2eb5c780998311e79e09311efffdcd17/mc-5-big-Data.db')],*
>  
> This released table has a selfRef of 0 but is in the Tracker






[5/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2017-09-20 Thread aleksey
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2bae4ca9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2bae4ca9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2bae4ca9

Branch: refs/heads/cassandra-3.11
Commit: 2bae4ca907ac4d2ab53c899e5cf5c9e4de631f52
Parents: c1efaf3 f93e6e3
Author: Aleksey Yeschenko 
Authored: Wed Sep 20 17:41:07 2017 +0100
Committer: Aleksey Yeschenko 
Committed: Wed Sep 20 17:41:07 2017 +0100

--
 CHANGES.txt |   1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  |   5 +
 .../apache/cassandra/service/DataResolver.java  | 304 +++
 3 files changed, 187 insertions(+), 123 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2bae4ca9/CHANGES.txt
--
diff --cc CHANGES.txt
index 39270e5,07742ef..8d07cbc
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,15 -1,5 +1,16 @@@
 -3.0.15
 +3.11.1
 + * AbstractTokenTreeBuilder#serializedSize returns wrong value when there is 
a single leaf and overflow collisions (CASSANDRA-13869)
 + * Add a compaction option to TWCS to ignore sstables overlapping checks 
(CASSANDRA-13418)
 + * BTree.Builder memory leak (CASSANDRA-13754)
 + * Revert CASSANDRA-10368 of supporting non-pk column filtering due to 
correctness (CASSANDRA-13798)
 + * Fix cassandra-stress hang issues when an error during cluster connection 
happens (CASSANDRA-12938)
 + * Better bootstrap failure message when blocked by (potential) range 
movement (CASSANDRA-13744)
 + * "ignore" option is ignored in sstableloader (CASSANDRA-13721)
 + * Deadlock in AbstractCommitLogSegmentManager (CASSANDRA-13652)
 + * Duplicate the buffer before passing it to analyser in SASI operation 
(CASSANDRA-13512)
 + * Properly evict pstmts from prepared statements cache (CASSANDRA-13641)
 +Merged from 3.0:
+  * Improve short read protection performance (CASSANDRA-13794)
   * Fix sstable reader to support range-tombstone-marker for multi-slices 
(CASSANDRA-13787)
   * Fix short read protection for tables with no clustering columns 
(CASSANDRA-13880)
   * Make isBuilt volatile in PartitionUpdate (CASSANDRA-13619)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2bae4ca9/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2bae4ca9/src/java/org/apache/cassandra/service/DataResolver.java
--
diff --cc src/java/org/apache/cassandra/service/DataResolver.java
index 4b0bd3c,9a98ee5..32fc015
--- a/src/java/org/apache/cassandra/service/DataResolver.java
+++ b/src/java/org/apache/cassandra/service/DataResolver.java
@@@ -43,12 -43,10 +43,12 @@@ public class DataResolver extends Respo
  {
  @VisibleForTesting
  final List repairResults = 
Collections.synchronizedList(new ArrayList<>());
 +private final long queryStartNanoTime;
  
- public DataResolver(Keyspace keyspace, ReadCommand command, 
ConsistencyLevel consistency, int maxResponseCount, long queryStartNanoTime)
 -DataResolver(Keyspace keyspace, ReadCommand command, ConsistencyLevel 
consistency, int maxResponseCount)
++DataResolver(Keyspace keyspace, ReadCommand command, ConsistencyLevel 
consistency, int maxResponseCount, long queryStartNanoTime)
  {
  super(keyspace, command, consistency, maxResponseCount);
 +this.queryStartNanoTime = queryStartNanoTime;
  }
  
  public PartitionIterator getData()
@@@ -122,10 -123,23 +125,23 @@@
  if (!command.limits().isUnlimited())
  {
  for (int i = 0; i < results.size(); i++)
- results.set(i, Transformation.apply(results.get(i), new 
ShortReadProtection(sources[i], resultCounter, queryStartNanoTime)));
+ {
+ DataLimits.Counter singleResultCounter =
+ command.limits().newCounter(command.nowInSec(), false, 
command.selectsFullPartition()).onlyCount();
+ 
+ ShortReadResponseProtection protection =
 -new ShortReadResponseProtection(sources[i], 
singleResultCounter, mergedResultCounter);
++new ShortReadResponseProtection(sources[i], 
singleResultCounter, mergedResultCounter, queryStartNanoTime);
+ 
+ /*
+  * The order of transformations is important here. See 
ShortReadResponseProtection.applyToPartition()
+  * comments for details. We want 
singleResultCounter.applyToPartition() to be called after SRRP applies
+  * its transformations, so that this order is preserv

[3/6] cassandra git commit: Fix short read protection performance

2017-09-20 Thread aleksey
Fix short read protection performance

patch by Aleksey Yeschenko; reviewed by Benedict Elliott Smith for
CASSANDRA-13794


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f93e6e34
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f93e6e34
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f93e6e34

Branch: refs/heads/trunk
Commit: f93e6e3401c343dec74687d8b079b5697813ab28
Parents: ab5084a
Author: Aleksey Yeschenko 
Authored: Thu Aug 31 20:51:08 2017 +0100
Committer: Aleksey Yeschenko 
Committed: Wed Sep 20 16:11:18 2017 +0100

--
 CHANGES.txt |   1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  |   5 +
 .../apache/cassandra/service/DataResolver.java  | 272 ---
 3 files changed, 181 insertions(+), 97 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f93e6e34/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 2d11a3e..07742ef 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.15
+ * Improve short read protection performance (CASSANDRA-13794)
  * Fix sstable reader to support range-tombstone-marker for multi-slices 
(CASSANDRA-13787)
  * Fix short read protection for tables with no clustering columns 
(CASSANDRA-13880)
  * Make isBuilt volatile in PartitionUpdate (CASSANDRA-13619)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f93e6e34/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 983d6b1..e6e46b2 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -2470,4 +2470,9 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 
 return keyspace.getColumnFamilyStore(id);
 }
+
+public static TableMetrics metricsFor(UUID tableId)
+{
+return getIfExists(tableId).metric;
+}
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f93e6e34/src/java/org/apache/cassandra/service/DataResolver.java
--
diff --git a/src/java/org/apache/cassandra/service/DataResolver.java 
b/src/java/org/apache/cassandra/service/DataResolver.java
index 99399a3..9a98ee5 100644
--- a/src/java/org/apache/cassandra/service/DataResolver.java
+++ b/src/java/org/apache/cassandra/service/DataResolver.java
@@ -44,7 +44,7 @@ public class DataResolver extends ResponseResolver
 @VisibleForTesting
 final List repairResults = Collections.synchronizedList(new ArrayList<>());
 
-public DataResolver(Keyspace keyspace, ReadCommand command, ConsistencyLevel consistency, int maxResponseCount)
+DataResolver(Keyspace keyspace, ReadCommand command, ConsistencyLevel consistency, int maxResponseCount)
 {
 super(keyspace, command, consistency, maxResponseCount);
 }
@@ -55,6 +55,20 @@ public class DataResolver extends ResponseResolver
 return UnfilteredPartitionIterators.filter(response.makeIterator(command), command.nowInSec());
 }
 
+public boolean isDataPresent()
+{
+return !responses.isEmpty();
+}
+
+public void compareResponses()
+{
+// We need to fully consume the results to trigger read repairs if appropriate
+try (PartitionIterator iterator = resolve())
+{
+PartitionIterators.consume(iterator);
+}
+}
+
 public PartitionIterator resolve()
 {
 // We could get more responses while this method runs, which is ok (we're happy to ignore any response not here
@@ -83,54 +97,56 @@ public class DataResolver extends ResponseResolver
  * See CASSANDRA-13747 for more details.
  */
 
-DataLimits.Counter counter = command.limits().newCounter(command.nowInSec(), true, command.selectsFullPartition());
+DataLimits.Counter mergedResultCounter =
+command.limits().newCounter(command.nowInSec(), true, command.selectsFullPartition());
 
-UnfilteredPartitionIterator merged = mergeWithShortReadProtection(iters, sources, counter);
-FilteredPartitions filtered = FilteredPartitions.filter(merged,
-new Filter(command.nowInSec(),
-command.metadata().enforceStrictLiveness()));
-PartitionIterator counted = counter.applyTo(filtered);
+UnfilteredPartitionIterator merged = 
mergeWithShortReadProtection(iters, sources, mergedResultCounter

[6/6] cassandra git commit: Merge branch 'cassandra-3.11' into trunk

2017-09-20 Thread aleksey
Merge branch 'cassandra-3.11' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/030ec1f0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/030ec1f0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/030ec1f0

Branch: refs/heads/trunk
Commit: 030ec1f056d0e0b9094ddf7fcd2a491cb8ddf621
Parents: 4809f42 2bae4ca
Author: Aleksey Yeschenko 
Authored: Wed Sep 20 17:47:32 2017 +0100
Committer: Aleksey Yeschenko 
Committed: Wed Sep 20 17:47:32 2017 +0100

--
 CHANGES.txt |   1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  |   5 +
 .../apache/cassandra/service/DataResolver.java  | 303 +++
 3 files changed, 187 insertions(+), 122 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/030ec1f0/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/030ec1f0/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --cc src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 5aecc9d,548de88..72a63f0
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@@ -2654,10 -2635,15 +2654,15 @@@ public class ColumnFamilyStore implemen
  if (keyspace == null)
  return null;
  
 -UUID id = Schema.instance.getId(ksName, cfName);
 -if (id == null)
 +TableMetadata table = Schema.instance.getTableMetadata(ksName, 
cfName);
 +if (table == null)
  return null;
  
 -return keyspace.getColumnFamilyStore(id);
 +return keyspace.getColumnFamilyStore(table.id);
  }
+ 
 -public static TableMetrics metricsFor(UUID tableId)
++public static TableMetrics metricsFor(TableId tableId)
+ {
+ return getIfExists(tableId).metric;
+ }
  }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/030ec1f0/src/java/org/apache/cassandra/service/DataResolver.java
--
diff --cc src/java/org/apache/cassandra/service/DataResolver.java
index 98e3285,32fc015..b0741da
--- a/src/java/org/apache/cassandra/service/DataResolver.java
+++ b/src/java/org/apache/cassandra/service/DataResolver.java
@@@ -27,10 -27,7 +27,9 @@@ import com.google.common.collect.Iterab
  
  import org.apache.cassandra.concurrent.Stage;
  import org.apache.cassandra.concurrent.StageManager;
 -import org.apache.cassandra.config.*;
 +import org.apache.cassandra.schema.ColumnMetadata;
- import org.apache.cassandra.schema.Schema;
 +import org.apache.cassandra.schema.TableMetadata;
 +import org.apache.cassandra.config.DatabaseDescriptor;
  import org.apache.cassandra.db.*;
  import org.apache.cassandra.db.filter.*;
  import org.apache.cassandra.db.filter.DataLimits.Counter;
@@@ -88,26 -99,22 +101,19 @@@ public class DataResolver extends Respo
   * See CASSANDRA-13747 for more details.
   */
  
- DataLimits.Counter counter = 
command.limits().newCounter(command.nowInSec(), true, 
command.selectsFullPartition());
+ DataLimits.Counter mergedResultCounter =
+ command.limits().newCounter(command.nowInSec(), true, 
command.selectsFullPartition());
  
- UnfilteredPartitionIterator merged = 
mergeWithShortReadProtection(iters, sources, counter);
- FilteredPartitions filtered = FilteredPartitions.filter(merged, new 
Filter(command.nowInSec(), command.metadata().enforceStrictLiveness()));
- PartitionIterator counted = counter.applyTo(filtered);
+ UnfilteredPartitionIterator merged = 
mergeWithShortReadProtection(iters, sources, mergedResultCounter);
+ FilteredPartitions filtered =
+ FilteredPartitions.filter(merged, new Filter(command.nowInSec(), 
command.metadata().enforceStrictLiveness()));
+ PartitionIterator counted = Transformation.apply(filtered, 
mergedResultCounter);
 -
 -return command.isForThrift()
 - ? counted
 - : Transformation.apply(counted, new EmptyPartitionsDiscarder());
 +return Transformation.apply(counted, new EmptyPartitionsDiscarder());
  }
  
- public void compareResponses()
- {
- // We need to fully consume the results to trigger read repairs if 
appropriate
- try (PartitionIterator iterator = resolve())
- {
- PartitionIterators.consume(iterator);
- }
- }
- 
  private UnfilteredPartitionIterator 
mergeWithShortReadProtection(List results,
   
InetAddress[] sources,
-   

[4/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2017-09-20 Thread aleksey
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2bae4ca9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2bae4ca9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2bae4ca9

Branch: refs/heads/trunk
Commit: 2bae4ca907ac4d2ab53c899e5cf5c9e4de631f52
Parents: c1efaf3 f93e6e3
Author: Aleksey Yeschenko 
Authored: Wed Sep 20 17:41:07 2017 +0100
Committer: Aleksey Yeschenko 
Committed: Wed Sep 20 17:41:07 2017 +0100

--
 CHANGES.txt |   1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  |   5 +
 .../apache/cassandra/service/DataResolver.java  | 304 +++
 3 files changed, 187 insertions(+), 123 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2bae4ca9/CHANGES.txt
--
diff --cc CHANGES.txt
index 39270e5,07742ef..8d07cbc
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,15 -1,5 +1,16 @@@
 -3.0.15
 +3.11.1
 + * AbstractTokenTreeBuilder#serializedSize returns wrong value when there is 
a single leaf and overflow collisions (CASSANDRA-13869)
 + * Add a compaction option to TWCS to ignore sstables overlapping checks 
(CASSANDRA-13418)
 + * BTree.Builder memory leak (CASSANDRA-13754)
 + * Revert CASSANDRA-10368 of supporting non-pk column filtering due to 
correctness (CASSANDRA-13798)
 + * Fix cassandra-stress hang issues when an error during cluster connection 
happens (CASSANDRA-12938)
 + * Better bootstrap failure message when blocked by (potential) range 
movement (CASSANDRA-13744)
 + * "ignore" option is ignored in sstableloader (CASSANDRA-13721)
 + * Deadlock in AbstractCommitLogSegmentManager (CASSANDRA-13652)
 + * Duplicate the buffer before passing it to analyser in SASI operation 
(CASSANDRA-13512)
 + * Properly evict pstmts from prepared statements cache (CASSANDRA-13641)
 +Merged from 3.0:
+  * Improve short read protection performance (CASSANDRA-13794)
   * Fix sstable reader to support range-tombstone-marker for multi-slices 
(CASSANDRA-13787)
   * Fix short read protection for tables with no clustering columns 
(CASSANDRA-13880)
   * Make isBuilt volatile in PartitionUpdate (CASSANDRA-13619)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2bae4ca9/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2bae4ca9/src/java/org/apache/cassandra/service/DataResolver.java
--
diff --cc src/java/org/apache/cassandra/service/DataResolver.java
index 4b0bd3c,9a98ee5..32fc015
--- a/src/java/org/apache/cassandra/service/DataResolver.java
+++ b/src/java/org/apache/cassandra/service/DataResolver.java
@@@ -43,12 -43,10 +43,12 @@@ public class DataResolver extends Respo
  {
  @VisibleForTesting
  final List repairResults = 
Collections.synchronizedList(new ArrayList<>());
 +private final long queryStartNanoTime;
  
- public DataResolver(Keyspace keyspace, ReadCommand command, 
ConsistencyLevel consistency, int maxResponseCount, long queryStartNanoTime)
 -DataResolver(Keyspace keyspace, ReadCommand command, ConsistencyLevel 
consistency, int maxResponseCount)
++DataResolver(Keyspace keyspace, ReadCommand command, ConsistencyLevel 
consistency, int maxResponseCount, long queryStartNanoTime)
  {
  super(keyspace, command, consistency, maxResponseCount);
 +this.queryStartNanoTime = queryStartNanoTime;
  }
  
  public PartitionIterator getData()
@@@ -122,10 -123,23 +125,23 @@@
  if (!command.limits().isUnlimited())
  {
  for (int i = 0; i < results.size(); i++)
- results.set(i, Transformation.apply(results.get(i), new 
ShortReadProtection(sources[i], resultCounter, queryStartNanoTime)));
+ {
+ DataLimits.Counter singleResultCounter =
+ command.limits().newCounter(command.nowInSec(), false, 
command.selectsFullPartition()).onlyCount();
+ 
+ ShortReadResponseProtection protection =
 -new ShortReadResponseProtection(sources[i], 
singleResultCounter, mergedResultCounter);
++new ShortReadResponseProtection(sources[i], 
singleResultCounter, mergedResultCounter, queryStartNanoTime);
+ 
+ /*
+  * The order of transformations is important here. See 
ShortReadResponseProtection.applyToPartition()
+  * comments for details. We want 
singleResultCounter.applyToPartition() to be called after SRRP applies
+  * its transformations, so that this order is preserved when 

[1/6] cassandra git commit: Fix short read protection performance

2017-09-20 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 ab5084a52 -> f93e6e340
  refs/heads/cassandra-3.11 c1efaf3a7 -> 2bae4ca90
  refs/heads/trunk 4809f4275 -> 030ec1f05


Fix short read protection performance

patch by Aleksey Yeschenko; reviewed by Benedict Elliott Smith for
CASSANDRA-13794


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f93e6e34
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f93e6e34
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f93e6e34

Branch: refs/heads/cassandra-3.0
Commit: f93e6e3401c343dec74687d8b079b5697813ab28
Parents: ab5084a
Author: Aleksey Yeschenko 
Authored: Thu Aug 31 20:51:08 2017 +0100
Committer: Aleksey Yeschenko 
Committed: Wed Sep 20 16:11:18 2017 +0100

--
 CHANGES.txt |   1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  |   5 +
 .../apache/cassandra/service/DataResolver.java  | 272 ---
 3 files changed, 181 insertions(+), 97 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f93e6e34/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 2d11a3e..07742ef 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.15
+ * Improve short read protection performance (CASSANDRA-13794)
  * Fix sstable reader to support range-tombstone-marker for multi-slices 
(CASSANDRA-13787)
  * Fix short read protection for tables with no clustering columns 
(CASSANDRA-13880)
  * Make isBuilt volatile in PartitionUpdate (CASSANDRA-13619)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f93e6e34/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 983d6b1..e6e46b2 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -2470,4 +2470,9 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 
 return keyspace.getColumnFamilyStore(id);
 }
+
+public static TableMetrics metricsFor(UUID tableId)
+{
+return getIfExists(tableId).metric;
+}
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f93e6e34/src/java/org/apache/cassandra/service/DataResolver.java
--
diff --git a/src/java/org/apache/cassandra/service/DataResolver.java 
b/src/java/org/apache/cassandra/service/DataResolver.java
index 99399a3..9a98ee5 100644
--- a/src/java/org/apache/cassandra/service/DataResolver.java
+++ b/src/java/org/apache/cassandra/service/DataResolver.java
@@ -44,7 +44,7 @@ public class DataResolver extends ResponseResolver
 @VisibleForTesting
 final List repairResults = 
Collections.synchronizedList(new ArrayList<>());
 
-public DataResolver(Keyspace keyspace, ReadCommand command, 
ConsistencyLevel consistency, int maxResponseCount)
+DataResolver(Keyspace keyspace, ReadCommand command, ConsistencyLevel 
consistency, int maxResponseCount)
 {
 super(keyspace, command, consistency, maxResponseCount);
 }
@@ -55,6 +55,20 @@ public class DataResolver extends ResponseResolver
 return 
UnfilteredPartitionIterators.filter(response.makeIterator(command), 
command.nowInSec());
 }
 
+public boolean isDataPresent()
+{
+return !responses.isEmpty();
+}
+
+public void compareResponses()
+{
+// We need to fully consume the results to trigger read repairs if 
appropriate
+try (PartitionIterator iterator = resolve())
+{
+PartitionIterators.consume(iterator);
+}
+}
+
 public PartitionIterator resolve()
 {
 // We could get more responses while this method runs, which is ok 
(we're happy to ignore any response not here
@@ -83,54 +97,56 @@ public class DataResolver extends ResponseResolver
  * See CASSANDRA-13747 for more details.
  */
 
-DataLimits.Counter counter = 
command.limits().newCounter(command.nowInSec(), true, 
command.selectsFullPartition());
+DataLimits.Counter mergedResultCounter =
+command.limits().newCounter(command.nowInSec(), true, 
command.selectsFullPartition());
 
-UnfilteredPartitionIterator merged = 
mergeWithShortReadProtection(iters, sources, counter);
-FilteredPartitions filtered = FilteredPartitions.filter(merged,
-new 
Filter(command.nowInSec(),
-   
command.metadata().enforceSt

[2/6] cassandra git commit: Fix short read protection performance

2017-09-20 Thread aleksey
Fix short read protection performance

patch by Aleksey Yeschenko; reviewed by Benedict Elliott Smith for
CASSANDRA-13794


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f93e6e34
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f93e6e34
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f93e6e34

Branch: refs/heads/cassandra-3.11
Commit: f93e6e3401c343dec74687d8b079b5697813ab28
Parents: ab5084a
Author: Aleksey Yeschenko 
Authored: Thu Aug 31 20:51:08 2017 +0100
Committer: Aleksey Yeschenko 
Committed: Wed Sep 20 16:11:18 2017 +0100

--
 CHANGES.txt |   1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  |   5 +
 .../apache/cassandra/service/DataResolver.java  | 272 ---
 3 files changed, 181 insertions(+), 97 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f93e6e34/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 2d11a3e..07742ef 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.15
+ * Improve short read protection performance (CASSANDRA-13794)
  * Fix sstable reader to support range-tombstone-marker for multi-slices 
(CASSANDRA-13787)
  * Fix short read protection for tables with no clustering columns 
(CASSANDRA-13880)
  * Make isBuilt volatile in PartitionUpdate (CASSANDRA-13619)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f93e6e34/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 983d6b1..e6e46b2 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -2470,4 +2470,9 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 
 return keyspace.getColumnFamilyStore(id);
 }
+
+public static TableMetrics metricsFor(UUID tableId)
+{
+return getIfExists(tableId).metric;
+}
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f93e6e34/src/java/org/apache/cassandra/service/DataResolver.java
--
diff --git a/src/java/org/apache/cassandra/service/DataResolver.java 
b/src/java/org/apache/cassandra/service/DataResolver.java
index 99399a3..9a98ee5 100644
--- a/src/java/org/apache/cassandra/service/DataResolver.java
+++ b/src/java/org/apache/cassandra/service/DataResolver.java
@@ -44,7 +44,7 @@ public class DataResolver extends ResponseResolver
 @VisibleForTesting
 final List repairResults = 
Collections.synchronizedList(new ArrayList<>());
 
-public DataResolver(Keyspace keyspace, ReadCommand command, 
ConsistencyLevel consistency, int maxResponseCount)
+DataResolver(Keyspace keyspace, ReadCommand command, ConsistencyLevel 
consistency, int maxResponseCount)
 {
 super(keyspace, command, consistency, maxResponseCount);
 }
@@ -55,6 +55,20 @@ public class DataResolver extends ResponseResolver
 return 
UnfilteredPartitionIterators.filter(response.makeIterator(command), 
command.nowInSec());
 }
 
+public boolean isDataPresent()
+{
+return !responses.isEmpty();
+}
+
+public void compareResponses()
+{
+// We need to fully consume the results to trigger read repairs if 
appropriate
+try (PartitionIterator iterator = resolve())
+{
+PartitionIterators.consume(iterator);
+}
+}
+
 public PartitionIterator resolve()
 {
 // We could get more responses while this method runs, which is ok 
(we're happy to ignore any response not here
@@ -83,54 +97,56 @@ public class DataResolver extends ResponseResolver
  * See CASSANDRA-13747 for more details.
  */
 
-DataLimits.Counter counter = 
command.limits().newCounter(command.nowInSec(), true, 
command.selectsFullPartition());
+DataLimits.Counter mergedResultCounter =
+command.limits().newCounter(command.nowInSec(), true, 
command.selectsFullPartition());
 
-UnfilteredPartitionIterator merged = 
mergeWithShortReadProtection(iters, sources, counter);
-FilteredPartitions filtered = FilteredPartitions.filter(merged,
-new 
Filter(command.nowInSec(),
-   
command.metadata().enforceStrictLiveness()));
-PartitionIterator counted = counter.applyTo(filtered);
+UnfilteredPartitionIterator merged = 
mergeWithShortReadProtection(iters, sources, mergedResu

[jira] [Commented] (CASSANDRA-13794) Fix short read protection logic for querying more rows

2017-09-20 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16173500#comment-16173500
 ] 

Aleksey Yeschenko commented on CASSANDRA-13794:
---

Committed to 3.0 as 
[f93e6e3401c343dec74687d8b079b5697813ab28|https://github.com/apache/cassandra/commit/f93e6e3401c343dec74687d8b079b5697813ab28]
 and merged with 3.11 and trunk.

The Circle run for 3.0 [here|https://circleci.com/gh/iamaleksey/cassandra/39] has 
two completely unrelated {{CommitLogSegmentManagerTest}} failures, and the [dtest 
run|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/314/testReport/]
 is mostly failures to git clone.

The passing tests include the 3 new dtests added since this JIRA was created. 
My initial plan was to cover it with proper unit tests, too - similar to read 
repair tests we have - but doing it properly has proven to be too 
time-consuming. In addition to the tests we have, I did a lot of manual testing 
(which uncovered a couple more issues - not affecting my branch). But more unit 
test coverage will be added later - we've budgeted a significant chunk of time 
on {{DataResolver}} testing alone.

Follow up JIRAs I'll file soonish. Thanks for the review! 

> Fix short read protection logic for querying more rows
> --
>
> Key: CASSANDRA-13794
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13794
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Benedict
>Assignee: Aleksey Yeschenko
>  Labels: Correctness
> Fix For: 3.0.15, 3.11.1
>
>
> Discovered by [~benedict] while reviewing CASSANDRA-13747:
> {quote}
> While reviewing I got a little suspicious of the modified line 
> {{DataResolver}} :479, as it seemed that n and x were the wrong way around... 
> and, reading the comment of intent directly above, and reproducing the 
> calculation, they are indeed.
> This is probably a significant enough bug that it warrants its own ticket for 
> record keeping, though I'm fairly agnostic on that decision.
> I'm a little concerned about our current short read behaviour, as right now 
> it seems we should be requesting exactly one row, for any size of under-read, 
> which could mean extremely poor performance in case of large under-reads.
> I would suggest that the outer unconditional {{Math.max}} is a bad idea, has 
> been (poorly) insulating us from this error, and that we should first be 
> asserting that the calculation yields a value >= 0 before setting to 1.
> {quote}
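
To make the quoted concern concrete, here is a minimal sketch of the intended calculation (plain Java with illustrative names, not the actual {{DataResolver}} code): ask a replica for at least as many rows as are still missing from the merged result, using one row only as a floor, rather than letting a swapped subtraction collapse every retry to a single row.

{code}
public final class ShortReadRetrySketch
{
    // Illustrative names only, not the real DataResolver fields.
    static int rowsToRequest(int limit, int rowsCountedSoFar)
    {
        int missing = limit - rowsCountedSoFar;   // how far short of the limit the merged result is
        assert missing >= 0 : "counted more rows than the limit allows";
        return Math.max(1, missing);              // never ask a replica for fewer than one row
    }

    public static void main(String[] args)
    {
        // With LIMIT 100 and 40 rows merged so far we want to ask for ~60 more rows
        // in one round trip, not 1 row at a time.
        System.out.println(rowsToRequest(100, 40)); // 60
    }
}
{code}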



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)




[jira] [Updated] (CASSANDRA-13794) Fix short read protection logic for querying more rows

2017-09-20 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-13794:
--
   Resolution: Fixed
Fix Version/s: (was: 3.11.x)
   (was: 3.0.x)
   3.11.1
   3.0.15
   Status: Resolved  (was: Patch Available)

> Fix short read protection logic for querying more rows
> --
>
> Key: CASSANDRA-13794
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13794
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Benedict
>Assignee: Aleksey Yeschenko
>  Labels: Correctness
> Fix For: 3.0.15, 3.11.1
>
>
> Discovered by [~benedict] while reviewing CASSANDRA-13747:
> {quote}
> While reviewing I got a little suspicious of the modified line 
> {{DataResolver}} :479, as it seemed that n and x were the wrong way around... 
> and, reading the comment of intent directly above, and reproducing the 
> calculation, they are indeed.
> This is probably a significant enough bug that it warrants its own ticket for 
> record keeping, though I'm fairly agnostic on that decision.
> I'm a little concerned about our current short read behaviour, as right now 
> it seems we should be requesting exactly one row, for any size of under-read, 
> which could mean extremely poor performance in case of large under-reads.
> I would suggest that the outer unconditional {{Math.max}} is a bad idea, has 
> been (poorly) insulating us from this error, and that we should first be 
> asserting that the calculation yields a value >= 0 before setting to 1.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)




[jira] [Commented] (CASSANDRA-13595) Short read protection doesn't work at the end of a partition

2017-09-20 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16173532#comment-16173532
 ] 

Aleksey Yeschenko commented on CASSANDRA-13595:
---

[~jasonstack] My current thinking is that, as bad as this is, fixing it in 3.0+ 
is fine. You will need to upgrade to get the fix, though.

Can you rebase on the latest 3.0, now that CASSANDRA-13794 is committed? Then we 
can take it from there.

> Short read protection doesn't work at the end of a partition
> 
>
> Key: CASSANDRA-13595
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13595
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Andrés de la Peña
>Assignee: ZhaoYang
>  Labels: Correctness
>
> It seems that short read protection doesn't work when the short read is done 
> at the end of a partition in a range query. The final assertion of this dtest 
> fails:
> {code}
> def short_read_partitions_delete_test(self):
> cluster = self.cluster
> cluster.set_configuration_options(values={'hinted_handoff_enabled': 
> False})
> cluster.set_batch_commitlog(enabled=True)
> cluster.populate(2).start(wait_other_notice=True)
> node1, node2 = self.cluster.nodelist()
> session = self.patient_cql_connection(node1)
> create_ks(session, 'ks', 2)
> session.execute("CREATE TABLE t (k int, c int, PRIMARY KEY(k, c)) 
> WITH read_repair_chance = 0.0")
> # we write 1 and 2 in a partition: all nodes get it.
> session.execute(SimpleStatement("INSERT INTO t (k, c) VALUES (1, 1)", 
> consistency_level=ConsistencyLevel.ALL))
> session.execute(SimpleStatement("INSERT INTO t (k, c) VALUES (2, 1)", 
> consistency_level=ConsistencyLevel.ALL))
> # we delete partition 1: only node 1 gets it.
> node2.flush()
> node2.stop(wait_other_notice=True)
> session = self.patient_cql_connection(node1, 'ks', 
> consistency_level=ConsistencyLevel.ONE)
> session.execute(SimpleStatement("DELETE FROM t WHERE k = 1"))
> node2.start(wait_other_notice=True)
> # we delete partition 2: only node 2 gets it.
> node1.flush()
> node1.stop(wait_other_notice=True)
> session = self.patient_cql_connection(node2, 'ks', 
> consistency_level=ConsistencyLevel.ONE)
> session.execute(SimpleStatement("DELETE FROM t WHERE k = 2"))
> node1.start(wait_other_notice=True)
> # read from both nodes
> session = self.patient_cql_connection(node1, 'ks', 
> consistency_level=ConsistencyLevel.ALL)
> assert_none(session, "SELECT * FROM t LIMIT 1")
> {code}
> However, the dtest passes if we remove the {{LIMIT 1}}.
> Short read protection [uses a 
> {{SinglePartitionReadCommand}}|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/DataResolver.java#L484],
>  maybe it should use a {{PartitionRangeReadCommand}} instead?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)




[jira] [Commented] (CASSANDRA-13404) Hostname verification for client-to-node encryption

2017-09-20 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16173535#comment-16173535
 ] 

Jason Brown commented on CASSANDRA-13404:
-

[~jjirsa] I am fine with that idea. 

> Hostname verification for client-to-node encryption
> ---
>
> Key: CASSANDRA-13404
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13404
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Jan Karlsson
>Assignee: Jan Karlsson
> Fix For: 4.x
>
> Attachments: 13404-trunk.txt
>
>
> Similarly to CASSANDRA-9220, Cassandra should support hostname verification 
> for client-to-node connections.
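
For context on what the request above builds on, here is a minimal sketch using plain JDK APIs (not the patch attached to this ticket): hostname verification is typically switched on by setting an endpoint identification algorithm on the {{SSLEngine}}, so the TLS handshake itself validates the peer certificate against the expected host.

{code}
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;
import javax.net.ssl.SSLParameters;

public final class HostnameVerificationSketch
{
    // Returns a client-mode engine that verifies the peer certificate against 'host'.
    public static SSLEngine clientEngine(SSLContext ctx, String host, int port)
    {
        SSLEngine engine = ctx.createSSLEngine(host, port); // peer host/port used for verification
        engine.setUseClientMode(true);
        SSLParameters params = engine.getSSLParameters();
        params.setEndpointIdentificationAlgorithm("HTTPS");  // enables hostname checking during the handshake
        engine.setSSLParameters(params);
        return engine;
    }

    public static void main(String[] args) throws Exception
    {
        SSLEngine engine = clientEngine(SSLContext.getDefault(), "cassandra.example.org", 9042);
        System.out.println(engine.getSSLParameters().getEndpointIdentificationAlgorithm());
    }
}
{code}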



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)




[6/6] cassandra git commit: Merge branch 'cassandra-3.11' into trunk

2017-09-20 Thread aleksey
Merge branch 'cassandra-3.11' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/79e344fc
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/79e344fc
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/79e344fc

Branch: refs/heads/trunk
Commit: 79e344fc63dbe60b3817d1eb68cdc6274dbe0d58
Parents: 030ec1f 85514ed
Author: Aleksey Yeschenko 
Authored: Wed Sep 20 18:45:06 2017 +0100
Committer: Aleksey Yeschenko 
Committed: Wed Sep 20 18:45:06 2017 +0100

--
 CHANGES.txt |  1 +
 .../apache/cassandra/service/StorageProxy.java  | 25 +---
 .../cassandra/service/StorageService.java   | 18 +++---
 3 files changed, 32 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/79e344fc/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/79e344fc/src/java/org/apache/cassandra/service/StorageProxy.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/79e344fc/src/java/org/apache/cassandra/service/StorageService.java
--





[2/6] cassandra git commit: Remove non-rpc-ready nodes from counter leader candidates

2017-09-20 Thread aleksey
Remove non-rpc-ready nodes from counter leader candidates

patch by Stefano Ortolani; reviewed by Aleksey Yeschenko for
CASSANDRA-13043


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/36bdc253
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/36bdc253
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/36bdc253

Branch: refs/heads/cassandra-3.11
Commit: 36bdc253193318ceaf5beb9bc5e869f6af590cb1
Parents: f93e6e3
Author: Stefano Ortolani 
Authored: Sun Sep 3 16:48:36 2017 +0100
Committer: Aleksey Yeschenko 
Committed: Wed Sep 20 18:32:24 2017 +0100

--
 CHANGES.txt |  1 +
 .../apache/cassandra/service/StorageProxy.java  | 25 +---
 .../cassandra/service/StorageService.java   | 18 +++---
 3 files changed, 32 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/36bdc253/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 07742ef..91f5a51 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.15
+ * Remove non-rpc-ready nodes from counter leader candidates (CASSANDRA-13043)
  * Improve short read protection performance (CASSANDRA-13794)
  * Fix sstable reader to support range-tombstone-marker for multi-slices 
(CASSANDRA-13787)
  * Fix short read protection for tables with no clustering columns 
(CASSANDRA-13880)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/36bdc253/src/java/org/apache/cassandra/service/StorageProxy.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageProxy.java 
b/src/java/org/apache/cassandra/service/StorageProxy.java
index 1ce1bc5..6bf275d 100644
--- a/src/java/org/apache/cassandra/service/StorageProxy.java
+++ b/src/java/org/apache/cassandra/service/StorageProxy.java
@@ -1404,27 +1404,34 @@ public class StorageProxy implements StorageProxyMBean
 {
 Keyspace keyspace = Keyspace.open(keyspaceName);
 IEndpointSnitch snitch = DatabaseDescriptor.getEndpointSnitch();
-List endpoints = 
StorageService.instance.getLiveNaturalEndpoints(keyspace, key);
+List endpoints = new ArrayList<>();
+StorageService.instance.getLiveNaturalEndpoints(keyspace, key, 
endpoints);
+
+// CASSANDRA-13043: filter out those endpoints not accepting clients 
yet, maybe because still bootstrapping
+endpoints.removeIf(endpoint -> 
!StorageService.instance.isRpcReady(endpoint));
+
+// TODO have a way to compute the consistency level
 if (endpoints.isEmpty())
-// TODO have a way to compute the consistency level
 throw new UnavailableException(cl, cl.blockFor(keyspace), 0);
 
-List localEndpoints = new ArrayList();
+List localEndpoints = new ArrayList<>(endpoints.size());
+
 for (InetAddress endpoint : endpoints)
-{
 if (snitch.getDatacenter(endpoint).equals(localDataCenter))
 localEndpoints.add(endpoint);
-}
+
 if (localEndpoints.isEmpty())
 {
+// If the consistency required is local then we should not involve 
other DCs
+if (cl.isDatacenterLocal())
+throw new UnavailableException(cl, cl.blockFor(keyspace), 0);
+
 // No endpoint in local DC, pick the closest endpoint according to 
the snitch
 snitch.sortByProximity(FBUtilities.getBroadcastAddress(), 
endpoints);
 return endpoints.get(0);
 }
-else
-{
-return 
localEndpoints.get(ThreadLocalRandom.current().nextInt(localEndpoints.size()));
-}
+
+return 
localEndpoints.get(ThreadLocalRandom.current().nextInt(localEndpoints.size()));
 }
 
 // Must be called on a replica of the mutation. This replica becomes the

http://git-wip-us.apache.org/repos/asf/cassandra/blob/36bdc253/src/java/org/apache/cassandra/service/StorageService.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageService.java 
b/src/java/org/apache/cassandra/service/StorageService.java
index a1d1756..52f28d4 100644
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@ -3415,16 +3415,28 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
 
 public List getLiveNaturalEndpoints(Keyspace keyspace, 
RingPosition pos)
 {
+List liveEps = new ArrayList<>();
+getLiveNaturalEndpoints(keyspace, pos, liveEps);
+return liveEps;
+}
+
+/**
+ * This method attempts to return N endpoints t

[3/6] cassandra git commit: Remove non-rpc-ready nodes from counter leader candidates

2017-09-20 Thread aleksey
Remove non-rpc-ready nodes from counter leader candidates

patch by Stefano Ortolani; reviewed by Aleksey Yeschenko for
CASSANDRA-13043


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/36bdc253
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/36bdc253
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/36bdc253

Branch: refs/heads/trunk
Commit: 36bdc253193318ceaf5beb9bc5e869f6af590cb1
Parents: f93e6e3
Author: Stefano Ortolani 
Authored: Sun Sep 3 16:48:36 2017 +0100
Committer: Aleksey Yeschenko 
Committed: Wed Sep 20 18:32:24 2017 +0100

--
 CHANGES.txt |  1 +
 .../apache/cassandra/service/StorageProxy.java  | 25 +---
 .../cassandra/service/StorageService.java   | 18 +++---
 3 files changed, 32 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/36bdc253/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 07742ef..91f5a51 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.15
+ * Remove non-rpc-ready nodes from counter leader candidates (CASSANDRA-13043)
  * Improve short read protection performance (CASSANDRA-13794)
  * Fix sstable reader to support range-tombstone-marker for multi-slices 
(CASSANDRA-13787)
  * Fix short read protection for tables with no clustering columns 
(CASSANDRA-13880)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/36bdc253/src/java/org/apache/cassandra/service/StorageProxy.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageProxy.java 
b/src/java/org/apache/cassandra/service/StorageProxy.java
index 1ce1bc5..6bf275d 100644
--- a/src/java/org/apache/cassandra/service/StorageProxy.java
+++ b/src/java/org/apache/cassandra/service/StorageProxy.java
@@ -1404,27 +1404,34 @@ public class StorageProxy implements StorageProxyMBean
 {
 Keyspace keyspace = Keyspace.open(keyspaceName);
 IEndpointSnitch snitch = DatabaseDescriptor.getEndpointSnitch();
-List endpoints = 
StorageService.instance.getLiveNaturalEndpoints(keyspace, key);
+List endpoints = new ArrayList<>();
+StorageService.instance.getLiveNaturalEndpoints(keyspace, key, 
endpoints);
+
+// CASSANDRA-13043: filter out those endpoints not accepting clients 
yet, maybe because still bootstrapping
+endpoints.removeIf(endpoint -> 
!StorageService.instance.isRpcReady(endpoint));
+
+// TODO have a way to compute the consistency level
 if (endpoints.isEmpty())
-// TODO have a way to compute the consistency level
 throw new UnavailableException(cl, cl.blockFor(keyspace), 0);
 
-List localEndpoints = new ArrayList();
+List localEndpoints = new ArrayList<>(endpoints.size());
+
 for (InetAddress endpoint : endpoints)
-{
 if (snitch.getDatacenter(endpoint).equals(localDataCenter))
 localEndpoints.add(endpoint);
-}
+
 if (localEndpoints.isEmpty())
 {
+// If the consistency required is local then we should not involve 
other DCs
+if (cl.isDatacenterLocal())
+throw new UnavailableException(cl, cl.blockFor(keyspace), 0);
+
 // No endpoint in local DC, pick the closest endpoint according to 
the snitch
 snitch.sortByProximity(FBUtilities.getBroadcastAddress(), 
endpoints);
 return endpoints.get(0);
 }
-else
-{
-return 
localEndpoints.get(ThreadLocalRandom.current().nextInt(localEndpoints.size()));
-}
+
+return 
localEndpoints.get(ThreadLocalRandom.current().nextInt(localEndpoints.size()));
 }
 
 // Must be called on a replica of the mutation. This replica becomes the

http://git-wip-us.apache.org/repos/asf/cassandra/blob/36bdc253/src/java/org/apache/cassandra/service/StorageService.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageService.java 
b/src/java/org/apache/cassandra/service/StorageService.java
index a1d1756..52f28d4 100644
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@ -3415,16 +3415,28 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
 
 public List getLiveNaturalEndpoints(Keyspace keyspace, 
RingPosition pos)
 {
+List liveEps = new ArrayList<>();
+getLiveNaturalEndpoints(keyspace, pos, liveEps);
+return liveEps;
+}
+
+/**
+ * This method attempts to return N endpoints that are r

[4/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2017-09-20 Thread aleksey
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/85514ed9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/85514ed9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/85514ed9

Branch: refs/heads/trunk
Commit: 85514ed9e652de5c5e83e74a9199bbfbd2c3e3e2
Parents: 2bae4ca 36bdc25
Author: Aleksey Yeschenko 
Authored: Wed Sep 20 18:44:45 2017 +0100
Committer: Aleksey Yeschenko 
Committed: Wed Sep 20 18:44:45 2017 +0100

--
 CHANGES.txt |  1 +
 .../apache/cassandra/service/StorageProxy.java  | 25 +---
 .../cassandra/service/StorageService.java   | 18 +++---
 3 files changed, 32 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/85514ed9/CHANGES.txt
--
diff --cc CHANGES.txt
index 8d07cbc,91f5a51..eb2ccc0
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,15 -1,5 +1,16 @@@
 -3.0.15
 +3.11.1
 + * AbstractTokenTreeBuilder#serializedSize returns wrong value when there is 
a single leaf and overflow collisions (CASSANDRA-13869)
 + * Add a compaction option to TWCS to ignore sstables overlapping checks 
(CASSANDRA-13418)
 + * BTree.Builder memory leak (CASSANDRA-13754)
 + * Revert CASSANDRA-10368 of supporting non-pk column filtering due to 
correctness (CASSANDRA-13798)
 + * Fix cassandra-stress hang issues when an error during cluster connection 
happens (CASSANDRA-12938)
 + * Better bootstrap failure message when blocked by (potential) range 
movement (CASSANDRA-13744)
 + * "ignore" option is ignored in sstableloader (CASSANDRA-13721)
 + * Deadlock in AbstractCommitLogSegmentManager (CASSANDRA-13652)
 + * Duplicate the buffer before passing it to analyser in SASI operation 
(CASSANDRA-13512)
 + * Properly evict pstmts from prepared statements cache (CASSANDRA-13641)
 +Merged from 3.0:
+  * Remove non-rpc-ready nodes from counter leader candidates (CASSANDRA-13043)
   * Improve short read protection performance (CASSANDRA-13794)
   * Fix sstable reader to support range-tombstone-marker for multi-slices 
(CASSANDRA-13787)
   * Fix short read protection for tables with no clustering columns 
(CASSANDRA-13880)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/85514ed9/src/java/org/apache/cassandra/service/StorageProxy.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/85514ed9/src/java/org/apache/cassandra/service/StorageService.java
--





[1/6] cassandra git commit: Remove non-rpc-ready nodes from counter leader candidates

2017-09-20 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 f93e6e340 -> 36bdc2531
  refs/heads/cassandra-3.11 2bae4ca90 -> 85514ed9e
  refs/heads/trunk 030ec1f05 -> 79e344fc6


Remove non-rpc-ready nodes from counter leader candidates

patch by Stefano Ortolani; reviewed by Aleksey Yeschenko for
CASSANDRA-13043


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/36bdc253
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/36bdc253
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/36bdc253

Branch: refs/heads/cassandra-3.0
Commit: 36bdc253193318ceaf5beb9bc5e869f6af590cb1
Parents: f93e6e3
Author: Stefano Ortolani 
Authored: Sun Sep 3 16:48:36 2017 +0100
Committer: Aleksey Yeschenko 
Committed: Wed Sep 20 18:32:24 2017 +0100

--
 CHANGES.txt |  1 +
 .../apache/cassandra/service/StorageProxy.java  | 25 +---
 .../cassandra/service/StorageService.java   | 18 +++---
 3 files changed, 32 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/36bdc253/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 07742ef..91f5a51 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.15
+ * Remove non-rpc-ready nodes from counter leader candidates (CASSANDRA-13043)
  * Improve short read protection performance (CASSANDRA-13794)
  * Fix sstable reader to support range-tombstone-marker for multi-slices 
(CASSANDRA-13787)
  * Fix short read protection for tables with no clustering columns 
(CASSANDRA-13880)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/36bdc253/src/java/org/apache/cassandra/service/StorageProxy.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageProxy.java 
b/src/java/org/apache/cassandra/service/StorageProxy.java
index 1ce1bc5..6bf275d 100644
--- a/src/java/org/apache/cassandra/service/StorageProxy.java
+++ b/src/java/org/apache/cassandra/service/StorageProxy.java
@@ -1404,27 +1404,34 @@ public class StorageProxy implements StorageProxyMBean
 {
 Keyspace keyspace = Keyspace.open(keyspaceName);
 IEndpointSnitch snitch = DatabaseDescriptor.getEndpointSnitch();
-List endpoints = 
StorageService.instance.getLiveNaturalEndpoints(keyspace, key);
+List endpoints = new ArrayList<>();
+StorageService.instance.getLiveNaturalEndpoints(keyspace, key, 
endpoints);
+
+// CASSANDRA-13043: filter out those endpoints not accepting clients 
yet, maybe because still bootstrapping
+endpoints.removeIf(endpoint -> 
!StorageService.instance.isRpcReady(endpoint));
+
+// TODO have a way to compute the consistency level
 if (endpoints.isEmpty())
-// TODO have a way to compute the consistency level
 throw new UnavailableException(cl, cl.blockFor(keyspace), 0);
 
-List localEndpoints = new ArrayList();
+List localEndpoints = new ArrayList<>(endpoints.size());
+
 for (InetAddress endpoint : endpoints)
-{
 if (snitch.getDatacenter(endpoint).equals(localDataCenter))
 localEndpoints.add(endpoint);
-}
+
 if (localEndpoints.isEmpty())
 {
+// If the consistency required is local then we should not involve 
other DCs
+if (cl.isDatacenterLocal())
+throw new UnavailableException(cl, cl.blockFor(keyspace), 0);
+
 // No endpoint in local DC, pick the closest endpoint according to 
the snitch
 snitch.sortByProximity(FBUtilities.getBroadcastAddress(), 
endpoints);
 return endpoints.get(0);
 }
-else
-{
-return 
localEndpoints.get(ThreadLocalRandom.current().nextInt(localEndpoints.size()));
-}
+
+return 
localEndpoints.get(ThreadLocalRandom.current().nextInt(localEndpoints.size()));
 }
 
 // Must be called on a replica of the mutation. This replica becomes the

http://git-wip-us.apache.org/repos/asf/cassandra/blob/36bdc253/src/java/org/apache/cassandra/service/StorageService.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageService.java 
b/src/java/org/apache/cassandra/service/StorageService.java
index a1d1756..52f28d4 100644
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@ -3415,16 +3415,28 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
 
 public List getLiveNaturalEndpoints(Keyspace keyspace, 
RingPosition pos)
 {
+Lis

[5/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2017-09-20 Thread aleksey
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/85514ed9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/85514ed9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/85514ed9

Branch: refs/heads/cassandra-3.11
Commit: 85514ed9e652de5c5e83e74a9199bbfbd2c3e3e2
Parents: 2bae4ca 36bdc25
Author: Aleksey Yeschenko 
Authored: Wed Sep 20 18:44:45 2017 +0100
Committer: Aleksey Yeschenko 
Committed: Wed Sep 20 18:44:45 2017 +0100

--
 CHANGES.txt |  1 +
 .../apache/cassandra/service/StorageProxy.java  | 25 +---
 .../cassandra/service/StorageService.java   | 18 +++---
 3 files changed, 32 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/85514ed9/CHANGES.txt
--
diff --cc CHANGES.txt
index 8d07cbc,91f5a51..eb2ccc0
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,15 -1,5 +1,16 @@@
 -3.0.15
 +3.11.1
 + * AbstractTokenTreeBuilder#serializedSize returns wrong value when there is 
a single leaf and overflow collisions (CASSANDRA-13869)
 + * Add a compaction option to TWCS to ignore sstables overlapping checks 
(CASSANDRA-13418)
 + * BTree.Builder memory leak (CASSANDRA-13754)
 + * Revert CASSANDRA-10368 of supporting non-pk column filtering due to 
correctness (CASSANDRA-13798)
 + * Fix cassandra-stress hang issues when an error during cluster connection 
happens (CASSANDRA-12938)
 + * Better bootstrap failure message when blocked by (potential) range 
movement (CASSANDRA-13744)
 + * "ignore" option is ignored in sstableloader (CASSANDRA-13721)
 + * Deadlock in AbstractCommitLogSegmentManager (CASSANDRA-13652)
 + * Duplicate the buffer before passing it to analyser in SASI operation 
(CASSANDRA-13512)
 + * Properly evict pstmts from prepared statements cache (CASSANDRA-13641)
 +Merged from 3.0:
+  * Remove non-rpc-ready nodes from counter leader candidates (CASSANDRA-13043)
   * Improve short read protection performance (CASSANDRA-13794)
   * Fix sstable reader to support range-tombstone-marker for multi-slices 
(CASSANDRA-13787)
   * Fix short read protection for tables with no clustering columns 
(CASSANDRA-13880)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/85514ed9/src/java/org/apache/cassandra/service/StorageProxy.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/85514ed9/src/java/org/apache/cassandra/service/StorageService.java
--





[jira] [Commented] (CASSANDRA-13885) Allow to run full repairs in 3.0 without additional cost of anti-compaction

2017-09-20 Thread Blake Eggleston (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16173655#comment-16173655
 ] 

Blake Eggleston commented on CASSANDRA-13885:
-

I agree that this behavior is weird, and that it has some negative operational 
implications. However, fixing this would mean some non-trivial changes to 
repair behavior which have the potential to affect correctness. I'd lean pretty 
strongly towards not-fixing this one.

> Allow to run full repairs in 3.0 without additional cost of anti-compaction
> ---
>
> Key: CASSANDRA-13885
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13885
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Thomas Steinmaurer
>
> This ticket is basically the result of the discussion in Cassandra user list: 
> https://www.mail-archive.com/user@cassandra.apache.org/msg53562.html
> I was asked to open a ticket by Paulo Motta to think about back-porting 
> running full repairs without the additional cost of anti-compaction.
> Basically there is no way in 3.0 to run full repairs from several nodes 
> concurrently without troubles caused by (overlapping?) anti-compactions. 
> Coming from 2.1 this is a major change from an operational POV, basically 
> breaking any e.g. cron job based solution kicking off -pr based repairs on 
> several nodes concurrently.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)




[jira] [Comment Edited] (CASSANDRA-13885) Allow to run full repairs in 3.0 without additional cost of anti-compaction

2017-09-20 Thread Blake Eggleston (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16173655#comment-16173655
 ] 

Blake Eggleston edited comment on CASSANDRA-13885 at 9/20/17 6:42 PM:
--

I agree that this behavior is weird, and that it has some negative operational 
implications. However, since fixing this would mean some non-trivial changes to 
repair behavior which have the potential to affect correctness, I'd lean pretty 
strongly towards not-fixing this one.


was (Author: bdeggleston):
I agree that this behavior is weird, and that it has some negative operational 
implications. However, fixing this would mean some non-trivial changes to 
repair behavior which have the potential to affect correctness. I'd lean pretty 
strongly towards not-fixing this one.

> Allow to run full repairs in 3.0 without additional cost of anti-compaction
> ---
>
> Key: CASSANDRA-13885
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13885
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Thomas Steinmaurer
>
> This ticket is basically the result of the discussion in Cassandra user list: 
> https://www.mail-archive.com/user@cassandra.apache.org/msg53562.html
> I was asked to open a ticket by Paulo Motta to think about back-porting 
> running full repairs without the additional cost of anti-compaction.
> Basically there is no way in 3.0 to run full repairs from several nodes 
> concurrently without troubles caused by (overlapping?) anti-compactions. 
> Coming from 2.1 this is a major change from an operational POV, basically 
> breaking any e.g. cron job based solution kicking off -pr based repairs on 
> several nodes concurrently.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)




[jira] [Created] (CASSANDRA-13889) cfstats should take sorting and limit parameters

2017-09-20 Thread Jon Haddad (JIRA)
Jon Haddad created CASSANDRA-13889:
--

 Summary: cfstats should take sorting and limit parameters
 Key: CASSANDRA-13889
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13889
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Jon Haddad
 Fix For: 4.0


When looking at a problematic node I'm not familiar with, one of the first 
things I do is check cfstats to identify the tables with the most reads, 
writes, and data. This is fine as long as there aren't a lot of tables, but 
once it goes above a dozen it's quite difficult. cfstats should allow me to 
sort the results and limit the output to the top K tables.
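
As a toy illustration of the requested behaviour (plain Java with hypothetical types, not nodetool code), sorting per-table stats by one metric and keeping only the top K looks like this:

{code}
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

public final class TopTablesSketch
{
    // Hypothetical per-table stats holder, standing in for what cfstats prints.
    static final class TableStat
    {
        final String table;
        final long reads;
        final long writes;

        TableStat(String table, long reads, long writes)
        {
            this.table = table;
            this.reads = reads;
            this.writes = writes;
        }
    }

    // Sort by the chosen metric, largest first, and keep only the top K tables.
    static List<TableStat> topBy(List<TableStat> stats, Comparator<TableStat> metric, int k)
    {
        return stats.stream()
                    .sorted(metric.reversed())
                    .limit(k)
                    .collect(Collectors.toList());
    }

    public static void main(String[] args)
    {
        List<TableStat> stats = new ArrayList<>();
        stats.add(new TableStat("ks.events", 9_000_000, 1_000_000));
        stats.add(new TableStat("ks.users",    500_000,   200_000));
        stats.add(new TableStat("ks.audit",  3_000_000, 7_000_000));

        for (TableStat s : topBy(stats, Comparator.comparingLong((TableStat t) -> t.reads), 2))
            System.out.println(s.table + " reads=" + s.reads);
    }
}
{code}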



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)




[jira] [Commented] (CASSANDRA-13043) UnavailableException caused by counter writes forwarded to leaders without complete cluster view

2017-09-20 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16173722#comment-16173722
 ] 

Aleksey Yeschenko commented on CASSANDRA-13043:
---

Committed as 
[36bdc253193318ceaf5beb9bc5e869f6af590cb1|https://github.com/apache/cassandra/commit/36bdc253193318ceaf5beb9bc5e869f6af590cb1]
 to 3.0 and merged up with 3.11 and trunk.

The dtest no longer applies cleanly. Care to rebase so I can commit? Thanks.

> UnavailableException caused by counter writes forwarded to leaders without 
> complete cluster view
> ---
>
> Key: CASSANDRA-13043
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13043
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
> Environment: Debian
>Reporter: Catalin Alexandru Zamfir
>Assignee: Stefano Ortolani
>Priority: Minor
> Fix For: 3.0.x, 3.11.x
>
> Attachments: 13043-3.0.patch, patch.diff
>
>
> In version 3.9 of Cassandra, we get the following exceptions on the 
> system.log whenever booting an agent. They seem to grow in number with each 
> reboot. Any idea where they come from or what can we do about them? Note that 
> the cluster is healthy (has sufficient live nodes).
> {noformat}
> 2/14/2016 12:39:47 PMINFO  10:39:47 Updating topology for /10.136.64.120
> 12/14/2016 12:39:47 PMINFO  10:39:47 Updating topology for /10.136.64.120
> 12/14/2016 12:39:47 PMWARN  10:39:47 Uncaught exception on thread 
> Thread[CounterMutationStage-111,5,main]: {}
> 12/14/2016 12:39:47 PMorg.apache.cassandra.exceptions.UnavailableException: 
> Cannot achieve consistency level LOCAL_QUORUM
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.db.ConsistencyLevel.assureSufficientLiveNodes(ConsistencyLevel.java:313)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.service.AbstractWriteResponseHandler.assureSufficientLiveNodes(AbstractWriteResponseHandler.java:146)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.service.StorageProxy.performWrite(StorageProxy.java:1054)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.service.StorageProxy.applyCounterMutationOnLeader(StorageProxy.java:1450)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.db.CounterMutationVerbHandler.doVerb(CounterMutationVerbHandler.java:48)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:64) 
> ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_111]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
>  [apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) 
> [apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat java.lang.Thread.run(Thread.java:745) 
> [na:1.8.0_111]
> 12/14/2016 12:39:47 PMWARN  10:39:47 Uncaught exception on thread 
> Thread[CounterMutationStage-118,5,main]: {}
> 12/14/2016 12:39:47 PMorg.apache.cassandra.exceptions.UnavailableException: 
> Cannot achieve consistency level LOCAL_QUORUM
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.db.ConsistencyLevel.assureSufficientLiveNodes(ConsistencyLevel.java:313)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.service.AbstractWriteResponseHandler.assureSufficientLiveNodes(AbstractWriteResponseHandler.java:146)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.service.StorageProxy.performWrite(StorageProxy.java:1054)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.service.StorageProxy.applyCounterMutationOnLeader(StorageProxy.java:1450)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.db.CounterMutationVerbHandler.doVerb(CounterMutationVerbHandler.java:48)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:64) 
> ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:

[jira] [Updated] (CASSANDRA-13043) UnavailableException caused by counter writes forwarded to leaders without complete cluster view

2017-09-20 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-13043:
--
   Resolution: Fixed
Fix Version/s: (was: 3.11.x)
   (was: 3.0.x)
   3.11.1
   3.0.15
   Status: Resolved  (was: Patch Available)

> UnavailableException caused by counter writes forwarded to leaders without 
> complete cluster view
> ---
>
> Key: CASSANDRA-13043
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13043
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
> Environment: Debian
>Reporter: Catalin Alexandru Zamfir
>Assignee: Stefano Ortolani
>Priority: Minor
> Fix For: 3.0.15, 3.11.1
>
> Attachments: 13043-3.0.patch, patch.diff
>
>
> In version 3.9 of Cassandra, we get the following exceptions on the 
> system.log whenever booting an agent. They seem to grow in number with each 
> reboot. Any idea where they come from or what can we do about them? Note that 
> the cluster is healthy (has sufficient live nodes).
> {noformat}
> 2/14/2016 12:39:47 PMINFO  10:39:47 Updating topology for /10.136.64.120
> 12/14/2016 12:39:47 PMINFO  10:39:47 Updating topology for /10.136.64.120
> 12/14/2016 12:39:47 PMWARN  10:39:47 Uncaught exception on thread 
> Thread[CounterMutationStage-111,5,main]: {}
> 12/14/2016 12:39:47 PMorg.apache.cassandra.exceptions.UnavailableException: 
> Cannot achieve consistency level LOCAL_QUORUM
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.db.ConsistencyLevel.assureSufficientLiveNodes(ConsistencyLevel.java:313)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.service.AbstractWriteResponseHandler.assureSufficientLiveNodes(AbstractWriteResponseHandler.java:146)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.service.StorageProxy.performWrite(StorageProxy.java:1054)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.service.StorageProxy.applyCounterMutationOnLeader(StorageProxy.java:1450)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.db.CounterMutationVerbHandler.doVerb(CounterMutationVerbHandler.java:48)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:64) 
> ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_111]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
>  [apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) 
> [apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat java.lang.Thread.run(Thread.java:745) 
> [na:1.8.0_111]
> 12/14/2016 12:39:47 PMWARN  10:39:47 Uncaught exception on thread 
> Thread[CounterMutationStage-118,5,main]: {}
> 12/14/2016 12:39:47 PMorg.apache.cassandra.exceptions.UnavailableException: 
> Cannot achieve consistency level LOCAL_QUORUM
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.db.ConsistencyLevel.assureSufficientLiveNodes(ConsistencyLevel.java:313)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.service.AbstractWriteResponseHandler.assureSufficientLiveNodes(AbstractWriteResponseHandler.java:146)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.service.StorageProxy.performWrite(StorageProxy.java:1054)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.service.StorageProxy.applyCounterMutationOnLeader(StorageProxy.java:1450)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.db.CounterMutationVerbHandler.doVerb(CounterMutationVerbHandler.java:48)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:64) 
> ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_111]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecu

[jira] [Created] (CASSANDRA-13890) Expose current compaction throughput

2017-09-20 Thread Jon Haddad (JIRA)
Jon Haddad created CASSANDRA-13890:
--

 Summary: Expose current compaction throughput
 Key: CASSANDRA-13890
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13890
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Jon Haddad
Priority: Minor
 Fix For: 4.0


Getting and setting the current compaction throughput limit is supported, but 
there's no means of knowing if the setting is actually making a difference.

Let's expose the throughput currently being utilized by Cassandra, as governed 
by the {{compactionRateLimiter}}.
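
One possible shape for this, sketched with hypothetical names rather than the actual Cassandra classes: count the bytes flowing through the compaction path and report a bytes-per-second figure on demand, alongside the configured limit.

{code}
import java.util.concurrent.atomic.AtomicLong;

public final class CompactionThroughputMeterSketch
{
    private final AtomicLong bytesInWindow = new AtomicLong();
    private volatile long windowStartNanos = System.nanoTime();

    // Call wherever the compaction path acquires permits for 'bytes' from the rate limiter.
    public void record(long bytes)
    {
        bytesInWindow.addAndGet(bytes);
    }

    // Report bytes/second since the last call and reset the window,
    // e.g. from a JMX getter polled by nodetool.
    public synchronized double bytesPerSecond()
    {
        long now = System.nanoTime();
        long bytes = bytesInWindow.getAndSet(0);
        double seconds = (now - windowStartNanos) / 1_000_000_000.0;
        windowStartNanos = now;
        return seconds > 0 ? bytes / seconds : 0.0;
    }

    public static void main(String[] args) throws InterruptedException
    {
        CompactionThroughputMeterSketch meter = new CompactionThroughputMeterSketch();
        meter.record(16L << 20); // pretend compaction just wrote 16 MiB
        Thread.sleep(1000);
        System.out.printf("%.1f MiB/s%n", meter.bytesPerSecond() / (1 << 20));
    }
}
{code}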



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)




[jira] [Commented] (CASSANDRA-13043) UnavailableException caused by counter writes forwarded to leaders without complete cluster view

2017-09-20 Thread Stefano Ortolani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16173739#comment-16173739
 ] 

Stefano Ortolani commented on CASSANDRA-13043:
--

Weird, riptano's last commit on master is from Jul 13th, so the commit should 
apply cleanly.
Maybe I am missing something?

> UnavailableException caused by counter writes forwarded to leaders without 
> complete cluster view
> ---
>
> Key: CASSANDRA-13043
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13043
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
> Environment: Debian
>Reporter: Catalin Alexandru Zamfir
>Assignee: Stefano Ortolani
>Priority: Minor
> Fix For: 3.0.15, 3.11.1
>
> Attachments: 13043-3.0.patch, patch.diff
>
>
> In version 3.9 of Cassandra, we get the following exceptions on the 
> system.log whenever booting an agent. They seem to grow in number with each 
> reboot. Any idea where they come from or what can we do about them? Note that 
> the cluster is healthy (has sufficient live nodes).
> {noformat}
> 2/14/2016 12:39:47 PMINFO  10:39:47 Updating topology for /10.136.64.120
> 12/14/2016 12:39:47 PMINFO  10:39:47 Updating topology for /10.136.64.120
> 12/14/2016 12:39:47 PMWARN  10:39:47 Uncaught exception on thread 
> Thread[CounterMutationStage-111,5,main]: {}
> 12/14/2016 12:39:47 PMorg.apache.cassandra.exceptions.UnavailableException: 
> Cannot achieve consistency level LOCAL_QUORUM
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.db.ConsistencyLevel.assureSufficientLiveNodes(ConsistencyLevel.java:313)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.service.AbstractWriteResponseHandler.assureSufficientLiveNodes(AbstractWriteResponseHandler.java:146)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.service.StorageProxy.performWrite(StorageProxy.java:1054)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.service.StorageProxy.applyCounterMutationOnLeader(StorageProxy.java:1450)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.db.CounterMutationVerbHandler.doVerb(CounterMutationVerbHandler.java:48)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:64) 
> ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_111]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
>  [apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) 
> [apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat java.lang.Thread.run(Thread.java:745) 
> [na:1.8.0_111]
> 12/14/2016 12:39:47 PMWARN  10:39:47 Uncaught exception on thread 
> Thread[CounterMutationStage-118,5,main]: {}
> 12/14/2016 12:39:47 PMorg.apache.cassandra.exceptions.UnavailableException: 
> Cannot achieve consistency level LOCAL_QUORUM
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.db.ConsistencyLevel.assureSufficientLiveNodes(ConsistencyLevel.java:313)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.service.AbstractWriteResponseHandler.assureSufficientLiveNodes(AbstractWriteResponseHandler.java:146)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.service.StorageProxy.performWrite(StorageProxy.java:1054)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.service.StorageProxy.applyCounterMutationOnLeader(StorageProxy.java:1450)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.db.CounterMutationVerbHandler.doVerb(CounterMutationVerbHandler.java:48)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:64) 
> ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_111]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run

[jira] [Updated] (CASSANDRA-13890) Expose current compaction throughput

2017-09-20 Thread Jon Haddad (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jon Haddad updated CASSANDRA-13890:
---
Labels: lhf  (was: )

> Expose current compaction throughput
> 
>
> Key: CASSANDRA-13890
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13890
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Jon Haddad
>Priority: Minor
>  Labels: lhf
> Fix For: 4.0
>
>
> Getting and setting the current compaction throughput limit is supported, but 
> there's no means of knowing if the setting is actually making a difference.
> Let's expose the current throughput being utilized by Cassandra via the 
> {{compactionRateLimiter}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13043) UnavailabeException caused by counter writes forwarded to leaders without complete cluster view

2017-09-20 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16173760#comment-16173760
 ] 

Aleksey Yeschenko commented on CASSANDRA-13043:
---

The last commit to apache/cassandra-dtest is from 2 days ago. It's where the 
dtests live now.

> UnavailabeException caused by counter writes forwarded to leaders without 
> complete cluster view
> ---
>
> Key: CASSANDRA-13043
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13043
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
> Environment: Debian
>Reporter: Catalin Alexandru Zamfir
>Assignee: Stefano Ortolani
>Priority: Minor
> Fix For: 3.0.15, 3.11.1
>
> Attachments: 13043-3.0.patch, patch.diff
>
>
> In version 3.9 of Cassandra, we get the following exceptions on the 
> system.log whenever booting an agent. They seem to grow in number with each 
> reboot. Any idea where they come from or what can we do about them? Note that 
> the cluster is healthy (has sufficient live nodes).
> {noformat}
> 2/14/2016 12:39:47 PMINFO  10:39:47 Updating topology for /10.136.64.120
> 12/14/2016 12:39:47 PMINFO  10:39:47 Updating topology for /10.136.64.120
> 12/14/2016 12:39:47 PMWARN  10:39:47 Uncaught exception on thread 
> Thread[CounterMutationStage-111,5,main]: {}
> 12/14/2016 12:39:47 PMorg.apache.cassandra.exceptions.UnavailableException: 
> Cannot achieve consistency level LOCAL_QUORUM
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.db.ConsistencyLevel.assureSufficientLiveNodes(ConsistencyLevel.java:313)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.service.AbstractWriteResponseHandler.assureSufficientLiveNodes(AbstractWriteResponseHandler.java:146)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.service.StorageProxy.performWrite(StorageProxy.java:1054)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.service.StorageProxy.applyCounterMutationOnLeader(StorageProxy.java:1450)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.db.CounterMutationVerbHandler.doVerb(CounterMutationVerbHandler.java:48)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:64) 
> ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_111]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
>  [apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) 
> [apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat java.lang.Thread.run(Thread.java:745) 
> [na:1.8.0_111]
> 12/14/2016 12:39:47 PMWARN  10:39:47 Uncaught exception on thread 
> Thread[CounterMutationStage-118,5,main]: {}
> 12/14/2016 12:39:47 PMorg.apache.cassandra.exceptions.UnavailableException: 
> Cannot achieve consistency level LOCAL_QUORUM
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.db.ConsistencyLevel.assureSufficientLiveNodes(ConsistencyLevel.java:313)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.service.AbstractWriteResponseHandler.assureSufficientLiveNodes(AbstractWriteResponseHandler.java:146)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.service.StorageProxy.performWrite(StorageProxy.java:1054)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.service.StorageProxy.applyCounterMutationOnLeader(StorageProxy.java:1450)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.db.CounterMutationVerbHandler.doVerb(CounterMutationVerbHandler.java:48)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:64) 
> ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_111]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecu

[jira] [Commented] (CASSANDRA-13043) UnavailabeException caused by counter writes forwarded to leaders without complete cluster view

2017-09-20 Thread Stefano Ortolani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16173769#comment-16173769
 ] 

Stefano Ortolani commented on CASSANDRA-13043:
--

That explains it :) Done.

> UnavailabeException caused by counter writes forwarded to leaders without 
> complete cluster view
> ---
>
> Key: CASSANDRA-13043
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13043
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
> Environment: Debian
>Reporter: Catalin Alexandru Zamfir
>Assignee: Stefano Ortolani
>Priority: Minor
> Fix For: 3.0.15, 3.11.1
>
> Attachments: 13043-3.0.patch, patch.diff
>
>
> In version 3.9 of Cassandra, we get the following exceptions on the 
> system.log whenever booting an agent. They seem to grow in number with each 
> reboot. Any idea where they come from or what can we do about them? Note that 
> the cluster is healthy (has sufficient live nodes).
> {noformat}
> 2/14/2016 12:39:47 PMINFO  10:39:47 Updating topology for /10.136.64.120
> 12/14/2016 12:39:47 PMINFO  10:39:47 Updating topology for /10.136.64.120
> 12/14/2016 12:39:47 PMWARN  10:39:47 Uncaught exception on thread 
> Thread[CounterMutationStage-111,5,main]: {}
> 12/14/2016 12:39:47 PMorg.apache.cassandra.exceptions.UnavailableException: 
> Cannot achieve consistency level LOCAL_QUORUM
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.db.ConsistencyLevel.assureSufficientLiveNodes(ConsistencyLevel.java:313)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.service.AbstractWriteResponseHandler.assureSufficientLiveNodes(AbstractWriteResponseHandler.java:146)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.service.StorageProxy.performWrite(StorageProxy.java:1054)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.service.StorageProxy.applyCounterMutationOnLeader(StorageProxy.java:1450)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.db.CounterMutationVerbHandler.doVerb(CounterMutationVerbHandler.java:48)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:64) 
> ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_111]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
>  [apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) 
> [apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat java.lang.Thread.run(Thread.java:745) 
> [na:1.8.0_111]
> 12/14/2016 12:39:47 PMWARN  10:39:47 Uncaught exception on thread 
> Thread[CounterMutationStage-118,5,main]: {}
> 12/14/2016 12:39:47 PMorg.apache.cassandra.exceptions.UnavailableException: 
> Cannot achieve consistency level LOCAL_QUORUM
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.db.ConsistencyLevel.assureSufficientLiveNodes(ConsistencyLevel.java:313)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.service.AbstractWriteResponseHandler.assureSufficientLiveNodes(AbstractWriteResponseHandler.java:146)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.service.StorageProxy.performWrite(StorageProxy.java:1054)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.service.StorageProxy.applyCounterMutationOnLeader(StorageProxy.java:1450)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.db.CounterMutationVerbHandler.doVerb(CounterMutationVerbHandler.java:48)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:64) 
> ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_111]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:

[jira] [Updated] (CASSANDRA-12961) LCS needlessly checks for L0 STCS candidates multiple times

2017-09-20 Thread Vusal Ahmadoglu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vusal Ahmadoglu updated CASSANDRA-12961:

Attachment: 0001-CASSANDRA-12961-Moving-getSTCSInL0CompactionCandidat.patch

Patch for CASSANDRA-12961

> LCS needlessly checks for L0 STCS candidates multiple times
> ---
>
> Key: CASSANDRA-12961
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12961
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Jeff Jirsa
>Assignee: Vusal Ahmadoglu
>Priority: Trivial
>  Labels: lhf
> Attachments: 
> 0001-CASSANDRA-12961-Moving-getSTCSInL0CompactionCandidat.patch
>
>
> It's very likely that the check for L0 STCS candidates (if L0 is falling 
> behind) can be moved outside of the loop, or at very least made so that it's 
> not called on each loop iteration:
> {code}
> for (int i = generations.length - 1; i > 0; i--)
> {
> List<SSTableReader> sstables = getLevel(i);
> if (sstables.isEmpty())
> continue; // mostly this just avoids polluting the debug log 
> with zero scores
> // we want to calculate score excluding compacting ones
> Set<SSTableReader> sstablesInLevel = Sets.newHashSet(sstables);
> Set<SSTableReader> remaining = Sets.difference(sstablesInLevel, 
> cfs.getTracker().getCompacting());
> double score = (double) SSTableReader.getTotalBytes(remaining) / 
> (double)maxBytesForLevel(i, maxSSTableSizeInBytes);
> logger.trace("Compaction score for level {} is {}", i, score);
> if (score > 1.001)
> {
> // before proceeding with a higher level, let's see if L0 is 
> far enough behind to warrant STCS
> CompactionCandidate l0Compaction = 
> getSTCSInL0CompactionCandidate();
> if (l0Compaction != null)
> return l0Compaction;
> ..
> {code}
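
As a sketch of the proposed refactor (a fragment against the loop quoted above, not a 
standalone program; it assumes, as this ticket does, that 
{{getSTCSInL0CompactionCandidate()}} does not depend on the loop variable), the candidate 
can be computed once before entering the loop:

{code}
// Sketch only: hoist the L0/STCS check out of the per-level loop so it runs at most once.
// All names below are taken from the snippet quoted above. Note this eager variant computes
// the candidate even when no level ends up exceeding the threshold.
CompactionCandidate l0Compaction = getSTCSInL0CompactionCandidate();

for (int i = generations.length - 1; i > 0; i--)
{
    List<SSTableReader> sstables = getLevel(i);
    if (sstables.isEmpty())
        continue; // mostly this just avoids polluting the debug log with zero scores

    // we want to calculate score excluding compacting ones
    Set<SSTableReader> remaining = Sets.difference(Sets.newHashSet(sstables),
                                                   cfs.getTracker().getCompacting());
    double score = (double) SSTableReader.getTotalBytes(remaining)
                 / (double) maxBytesForLevel(i, maxSSTableSizeInBytes);
    logger.trace("Compaction score for level {} is {}", i, score);

    if (score > 1.001)
    {
        // L0 falling behind still takes priority, but its candidate was computed only once
        if (l0Compaction != null)
            return l0Compaction;
        // ... continue with the normal leveled candidate selection for level i
    }
}
{code}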



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-12961) LCS needlessly checks for L0 STCS candidates multiple times

2017-09-20 Thread Vusal Ahmadoglu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vusal Ahmadoglu updated CASSANDRA-12961:

Status: Patch Available  (was: In Progress)

Hi, Jeff. I'm new here, so I might be going in the wrong direction :) I attached 
the patch as a file. Please could you have a look and let me know if anything is 
wrong? Have a lovely day!

> LCS needlessly checks for L0 STCS candidates multiple times
> ---
>
> Key: CASSANDRA-12961
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12961
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Jeff Jirsa
>Assignee: Vusal Ahmadoglu
>Priority: Trivial
>  Labels: lhf
> Attachments: 
> 0001-CASSANDRA-12961-Moving-getSTCSInL0CompactionCandidat.patch
>
>
> It's very likely that the check for L0 STCS candidates (if L0 is falling 
> behind) can be moved outside of the loop, or at very least made so that it's 
> not called on each loop iteration:
> {code}
> for (int i = generations.length - 1; i > 0; i--)
> {
> List<SSTableReader> sstables = getLevel(i);
> if (sstables.isEmpty())
> continue; // mostly this just avoids polluting the debug log 
> with zero scores
> // we want to calculate score excluding compacting ones
> Set<SSTableReader> sstablesInLevel = Sets.newHashSet(sstables);
> Set<SSTableReader> remaining = Sets.difference(sstablesInLevel, 
> cfs.getTracker().getCompacting());
> double score = (double) SSTableReader.getTotalBytes(remaining) / 
> (double)maxBytesForLevel(i, maxSSTableSizeInBytes);
> logger.trace("Compaction score for level {} is {}", i, score);
> if (score > 1.001)
> {
> // before proceeding with a higher level, let's see if L0 is 
> far enough behind to warrant STCS
> CompactionCandidate l0Compaction = 
> getSTCSInL0CompactionCandidate();
> if (l0Compaction != null)
> return l0Compaction;
> ..
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-12961) LCS needlessly checks for L0 STCS candidates multiple times

2017-09-20 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16173821#comment-16173821
 ] 

Jeff Jirsa commented on CASSANDRA-12961:


Hi [~vusal.ahmadoglu] - looks really good. One small change I'd like to make 
(which I can make for you if you're ok with it): 
[you should be able to use l0Compaction here as 
well|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/LeveledManifest.java#L392]
 and skip one last useless call.

The other thing we'll need to do is run your patch through CI, so I've pushed 
that small change to my branch 
[here|https://github.com/jeffjirsa/cassandra/tree/cassandra-12961] (+ a slight 
text change to CHANGES).

I'm running unit tests 
[here|https://circleci.com/gh/jeffjirsa/cassandra/356] (you can do this yourself if 
you set up CircleCI as well).
I'm running dtests 
[here|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/332]
 (you can't trigger those yourself, but I'll do it for you).


> LCS needlessly checks for L0 STCS candidates multiple times
> ---
>
> Key: CASSANDRA-12961
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12961
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Jeff Jirsa
>Assignee: Vusal Ahmadoglu
>Priority: Trivial
>  Labels: lhf
> Attachments: 
> 0001-CASSANDRA-12961-Moving-getSTCSInL0CompactionCandidat.patch
>
>
> It's very likely that the check for L0 STCS candidates (if L0 is falling 
> behind) can be moved outside of the loop, or at very least made so that it's 
> not called on each loop iteration:
> {code}
> for (int i = generations.length - 1; i > 0; i--)
> {
> List<SSTableReader> sstables = getLevel(i);
> if (sstables.isEmpty())
> continue; // mostly this just avoids polluting the debug log 
> with zero scores
> // we want to calculate score excluding compacting ones
> Set<SSTableReader> sstablesInLevel = Sets.newHashSet(sstables);
> Set<SSTableReader> remaining = Sets.difference(sstablesInLevel, 
> cfs.getTracker().getCompacting());
> double score = (double) SSTableReader.getTotalBytes(remaining) / 
> (double)maxBytesForLevel(i, maxSSTableSizeInBytes);
> logger.trace("Compaction score for level {} is {}", i, score);
> if (score > 1.001)
> {
> // before proceeding with a higher level, let's see if L0 is 
> far enough behind to warrant STCS
> CompactionCandidate l0Compaction = 
> getSTCSInL0CompactionCandidate();
> if (l0Compaction != null)
> return l0Compaction;
> ..
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-12961) LCS needlessly checks for L0 STCS candidates multiple times

2017-09-20 Thread Vusal Ahmadoglu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16173848#comment-16173848
 ] 

Vusal Ahmadoglu commented on CASSANDRA-12961:
-

Ohh, cool! I missed that part, good point. I have to set up CI; otherwise it 
takes too long on my local machine.

> LCS needlessly checks for L0 STCS candidates multiple times
> ---
>
> Key: CASSANDRA-12961
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12961
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Jeff Jirsa
>Assignee: Vusal Ahmadoglu
>Priority: Trivial
>  Labels: lhf
> Attachments: 
> 0001-CASSANDRA-12961-Moving-getSTCSInL0CompactionCandidat.patch
>
>
> It's very likely that the check for L0 STCS candidates (if L0 is falling 
> behind) can be moved outside of the loop, or at very least made so that it's 
> not called on each loop iteration:
> {code}
> for (int i = generations.length - 1; i > 0; i--)
> {
> List<SSTableReader> sstables = getLevel(i);
> if (sstables.isEmpty())
> continue; // mostly this just avoids polluting the debug log 
> with zero scores
> // we want to calculate score excluding compacting ones
> Set<SSTableReader> sstablesInLevel = Sets.newHashSet(sstables);
> Set<SSTableReader> remaining = Sets.difference(sstablesInLevel, 
> cfs.getTracker().getCompacting());
> double score = (double) SSTableReader.getTotalBytes(remaining) / 
> (double)maxBytesForLevel(i, maxSSTableSizeInBytes);
> logger.trace("Compaction score for level {} is {}", i, score);
> if (score > 1.001)
> {
> // before proceeding with a higher level, let's see if L0 is 
> far enough behind to warrant STCS
> CompactionCandidate l0Compaction = 
> getSTCSInL0CompactionCandidate();
> if (l0Compaction != null)
> return l0Compaction;
> ..
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-12961) LCS needlessly checks for L0 STCS candidates multiple times

2017-09-20 Thread Vusal Ahmadoglu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16173857#comment-16173857
 ] 

Vusal Ahmadoglu commented on CASSANDRA-12961:
-

BTW, thanks for your changes :)

> LCS needlessly checks for L0 STCS candidates multiple times
> ---
>
> Key: CASSANDRA-12961
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12961
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Jeff Jirsa
>Assignee: Vusal Ahmadoglu
>Priority: Trivial
>  Labels: lhf
> Attachments: 
> 0001-CASSANDRA-12961-Moving-getSTCSInL0CompactionCandidat.patch
>
>
> It's very likely that the check for L0 STCS candidates (if L0 is falling 
> behind) can be moved outside of the loop, or at very least made so that it's 
> not called on each loop iteration:
> {code}
> for (int i = generations.length - 1; i > 0; i--)
> {
> List<SSTableReader> sstables = getLevel(i);
> if (sstables.isEmpty())
> continue; // mostly this just avoids polluting the debug log 
> with zero scores
> // we want to calculate score excluding compacting ones
> Set<SSTableReader> sstablesInLevel = Sets.newHashSet(sstables);
> Set<SSTableReader> remaining = Sets.difference(sstablesInLevel, 
> cfs.getTracker().getCompacting());
> double score = (double) SSTableReader.getTotalBytes(remaining) / 
> (double)maxBytesForLevel(i, maxSSTableSizeInBytes);
> logger.trace("Compaction score for level {} is {}", i, score);
> if (score > 1.001)
> {
> // before proceeding with a higher level, let's see if L0 is 
> far enough behind to warrant STCS
> CompactionCandidate l0Compaction = 
> getSTCSInL0CompactionCandidate();
> if (l0Compaction != null)
> return l0Compaction;
> ..
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-12961) LCS needlessly checks for L0 STCS candidates multiple times

2017-09-20 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa updated CASSANDRA-12961:
---
Fix Version/s: 4.x

> LCS needlessly checks for L0 STCS candidates multiple times
> ---
>
> Key: CASSANDRA-12961
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12961
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Jeff Jirsa
>Assignee: Vusal Ahmadoglu
>Priority: Trivial
>  Labels: lhf
> Fix For: 4.x
>
> Attachments: 
> 0001-CASSANDRA-12961-Moving-getSTCSInL0CompactionCandidat.patch
>
>
> It's very likely that the check for L0 STCS candidates (if L0 is falling 
> behind) can be moved outside of the loop, or at very least made so that it's 
> not called on each loop iteration:
> {code}
> for (int i = generations.length - 1; i > 0; i--)
> {
> List<SSTableReader> sstables = getLevel(i);
> if (sstables.isEmpty())
> continue; // mostly this just avoids polluting the debug log 
> with zero scores
> // we want to calculate score excluding compacting ones
> Set<SSTableReader> sstablesInLevel = Sets.newHashSet(sstables);
> Set<SSTableReader> remaining = Sets.difference(sstablesInLevel, 
> cfs.getTracker().getCompacting());
> double score = (double) SSTableReader.getTotalBytes(remaining) / 
> (double)maxBytesForLevel(i, maxSSTableSizeInBytes);
> logger.trace("Compaction score for level {} is {}", i, score);
> if (score > 1.001)
> {
> // before proceeding with a higher level, let's see if L0 is 
> far enough behind to warrant STCS
> CompactionCandidate l0Compaction = 
> getSTCSInL0CompactionCandidate();
> if (l0Compaction != null)
> return l0Compaction;
> ..
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-12961) LCS needlessly checks for L0 STCS candidates multiple times

2017-09-20 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16173821#comment-16173821
 ] 

Jeff Jirsa edited comment on CASSANDRA-12961 at 9/20/17 9:41 PM:
-

Hi [~vusal.ahmadoglu] - looks really good. One small change I'd like to make 
(which I can make for you if you're ok with it): 
[you should be able to use l0Compaction here as 
well|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/LeveledManifest.java#L392]
 and skip one last useless call.

The other thing we'll need to do is run your patch through CI, so I've pushed 
that small change to my branch 
[here|https://github.com/jeffjirsa/cassandra/tree/cassandra-12961] (+ a slight 
text change to CHANGES).

I'm running unit tests 
[here|https://circleci.com/gh/jeffjirsa/cassandra/356] (you can do this yourself if 
you set up CircleCI as well).
I'm running dtests 
[here|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/332]
 (you can't trigger those yourself, but I'll do it for you).



was (Author: jjirsa):
Hi [~vusal.ahmadoglu] - looks really good, one small change I'd like to make 
(which I can make for you if you're ok with it): -
 [you should be able to use l0Compaction here as 
well|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/LeveledManifest.java#L392]
 and skip one last useless call.

The other thing we'll need to do is run your patch through CI - so I've pushed 
that small change to my branch 
[https://github.com/jeffjirsa/cassandra/tree/cassandra-12961|here] (+ a slight 
text change to CHANGES)

And I'm running unit tests 
[here|https://circleci.com/gh/jeffjirsa/cassandra/356] ( you can do this if you 
setup circle ci as well)
I'm running dtests 
[here|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/332]
 (you CANT do this, but I'll do it for you):


> LCS needlessly checks for L0 STCS candidates multiple times
> ---
>
> Key: CASSANDRA-12961
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12961
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Jeff Jirsa
>Assignee: Vusal Ahmadoglu
>Priority: Trivial
>  Labels: lhf
> Fix For: 4.x
>
> Attachments: 
> 0001-CASSANDRA-12961-Moving-getSTCSInL0CompactionCandidat.patch
>
>
> It's very likely that the check for L0 STCS candidates (if L0 is falling 
> behind) can be moved outside of the loop, or at very least made so that it's 
> not called on each loop iteration:
> {code}
> for (int i = generations.length - 1; i > 0; i--)
> {
> List<SSTableReader> sstables = getLevel(i);
> if (sstables.isEmpty())
> continue; // mostly this just avoids polluting the debug log 
> with zero scores
> // we want to calculate score excluding compacting ones
> Set<SSTableReader> sstablesInLevel = Sets.newHashSet(sstables);
> Set<SSTableReader> remaining = Sets.difference(sstablesInLevel, 
> cfs.getTracker().getCompacting());
> double score = (double) SSTableReader.getTotalBytes(remaining) / 
> (double)maxBytesForLevel(i, maxSSTableSizeInBytes);
> logger.trace("Compaction score for level {} is {}", i, score);
> if (score > 1.001)
> {
> // before proceeding with a higher level, let's see if L0 is 
> far enough behind to warrant STCS
> CompactionCandidate l0Compaction = 
> getSTCSInL0CompactionCandidate();
> if (l0Compaction != null)
> return l0Compaction;
> ..
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13149) AssertionError prepending to a list

2017-09-20 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16173977#comment-16173977
 ] 

Jason Brown commented on CASSANDRA-13149:
-

Pushed a few updates to the branches.

bq. handle the case where the list of elements to prepend > {{MAX_NANOS}}

Done, even though a client would have to be rather bananas to prepend 10,000 elements 
to the beginning of a list! Also added a test for that.

bq. nothing currently stops us overflowing the range of the PT instance except 
the construction of the for loop

I tried turning the "range" represented by the instance returned by 
{{PrecisionTime#getNext()}} into something like an iterator, as we basically 
want to iterate/walk the values represented by the range. It quickly started to 
feel over-engineered. While there is this "tied-at-the-hip" relationship between 
{{Prepender}} and {{PrecisionTime}}, I'm not sure that additional guards won't 
feel similarly out of place. wdyt?

Tests and other nits have been addressed, as well.
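
For context on the {{MAX_NANOS}} point, here is a toy allocator (this is not Cassandra's 
{{Lists.PrecisionTime}}; the class name and the constant value are assumptions) showing 
the rollover idea: hand out strictly decreasing (millis, nanos) pairs and drop to the 
previous millisecond once the sub-millisecond budget is exhausted, so a prepend of more 
than {{MAX_NANOS}} elements still gets unique, decreasing timestamps.

{code}
// Toy illustration only; not Cassandra code.
public final class DecreasingTimeAllocator
{
    static final int MAX_NANOS = 9999; // assumed per-millisecond budget

    private long currentMillis = System.currentTimeMillis() - 1; // stay in the past
    private int remainingNanos = MAX_NANOS;

    /** Encodes (millis, nanos) as millis * 10_000 + nanos; each call returns a smaller value. */
    public synchronized long next()
    {
        if (remainingNanos < 0)
        {
            currentMillis--;           // budget exhausted: roll over to the previous millisecond
            remainingNanos = MAX_NANOS;
        }
        return currentMillis * 10_000L + remainingNanos--;
    }

    public static void main(String[] args)
    {
        DecreasingTimeAllocator alloc = new DecreasingTimeAllocator();
        long prev = Long.MAX_VALUE;
        for (int i = 0; i < 25_000; i++) // more than MAX_NANOS elements, still strictly decreasing
        {
            long t = alloc.next();
            if (t >= prev)
                throw new IllegalStateException("timestamps must strictly decrease");
            prev = t;
        }
        System.out.println("allocated 25,000 strictly decreasing timestamps");
    }
}
{code}

The real implementation is of course more involved; the sketch is only meant to 
illustrate the rollover being discussed.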


> AssertionError prepending to a list
> ---
>
> Key: CASSANDRA-13149
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13149
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: 3.0.8
>Reporter: Steven Warren
>Assignee: Jason Brown
>
> Prepending to a list produces the following AssertionError randomly. Changing 
> the update to append (and sort in the client) works around the issue.
> {code}
> java.lang.AssertionError: null
>   at 
> org.apache.cassandra.cql3.Lists$PrecisionTime.getNext(Lists.java:275) 
> ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at org.apache.cassandra.cql3.Lists$Prepender.execute(Lists.java:430) 
> ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.cql3.statements.UpdateStatement.addUpdateForKey(UpdateStatement.java:94)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.cql3.statements.ModificationStatement.addUpdates(ModificationStatement.java:682)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.cql3.statements.ModificationStatement.getMutations(ModificationStatement.java:613)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.cql3.statements.ModificationStatement.executeWithoutCondition(ModificationStatement.java:420)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.cql3.statements.ModificationStatement.execute(ModificationStatement.java:408)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:206)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:487)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:464)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:130)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507)
>  [apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401)
>  [apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_101]
>   at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  [apache-cassandra-3.0.8.jar:3.0.8]
>   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [apache-cassandra-3.0.8.jar:3.0.8]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13787) RangeTombstoneMarker and PartitionDeletion is not properly included in MV

2017-09-20 Thread ZhaoYang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhaoYang updated CASSANDRA-13787:
-
Fix Version/s: 4.0

> RangeTombstoneMarker and PartitionDeletion is not properly included in MV
> -
>
> Key: CASSANDRA-13787
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13787
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths, Materialized Views
>Reporter: ZhaoYang
>Assignee: ZhaoYang
> Fix For: 3.0.15, 3.11.1, 4.0
>
>
> Found two problems related to MV tombstone. 
> 1. Range-tombstone-Marker being ignored after shadowing first row, subsequent 
> base rows are not shadowed in TableViews.
> If the range tombstone was not flushed, it was used as deleted row to 
> shadow new updates. It works correctly.
> After range tombstone was flushed, it was used as RangeTombstoneMarker 
> and being skipped after shadowing first update. The bound of 
> RangeTombstoneMarker seems wrong, it contained full clustering, but it should 
> contain range or it should be multiple RangeTombstoneMarkers for multiple 
> slices(aka. new updates)
> -2. Partition tombstone is not used when no existing live data, it will 
> resurrect deleted cells. It was found in 11500 and included in that patch.- 
> (Merged in CASSANDRA-11500)
> In order not to make 11500 patch more complicated, I will try fix 
> range/partition tombstone issue here.
> {code:title=Tests to reproduce}
> @Test
> public void testExistingRangeTombstoneWithFlush() throws Throwable
> {
> testExistingRangeTombstone(true);
> }
> @Test
> public void testExistingRangeTombstoneWithoutFlush() throws Throwable
> {
> testExistingRangeTombstone(false);
> }
> public void testExistingRangeTombstone(boolean flush) throws Throwable
> {
> createTable("CREATE TABLE %s (k1 int, c1 int, c2 int, v1 int, v2 int, 
> PRIMARY KEY (k1, c1, c2))");
> execute("USE " + keyspace());
> executeNet(protocolVersion, "USE " + keyspace());
> createView("view1",
>"CREATE MATERIALIZED VIEW view1 AS SELECT * FROM %%s WHERE 
> k1 IS NOT NULL AND c1 IS NOT NULL AND c2 IS NOT NULL PRIMARY KEY (k1, c2, 
> c1)");
> updateView("DELETE FROM %s USING TIMESTAMP 10 WHERE k1 = 1 and c1=1");
> if (flush)
> 
> Keyspace.open(keyspace()).getColumnFamilyStore(currentTable()).forceBlockingFlush();
> String table = KEYSPACE + "." + currentTable();
> updateView("BEGIN BATCH " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 0, 
> 0, 0, 0) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 0, 
> 1, 0, 1) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 1, 
> 0, 1, 0) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 1, 
> 1, 1, 1) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 1, 
> 2, 1, 2) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 1, 
> 3, 1, 3) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 2, 
> 0, 2, 0) USING TIMESTAMP 5; " +
> "APPLY BATCH");
> assertRowsIgnoringOrder(execute("select * from %s"),
> row(1, 0, 0, 0, 0),
> row(1, 0, 1, 0, 1),
> row(1, 2, 0, 2, 0));
> assertRowsIgnoringOrder(execute("select k1,c1,c2,v1,v2 from view1"),
> row(1, 0, 0, 0, 0),
> row(1, 0, 1, 0, 1),
> row(1, 2, 0, 2, 0));
> }
> @Test
> public void testExistingParitionDeletionWithFlush() throws Throwable
> {
> testExistingParitionDeletion(true);
> }
> @Test
> public void testExistingParitionDeletionWithoutFlush() throws Throwable
> {
> testExistingParitionDeletion(false);
> }
> public void testExistingParitionDeletion(boolean flush) throws Throwable
> {
> // for partition range deletion, need to know that existing row is 
> shadowed instead of not existed.
> createTable("CREATE TABLE %s (a int, b int, c int, d int, PRIMARY KEY 
> (a))");
> execute("USE " + keyspace());
> executeNet(protocolVersion, "USE " + keyspace());
> createView("mv_test1",
>"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a 
> IS NOT NULL AND b IS NOT NULL PRIMARY KEY (a, b)");
> Keyspace ks = Keyspace.open(keyspace());
>   

[jira] [Commented] (CASSANDRA-13787) RangeTombstoneMarker and PartitionDeletion is not properly included in MV

2017-09-20 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16174269#comment-16174269
 ] 

ZhaoYang commented on CASSANDRA-13787:
--

thanks for reviewing~

> RangeTombstoneMarker and PartitionDeletion is not properly included in MV
> -
>
> Key: CASSANDRA-13787
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13787
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths, Materialized Views
>Reporter: ZhaoYang
>Assignee: ZhaoYang
> Fix For: 3.0.15, 3.11.1, 4.0
>
>
> Found two problems related to MV tombstone. 
> 1. Range-tombstone-Marker being ignored after shadowing first row, subsequent 
> base rows are not shadowed in TableViews.
> If the range tombstone was not flushed, it was used as deleted row to 
> shadow new updates. It works correctly.
> After range tombstone was flushed, it was used as RangeTombstoneMarker 
> and being skipped after shadowing first update. The bound of 
> RangeTombstoneMarker seems wrong, it contained full clustering, but it should 
> contain range or it should be multiple RangeTombstoneMarkers for multiple 
> slices(aka. new updates)
> -2. Partition tombstone is not used when no existing live data, it will 
> resurrect deleted cells. It was found in 11500 and included in that patch.- 
> (Merged in CASSANDRA-11500)
> In order not to make 11500 patch more complicated, I will try fix 
> range/partition tombstone issue here.
> {code:title=Tests to reproduce}
> @Test
> public void testExistingRangeTombstoneWithFlush() throws Throwable
> {
> testExistingRangeTombstone(true);
> }
> @Test
> public void testExistingRangeTombstoneWithoutFlush() throws Throwable
> {
> testExistingRangeTombstone(false);
> }
> public void testExistingRangeTombstone(boolean flush) throws Throwable
> {
> createTable("CREATE TABLE %s (k1 int, c1 int, c2 int, v1 int, v2 int, 
> PRIMARY KEY (k1, c1, c2))");
> execute("USE " + keyspace());
> executeNet(protocolVersion, "USE " + keyspace());
> createView("view1",
>"CREATE MATERIALIZED VIEW view1 AS SELECT * FROM %%s WHERE 
> k1 IS NOT NULL AND c1 IS NOT NULL AND c2 IS NOT NULL PRIMARY KEY (k1, c2, 
> c1)");
> updateView("DELETE FROM %s USING TIMESTAMP 10 WHERE k1 = 1 and c1=1");
> if (flush)
> 
> Keyspace.open(keyspace()).getColumnFamilyStore(currentTable()).forceBlockingFlush();
> String table = KEYSPACE + "." + currentTable();
> updateView("BEGIN BATCH " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 0, 
> 0, 0, 0) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 0, 
> 1, 0, 1) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 1, 
> 0, 1, 0) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 1, 
> 1, 1, 1) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 1, 
> 2, 1, 2) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 1, 
> 3, 1, 3) USING TIMESTAMP 5; " +
> "INSERT INTO " + table + " (k1, c1, c2, v1, v2) VALUES (1, 2, 
> 0, 2, 0) USING TIMESTAMP 5; " +
> "APPLY BATCH");
> assertRowsIgnoringOrder(execute("select * from %s"),
> row(1, 0, 0, 0, 0),
> row(1, 0, 1, 0, 1),
> row(1, 2, 0, 2, 0));
> assertRowsIgnoringOrder(execute("select k1,c1,c2,v1,v2 from view1"),
> row(1, 0, 0, 0, 0),
> row(1, 0, 1, 0, 1),
> row(1, 2, 0, 2, 0));
> }
> @Test
> public void testExistingParitionDeletionWithFlush() throws Throwable
> {
> testExistingParitionDeletion(true);
> }
> @Test
> public void testExistingParitionDeletionWithoutFlush() throws Throwable
> {
> testExistingParitionDeletion(false);
> }
> public void testExistingParitionDeletion(boolean flush) throws Throwable
> {
> // for partition range deletion, need to know that existing row is 
> shadowed instead of not existed.
> createTable("CREATE TABLE %s (a int, b int, c int, d int, PRIMARY KEY 
> (a))");
> execute("USE " + keyspace());
> executeNet(protocolVersion, "USE " + keyspace());
> createView("mv_test1",
>"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a 
> IS NOT NULL AND b IS NOT NULL PRIMARY KEY (a, b)");
> 

[jira] [Commented] (CASSANDRA-13885) Allow to run full repairs in 3.0 without additional cost of anti-compaction

2017-09-20 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16174313#comment-16174313
 ] 

Marcus Eriksson commented on CASSANDRA-13885:
-

You can run repair with {{-st <start token> -et <end token>}} to avoid 
anticompaction in 3.0

> Allow to run full repairs in 3.0 without additional cost of anti-compaction
> ---
>
> Key: CASSANDRA-13885
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13885
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Thomas Steinmaurer
>
> This ticket is basically the result of the discussion in Cassandra user list: 
> https://www.mail-archive.com/user@cassandra.apache.org/msg53562.html
> I was asked to open a ticket by Paulo Motta to think about back-porting 
> running full repairs without the additional cost of anti-compaction.
> Basically there is no way in 3.0 to run full repairs from several nodes 
> concurrently without troubles caused by (overlapping?) anti-compactions. 
> Coming from 2.1 this is a major change from an operational POV, basically 
> breaking any e.g. cron job based solution kicking off -pr based repairs on 
> several nodes concurrently.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-13885) Allow to run full repairs in 3.0 without additional cost of anti-compaction

2017-09-20 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16174313#comment-16174313
 ] 

Marcus Eriksson edited comment on CASSANDRA-13885 at 9/21/17 6:07 AM:
--

-You can run repair with {{-st <start token> -et <end token>}} to avoid 
anticompaction in 3.0- uh, maybe not; it seems you need to figure out the actual 
tokens for each node


was (Author: krummas):
You can run repair with {{-st <start token> -et <end token>}} to avoid 
anticompaction in 3.0

> Allow to run full repairs in 3.0 without additional cost of anti-compaction
> ---
>
> Key: CASSANDRA-13885
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13885
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Thomas Steinmaurer
>
> This ticket is basically the result of the discussion in Cassandra user list: 
> https://www.mail-archive.com/user@cassandra.apache.org/msg53562.html
> I was asked to open a ticket by Paulo Motta to think about back-porting 
> running full repairs without the additional cost of anti-compaction.
> Basically there is no way in 3.0 to run full repairs from several nodes 
> concurrently without troubles caused by (overlapping?) anti-compactions. 
> Coming from 2.1 this is a major change from an operational POV, basically 
> breaking any e.g. cron job based solution kicking off -pr based repairs on 
> several nodes concurrently.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org