[jira] [Updated] (CASSANDRA-11550) Make the fanout size for LeveledCompactionStrategy to be configurable

2016-04-11 Thread Dikang Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dikang Gu updated CASSANDRA-11550:
--
Reviewer: Marcus Eriksson
  Status: Patch Available  (was: Open)

> Make the fanout size for LeveledCompactionStrategy to be configurable
> -
>
> Key: CASSANDRA-11550
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11550
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Compaction
>Reporter: Dikang Gu
>Assignee: Dikang Gu
> Fix For: 3.x
>
> Attachments: 
> 0001-make-fanout-size-for-leveledcompactionstrategy-to-be.patch
>
>
> Currently, the fanout size for LeveledCompactionStrategy is hard-coded in the 
> system (10). It would be useful to make the fanout size tunable, so 
> that we can change it according to different use cases.
> Furthermore, we can change the size dynamically.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11550) Make the fanout size for LeveledCompactionStrategy to be configurable

2016-04-11 Thread Dikang Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dikang Gu updated CASSANDRA-11550:
--
Attachment: 0001-make-fanout-size-for-leveledcompactionstrategy-to-be.patch

> Make the fanout size for LeveledCompactionStrategy to be configurable
> -
>
> Key: CASSANDRA-11550
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11550
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Compaction
>Reporter: Dikang Gu
>Assignee: Dikang Gu
> Fix For: 3.x
>
> Attachments: 
> 0001-make-fanout-size-for-leveledcompactionstrategy-to-be.patch
>
>
> Currently, the fanout size for LeveledCompactionStrategy is hard-coded in the 
> system (10). It would be useful to make the fanout size tunable, so 
> that we can change it according to different use cases.
> Furthermore, we can change the size dynamically.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11550) Make the fanout size for LeveledCompactionStrategy to be configurable

2016-04-11 Thread Dikang Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dikang Gu updated CASSANDRA-11550:
--
Issue Type: New Feature  (was: Improvement)

> Make the fanout size for LeveledCompactionStrategy to be configurable
> -
>
> Key: CASSANDRA-11550
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11550
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Compaction
>Reporter: Dikang Gu
>Assignee: Dikang Gu
> Fix For: 3.x
>
>
> Currently, the fanout size for LeveledCompactionStrategy is hard-coded in the 
> system (10). It would be useful to make the fanout size tunable, so 
> that we can change it according to different use cases.
> Furthermore, we can change the size dynamically.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11550) Make the fanout size for LeveledCompactionStrategy to be configurable

2016-04-11 Thread Dikang Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dikang Gu updated CASSANDRA-11550:
--
Fix Version/s: 3.x

> Make the fanout size for LeveledCompactionStrategy to be configurable
> -
>
> Key: CASSANDRA-11550
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11550
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Dikang Gu
>Assignee: Dikang Gu
> Fix For: 3.x
>
>
> Currently, the fanout size for LeveledCompactionStrategy is hard-coded in the 
> system (10). It would be useful to make the fanout size tunable, so 
> that we can change it according to different use cases.
> Furthermore, we can change the size dynamically.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-11550) Make the fanout size for LeveledCompactionStrategy to be configurable

2016-04-11 Thread Dikang Gu (JIRA)
Dikang Gu created CASSANDRA-11550:
-

 Summary: Make the fanout size for LeveledCompactionStrategy to be 
configurable
 Key: CASSANDRA-11550
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11550
 Project: Cassandra
  Issue Type: Improvement
  Components: Compaction
Reporter: Dikang Gu
Assignee: Dikang Gu


Currently, the fanout size for LeveledCompactionStrategy is hard-coded in the 
system (10). It would be useful to make the fanout size tunable, so that 
we can change it according to different use cases.

Furthermore, we can change the size dynamically.
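
A minimal, hypothetical sketch (not the attached patch) of how a tunable fanout 
could be read from per-table compaction options; the option name "fanout_size" 
and the helper class are illustrative assumptions only:

{code}
import java.util.Map;

public final class LeveledFanoutOption
{
    // Hypothetical option name; the default of 10 matches the current hard-coded value.
    public static final String FANOUT_SIZE_OPTION = "fanout_size";
    public static final int DEFAULT_FANOUT_SIZE = 10;

    private LeveledFanoutOption() {}

    // Parses the fanout size from compaction options, falling back to the default.
    public static int parseFanoutSize(Map<String, String> options)
    {
        String value = options.get(FANOUT_SIZE_OPTION);
        if (value == null)
            return DEFAULT_FANOUT_SIZE;

        int fanout = Integer.parseInt(value);
        if (fanout <= 1)
            throw new IllegalArgumentException(FANOUT_SIZE_OPTION + " must be greater than 1, got " + fanout);
        return fanout;
    }
}
{code}

With such an option, the per-level size targets would be derived from the 
configured value instead of the hard-coded constant 10.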



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11549) cqlsh: COPY FROM ignores NULL values in conversion

2016-04-11 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania updated CASSANDRA-11549:
-
Status: Patch Available  (was: In Progress)

> cqlsh: COPY FROM ignores NULL values in conversion
> --
>
> Key: CASSANDRA-11549
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11549
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Stefania
>Assignee: Stefania
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
>
> COPY FROM fails to import empty values. 
> For example:
> {code}
> $ cat test.csv
> a,10,20
> b,30,
> c,50,60
> $ cqlsh
> cqlsh> create keyspace if not exists test with replication = {'class': 
> 'SimpleStrategy', 'replication_factor':1};
> cqlsh> create table if not exists test.test (t text primary key, i1 int, i2 
> int);
> cqlsh> copy test.test (t,i1,i2) from 'test.csv';
> {code}
> Imports:
> {code}
> select * from test.test;
>  t | i1 | i2
> ---++
>  a | 10 | 20
>  c | 50 | 60
> (2 rows)
> {code}
> and generates a {{ParseError - invalid literal for int() with base 10: '',  
> given up without retries}} for the row with an empty value.
> It should import the empty value as a {{null}} and there should be no error:
> {code}
> cqlsh> select * from test.test;
>  t | i1 | i2
> ---++--
>  a | 10 |   20
>  c | 50 |   60
>  b | 30 | null
> (3 rows)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11505) dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_reading_max_parse_errors

2016-04-11 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15236511#comment-15236511
 ] 

Stefania commented on CASSANDRA-11505:
--

Awesome, thank you so much for testing! :)

> dtest failure in 
> cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_reading_max_parse_errors
> -
>
> Key: CASSANDRA-11505
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11505
> Project: Cassandra
>  Issue Type: Test
>  Components: Tools
>Reporter: Michael Shuler
>Assignee: Stefania
>  Labels: dtest
> Fix For: 2.1.x, 2.2.x
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.0_novnode_dtest/197/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_reading_max_parse_errors
> Failed on CassCI build cassandra-3.0_novnode_dtest #197
> {noformat}
> Error Message
> False is not true
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /mnt/tmp/dtest-c2AJlu
> dtest: DEBUG: Custom init_config not found. Setting defaults.
> dtest: DEBUG: Done setting configuration options:
> {   'num_tokens': None,
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> dtest: DEBUG: Importing csv file /mnt/tmp/tmp2O43PH with 10 max parse errors
> - >> end captured logging << -
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/cqlsh_tests/cqlsh_copy_tests.py", 
> line 943, in test_reading_max_parse_errors
> self.assertTrue(num_rows_imported < (num_rows / 2))  # less than the 
> maximum number of valid rows in the csv
>   File "/usr/lib/python2.7/unittest/case.py", line 422, in assertTrue
> raise self.failureException(msg)
> "False is not true\n >> begin captured logging << 
> \ndtest: DEBUG: cluster ccm directory: 
> /mnt/tmp/dtest-c2AJlu\ndtest: DEBUG: Custom init_config not found. Setting 
> defaults.\ndtest: DEBUG: Done setting configuration options:\n{   
> 'num_tokens': None,\n'phi_convict_threshold': 5,\n
> 'range_request_timeout_in_ms': 1,\n'read_request_timeout_in_ms': 
> 1,\n'request_timeout_in_ms': 1,\n
> 'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
> 1}\ndtest: DEBUG: Importing csv file /mnt/tmp/tmp2O43PH with 10 max parse 
> errors\n- >> end captured logging << 
> -"
> Standard Output
> (EE)  Using CQL driver:  '/home/automaton/cassandra/bin/../lib/cassandra-driver-internal-only-3.0.0-6af642d.zip/cassandra-driver-3.0.0-6af642d/cassandra/__init__.py'>(EE)
>   Using connect timeout: 5 seconds(EE)  Using 'utf-8' encoding(EE)  
> :2:Failed to import 2500 rows: ParseError - could not convert string 
> to float: abc,  given up without retries(EE)  :2:Exceeded maximum 
> number of parse errors 10(EE)  :2:Failed to process 2500 rows; failed 
> rows written to import_ks_testmaxparseerrors.err(EE)  :2:Exceeded 
> maximum number of parse errors 10(EE)  
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11505) dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_reading_max_parse_errors

2016-04-11 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15236494#comment-15236494
 ] 

Michael Shuler commented on CASSANDRA-11505:


2.1 HEAD hung on test_reading_max_parse_errors on my first try. Since it has 
been intermittent, we probably hadn't hit it in CI due to the smaller number of 
2.1 commits. With your 2.1 patch, 50 loops over the test have completed 
successfully. +1 for 2.1, too :)

> dtest failure in 
> cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_reading_max_parse_errors
> -
>
> Key: CASSANDRA-11505
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11505
> Project: Cassandra
>  Issue Type: Test
>  Components: Tools
>Reporter: Michael Shuler
>Assignee: Stefania
>  Labels: dtest
> Fix For: 2.1.x, 2.2.x
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.0_novnode_dtest/197/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_reading_max_parse_errors
> Failed on CassCI build cassandra-3.0_novnode_dtest #197
> {noformat}
> Error Message
> False is not true
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /mnt/tmp/dtest-c2AJlu
> dtest: DEBUG: Custom init_config not found. Setting defaults.
> dtest: DEBUG: Done setting configuration options:
> {   'num_tokens': None,
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> dtest: DEBUG: Importing csv file /mnt/tmp/tmp2O43PH with 10 max parse errors
> - >> end captured logging << -
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/cqlsh_tests/cqlsh_copy_tests.py", 
> line 943, in test_reading_max_parse_errors
> self.assertTrue(num_rows_imported < (num_rows / 2))  # less than the 
> maximum number of valid rows in the csv
>   File "/usr/lib/python2.7/unittest/case.py", line 422, in assertTrue
> raise self.failureException(msg)
> "False is not true\n >> begin captured logging << 
> \ndtest: DEBUG: cluster ccm directory: 
> /mnt/tmp/dtest-c2AJlu\ndtest: DEBUG: Custom init_config not found. Setting 
> defaults.\ndtest: DEBUG: Done setting configuration options:\n{   
> 'num_tokens': None,\n'phi_convict_threshold': 5,\n
> 'range_request_timeout_in_ms': 1,\n'read_request_timeout_in_ms': 
> 1,\n'request_timeout_in_ms': 1,\n
> 'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
> 1}\ndtest: DEBUG: Importing csv file /mnt/tmp/tmp2O43PH with 10 max parse 
> errors\n- >> end captured logging << 
> -"
> Standard Output
> (EE)  Using CQL driver:  '/home/automaton/cassandra/bin/../lib/cassandra-driver-internal-only-3.0.0-6af642d.zip/cassandra-driver-3.0.0-6af642d/cassandra/__init__.py'>(EE)
>   Using connect timeout: 5 seconds(EE)  Using 'utf-8' encoding(EE)  
> :2:Failed to import 2500 rows: ParseError - could not convert string 
> to float: abc,  given up without retries(EE)  :2:Exceeded maximum 
> number of parse errors 10(EE)  :2:Failed to process 2500 rows; failed 
> rows written to import_ks_testmaxparseerrors.err(EE)  :2:Exceeded 
> maximum number of parse errors 10(EE)  
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11549) cqlsh: COPY FROM ignores NULL values in conversion

2016-04-11 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-11549:

Reviewer: Paulo Motta

> cqlsh: COPY FROM ignores NULL values in conversion
> --
>
> Key: CASSANDRA-11549
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11549
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Stefania
>Assignee: Stefania
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
>
> COPY FROM fails to import empty values. 
> For example:
> {code}
> $ cat test.csv
> a,10,20
> b,30,
> c,50,60
> $ cqlsh
> cqlsh> create keyspace if not exists test with replication = {'class': 
> 'SimpleStrategy', 'replication_factor':1};
> cqlsh> create table if not exists test.test (t text primary key, i1 int, i2 
> int);
> cqlsh> copy test.test (t,i1,i2) from 'test.csv';
> {code}
> Imports:
> {code}
> select * from test.test;
>  t | i1 | i2
> ---++
>  a | 10 | 20
>  c | 50 | 60
> (2 rows)
> {code}
> and generates a {{ParseError - invalid literal for int() with base 10: '',  
> given up without retries}} for the row with an empty value.
> It should import the empty value as a {{null}} and there should be no error:
> {code}
> cqlsh> select * from test.test;
>  t | i1 | i2
> ---++--
>  a | 10 |   20
>  c | 50 |   60
>  b | 30 | null
> (3 rows)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11529) Checking if an unlogged batch is local is inefficient

2016-04-11 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15236453#comment-15236453
 ] 

Stefania commented on CASSANDRA-11529:
--

Thank you [~iamaleksey]!

> Checking if an unlogged batch is local is inefficient
> -
>
> Key: CASSANDRA-11529
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11529
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Paulo Motta
>Assignee: Stefania
>Priority: Critical
>  Labels: docs-impacting
> Fix For: 2.1.14, 2.2.6, 3.6, 3.0.6
>
>
> Based on CASSANDRA-11363 report I noticed that on CASSANDRA-9303 we 
> introduced the following check to avoid printing a {{WARN}} in case an 
> unlogged batch statement is local:
> {noformat}
>  for (IMutation im : mutations)
>  {
>  keySet.add(im.key());
>  for (ColumnFamily cf : im.getColumnFamilies())
>  ksCfPairs.add(String.format("%s.%s", 
> cf.metadata().ksName, cf.metadata().cfName));
> +
> +if (localMutationsOnly)
> +localMutationsOnly &= isMutationLocal(localTokensByKs, 
> im);
>  }
>  
> +// CASSANDRA-9303: If we only have local mutations we do not warn
> +if (localMutationsOnly)
> +return;
> +
>  NoSpamLogger.log(logger, NoSpamLogger.Level.WARN, 1, 
> TimeUnit.MINUTES, unloggedBatchWarning,
>   keySet.size(), keySet.size() == 1 ? "" : "s",
>   ksCfPairs.size() == 1 ? "" : "s", ksCfPairs);
> {noformat}
> The {{isMutationLocal}} check uses 
> {{StorageService.instance.getLocalRanges(mutation.getKeyspaceName())}}, which 
> underneath uses {{AbstractReplication.getAddressRanges}} to calculate local 
> ranges. 
> Recalculating this at every unlogged batch can be pretty inefficient, so we 
> should at the very least cache it every time the ring changes.
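
To illustrate the suggestion, here is a small, Cassandra-agnostic sketch of 
caching the computed local ranges per keyspace and invalidating them only when 
the ring changes; the class and method names are illustrative stand-ins, not 
part of any patch:

{code}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.Function;

/** Caches an expensive per-keyspace computation and invalidates it when the ring version changes. */
public final class LocalRangeCache<R>
{
    private final AtomicLong ringVersion = new AtomicLong();
    private final ConcurrentHashMap<String, CachedValue<R>> cache = new ConcurrentHashMap<>();
    private final Function<String, R> computeLocalRanges;

    public LocalRangeCache(Function<String, R> computeLocalRanges)
    {
        this.computeLocalRanges = computeLocalRanges;
    }

    // Hook this into the ring-change notification; newer versions invalidate entries lazily.
    public void onRingChange()
    {
        ringVersion.incrementAndGet();
    }

    // Returns the cached local ranges for a keyspace, recomputing only after a ring change.
    public R localRanges(String keyspace)
    {
        long version = ringVersion.get();
        CachedValue<R> cached = cache.get(keyspace);
        if (cached == null || cached.version != version)
        {
            cached = new CachedValue<>(version, computeLocalRanges.apply(keyspace));
            cache.put(keyspace, cached);
        }
        return cached.value;
    }

    private static final class CachedValue<V>
    {
        final long version;
        final V value;

        CachedValue(long version, V value)
        {
            this.version = version;
            this.value = value;
        }
    }
}
{code}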



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11549) cqlsh: COPY FROM ignores NULL values in conversion

2016-04-11 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15236452#comment-15236452
 ] 

Stefania commented on CASSANDRA-11549:
--

The patch is here:

||2.1||2.2||3.0||trunk||
|[patch|https://github.com/stef1927/cassandra/commits/11549-2.1]|[patch|https://github.com/stef1927/cassandra/commits/11549-2.2]|[patch|https://github.com/stef1927/cassandra/commits/11549-3.0]|[patch|https://github.com/stef1927/cassandra/commits/11549]|
|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11549-2.1-dtest/]|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11549-2.2-dtest/]|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11549-3.0-dtest/]|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11549-dtest/]|

Here is a [pull request|https://github.com/riptano/cassandra-dtest/pull/923] 
for dtests so that in the future we catch this sort of problem.

[~pauloricardomg] could you review? It's a one-liner, so I've only started CI on 
2.1 and trunk to save resources.

> cqlsh: COPY FROM ignores NULL values in conversion
> --
>
> Key: CASSANDRA-11549
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11549
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Stefania
>Assignee: Stefania
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
>
> COPY FROM fails to import empty values. 
> For example:
> {code}
> $ cat test.csv
> a,10,20
> b,30,
> c,50,60
> $ cqlsh
> cqlsh> create keyspace if not exists test with replication = {'class': 
> 'SimpleStrategy', 'replication_factor':1};
> cqlsh> create table if not exists test.test (t text primary key, i1 int, i2 
> int);
> cqlsh> copy test.test (t,i1,i2) from 'test.csv';
> {code}
> Imports:
> {code}
> select * from test.test;
>  t | i1 | i2
> ---++
>  a | 10 | 20
>  c | 50 | 60
> (2 rows)
> {code}
> and generates a {{ParseError - invalid literal for int() with base 10: '',  
> given up without retries}} for the row with an empty value.
> It should import the empty value as a {{null}} and there should be no error:
> {code}
> cqlsh> select * from test.test;
>  t | i1 | i2
> ---++--
>  a | 10 |   20
>  c | 50 |   60
>  b | 30 | null
> (3 rows)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11547) Add background thread to check for clock drift

2016-04-11 Thread Jason Brown (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Brown updated CASSANDRA-11547:

Status: Patch Available  (was: Open)

> Add background thread to check for clock drift
> --
>
> Key: CASSANDRA-11547
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11547
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Jason Brown
>Assignee: Jason Brown
>Priority: Minor
>  Labels: clocks, time
>
> The system clock has the potential to drift while a system is running. As a 
> simple way to check if this occurs, we can run a background thread that wakes 
> up every n seconds, reads the system clock, and checks to see if, indeed, n 
> seconds have passed. 
> * If the clock's current time is less than the last recorded time (captured n 
> seconds in the past), we know the clock has jumped backward.
> * If n seconds have not elapsed, we know the system clock is running slow or 
> has moved backward (by a value less than n).
> * If roughly (n + a small offset) seconds have elapsed, we can assume we are 
> within an acceptable window of clock movement. The offset accounts for the 
> clock-checking thread not being scheduled on time, garbage collection pauses, 
> and so on.
> * If more than (n + a small offset) seconds have elapsed, we can assume the 
> clock jumped forward.
> In the unhappy cases, we can write a message to the log and increment some 
> metric that the user's monitoring systems can trigger/alert on.
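
A minimal, self-contained sketch of the check described above (not the attached 
patch); the interval, tolerance, and reporting hook are placeholder choices:

{code}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

/** Periodically compares wall-clock progress against the expected interval and reports anomalies. */
public final class ClockDriftChecker implements Runnable
{
    private static final long INTERVAL_MS = 10_000;   // n
    private static final long TOLERANCE_MS = 1_000;   // small offset for scheduling/GC jitter

    private volatile long lastCheckMillis = System.currentTimeMillis();

    public void start()
    {
        ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor();
        executor.scheduleWithFixedDelay(this, INTERVAL_MS, INTERVAL_MS, TimeUnit.MILLISECONDS);
    }

    @Override
    public void run()
    {
        long now = System.currentTimeMillis();
        long elapsed = now - lastCheckMillis;
        lastCheckMillis = now;

        if (elapsed < 0)
            report("clock jumped backward by " + (-elapsed) + " ms");
        else if (elapsed < INTERVAL_MS - TOLERANCE_MS)
            report("clock is running slow or moved backward; only " + elapsed + " ms elapsed");
        else if (elapsed > INTERVAL_MS + TOLERANCE_MS)
            report("clock jumped forward; " + elapsed + " ms elapsed");
        // otherwise we are within the acceptable window
    }

    private void report(String message)
    {
        // In a real patch this would log a warning and bump a metric for monitoring systems.
        System.err.println("ClockDriftChecker: " + message);
    }
}
{code}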



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11548) Anticompaction not removing old sstables

2016-04-11 Thread Ruoran Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruoran Wang updated CASSANDRA-11548:

Attachment: 0001-cassandra-2.1.13-potential-fix.patch

I have only tried a unit test for this so far; still trying to figure out the dtest.

> Anticompaction not removing old sstables
> 
>
> Key: CASSANDRA-11548
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11548
> Project: Cassandra
>  Issue Type: Bug
> Environment: 2.1.13
>Reporter: Ruoran Wang
> Attachments: 0001-cassandra-2.1.13-potential-fix.patch
>
>
> 1. 12/29/15 https://issues.apache.org/jira/browse/CASSANDRA-10831
> Moved markCompactedSSTablesReplaced out of the loop ```for (SSTableReader 
> sstable : repairedSSTables)```
> 2. 1/18/16 https://issues.apache.org/jira/browse/CASSANDRA-10829
> Added unmarkCompacting into the loop. ```for (SSTableReader sstable : 
> repairedSSTables)```
> I think the combined effect of the above changes might cause 
> markCompactedSSTablesReplaced to fail on the following assertion in 
> DataTracker.java:
> {noformat}
>assert newSSTables.size() + newShadowed.size() == newSSTablesSize :
> String.format("Expecting new size of %d, got %d while 
> replacing %s by %s in %s",
>   newSSTablesSize, newSSTables.size() + 
> newShadowed.size(), oldSSTables, replacements, this);
> {noformat}
> Since CASSANDRA-10831 moved it out of the loop, this AssertionError won't be 
> caught, leaving the old sstables not removed. (This might then cause a 
> row-out-of-order error during incremental repair if there are un-repaired L1 
> sstables.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11549) cqlsh: COPY FROM ignores NULL values in conversion

2016-04-11 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15236431#comment-15236431
 ] 

Jeremiah Jordan commented on CASSANDRA-11549:
-

This is a regression introduced in CASSANDRA-11053

> cqlsh: COPY FROM ignores NULL values in conversion
> --
>
> Key: CASSANDRA-11549
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11549
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Stefania
>Assignee: Stefania
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
>
> COPY FROM fails to import empty values. 
> For example:
> {code}
> $ cat test.csv
> a,10,20
> b,30,
> c,50,60
> $ cqlsh
> cqlsh> create keyspace if not exists test with replication = {'class': 
> 'SimpleStrategy', 'replication_factor':1};
> cqlsh> create table if not exists test.test (t text primary key, i1 int, i2 
> int);
> cqlsh> copy test.test (t,i1,i2) from 'test.csv';
> {code}
> Imports:
> {code}
> select * from test.test;
>  t | i1 | i2
> ---++
>  a | 10 | 20
>  c | 50 | 60
> (2 rows)
> {code}
> and generates a {{ParseError - invalid literal for int() with base 10: '',  
> given up without retries}} for the row with an empty value.
> It should import the empty value as a {{null}} and there should be no error:
> {code}
> cqlsh> select * from test.test;
>  t | i1 | i2
> ---++--
>  a | 10 |   20
>  c | 50 |   60
>  b | 30 | null
> (3 rows)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-11549) cqlsh: COPY FROM ignores NULL values in conversion

2016-04-11 Thread Stefania (JIRA)
Stefania created CASSANDRA-11549:


 Summary: cqlsh: COPY FROM ignores NULL values in conversion
 Key: CASSANDRA-11549
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11549
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Stefania
Assignee: Stefania
 Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x


COPY FROM fails to import empty values. 

For example:

{code}
$ cat test.csv
a,10,20
b,30,
c,50,60
$ cqlsh
cqlsh> create keyspace if not exists test with replication = {'class': 
'SimpleStrategy', 'replication_factor':1};
cqlsh> create table if not exists test.test (t text primary key, i1 int, i2 
int);
cqlsh> copy test.test (t,i1,i2) from 'test.csv';
{code}

Imports:

{code}
select * from test.test;
 t | i1 | i2
---++
 a | 10 | 20
 c | 50 | 60
(2 rows)
{code}

and generates a {{ParseError - invalid literal for int() with base 10: '',  
given up without retries}} for the row with an empty value.

It should import the empty value as a {{null}} and there should be no error:

{code}
cqlsh> select * from test.test;
 t | i1 | i2
---++--
 a | 10 |   20
 c | 50 |   60
 b | 30 | null
(3 rows)
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11548) Anticompaction not removing old sstables

2016-04-11 Thread Ruoran Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruoran Wang updated CASSANDRA-11548:

Description: 
1. 12/29/15 https://issues.apache.org/jira/browse/CASSANDRA-10831
Moved markCompactedSSTablesReplaced out of the loop ```for (SSTableReader 
sstable : repairedSSTables)```

2. 1/18/16 https://issues.apache.org/jira/browse/CASSANDRA-10829
Added unmarkCompacting into the loop. ```for (SSTableReader sstable : 
repairedSSTables)```

I think the combined effect of the above changes might cause 
markCompactedSSTablesReplaced to fail on the following assertion in 
DataTracker.java:
{noformat}
   assert newSSTables.size() + newShadowed.size() == newSSTablesSize :
String.format("Expecting new size of %d, got %d while replacing 
%s by %s in %s",
  newSSTablesSize, newSSTables.size() + 
newShadowed.size(), oldSSTables, replacements, this);
{noformat}

Since CASSANDRA-10831 moved it out of the loop, this AssertionError won't be 
caught, leaving the old sstables not removed. (This might then cause a 
row-out-of-order error during incremental repair if there are un-repaired L1 
sstables.)

  was:
1. 12/29/15 https://issues.apache.org/jira/browse/CASSANDRA-10831
Moved markCompactedSSTablesReplaced out of the loop ```for (SSTableReader 
sstable : repairedSSTables)```

2. 1/18/16 https://issues.apache.org/jira/browse/CASSANDRA-10829
Added unmarkCompacting into the loop. ```for (SSTableReader sstable : 
repairedSSTables)```

I think the effect of those above change might cause the 
markCompactedSSTablesReplaced fail on 

DataTracker.java
```
assert newSSTables.size() + newShadowed.size() == newSSTablesSize :
String.format("Expecting new size of %d, got %d while replacing 
%s by %s in %s",
  newSSTablesSize, newSSTables.size() + 
newShadowed.size(), oldSSTables, replacements, this);
```

Since change CASSANDRA-10831 moved it out. This AssertError won't be caught, 
leaving the oldsstables not removed. (Then this might cause row out of order 
error when doing incremental repair if there are L1 un-repaired sstables.)


> Anticompaction not removing old sstables
> 
>
> Key: CASSANDRA-11548
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11548
> Project: Cassandra
>  Issue Type: Bug
> Environment: 2.1.13
>Reporter: Ruoran Wang
>
> 1. 12/29/15 https://issues.apache.org/jira/browse/CASSANDRA-10831
> Moved markCompactedSSTablesReplaced out of the loop ```for (SSTableReader 
> sstable : repairedSSTables)```
> 2. 1/18/16 https://issues.apache.org/jira/browse/CASSANDRA-10829
> Added unmarkCompacting into the loop. ```for (SSTableReader sstable : 
> repairedSSTables)```
> I think the combined effect of the above changes might cause 
> markCompactedSSTablesReplaced to fail on the following assertion in 
> DataTracker.java:
> {noformat}
>assert newSSTables.size() + newShadowed.size() == newSSTablesSize :
> String.format("Expecting new size of %d, got %d while 
> replacing %s by %s in %s",
>   newSSTablesSize, newSSTables.size() + 
> newShadowed.size(), oldSSTables, replacements, this);
> {noformat}
> Since CASSANDRA-10831 moved it out of the loop, this AssertionError won't be 
> caught, leaving the old sstables not removed. (This might then cause a 
> row-out-of-order error during incremental repair if there are un-repaired L1 
> sstables.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-11548) Anticompaction not removing old sstables

2016-04-11 Thread Ruoran Wang (JIRA)
Ruoran Wang created CASSANDRA-11548:
---

 Summary: Anticompaction not removing old sstables
 Key: CASSANDRA-11548
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11548
 Project: Cassandra
  Issue Type: Bug
 Environment: 2.1.13
Reporter: Ruoran Wang


1. 12/29/15 https://issues.apache.org/jira/browse/CASSANDRA-10831
Moved markCompactedSSTablesReplaced out of the loop ```for (SSTableReader 
sstable : repairedSSTables)```

2. 1/18/16 https://issues.apache.org/jira/browse/CASSANDRA-10829
Added unmarkCompacting into the loop. ```for (SSTableReader sstable : 
repairedSSTables)```

I think the combined effect of the above changes might cause 
markCompactedSSTablesReplaced to fail on the following assertion in 
DataTracker.java:
```
assert newSSTables.size() + newShadowed.size() == newSSTablesSize :
String.format("Expecting new size of %d, got %d while replacing 
%s by %s in %s",
  newSSTablesSize, newSSTables.size() + 
newShadowed.size(), oldSSTables, replacements, this);
```

Since CASSANDRA-10831 moved it out of the loop, this AssertionError won't be 
caught, leaving the old sstables not removed. (This might then cause a 
row-out-of-order error during incremental repair if there are un-repaired L1 
sstables.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11505) dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_reading_max_parse_errors

2016-04-11 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania updated CASSANDRA-11505:
-
Component/s: Tools

> dtest failure in 
> cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_reading_max_parse_errors
> -
>
> Key: CASSANDRA-11505
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11505
> Project: Cassandra
>  Issue Type: Test
>  Components: Tools
>Reporter: Michael Shuler
>Assignee: Stefania
>  Labels: dtest
> Fix For: 2.1.x, 2.2.x
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.0_novnode_dtest/197/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_reading_max_parse_errors
> Failed on CassCI build cassandra-3.0_novnode_dtest #197
> {noformat}
> Error Message
> False is not true
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /mnt/tmp/dtest-c2AJlu
> dtest: DEBUG: Custom init_config not found. Setting defaults.
> dtest: DEBUG: Done setting configuration options:
> {   'num_tokens': None,
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> dtest: DEBUG: Importing csv file /mnt/tmp/tmp2O43PH with 10 max parse errors
> - >> end captured logging << -
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/cqlsh_tests/cqlsh_copy_tests.py", 
> line 943, in test_reading_max_parse_errors
> self.assertTrue(num_rows_imported < (num_rows / 2))  # less than the 
> maximum number of valid rows in the csv
>   File "/usr/lib/python2.7/unittest/case.py", line 422, in assertTrue
> raise self.failureException(msg)
> "False is not true\n >> begin captured logging << 
> \ndtest: DEBUG: cluster ccm directory: 
> /mnt/tmp/dtest-c2AJlu\ndtest: DEBUG: Custom init_config not found. Setting 
> defaults.\ndtest: DEBUG: Done setting configuration options:\n{   
> 'num_tokens': None,\n'phi_convict_threshold': 5,\n
> 'range_request_timeout_in_ms': 1,\n'read_request_timeout_in_ms': 
> 1,\n'request_timeout_in_ms': 1,\n
> 'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
> 1}\ndtest: DEBUG: Importing csv file /mnt/tmp/tmp2O43PH with 10 max parse 
> errors\n- >> end captured logging << 
> -"
> Standard Output
> (EE)  Using CQL driver:  '/home/automaton/cassandra/bin/../lib/cassandra-driver-internal-only-3.0.0-6af642d.zip/cassandra-driver-3.0.0-6af642d/cassandra/__init__.py'>(EE)
>   Using connect timeout: 5 seconds(EE)  Using 'utf-8' encoding(EE)  
> :2:Failed to import 2500 rows: ParseError - could not convert string 
> to float: abc,  given up without retries(EE)  :2:Exceeded maximum 
> number of parse errors 10(EE)  :2:Failed to process 2500 rows; failed 
> rows written to import_ks_testmaxparseerrors.err(EE)  :2:Exceeded 
> maximum number of parse errors 10(EE)  
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11505) dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_reading_max_parse_errors

2016-04-11 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania updated CASSANDRA-11505:
-
Fix Version/s: 2.2.x
   2.1.x
   Tester: Michael Shuler
   Status: Patch Available  (was: In Progress)

> dtest failure in 
> cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_reading_max_parse_errors
> -
>
> Key: CASSANDRA-11505
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11505
> Project: Cassandra
>  Issue Type: Test
>  Components: Tools
>Reporter: Michael Shuler
>Assignee: Stefania
>  Labels: dtest
> Fix For: 2.1.x, 2.2.x
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.0_novnode_dtest/197/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_reading_max_parse_errors
> Failed on CassCI build cassandra-3.0_novnode_dtest #197
> {noformat}
> Error Message
> False is not true
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /mnt/tmp/dtest-c2AJlu
> dtest: DEBUG: Custom init_config not found. Setting defaults.
> dtest: DEBUG: Done setting configuration options:
> {   'num_tokens': None,
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> dtest: DEBUG: Importing csv file /mnt/tmp/tmp2O43PH with 10 max parse errors
> - >> end captured logging << -
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/cqlsh_tests/cqlsh_copy_tests.py", 
> line 943, in test_reading_max_parse_errors
> self.assertTrue(num_rows_imported < (num_rows / 2))  # less than the 
> maximum number of valid rows in the csv
>   File "/usr/lib/python2.7/unittest/case.py", line 422, in assertTrue
> raise self.failureException(msg)
> "False is not true\n >> begin captured logging << 
> \ndtest: DEBUG: cluster ccm directory: 
> /mnt/tmp/dtest-c2AJlu\ndtest: DEBUG: Custom init_config not found. Setting 
> defaults.\ndtest: DEBUG: Done setting configuration options:\n{   
> 'num_tokens': None,\n'phi_convict_threshold': 5,\n
> 'range_request_timeout_in_ms': 1,\n'read_request_timeout_in_ms': 
> 1,\n'request_timeout_in_ms': 1,\n
> 'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
> 1}\ndtest: DEBUG: Importing csv file /mnt/tmp/tmp2O43PH with 10 max parse 
> errors\n- >> end captured logging << 
> -"
> Standard Output
> (EE)  Using CQL driver:  '/home/automaton/cassandra/bin/../lib/cassandra-driver-internal-only-3.0.0-6af642d.zip/cassandra-driver-3.0.0-6af642d/cassandra/__init__.py'>(EE)
>   Using connect timeout: 5 seconds(EE)  Using 'utf-8' encoding(EE)  
> :2:Failed to import 2500 rows: ParseError - could not convert string 
> to float: abc,  given up without retries(EE)  :2:Exceeded maximum 
> number of parse errors 10(EE)  :2:Failed to process 2500 rows; failed 
> rows written to import_ks_testmaxparseerrors.err(EE)  :2:Exceeded 
> maximum number of parse errors 10(EE)  
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11505) dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_reading_max_parse_errors

2016-04-11 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15236397#comment-15236397
 ] 

Stefania commented on CASSANDRA-11505:
--

Thanks for testing [~mshuler]! 

I'm pretty sure 2.1 has the same problem and so I've prepared the patch for 
both branches:

||2.1||2.2||
|[patch|https://github.com/stef1927/cassandra/commits/11505-2.1]|[patch|https://github.com/stef1927/cassandra/commits/11505-2.2]|
|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11505-2.1-dtest/]|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11505-2.2-dtest/]|

[~thobbs] would you mind reviewing? 

The following was backported from CASSANDRA-11320:

* the additional sender thread, to make sure a process never hangs when 
sending messages
* the termination of the feeder process when we exceed the maximum number of 
errors
* the static printmsg methods
* the fix for the incorrect stack trace.

> dtest failure in 
> cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_reading_max_parse_errors
> -
>
> Key: CASSANDRA-11505
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11505
> Project: Cassandra
>  Issue Type: Test
>Reporter: Michael Shuler
>Assignee: Stefania
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.0_novnode_dtest/197/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_reading_max_parse_errors
> Failed on CassCI build cassandra-3.0_novnode_dtest #197
> {noformat}
> Error Message
> False is not true
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /mnt/tmp/dtest-c2AJlu
> dtest: DEBUG: Custom init_config not found. Setting defaults.
> dtest: DEBUG: Done setting configuration options:
> {   'num_tokens': None,
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> dtest: DEBUG: Importing csv file /mnt/tmp/tmp2O43PH with 10 max parse errors
> - >> end captured logging << -
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/cqlsh_tests/cqlsh_copy_tests.py", 
> line 943, in test_reading_max_parse_errors
> self.assertTrue(num_rows_imported < (num_rows / 2))  # less than the 
> maximum number of valid rows in the csv
>   File "/usr/lib/python2.7/unittest/case.py", line 422, in assertTrue
> raise self.failureException(msg)
> "False is not true\n >> begin captured logging << 
> \ndtest: DEBUG: cluster ccm directory: 
> /mnt/tmp/dtest-c2AJlu\ndtest: DEBUG: Custom init_config not found. Setting 
> defaults.\ndtest: DEBUG: Done setting configuration options:\n{   
> 'num_tokens': None,\n'phi_convict_threshold': 5,\n
> 'range_request_timeout_in_ms': 1,\n'read_request_timeout_in_ms': 
> 1,\n'request_timeout_in_ms': 1,\n
> 'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
> 1}\ndtest: DEBUG: Importing csv file /mnt/tmp/tmp2O43PH with 10 max parse 
> errors\n- >> end captured logging << 
> -"
> Standard Output
> (EE)  Using CQL driver:  '/home/automaton/cassandra/bin/../lib/cassandra-driver-internal-only-3.0.0-6af642d.zip/cassandra-driver-3.0.0-6af642d/cassandra/__init__.py'>(EE)
>   Using connect timeout: 5 seconds(EE)  Using 'utf-8' encoding(EE)  
> :2:Failed to import 2500 rows: ParseError - could not convert string 
> to float: abc,  given up without retries(EE)  :2:Exceeded maximum 
> number of parse errors 10(EE)  :2:Failed to process 2500 rows; failed 
> rows written to import_ks_testmaxparseerrors.err(EE)  :2:Exceeded 
> maximum number of parse errors 10(EE)  
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-10848) Upgrade paging dtests involving deletion flap on CassCI

2016-04-11 Thread Russ Hatch (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russ Hatch reassigned CASSANDRA-10848:
--

Assignee: Russ Hatch

> Upgrade paging dtests involving deletion flap on CassCI
> ---
>
> Key: CASSANDRA-10848
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10848
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Jim Witschey
>Assignee: Russ Hatch
>  Labels: dtest
> Fix For: 3.0.x, 3.x
>
>
> A number of dtests in the {{upgrade_tests.paging_tests}} that involve 
> deletion flap with the following error:
> {code}
> Requested pages were not delivered before timeout.
> {code}
> This may just be an effect of CASSANDRA-10730, but it's worth having a look 
> at separately. Here are some examples of tests flapping in this way:
> http://cassci.datastax.com/view/cassandra-3.0/job/cassandra-3.0_dtest/422/testReport/junit/upgrade_tests.paging_test/TestPagingWithDeletionsNodes2RF1/test_multiple_partition_deletions/
> http://cassci.datastax.com/view/cassandra-3.0/job/cassandra-3.0_dtest/422/testReport/junit/upgrade_tests.paging_test/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11412) Many sstablescanners opened during repair

2016-04-11 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15236263#comment-15236263
 ] 

Paulo Motta commented on CASSANDRA-11412:
-

Code and tests look good and this is definitely a big improvement from what we 
had before, but as mentioned by [~molsson] we would still need to have 1 
{{ISSTableScanner}} instance per sstable open during the repair process for the 
non-LCS case. Do you think we should worry about optimizing this further by 
lazily opening {{ISSTableScanners}} as partitions are iterated?

Building on [~molsson]'s suggestion, I thought we could change 
{{AbstractCompactionStrategy.getScanners(sstables, ranges)}} to return a 
{{RangeScannerIterator}} instead, which returns a list of overlapping scanners 
at each iteration. This iterator would hold an ordered map from each exclusive 
subrange to the set of sstables overlapping it, and lazily instantiate 
{{ISSTableScanner}} instances as it iterates the subranges, maybe reusing 
scanners from previous iterations and discarding them when no longer needed.

We would then need to create a new {{UnfilteredPartitionIterator}} to be used 
during compaction that would operate over {{RangeScannerIterator}} instances, 
merging returned {{ISSTableScanners}} for each exclusive subrange and renewing 
the merge iterator after the previous merge iterator is exhausted.

The benefit is that we would keep a minimal number of {{ISSTableScanner}} 
instances open during compaction, avoiding things like CASSANDRA-4142, and we 
would have a single solution for both LCS and non-LCS. The downside is probably 
increased complexity and maybe some overhead for building exclusive subranges.

Do you think this would work and is worth it? If so, should we do it here or 
open a new ticket for it?
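
As a rough illustration of the laziness described above (without the 
scanner-reuse optimisation), here is a generic sketch that keeps an ordered map 
from exclusive subrange to overlapping sstables and only opens scanners for the 
subrange currently being iterated; the names {{LazySubrangeScanners}} and 
{{openScanner}} are stand-ins, not a proposed Cassandra API:

{code}
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.function.Function;

/**
 * Iterates exclusive subranges in order (the map should be a LinkedHashMap or TreeMap);
 * scanners for a subrange are only opened when that subrange is reached and are closed
 * before moving on, keeping few scanners open at any one time.
 */
public final class LazySubrangeScanners<RANGE, SSTABLE, SCANNER extends AutoCloseable>
        implements Iterator<List<SCANNER>>, AutoCloseable
{
    private final Iterator<Map.Entry<RANGE, Set<SSTABLE>>> subranges;
    private final Function<SSTABLE, SCANNER> openScanner;
    private final List<SCANNER> open = new ArrayList<>();

    public LazySubrangeScanners(Map<RANGE, Set<SSTABLE>> sstablesBySubrange,
                                Function<SSTABLE, SCANNER> openScanner)
    {
        this.subranges = sstablesBySubrange.entrySet().iterator();
        this.openScanner = openScanner;
    }

    @Override
    public boolean hasNext()
    {
        return subranges.hasNext();
    }

    @Override
    public List<SCANNER> next()
    {
        close(); // release scanners from the previous subrange before opening new ones
        for (SSTABLE sstable : subranges.next().getValue())
            open.add(openScanner.apply(sstable));
        return new ArrayList<>(open);
    }

    @Override
    public void close()
    {
        for (SCANNER scanner : open)
        {
            try { scanner.close(); } catch (Exception e) { /* best effort on cleanup */ }
        }
        open.clear();
    }
}
{code}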

> Many sstablescanners opened during repair
> -
>
> Key: CASSANDRA-11412
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11412
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
> Fix For: 3.0.x, 3.x
>
>
> Since CASSANDRA-5220 we open [one sstablescanner per range per 
> sstable|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/db/compaction/CompactionStrategyManager.java#L374].
>  If compaction gets behind and you are running vnodes with 256 tokens and 
> RF3, this could become a problem (ie, {{768 * number of sstables}} scanners)
> We could probably refactor this similar to the way we handle scanners with 
> LCS - only open the scanner once we need it



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7190) Add schema to snapshot manifest

2016-04-11 Thread Wei Deng (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15236183#comment-15236183
 ] 

Wei Deng commented on CASSANDRA-7190:
-

I agree this is different from CASSANDRA-9587. This is an issue that causes 
real pain for some Cassandra users I talk to. Since this is such low-hanging 
fruit, getting it into 2.1 and 3.0 would help ease some operational pain.

> Add schema to snapshot manifest
> ---
>
> Key: CASSANDRA-7190
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7190
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Jonathan Ellis
>Priority: Minor
>
> followup from CASSANDRA-6326



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7017) allow per-partition LIMIT clause in cql

2016-04-11 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15236120#comment-15236120
 ] 

Jeremiah Jordan commented on CASSANDRA-7017:


Yeah that would be fine.

> allow per-partition LIMIT clause in cql
> ---
>
> Key: CASSANDRA-7017
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7017
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jonathan Halliday
>Assignee: Alex Petrov
>  Labels: cql
> Fix For: 3.6
>
> Attachments: 0001-Allow-per-partition-limit-in-SELECT-queries.patch, 
> 0001-Allow-per-partition-limit-in-SELECT-queriesV2.patch, 
> 0001-CASSANDRA-7017.patch
>
>
> somewhat related to static columns (#6561) and slicing (#4851), it is 
> desirable to apply a LIMIT on a per-partition rather than per-query basis, 
> such as to retrieve the top (most recent, etc) N clustered values for each 
> partition key, e.g.
> -- for each league, keep a ranked list of users
> create table scores (league text, score int, player text, primary key(league, 
> score, player) );
> -- get the top 3 teams in each league:
> select * from scores staticlimit 3;
> this currently requires issuing one query per partition key, which is tedious 
> if all the partition key values are known and impossible if they aren't.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7017) allow per-partition LIMIT clause in cql

2016-04-11 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15236107#comment-15236107
 ] 

Alex Petrov commented on CASSANDRA-7017:


I will bring the old API with {{getLimit}} back, and expose
{{getPerPartitionLimit}} in the same manner. The current version of the method
will then become a private implementation detail.

Hope that works.
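
Purely as an illustration of the accessor shape being discussed (not the actual 
{{SelectStatement}} code), a tiny sketch of keeping public accessors for 
external callers such as a custom QueryProcessor while the shared helper stays 
private; all field and method names here are assumptions:

{code}
/** Illustrative sketch only: public accessors wrapping a shared private helper. */
public final class SelectLimits
{
    private final Integer limit;             // stand-in for the parsed LIMIT term
    private final Integer perPartitionLimit; // stand-in for the parsed PER PARTITION LIMIT term

    public SelectLimits(Integer limit, Integer perPartitionLimit)
    {
        this.limit = limit;
        this.perPartitionLimit = perPartitionLimit;
    }

    /** Public accessor kept for external callers such as a custom QueryProcessor. */
    public int getLimit()
    {
        return resolve(limit);
    }

    /** New accessor exposed in the same manner as getLimit. */
    public int getPerPartitionLimit()
    {
        return resolve(perPartitionLimit);
    }

    /** Shared implementation detail; in the real class this would take the query options as input. */
    private int resolve(Integer term)
    {
        return term == null ? Integer.MAX_VALUE : term;
    }
}
{code}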


> allow per-partition LIMIT clause in cql
> ---
>
> Key: CASSANDRA-7017
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7017
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jonathan Halliday
>Assignee: Alex Petrov
>  Labels: cql
> Fix For: 3.6
>
> Attachments: 0001-Allow-per-partition-limit-in-SELECT-queries.patch, 
> 0001-Allow-per-partition-limit-in-SELECT-queriesV2.patch, 
> 0001-CASSANDRA-7017.patch
>
>
> somewhat related to static columns (#6561) and slicing (#4851), it is 
> desirable to apply a LIMIT on a per-partition rather than per-query basis, 
> such as to retrieve the top (most recent, etc) N clustered values for each 
> partition key, e.g.
> -- for each league, keep a ranked list of users
> create table scores (league text, score int, player text, primary key(league, 
> score, player) );
> -- get the top 3 teams in each league:
> select * from scores staticlimit 3;
> this currently requires issuing one query per partition key, which is tedious 
> if all the partition key values are known and impossible if they aren't.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11340) Heavy read activity on system_auth tables can cause apparent livelock

2016-04-11 Thread Russ Hatch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15236042#comment-15236042
 ] 

Russ Hatch commented on CASSANDRA-11340:


Tried another run today with some longer-running connections and still haven't 
had luck getting a repro. There's got to be something more nuanced going on 
with the perf problem.

> Heavy read activity on system_auth tables can cause apparent livelock
> -
>
> Key: CASSANDRA-11340
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11340
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jeff Jirsa
>Assignee: Aleksey Yeschenko
> Attachments: mass_connect.py, prepare_mass_connect.py
>
>
> Reproduced in at least 2.1.9. 
> It appears possible for queries against system_auth tables to trigger 
> speculative retry, which causes auth to block on traffic going off node. In 
> some cases, it appears possible for threads to become deadlocked, causing 
> load on the nodes to increase sharply. This happens even in clusters with RF 
> of system_auth == N, as all requests being served locally puts the bar for 
> 99% SR pretty low. 
> Incomplete stack trace below, but we haven't yet figured out what exactly is 
> blocking:
> {code}
> Thread 82291: (state = BLOCKED)
>  - sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information 
> may be imprecise)
>  - java.util.concurrent.locks.LockSupport.parkNanos(long) @bci=11, line=338 
> (Compiled frame)
>  - 
> org.apache.cassandra.utils.concurrent.WaitQueue$AbstractSignal.awaitUntil(long)
>  @bci=28, line=307 (Compiled frame)
>  - org.apache.cassandra.utils.concurrent.SimpleCondition.await(long, 
> java.util.concurrent.TimeUnit) @bci=76, line=63 (Compiled frame)
>  - org.apache.cassandra.service.ReadCallback.await(long, 
> java.util.concurrent.TimeUnit) @bci=25, line=92 (Compiled frame)
>  - 
> org.apache.cassandra.service.AbstractReadExecutor$SpeculatingReadExecutor.maybeTryAdditionalReplicas()
>  @bci=39, line=281 (Compiled frame)
>  - org.apache.cassandra.service.StorageProxy.fetchRows(java.util.List, 
> org.apache.cassandra.db.ConsistencyLevel) @bci=175, line=1338 (Compiled frame)
>  - org.apache.cassandra.service.StorageProxy.readRegular(java.util.List, 
> org.apache.cassandra.db.ConsistencyLevel) @bci=9, line=1274 (Compiled frame)
>  - org.apache.cassandra.service.StorageProxy.read(java.util.List, 
> org.apache.cassandra.db.ConsistencyLevel, 
> org.apache.cassandra.service.ClientState) @bci=57, line=1199 (Compiled frame)
>  - 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(org.apache.cassandra.service.pager.Pageable,
>  org.apache.cassandra.cql3.QueryOptions, int, long, 
> org.apache.cassandra.service.QueryState) @bci=35, line=272 (Compiled frame)
>  - 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(org.apache.cassandra.service.QueryState,
>  org.apache.cassandra.cql3.QueryOptions) @bci=105, line=224 (Compiled frame)
>  - org.apache.cassandra.auth.Auth.selectUser(java.lang.String) @bci=27, 
> line=265 (Compiled frame)
>  - org.apache.cassandra.auth.Auth.isExistingUser(java.lang.String) @bci=1, 
> line=86 (Compiled frame)
>  - 
> org.apache.cassandra.service.ClientState.login(org.apache.cassandra.auth.AuthenticatedUser)
>  @bci=11, line=206 (Compiled frame)
>  - 
> org.apache.cassandra.transport.messages.AuthResponse.execute(org.apache.cassandra.service.QueryState)
>  @bci=58, line=82 (Compiled frame)
>  - 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(io.netty.channel.ChannelHandlerContext,
>  org.apache.cassandra.transport.Message$Request) @bci=75, line=439 (Compiled 
> frame)
>  - 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(io.netty.channel.ChannelHandlerContext,
>  java.lang.Object) @bci=6, line=335 (Compiled frame)
>  - 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(io.netty.channel.ChannelHandlerContext,
>  java.lang.Object) @bci=17, line=105 (Compiled frame)
>  - 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(java.lang.Object)
>  @bci=9, line=333 (Compiled frame)
>  - 
> io.netty.channel.AbstractChannelHandlerContext.access$700(io.netty.channel.AbstractChannelHandlerContext,
>  java.lang.Object) @bci=2, line=32 (Compiled frame)
>  - io.netty.channel.AbstractChannelHandlerContext$8.run() @bci=8, line=324 
> (Compiled frame)
>  - java.util.concurrent.Executors$RunnableAdapter.call() @bci=4, line=511 
> (Compiled frame)
>  - 
> org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run()
>  @bci=5, line=164 (Compiled frame)
>  - org.apache.cassandra.concurrent.SEPWorker.run() @bci=87, line=105 
> (Interpreted frame)
>  - java.lang.Thread.run() @bci=11, line=745 (Interpreted frame)
> {code}
> In a cluster with many con

[jira] [Comment Edited] (CASSANDRA-7017) allow per-partition LIMIT clause in cql

2016-04-11 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15236032#comment-15236032
 ] 

Jeremiah Jordan edited comment on CASSANDRA-7017 at 4/11/16 9:45 PM:
-

SelectStatement#getLimit is a public API so that a custom QueryProcessor can 
get to the limit.  After this change I do not see a way to do that without 
making SelectStatement.limit and SelectStatement.perPartitionLimit public.  Am 
I missing something?  Can we make those public?  Or add accessor functions for 
them that call getLimit with the right inputs.


was (Author: jjordan):
SelectStatement#getLimit is a public API so that a custom QueryProcessor can 
get to the limit.  After this change I do not see a way to do that without 
making SelectStatement.limit and SelectStatement.perPartitionLimit public.  Am 
I missing something?  Can we make those public?

> allow per-partition LIMIT clause in cql
> ---
>
> Key: CASSANDRA-7017
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7017
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jonathan Halliday
>Assignee: Alex Petrov
>  Labels: cql
> Fix For: 3.6
>
> Attachments: 0001-Allow-per-partition-limit-in-SELECT-queries.patch, 
> 0001-Allow-per-partition-limit-in-SELECT-queriesV2.patch, 
> 0001-CASSANDRA-7017.patch
>
>
> somewhat related to static columns (#6561) and slicing (#4851), it is 
> desirable to apply a LIMIT on a per-partition rather than per-query basis, 
> such as to retrieve the top (most recent, etc) N clustered values for each 
> partition key, e.g.
> -- for each league, keep a ranked list of users
> create table scores (league text, score int, player text, primary key(league, 
> score, player) );
> -- get the top 3 teams in each league:
> select * from scores staticlimit 3;
> this currently requires issuing one query per partition key, which is tedious 
> if all the partition key values are known and impossible if they aren't.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7017) allow per-partition LIMIT clause in cql

2016-04-11 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15236032#comment-15236032
 ] 

Jeremiah Jordan commented on CASSANDRA-7017:


SelectStatement#getLimit is a public API so that a custom QueryProcessor can 
get to the limit.  After this change I do not see a way to do that without 
making SelectStatement.limit and SelectStatement.perPartitionLimit public.  Am 
I missing something?  Can we make those public?

> allow per-partition LIMIT clause in cql
> ---
>
> Key: CASSANDRA-7017
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7017
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jonathan Halliday
>Assignee: Alex Petrov
>  Labels: cql
> Fix For: 3.6
>
> Attachments: 0001-Allow-per-partition-limit-in-SELECT-queries.patch, 
> 0001-Allow-per-partition-limit-in-SELECT-queriesV2.patch, 
> 0001-CASSANDRA-7017.patch
>
>
> somewhat related to static columns (#6561) and slicing (#4851), it is 
> desirable to apply a LIMIT on a per-partition rather than per-query basis, 
> such as to retrieve the top (most recent, etc) N clustered values for each 
> partition key, e.g.
> -- for each league, keep a ranked list of users
> create table scores (league text, score int, player text, primary key(league, 
> score, player) );
> -- get the top 3 teams in each league:
> select * from scores staticlimit 3;
> this currently requires issuing one query per partition key, which is tedious 
> if all the partition key values are known and impossible if they aren't.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (CASSANDRA-7017) allow per-partition LIMIT clause in cql

2016-04-11 Thread Jeremiah Jordan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremiah Jordan reopened CASSANDRA-7017:


> allow per-partition LIMIT clause in cql
> ---
>
> Key: CASSANDRA-7017
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7017
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jonathan Halliday
>Assignee: Alex Petrov
>  Labels: cql
> Fix For: 3.6
>
> Attachments: 0001-Allow-per-partition-limit-in-SELECT-queries.patch, 
> 0001-Allow-per-partition-limit-in-SELECT-queriesV2.patch, 
> 0001-CASSANDRA-7017.patch
>
>
> somewhat related to static columns (#6561) and slicing (#4851), it is 
> desirable to apply a LIMIT on a per-partition rather than per-query basis, 
> such as to retrieve the top (most recent, etc) N clustered values for each 
> partition key, e.g.
> -- for each league, keep a ranked list of users
> create table scores (league text, score int, player text, primary key(league, 
> score, player) );
> -- get the top 3 teams in each league:
> select * from scores staticlimit 3;
> this currently requires issuing one query per partition key, which is tedious 
> if all the partition key values are known and impossible if they aren't.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11097) Idle session timeout for secure environments

2016-04-11 Thread Stefan Podkowinski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15235982#comment-15235982
 ] 

Stefan Podkowinski commented on CASSANDRA-11097:


Another option would be to address this at the native transport protocol level 
by extending the specification to a) let clients indicate whether a session is 
interactive or non-interactive, and b) ask interactive clients to 
re-authenticate over the established connection after an inactivity timeout.


> Idle session timeout for secure environments
> 
>
> Key: CASSANDRA-11097
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11097
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jeff Jirsa
>Priority: Minor
>  Labels: lhf, ponies
>
> A thread on the user list pointed out that some use cases may prefer to have 
> a database disconnect sessions after some idle timeout. An example would be 
> an administrator who connected via ssh+cqlsh and then walked away. 
> Disconnecting that user and forcing it to re-authenticate could protect 
> against unauthorized access.
> It seems like it may be possible to do this using a netty 
> {{IdleStateHandler}} in a way that's low risk and perhaps off by default.  
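
A rough sketch of how the {{IdleStateHandler}} approach could look with Netty 4 
follows; the handler names, the install() helper and the timeout value are 
illustrative assumptions, not the actual Cassandra wiring:

{code:java}
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelPipeline;
import io.netty.handler.timeout.IdleStateEvent;
import io.netty.handler.timeout.IdleStateHandler;

// Closes any channel that has been idle (no reads and no writes) for
// idleSeconds, forcing the client to reconnect and re-authenticate.
public class IdleDisconnectHandler extends ChannelInboundHandlerAdapter
{
    public static void install(ChannelPipeline pipeline, int idleSeconds)
    {
        // IdleStateHandler fires an IdleStateEvent after idleSeconds of inactivity
        pipeline.addLast("idleState", new IdleStateHandler(0, 0, idleSeconds));
        pipeline.addLast("idleDisconnect", new IdleDisconnectHandler());
    }

    @Override
    public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception
    {
        if (evt instanceof IdleStateEvent)
            ctx.close();   // drop the idle session
        else
            super.userEventTriggered(ctx, evt);
    }
}
{code}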



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11547) Add background thread to check for clock drift

2016-04-11 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15235944#comment-15235944
 ] 

Jason Brown commented on CASSANDRA-11547:
-

Code and cassci tests can be found here:

||3.0||3.x||
|[branch|https://github.com/jasobrown/cassandra/tree/clockChecker-3.0]|[branch|https://github.com/jasobrown/cassandra/tree/clockChecker-3.x]|
|[dtest|http://cassci.datastax.com/view/Dev/view/jasobrown/job/jasobrown-clockChecker-3.0-dtest/]|[dtest|http://cassci.datastax.com/view/Dev/view/jasobrown/job/jasobrown-clockChecker-3.x-dtest/]|
|[testall|http://cassci.datastax.com/view/Dev/view/jasobrown/job/jasobrown-clockChecker-3.0-testall/]|[testall|http://cassci.datastax.com/view/Dev/view/jasobrown/job/jasobrown-clockChecker-3.x-testall/]|


> Add background thread to check for clock drift
> --
>
> Key: CASSANDRA-11547
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11547
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Jason Brown
>Assignee: Jason Brown
>Priority: Minor
>  Labels: clocks, time
>
> The system clock has the potential to drift while a system is running. As a 
> simple way to check if this occurs, we can run a background thread that wakes 
> up every n seconds, reads the system clock, and checks to see if, indeed, n 
> seconds have passed. 
> * If the clock's current time is less than the last recorded time (captured n 
> seconds in the past), we know the clock has jumped backward.
> * If n seconds have not elapsed, we know the system clock is running slow or 
> has moved backward (by a value less than n).
> * If (n + a small offset) seconds have elapsed, we can assume we are within 
> an acceptable window of clock movement. Reasons for including an offset 
> include the clock-checking thread not being scheduled on time, garbage 
> collection pauses, and so on.
> * If more than (n + a small offset) seconds have elapsed, we can assume the 
> clock jumped forward.
> In the unhappy cases, we can write a message to the log and increment some 
> metric that the user's monitoring systems can trigger/alert on.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-11547) Add background thread to check for clock drift

2016-04-11 Thread Jason Brown (JIRA)
Jason Brown created CASSANDRA-11547:
---

 Summary: Add background thread to check for clock drift
 Key: CASSANDRA-11547
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11547
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Jason Brown
Assignee: Jason Brown
Priority: Minor


The system clock has the potential to drift while a system is running. As a 
simple way to check if this occurs, we can run a background thread that wakes 
up every n seconds, reads the system clock, and checks to see if, indeed, n 
seconds have passed. 

* If the clock's current time is less than the last recorded time (captured n 
seconds in the past), we know the clock has jumped backward.
* If n seconds have not elapsed, we know the system clock is running slow or 
has moved backward (by a value less than n).
* If (n + a small offset) seconds have elapsed, we can assume we are within an 
acceptable window of clock movement. Reasons for including an offset include 
the clock-checking thread not being scheduled on time, garbage collection 
pauses, and so on.
* If more than (n + a small offset) seconds have elapsed, we can assume the 
clock jumped forward.

In the unhappy cases, we can write a message to the log and increment some 
metric that the user's monitoring systems can trigger/alert on.
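
A minimal sketch of the idea, assuming a one-second check interval, a 100 ms 
tolerance, and logging to stderr; all three are placeholders for whatever the 
real implementation would use:

{code:java}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// Background thread that wakes up every INTERVAL_MS, reads the system clock,
// and compares the observed elapsed time against the expected interval.
public final class ClockDriftChecker
{
    private static final long INTERVAL_MS = 1_000;   // "n seconds"
    private static final long TOLERANCE_MS = 100;    // "a small offset"

    private final AtomicLong lastObserved = new AtomicLong(System.currentTimeMillis());
    private final ScheduledExecutorService executor =
            Executors.newSingleThreadScheduledExecutor();

    public void start()
    {
        executor.scheduleWithFixedDelay(this::check, INTERVAL_MS, INTERVAL_MS, TimeUnit.MILLISECONDS);
    }

    private void check()
    {
        long now = System.currentTimeMillis();
        long elapsed = now - lastObserved.getAndSet(now);

        if (elapsed < 0)
            System.err.println("Clock jumped backward by " + (-elapsed) + " ms");
        else if (elapsed < INTERVAL_MS - TOLERANCE_MS)
            System.err.println("Clock running slow or moved backward: only " + elapsed + " ms elapsed");
        else if (elapsed > INTERVAL_MS + TOLERANCE_MS)
            System.err.println("Clock jumped forward: " + elapsed + " ms elapsed");
        // otherwise: within the acceptable window, nothing to report
    }
}
{code}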



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11097) Idle session timeout for secure environments

2016-04-11 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15235911#comment-15235911
 ] 

Jason Brown commented on CASSANDRA-11097:
-

[~jjirsa] Thanks for looking up what Oracle does. On the whole I think it 
certainly makes sense to log out the admin/cqlsh sessions if the idle threshold 
is exceeded. FTR, I'm totally fine with disconnecting *any* idle connection, 
secure or otherwise, and we could probably fashion a solution here that takes 
secure/non-secure into account. I'll read CASSANDRA-8303 and its related 
tickets later today, but your suggestion about {{IdleStateHandler}} seems 
pretty reasonable.

> Idle session timeout for secure environments
> 
>
> Key: CASSANDRA-11097
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11097
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jeff Jirsa
>Priority: Minor
>  Labels: lhf, ponies
>
> A thread on the user list pointed out that some use cases may prefer to have 
> a database disconnect sessions after some idle timeout. An example would be 
> an administrator who connected via ssh+cqlsh and then walked away. 
> Disconnecting that user and forcing it to re-authenticate could protect 
> against unauthorized access.
> It seems like it may be possible to do this using a netty 
> {{IdleStateHandler}} in a way that's low risk and perhaps off by default.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11529) Checking if an unlogged batch is local is inefficient

2016-04-11 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa updated CASSANDRA-11529:
---
Labels: docs-impacting  (was: )

> Checking if an unlogged batch is local is inefficient
> -
>
> Key: CASSANDRA-11529
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11529
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Paulo Motta
>Assignee: Stefania
>Priority: Critical
>  Labels: docs-impacting
> Fix For: 2.1.14, 2.2.6, 3.6, 3.0.6
>
>
> Based on CASSANDRA-11363 report I noticed that on CASSANDRA-9303 we 
> introduced the following check to avoid printing a {{WARN}} in case an 
> unlogged batch statement is local:
> {noformat}
>  for (IMutation im : mutations)
>  {
>  keySet.add(im.key());
>  for (ColumnFamily cf : im.getColumnFamilies())
>  ksCfPairs.add(String.format("%s.%s", 
> cf.metadata().ksName, cf.metadata().cfName));
> +
> +if (localMutationsOnly)
> +localMutationsOnly &= isMutationLocal(localTokensByKs, 
> im);
>  }
>  
> +// CASSANDRA-9303: If we only have local mutations we do not warn
> +if (localMutationsOnly)
> +return;
> +
>  NoSpamLogger.log(logger, NoSpamLogger.Level.WARN, 1, 
> TimeUnit.MINUTES, unloggedBatchWarning,
>   keySet.size(), keySet.size() == 1 ? "" : "s",
>   ksCfPairs.size() == 1 ? "" : "s", ksCfPairs);
> {noformat}
> The {{isMutationLocal}} check uses 
> {{StorageService.instance.getLocalRanges(mutation.getKeyspaceName())}}, which 
> underneath uses {{AbstractReplication.getAddressRanges}} to calculate local 
> ranges. 
> Recalculating this for every unlogged batch can be pretty inefficient, so we 
> should at the very least cache it and only recompute it when the ring changes.
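
A sketch of the caching idea, assuming some monotonically increasing "ring 
version" is available whenever the token ring changes; the class and method 
names below are illustrative and do not claim to match Cassandra internals:

{code:java}
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Caches computed values (e.g. local ranges per keyspace) and lazily drops
// the cache whenever the supplied version changes. A brief race between
// clear() and computeIfAbsent() can recompute an entry twice, which is
// harmless for this use case.
final class VersionedCache<K, V>
{
    private final ConcurrentHashMap<K, V> cache = new ConcurrentHashMap<>();
    private volatile long cachedVersion = Long.MIN_VALUE;

    V get(K key, long currentVersion, Function<K, V> loader)
    {
        if (currentVersion != cachedVersion)
        {
            cache.clear();                 // ring changed: recompute on demand
            cachedVersion = currentVersion;
        }
        return cache.computeIfAbsent(key, loader);
    }
}
{code}

The unlogged-batch check could then ask such a cache for the local ranges of 
the mutation's keyspace instead of recomputing them for every batch.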



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8467) Monitoring UDFs

2016-04-11 Thread Christopher Batey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15235805#comment-15235805
 ] 

Christopher Batey commented on CASSANDRA-8467:
--

Having an MBean where I could see the total time spent in UDFs and a latency 
distribution of UDF executions would be very nice.

> Monitoring UDFs
> ---
>
> Key: CASSANDRA-8467
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8467
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Observability
>Reporter: Robert Stupp
>Priority: Minor
>  Labels: tracing, udf
>
> This ticket is about adding UDF executions to session tracing.
> Tracing the following parameters for UDF invocations could become very interesting:
> * name of UDF
> * # of invocations
> * # of rejected executions
> * min/max/avg execution times
> "Rejected executions" would count UDFs that are not executed because an input 
> parameter is null/empty (CASSANDRA-8374).
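
As a rough illustration of the kind of metric this could expose, here is a 
generic Dropwizard Metrics sketch; the metric name and wiring are made up and 
this is not the actual Cassandra implementation:

{code:java}
import java.util.concurrent.Callable;

import com.codahale.metrics.JmxReporter;
import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.Timer;

// Records invocation counts and a latency distribution for a UDF and exposes
// them over JMX as an MBean. Registry, reporter and metric name are illustrative.
public class UdfMetricsExample
{
    private static final MetricRegistry registry = new MetricRegistry();
    private static final Timer udfTimer = registry.timer("udf.my_function.executions");

    static
    {
        // Publishes the timer (count, rates, latency percentiles) via JMX
        JmxReporter.forRegistry(registry).build().start();
    }

    static Object executeUdf(Callable<Object> udfBody) throws Exception
    {
        try (Timer.Context ignored = udfTimer.time())
        {
            return udfBody.call();   // the actual UDF invocation would go here
        }
    }
}
{code}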



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-11097) Idle session timeout for secure environments

2016-04-11 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15235749#comment-15235749
 ] 

Jeff Jirsa edited comment on CASSANDRA-11097 at 4/11/16 7:21 PM:
-

[~jasobrown] Oracle does this per-user / per-user-profile server side, and 
perhaps a similar logic makes sense for Cassandra via CASSANDRA-8303 (ponies 
territory, perhaps) so that applications can have long-lived idle connections, 
but administrators are logged out if idle:

{code}
alter profile analyst limit
   connect_time 18
   sessions_per_user 2
   idle_time 1800;
{code}



was (Author: jjirsa):
[~jasobrown] Oracle does this per-user / per-user-profile server side, and 
perhaps a similar logic makes sense for Cassandra (getting into ponies 
territory, perhaps) so that applications can have long-lived idle connections, 
but administrators are logged out if idle:

{code}
alter profile analyst limit
   connect_time 18
   sessions_per_user 2
   idle_time 1800;
{code}


> Idle session timeout for secure environments
> 
>
> Key: CASSANDRA-11097
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11097
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jeff Jirsa
>Priority: Minor
>  Labels: lhf, ponies
>
> A thread on the user list pointed out that some use cases may prefer to have 
> a database disconnect sessions after some idle timeout. An example would be 
> an administrator who connected via ssh+cqlsh and then walked away. 
> Disconnecting that user and forcing it to re-authenticate could protect 
> against unauthorized access.
> It seems like it may be possible to do this using a netty 
> {{IdleStateHandler}} in a way that's low risk and perhaps off by default.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (CASSANDRA-10433) Reduce contention in CompositeType instance interning

2016-04-11 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta reopened CASSANDRA-10433:
-

Reopening to backport the patch to 2.1, given that in certain circumstances 
(non-text thrift composite tables after CASSANDRA-8178) this contention may 
show up on the critical path, potentially causing a performance regression 
(see the background in the previous comment). Since the patch merges cleanly and the other versions 
are not affected, I reused this ticket rather than opening a new one.

> Reduce contention in CompositeType instance interning
> -
>
> Key: CASSANDRA-10433
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10433
> Project: Cassandra
>  Issue Type: Improvement
> Environment: Cassandra 2.2.1 running on 6 AWS c3.4xlarge nodes, 
> CentOS 6.6
>Reporter: David Schlosnagle
>Assignee: David Schlosnagle
>Priority: Minor
> Fix For: 2.2.4
>
> Attachments: 
> 0001-Avoid-contention-in-CompositeType-instance-interning.patch
>
>
> While running some workload tests on Cassandra 2.2.1 and profiling with 
> flight recorder in a test environment, we have noticed significant contention 
> on the static synchronized 
> org.apache.cassandra.db.marshal.CompositeType.getInstance(List) method.
> We are seeing threads blocked for 22.828 seconds in a 60-second snapshot 
> while under a mix of reads and writes from a Thrift-based client.
> I would propose to reduce contention in 
> org.apache.cassandra.db.marshal.CompositeType.getInstance(List) by using a 
> ConcurrentHashMap for the instances cache.
> {code}
> Contention Back Trace
> org.apache.cassandra.db.marshal.CompositeType.getInstance(List)
>   
> org.apache.cassandra.db.composites.AbstractCompoundCellNameType.asAbstractType()
> org.apache.cassandra.db.SuperColumns.getComparatorFor(CFMetaData, boolean)
>   org.apache.cassandra.db.SuperColumns.getComparatorFor(CFMetaData, 
> ByteBuffer)
> 
> org.apache.cassandra.thrift.ThriftValidation.validateColumnNames(CFMetaData, 
> ByteBuffer, Iterable)
>   
> org.apache.cassandra.thrift.ThriftValidation.validateColumnPath(CFMetaData, 
> ColumnPath)
> 
> org.apache.cassandra.thrift.ThriftValidation.validateColumnOrSuperColumn(CFMetaData,
>  ByteBuffer, ColumnOrSuperColumn)
>   
> org.apache.cassandra.thrift.ThriftValidation.validateMutation(CFMetaData, 
> ByteBuffer, Mutation)
> 
> org.apache.cassandra.thrift.CassandraServer.createMutationList(ConsistencyLevel,
>  Map, boolean)
>   
> org.apache.cassandra.thrift.CassandraServer.batch_mutate(Map, 
> ConsistencyLevel)
> 
> org.apache.cassandra.thrift.Cassandra$Processor$batch_mutate.getResult(Cassandra$Iface,
>  Cassandra$batch_mutate_args)
> 
> org.apache.cassandra.thrift.ThriftValidation.validateRange(CFMetaData, 
> ColumnParent, SliceRange)
>   
> org.apache.cassandra.thrift.ThriftValidation.validatePredicate(CFMetaData, 
> ColumnParent, SlicePredicate)
> 
> org.apache.cassandra.thrift.CassandraServer.get_range_slices(ColumnParent, 
> SlicePredicate, KeyRange, ConsistencyLevel)
>   
> org.apache.cassandra.thrift.Cassandra$Processor$get_range_slices.getResult(Cassandra$Iface,
>  Cassandra$get_range_slices_args)
> 
> org.apache.cassandra.thrift.Cassandra$Processor$get_range_slices.getResult(Object,
>  TBase)
>   org.apache.thrift.ProcessFunction.process(int, TProtocol, 
> TProtocol, Object)
> org.apache.thrift.TBaseProcessor.process(TProtocol, 
> TProtocol)
>   
> org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run()
> 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor$Worker)
>   java.util.concurrent.ThreadPoolExecutor$Worker.run()
> 
> org.apache.cassandra.thrift.CassandraServer.multigetSliceInternal(String, 
> List, ColumnParent, long, SlicePredicate, ConsistencyLevel, ClientState)
>   
> org.apache.cassandra.thrift.CassandraServer.multiget_slice(List, 
> ColumnParent, SlicePredicate, ConsistencyLevel)
> 
> org.apache.cassandra.thrift.Cassandra$Processor$multiget_slice.getResult(Cassandra$Iface,
>  Cassandra$multiget_slice_args)
>   
> org.apache.cassandra.thrift.Cassandra$Processor$multiget_slice.getResult(Object,
>  TBase)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10433) Reduce contention in CompositeType instance interning

2016-04-11 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-10433:

Status: Patch Available  (was: Reopened)

> Reduce contention in CompositeType instance interning
> -
>
> Key: CASSANDRA-10433
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10433
> Project: Cassandra
>  Issue Type: Improvement
> Environment: Cassandra 2.2.1 running on 6 AWS c3.4xlarge nodes, 
> CentOS 6.6
>Reporter: David Schlosnagle
>Assignee: David Schlosnagle
>Priority: Minor
> Fix For: 2.2.4
>
> Attachments: 
> 0001-Avoid-contention-in-CompositeType-instance-interning.patch
>
>
> While running some workload tests on Cassandra 2.2.1 and profiling with 
> flight recorder in a test environment, we have noticed significant contention 
> on the static synchronized 
> org.apache.cassandra.db.marshal.CompositeType.getInstance(List) method.
> We are seeing threads blocked for 22.828 seconds in a 60-second snapshot 
> while under a mix of reads and writes from a Thrift-based client.
> I would propose to reduce contention in 
> org.apache.cassandra.db.marshal.CompositeType.getInstance(List) by using a 
> ConcurrentHashMap for the instances cache.
> {code}
> Contention Back Trace
> org.apache.cassandra.db.marshal.CompositeType.getInstance(List)
>   
> org.apache.cassandra.db.composites.AbstractCompoundCellNameType.asAbstractType()
> org.apache.cassandra.db.SuperColumns.getComparatorFor(CFMetaData, boolean)
>   org.apache.cassandra.db.SuperColumns.getComparatorFor(CFMetaData, 
> ByteBuffer)
> 
> org.apache.cassandra.thrift.ThriftValidation.validateColumnNames(CFMetaData, 
> ByteBuffer, Iterable)
>   
> org.apache.cassandra.thrift.ThriftValidation.validateColumnPath(CFMetaData, 
> ColumnPath)
> 
> org.apache.cassandra.thrift.ThriftValidation.validateColumnOrSuperColumn(CFMetaData,
>  ByteBuffer, ColumnOrSuperColumn)
>   
> org.apache.cassandra.thrift.ThriftValidation.validateMutation(CFMetaData, 
> ByteBuffer, Mutation)
> 
> org.apache.cassandra.thrift.CassandraServer.createMutationList(ConsistencyLevel,
>  Map, boolean)
>   
> org.apache.cassandra.thrift.CassandraServer.batch_mutate(Map, 
> ConsistencyLevel)
> 
> org.apache.cassandra.thrift.Cassandra$Processor$batch_mutate.getResult(Cassandra$Iface,
>  Cassandra$batch_mutate_args)
> 
> org.apache.cassandra.thrift.ThriftValidation.validateRange(CFMetaData, 
> ColumnParent, SliceRange)
>   
> org.apache.cassandra.thrift.ThriftValidation.validatePredicate(CFMetaData, 
> ColumnParent, SlicePredicate)
> 
> org.apache.cassandra.thrift.CassandraServer.get_range_slices(ColumnParent, 
> SlicePredicate, KeyRange, ConsistencyLevel)
>   
> org.apache.cassandra.thrift.Cassandra$Processor$get_range_slices.getResult(Cassandra$Iface,
>  Cassandra$get_range_slices_args)
> 
> org.apache.cassandra.thrift.Cassandra$Processor$get_range_slices.getResult(Object,
>  TBase)
>   org.apache.thrift.ProcessFunction.process(int, TProtocol, 
> TProtocol, Object)
> org.apache.thrift.TBaseProcessor.process(TProtocol, 
> TProtocol)
>   
> org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run()
> 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor$Worker)
>   java.util.concurrent.ThreadPoolExecutor$Worker.run()
> 
> org.apache.cassandra.thrift.CassandraServer.multigetSliceInternal(String, 
> List, ColumnParent, long, SlicePredicate, ConsistencyLevel, ClientState)
>   
> org.apache.cassandra.thrift.CassandraServer.multiget_slice(List, 
> ColumnParent, SlicePredicate, ConsistencyLevel)
> 
> org.apache.cassandra.thrift.Cassandra$Processor$multiget_slice.getResult(Cassandra$Iface,
>  Cassandra$multiget_slice_args)
>   
> org.apache.cassandra.thrift.Cassandra$Processor$multiget_slice.getResult(Object,
>  TBase)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11532) CqlConfigHelper requires both truststore and keystore to work with SSL encryption

2016-04-11 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-11532:
--
   Resolution: Fixed
Fix Version/s: 3.0.6
   3.6
   2.2.6
   Status: Resolved  (was: Ready to Commit)

> CqlConfigHelper requires both truststore and keystore to work with SSL 
> encryption
> -
>
> Key: CASSANDRA-11532
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11532
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jacek Lewandowski
>Assignee: Jacek Lewandowski
> Fix For: 2.2.6, 3.6, 3.0.6
>
> Attachments: CASSANDRA_11532.patch
>
>
> {{CqlConfigHelper}} configures SSL in the following way:
> {code:java}
> public static Optional getSSLOptions(Configuration conf)
> {
> Optional truststorePath = 
> getInputNativeSSLTruststorePath(conf);
> Optional keystorePath = getInputNativeSSLKeystorePath(conf);
> Optional truststorePassword = 
> getInputNativeSSLTruststorePassword(conf);
> Optional keystorePassword = 
> getInputNativeSSLKeystorePassword(conf);
> Optional cipherSuites = getInputNativeSSLCipherSuites(conf);
> 
> if (truststorePath.isPresent() && keystorePath.isPresent() && 
> truststorePassword.isPresent() && keystorePassword.isPresent())
> {
> SSLContext context;
> try
> {
> context = getSSLContext(truststorePath.get(), 
> truststorePassword.get(), keystorePath.get(), keystorePassword.get());
> }
> catch (UnrecoverableKeyException | KeyManagementException |
> NoSuchAlgorithmException | KeyStoreException | 
> CertificateException | IOException e)
> {
> throw new RuntimeException(e);
> }
> String[] css = null;
> if (cipherSuites.isPresent())
> css = cipherSuites.get().split(",");
> return Optional.of(JdkSSLOptions.builder()
> .withSSLContext(context)
> .withCipherSuites(css)
> .build());
> }
> return Optional.absent();
> }
> {code}
> which forces you to connect only to trusted nodes and client authentication. 
> This should be made more flexible so that at least client authentication is 
> optional. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11532) CqlConfigHelper requires both truststore and keystore to work with SSL encryption

2016-04-11 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15235761#comment-15235761
 ] 

Aleksey Yeschenko commented on CASSANDRA-11532:
---

Thanks for clarifying that. Committed as 
[19b4b637ac79b5d53b9384bd95bed8e08b43f111|https://github.com/apache/cassandra/commit/19b4b637ac79b5d53b9384bd95bed8e08b43f111]
 to 2.2, and merged into 3.0 and trunk.

> CqlConfigHelper requires both truststore and keystore to work with SSL 
> encryption
> -
>
> Key: CASSANDRA-11532
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11532
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jacek Lewandowski
>Assignee: Jacek Lewandowski
> Fix For: 2.2.6, 3.6, 3.0.6
>
> Attachments: CASSANDRA_11532.patch
>
>
> {{CqlConfigHelper}} configures SSL in the following way:
> {code:java}
> public static Optional getSSLOptions(Configuration conf)
> {
> Optional truststorePath = 
> getInputNativeSSLTruststorePath(conf);
> Optional keystorePath = getInputNativeSSLKeystorePath(conf);
> Optional truststorePassword = 
> getInputNativeSSLTruststorePassword(conf);
> Optional keystorePassword = 
> getInputNativeSSLKeystorePassword(conf);
> Optional cipherSuites = getInputNativeSSLCipherSuites(conf);
> 
> if (truststorePath.isPresent() && keystorePath.isPresent() && 
> truststorePassword.isPresent() && keystorePassword.isPresent())
> {
> SSLContext context;
> try
> {
> context = getSSLContext(truststorePath.get(), 
> truststorePassword.get(), keystorePath.get(), keystorePassword.get());
> }
> catch (UnrecoverableKeyException | KeyManagementException |
> NoSuchAlgorithmException | KeyStoreException | 
> CertificateException | IOException e)
> {
> throw new RuntimeException(e);
> }
> String[] css = null;
> if (cipherSuites.isPresent())
> css = cipherSuites.get().split(",");
> return Optional.of(JdkSSLOptions.builder()
> .withSSLContext(context)
> .withCipherSuites(css)
> .build());
> }
> return Optional.absent();
> }
> {code}
> which forces you to connect only to trusted nodes and client authentication. 
> This should be made more flexible so that at least client authentication is 
> optional. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[4/6] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-04-11 Thread aleksey
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4238cdd9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4238cdd9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4238cdd9

Branch: refs/heads/trunk
Commit: 4238cdd99fd58f96a1c933f1c8113cf349300982
Parents: 5dbeef3 19b4b63
Author: Aleksey Yeschenko 
Authored: Mon Apr 11 20:07:04 2016 +0100
Committer: Aleksey Yeschenko 
Committed: Mon Apr 11 20:07:04 2016 +0100

--
 CHANGES.txt |  2 +
 .../cassandra/hadoop/cql3/CqlConfigHelper.java  | 58 +---
 2 files changed, 41 insertions(+), 19 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4238cdd9/CHANGES.txt
--
diff --cc CHANGES.txt
index 76c9d99,54013a3..ed4c412
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,30 -1,7 +1,32 @@@
 -2.2.6
 +3.0.6
 + * LogAwareFileLister should only use OLD sstable files in current folder to 
determine disk consistency (CASSANDRA-11470)
 + * Notify indexers of expired rows during compaction (CASSANDRA-11329)
 + * Properly respond with ProtocolError when a v1/v2 native protocol
 +   header is received (CASSANDRA-11464)
 + * Validate that num_tokens and initial_token are consistent with one another 
(CASSANDRA-10120)
 +Merged from 2.2:
+  * CqlConfigHelper no longer requires both a keystore and truststore to work 
(CASSANDRA-11532)
   * Make deprecated repair methods backward-compatible with previous 
notification service (CASSANDRA-11430)
   * IncomingStreamingConnection version check message wrong (CASSANDRA-11462)
 +
++
 +3.0.5
 + * Fix rare NPE on schema upgrade from 2.x to 3.x (CASSANDRA-10943)
 + * Improve backoff policy for cqlsh COPY FROM (CASSANDRA-11320)
 + * Improve IF NOT EXISTS check in CREATE INDEX (CASSANDRA-11131)
 + * Upgrade ohc to 0.4.3
 + * Enable SO_REUSEADDR for JMX RMI server sockets (CASSANDRA-11093)
 + * Allocate merkletrees with the correct size (CASSANDRA-11390)
 + * Support streaming pre-3.0 sstables (CASSANDRA-10990)
 + * Add backpressure to compressed commit log (CASSANDRA-10971)
 + * SSTableExport supports secondary index tables (CASSANDRA-11330)
 + * Fix sstabledump to include missing info in debug output (CASSANDRA-11321)
 + * Establish and implement canonical bulk reading workload(s) 
(CASSANDRA-10331)
 + * Fix paging for IN queries on tables without clustering columns 
(CASSANDRA-11208)
 + * Remove recursive call from CompositesSearcher (CASSANDRA-11304)
 + * Fix filtering on non-primary key columns for queries without index 
(CASSANDRA-6377)
 + * Fix sstableloader fail when using materialized view (CASSANDRA-11275)
 +Merged from 2.2:
   * DatabaseDescriptor should log stacktrace in case of Eception during seed 
provider creation (CASSANDRA-11312)
   * Use canonical path for directory in SSTable descriptor (CASSANDRA-10587)
   * Add cassandra-stress keystore option (CASSANDRA-9325)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4238cdd9/src/java/org/apache/cassandra/hadoop/cql3/CqlConfigHelper.java
--



[3/6] cassandra git commit: CqlConfigHelper no longer requires both a keystore and truststore to work.

2016-04-11 Thread aleksey
CqlConfigHelper no longer requires both a keystore and truststore to work.

patch by Jacek Lewandowski; reviewed by Jeremiah Jordan for CASSANDRA-11532


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/19b4b637
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/19b4b637
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/19b4b637

Branch: refs/heads/trunk
Commit: 19b4b637ac79b5d53b9384bd95bed8e08b43f111
Parents: ab2b8a6
Author: Jacek Lewandowski 
Authored: Fri Apr 8 10:31:00 2016 -0500
Committer: Aleksey Yeschenko 
Committed: Mon Apr 11 20:02:27 2016 +0100

--
 CHANGES.txt |  1 +
 .../cassandra/hadoop/cql3/CqlConfigHelper.java  | 58 +---
 2 files changed, 40 insertions(+), 19 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/19b4b637/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 419ed21..54013a3 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.6
+ * CqlConfigHelper no longer requires both a keystore and truststore to work 
(CASSANDRA-11532)
  * Make deprecated repair methods backward-compatible with previous 
notification service (CASSANDRA-11430)
  * IncomingStreamingConnection version check message wrong (CASSANDRA-11462)
  * DatabaseDescriptor should log stacktrace in case of Eception during seed 
provider creation (CASSANDRA-11312)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/19b4b637/src/java/org/apache/cassandra/hadoop/cql3/CqlConfigHelper.java
--
diff --git a/src/java/org/apache/cassandra/hadoop/cql3/CqlConfigHelper.java 
b/src/java/org/apache/cassandra/hadoop/cql3/CqlConfigHelper.java
index fe62ea7..35cdca8 100644
--- a/src/java/org/apache/cassandra/hadoop/cql3/CqlConfigHelper.java
+++ b/src/java/org/apache/cassandra/hadoop/cql3/CqlConfigHelper.java
@@ -517,13 +517,13 @@ public class CqlConfigHelper
 Optional truststorePassword = 
getInputNativeSSLTruststorePassword(conf);
 Optional keystorePassword = 
getInputNativeSSLKeystorePassword(conf);
 Optional cipherSuites = getInputNativeSSLCipherSuites(conf);
-
-if (truststorePath.isPresent() && keystorePath.isPresent() && 
truststorePassword.isPresent() && keystorePassword.isPresent())
+
+if (truststorePath.isPresent())
 {
 SSLContext context;
 try
 {
-context = getSSLContext(truststorePath.get(), 
truststorePassword.get(), keystorePath.get(), keystorePassword.get());
+context = getSSLContext(truststorePath, truststorePassword, 
keystorePath, keystorePassword);
 }
 catch (UnrecoverableKeyException | KeyManagementException |
 NoSuchAlgorithmException | KeyStoreException | 
CertificateException | IOException e)
@@ -585,26 +585,46 @@ public class CqlConfigHelper
 }
 }
 
-private static SSLContext getSSLContext(String truststorePath, String 
truststorePassword, String keystorePath, String keystorePassword)
-throws NoSuchAlgorithmException, KeyStoreException, 
CertificateException, IOException, UnrecoverableKeyException, 
KeyManagementException
+private static SSLContext getSSLContext(Optional truststorePath,
+Optional 
truststorePassword,
+Optional keystorePath,
+Optional keystorePassword)
+throws NoSuchAlgorithmException,
+   KeyStoreException,
+   CertificateException,
+   IOException,
+   UnrecoverableKeyException,
+   KeyManagementException
 {
-SSLContext ctx;
-try (FileInputStream tsf = new FileInputStream(truststorePath); 
FileInputStream ksf = new FileInputStream(keystorePath))
-{
-ctx = SSLContext.getInstance("SSL");
+SSLContext ctx = SSLContext.getInstance("SSL");
 
-KeyStore ts = KeyStore.getInstance("JKS");
-ts.load(tsf, truststorePassword.toCharArray());
-TrustManagerFactory tmf = 
TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
-tmf.init(ts);
-
-KeyStore ks = KeyStore.getInstance("JKS");
-ks.load(ksf, keystorePassword.toCharArray());
-KeyManagerFactory kmf = 
KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
-kmf.init(ks, keystorePassword.toCharArray());
+TrustManagerFactory tmf = null;
+if (truststorePath.isPresent())
+{
+try (FileInputStream tsf = new 
FileInputStream(truststo

[2/6] cassandra git commit: CqlConfigHelper no longer requires both a keystore and truststore to work.

2016-04-11 Thread aleksey
CqlConfigHelper no longer requires both a keystore and truststore to work.

patch by Jacek Lewandowski; reviewed by Jeremiah Jordan for CASSANDRA-11532


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/19b4b637
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/19b4b637
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/19b4b637

Branch: refs/heads/cassandra-3.0
Commit: 19b4b637ac79b5d53b9384bd95bed8e08b43f111
Parents: ab2b8a6
Author: Jacek Lewandowski 
Authored: Fri Apr 8 10:31:00 2016 -0500
Committer: Aleksey Yeschenko 
Committed: Mon Apr 11 20:02:27 2016 +0100

--
 CHANGES.txt |  1 +
 .../cassandra/hadoop/cql3/CqlConfigHelper.java  | 58 +---
 2 files changed, 40 insertions(+), 19 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/19b4b637/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 419ed21..54013a3 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.6
+ * CqlConfigHelper no longer requires both a keystore and truststore to work 
(CASSANDRA-11532)
  * Make deprecated repair methods backward-compatible with previous 
notification service (CASSANDRA-11430)
  * IncomingStreamingConnection version check message wrong (CASSANDRA-11462)
  * DatabaseDescriptor should log stacktrace in case of Eception during seed 
provider creation (CASSANDRA-11312)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/19b4b637/src/java/org/apache/cassandra/hadoop/cql3/CqlConfigHelper.java
--
diff --git a/src/java/org/apache/cassandra/hadoop/cql3/CqlConfigHelper.java 
b/src/java/org/apache/cassandra/hadoop/cql3/CqlConfigHelper.java
index fe62ea7..35cdca8 100644
--- a/src/java/org/apache/cassandra/hadoop/cql3/CqlConfigHelper.java
+++ b/src/java/org/apache/cassandra/hadoop/cql3/CqlConfigHelper.java
@@ -517,13 +517,13 @@ public class CqlConfigHelper
 Optional truststorePassword = 
getInputNativeSSLTruststorePassword(conf);
 Optional keystorePassword = 
getInputNativeSSLKeystorePassword(conf);
 Optional cipherSuites = getInputNativeSSLCipherSuites(conf);
-
-if (truststorePath.isPresent() && keystorePath.isPresent() && 
truststorePassword.isPresent() && keystorePassword.isPresent())
+
+if (truststorePath.isPresent())
 {
 SSLContext context;
 try
 {
-context = getSSLContext(truststorePath.get(), 
truststorePassword.get(), keystorePath.get(), keystorePassword.get());
+context = getSSLContext(truststorePath, truststorePassword, 
keystorePath, keystorePassword);
 }
 catch (UnrecoverableKeyException | KeyManagementException |
 NoSuchAlgorithmException | KeyStoreException | 
CertificateException | IOException e)
@@ -585,26 +585,46 @@ public class CqlConfigHelper
 }
 }
 
-private static SSLContext getSSLContext(String truststorePath, String 
truststorePassword, String keystorePath, String keystorePassword)
-throws NoSuchAlgorithmException, KeyStoreException, 
CertificateException, IOException, UnrecoverableKeyException, 
KeyManagementException
+private static SSLContext getSSLContext(Optional truststorePath,
+Optional 
truststorePassword,
+Optional keystorePath,
+Optional keystorePassword)
+throws NoSuchAlgorithmException,
+   KeyStoreException,
+   CertificateException,
+   IOException,
+   UnrecoverableKeyException,
+   KeyManagementException
 {
-SSLContext ctx;
-try (FileInputStream tsf = new FileInputStream(truststorePath); 
FileInputStream ksf = new FileInputStream(keystorePath))
-{
-ctx = SSLContext.getInstance("SSL");
+SSLContext ctx = SSLContext.getInstance("SSL");
 
-KeyStore ts = KeyStore.getInstance("JKS");
-ts.load(tsf, truststorePassword.toCharArray());
-TrustManagerFactory tmf = 
TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
-tmf.init(ts);
-
-KeyStore ks = KeyStore.getInstance("JKS");
-ks.load(ksf, keystorePassword.toCharArray());
-KeyManagerFactory kmf = 
KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
-kmf.init(ks, keystorePassword.toCharArray());
+TrustManagerFactory tmf = null;
+if (truststorePath.isPresent())
+{
+try (FileInputStream tsf = new 
FileInputStream(

[1/6] cassandra git commit: CqlConfigHelper no longer requires both a keystore and truststore to work.

2016-04-11 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 ab2b8a60c -> 19b4b637a
  refs/heads/cassandra-3.0 5dbeef3f5 -> 4238cdd99
  refs/heads/trunk cb1a63474 -> 6d43fc981


CqlConfigHelper no longer requires both a keystore and truststore to work.

patch by Jacek Lewandowski; reviewed by Jeremiah Jordan for CASSANDRA-11532


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/19b4b637
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/19b4b637
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/19b4b637

Branch: refs/heads/cassandra-2.2
Commit: 19b4b637ac79b5d53b9384bd95bed8e08b43f111
Parents: ab2b8a6
Author: Jacek Lewandowski 
Authored: Fri Apr 8 10:31:00 2016 -0500
Committer: Aleksey Yeschenko 
Committed: Mon Apr 11 20:02:27 2016 +0100

--
 CHANGES.txt |  1 +
 .../cassandra/hadoop/cql3/CqlConfigHelper.java  | 58 +---
 2 files changed, 40 insertions(+), 19 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/19b4b637/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 419ed21..54013a3 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.6
+ * CqlConfigHelper no longer requires both a keystore and truststore to work 
(CASSANDRA-11532)
  * Make deprecated repair methods backward-compatible with previous 
notification service (CASSANDRA-11430)
  * IncomingStreamingConnection version check message wrong (CASSANDRA-11462)
  * DatabaseDescriptor should log stacktrace in case of Eception during seed 
provider creation (CASSANDRA-11312)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/19b4b637/src/java/org/apache/cassandra/hadoop/cql3/CqlConfigHelper.java
--
diff --git a/src/java/org/apache/cassandra/hadoop/cql3/CqlConfigHelper.java 
b/src/java/org/apache/cassandra/hadoop/cql3/CqlConfigHelper.java
index fe62ea7..35cdca8 100644
--- a/src/java/org/apache/cassandra/hadoop/cql3/CqlConfigHelper.java
+++ b/src/java/org/apache/cassandra/hadoop/cql3/CqlConfigHelper.java
@@ -517,13 +517,13 @@ public class CqlConfigHelper
 Optional truststorePassword = 
getInputNativeSSLTruststorePassword(conf);
 Optional keystorePassword = 
getInputNativeSSLKeystorePassword(conf);
 Optional cipherSuites = getInputNativeSSLCipherSuites(conf);
-
-if (truststorePath.isPresent() && keystorePath.isPresent() && 
truststorePassword.isPresent() && keystorePassword.isPresent())
+
+if (truststorePath.isPresent())
 {
 SSLContext context;
 try
 {
-context = getSSLContext(truststorePath.get(), 
truststorePassword.get(), keystorePath.get(), keystorePassword.get());
+context = getSSLContext(truststorePath, truststorePassword, 
keystorePath, keystorePassword);
 }
 catch (UnrecoverableKeyException | KeyManagementException |
 NoSuchAlgorithmException | KeyStoreException | 
CertificateException | IOException e)
@@ -585,26 +585,46 @@ public class CqlConfigHelper
 }
 }
 
-private static SSLContext getSSLContext(String truststorePath, String 
truststorePassword, String keystorePath, String keystorePassword)
-throws NoSuchAlgorithmException, KeyStoreException, 
CertificateException, IOException, UnrecoverableKeyException, 
KeyManagementException
+private static SSLContext getSSLContext(Optional truststorePath,
+Optional 
truststorePassword,
+Optional keystorePath,
+Optional keystorePassword)
+throws NoSuchAlgorithmException,
+   KeyStoreException,
+   CertificateException,
+   IOException,
+   UnrecoverableKeyException,
+   KeyManagementException
 {
-SSLContext ctx;
-try (FileInputStream tsf = new FileInputStream(truststorePath); 
FileInputStream ksf = new FileInputStream(keystorePath))
-{
-ctx = SSLContext.getInstance("SSL");
+SSLContext ctx = SSLContext.getInstance("SSL");
 
-KeyStore ts = KeyStore.getInstance("JKS");
-ts.load(tsf, truststorePassword.toCharArray());
-TrustManagerFactory tmf = 
TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
-tmf.init(ts);
-
-KeyStore ks = KeyStore.getInstance("JKS");
-ks.load(ksf, keystorePassword.toCharArray());
-KeyManagerFactory kmf = 
KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
-kmf.init(ks, key

[5/6] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-04-11 Thread aleksey
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4238cdd9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4238cdd9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4238cdd9

Branch: refs/heads/cassandra-3.0
Commit: 4238cdd99fd58f96a1c933f1c8113cf349300982
Parents: 5dbeef3 19b4b63
Author: Aleksey Yeschenko 
Authored: Mon Apr 11 20:07:04 2016 +0100
Committer: Aleksey Yeschenko 
Committed: Mon Apr 11 20:07:04 2016 +0100

--
 CHANGES.txt |  2 +
 .../cassandra/hadoop/cql3/CqlConfigHelper.java  | 58 +---
 2 files changed, 41 insertions(+), 19 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4238cdd9/CHANGES.txt
--
diff --cc CHANGES.txt
index 76c9d99,54013a3..ed4c412
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,30 -1,7 +1,32 @@@
 -2.2.6
 +3.0.6
 + * LogAwareFileLister should only use OLD sstable files in current folder to 
determine disk consistency (CASSANDRA-11470)
 + * Notify indexers of expired rows during compaction (CASSANDRA-11329)
 + * Properly respond with ProtocolError when a v1/v2 native protocol
 +   header is received (CASSANDRA-11464)
 + * Validate that num_tokens and initial_token are consistent with one another 
(CASSANDRA-10120)
 +Merged from 2.2:
+  * CqlConfigHelper no longer requires both a keystore and truststore to work 
(CASSANDRA-11532)
   * Make deprecated repair methods backward-compatible with previous 
notification service (CASSANDRA-11430)
   * IncomingStreamingConnection version check message wrong (CASSANDRA-11462)
 +
++
 +3.0.5
 + * Fix rare NPE on schema upgrade from 2.x to 3.x (CASSANDRA-10943)
 + * Improve backoff policy for cqlsh COPY FROM (CASSANDRA-11320)
 + * Improve IF NOT EXISTS check in CREATE INDEX (CASSANDRA-11131)
 + * Upgrade ohc to 0.4.3
 + * Enable SO_REUSEADDR for JMX RMI server sockets (CASSANDRA-11093)
 + * Allocate merkletrees with the correct size (CASSANDRA-11390)
 + * Support streaming pre-3.0 sstables (CASSANDRA-10990)
 + * Add backpressure to compressed commit log (CASSANDRA-10971)
 + * SSTableExport supports secondary index tables (CASSANDRA-11330)
 + * Fix sstabledump to include missing info in debug output (CASSANDRA-11321)
 + * Establish and implement canonical bulk reading workload(s) 
(CASSANDRA-10331)
 + * Fix paging for IN queries on tables without clustering columns 
(CASSANDRA-11208)
 + * Remove recursive call from CompositesSearcher (CASSANDRA-11304)
 + * Fix filtering on non-primary key columns for queries without index 
(CASSANDRA-6377)
 + * Fix sstableloader fail when using materialized view (CASSANDRA-11275)
 +Merged from 2.2:
   * DatabaseDescriptor should log stacktrace in case of Eception during seed 
provider creation (CASSANDRA-11312)
   * Use canonical path for directory in SSTable descriptor (CASSANDRA-10587)
   * Add cassandra-stress keystore option (CASSANDRA-9325)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4238cdd9/src/java/org/apache/cassandra/hadoop/cql3/CqlConfigHelper.java
--



[6/6] cassandra git commit: Merge branch 'cassandra-3.0' into trunk

2016-04-11 Thread aleksey
Merge branch 'cassandra-3.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6d43fc98
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6d43fc98
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6d43fc98

Branch: refs/heads/trunk
Commit: 6d43fc981299eb3eabc781af8572bdfc9f3cb37e
Parents: cb1a634 4238cdd
Author: Aleksey Yeschenko 
Authored: Mon Apr 11 20:07:50 2016 +0100
Committer: Aleksey Yeschenko 
Committed: Mon Apr 11 20:07:50 2016 +0100

--
 CHANGES.txt |  1 +
 .../cassandra/hadoop/cql3/CqlConfigHelper.java  | 58 +---
 2 files changed, 40 insertions(+), 19 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6d43fc98/CHANGES.txt
--



[jira] [Commented] (CASSANDRA-11097) Idle session timeout for secure environments

2016-04-11 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15235749#comment-15235749
 ] 

Jeff Jirsa commented on CASSANDRA-11097:


[~jasobrown] Oracle does this per-user / per-user-profile server side, and 
perhaps a similar logic makes sense for Cassandra (getting into ponies 
territory, perhaps) so that applications can have long-lived idle connections, 
but administrators are logged out if idle:

{code}
alter profile analyst limit
   connect_time 18
   sessions_per_user 2
   idle_time 1800;
{code}


> Idle session timeout for secure environments
> 
>
> Key: CASSANDRA-11097
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11097
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jeff Jirsa
>Priority: Minor
>  Labels: lhf, ponies
>
> A thread on the user list pointed out that some use cases may prefer to have 
> a database disconnect sessions after some idle timeout. An example would be 
> an administrator who connected via ssh+cqlsh and then walked away. 
> Disconnecting that user and forcing it to re-authenticate could protect 
> against unauthorized access.
> It seems like it may be possible to do this using a netty 
> {{IdleStateHandler}} in a way that's low risk and perhaps off by default.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11532) CqlConfigHelper requires both truststore and keystore to work with SSL encryption

2016-04-11 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15235746#comment-15235746
 ] 

Jeremiah Jordan commented on CASSANDRA-11532:
-

I force-pushed the CASSANDRA-11532-22 branch with an updated commit message.  
The rest are clean merges forward that only need CHANGES.txt fixed up.

> CqlConfigHelper requires both truststore and keystore to work with SSL 
> encryption
> -
>
> Key: CASSANDRA-11532
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11532
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jacek Lewandowski
>Assignee: Jacek Lewandowski
> Attachments: CASSANDRA_11532.patch
>
>
> {{CqlConfigHelper}} configures SSL in the following way:
> {code:java}
> public static Optional getSSLOptions(Configuration conf)
> {
> Optional truststorePath = 
> getInputNativeSSLTruststorePath(conf);
> Optional keystorePath = getInputNativeSSLKeystorePath(conf);
> Optional truststorePassword = 
> getInputNativeSSLTruststorePassword(conf);
> Optional keystorePassword = 
> getInputNativeSSLKeystorePassword(conf);
> Optional cipherSuites = getInputNativeSSLCipherSuites(conf);
> 
> if (truststorePath.isPresent() && keystorePath.isPresent() && 
> truststorePassword.isPresent() && keystorePassword.isPresent())
> {
> SSLContext context;
> try
> {
> context = getSSLContext(truststorePath.get(), 
> truststorePassword.get(), keystorePath.get(), keystorePassword.get());
> }
> catch (UnrecoverableKeyException | KeyManagementException |
> NoSuchAlgorithmException | KeyStoreException | 
> CertificateException | IOException e)
> {
> throw new RuntimeException(e);
> }
> String[] css = null;
> if (cipherSuites.isPresent())
> css = cipherSuites.get().split(",");
> return Optional.of(JdkSSLOptions.builder()
> .withSSLContext(context)
> .withCipherSuites(css)
> .build());
> }
> return Optional.absent();
> }
> {code}
> which forces you to connect only to trusted nodes and client authentication. 
> This should be made more flexible so that at least client authentication is 
> optional. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10528) Proposal: Integrate RxJava

2016-04-11 Thread David Karnok (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15235734#comment-15235734
 ] 

David Karnok commented on CASSANDRA-10528:
--

I'm glad you want to move to the reactive world; however, depending on the 
urgency, I'd design for a Reactive-Streams-based API. (Reactive-Streams is an 
initiative to standardize a set of interfaces and a protocol for reactive 
dataflows on the JVM, allowing interoperation between compatible libraries.)

The RxJava 1.x library is mature indeed, but it has reached its performance 
limit due to its architecture. Version 2, which is fully Reactive-Streams 
compliant, has generally better performance. Unfortunately, Netflix is taking 
its time with it, and since Ben left the project there is no one to push it 
forward. It could take a year to reach a stable API.

Alternatively, Project Reactor (also fully Reactive-Streams compliant) seems to 
be closest to a stable release with its version 2.5. It is something of an 
RxJava lite, but it also has some features that do not overlap with RxJava. It 
is the most advanced and most performant RS library currently available. The 
unfortunate thing is that its API is still in flux (pun intended) as bad old 
habits get ironed out, so expect its snapshots to change significantly from 
time to time. Version 2.5 should be ready within six months, I presume.

An honorable mention goes to Akka-Streams, which has a framework attached to 
it: Akka. That means you would be at the mercy of the actor system most of the 
time; moreover, its architecture has a lot of mandatory async boundaries that 
lower performance considerably.

I wouldn't recommend writing your own RS library. Writing correct, 
backpressure-enabled operators is 1-2 orders of magnitude more complicated 
than climbing RxJava's admittedly steep learning curve.

In conclusion, if you can wait a few months before work starts on this, you 
could use Reactor for the internal implementation and expose the functionality 
as standard RS interface(s).

If you have questions about reactive topics (I don't know or use Cassandra 
btw), let me know.

> Proposal: Integrate RxJava
> --
>
> Key: CASSANDRA-10528
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10528
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: T Jake Luciani
>Assignee: T Jake Luciani
> Fix For: 3.x
>
> Attachments: rxjava-stress.png
>
>
> The purpose of this ticket is to discuss the merits of integrating the 
> [RxJava|https://github.com/ReactiveX/RxJava] framework into C*, enabling us 
> to incrementally make the internals of C* async and move away from SEDA to a 
> more modern thread-per-core architecture. 
> Related tickets:
>* CASSANDRA-8520
>* CASSANDRA-8457
>* CASSANDRA-5239
>* CASSANDRA-7040
>* CASSANDRA-5863
>* CASSANDRA-6696
>* CASSANDRA-7392
> My *primary* goals in raising this issue are to provide a way of:
> *  *Incrementally* making the backend async
> *  Avoiding code complexity/readability issues
> *  Avoiding NIH where possible
> *  Building on an extendable library
> My *non*-goals in raising this issue are:
> 
>* Rewrite the entire database in one big bang
>* Write our own async api/framework
> 
> -
> I've attempted to integrate RxJava a while back and found it not ready mainly 
> due to our lack of lambda support.  Now with Java 8 I've found it very 
> enjoyable and have not hit any performance issues. A gentle introduction to 
> RxJava is [here|http://blog.danlew.net/2014/09/15/grokking-rxjava-part-1/] as 
> well as their 
> [wiki|https://github.com/ReactiveX/RxJava/wiki/Additional-Reading].  The 
> primary concept of RX is the 
> [Observable|http://reactivex.io/documentation/observable.html] which is 
> essentially a stream of stuff you can subscribe to and act on, chain, etc. 
> This is quite similar to [Java 8 streams 
> api|http://www.oracle.com/technetwork/articles/java/ma14-java-se-8-streams-2177646.html]
>  (or I should say streams api is similar to it).  The difference is java 8 
> streams can't be used for asynchronous events while RxJava can.
> Another improvement since I last tried integrating RxJava is the completion 
> of CASSANDRA-8099, which provides a very iterable/incremental approach to 
> our storage engine.  *Iterators and Observables are well paired conceptually 
> so morphing our current Storage engine to be async is much simpler now.*
> In an effort to show how one can incrementally change our backend I've done a 
> quick POC with RxJava and replaced our non-paging read requests to become 
> non-blocking.
> https://github.com/apache/cassandra/compare/trunk...tjake:rxjava-3.0
> 
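
For readers unfamiliar with the model, here is a minimal RxJava 1.x sketch of 
taking a blocking read off the caller's thread; fetchPartition() is a stand-in 
for a real storage call, not Cassandra code:

{code:java}
import rx.Observable;
import rx.schedulers.Schedulers;

// Wraps a blocking read in an Observable so callers can compose and subscribe
// asynchronously. fetchPartition() is a placeholder for the real read path.
public class AsyncReadExample
{
    static Observable<String> readAsync(String key)
    {
        return Observable
                .fromCallable(() -> fetchPartition(key))
                .subscribeOn(Schedulers.io());   // run the blocking work off the caller thread
    }

    static String fetchPartition(String key)
    {
        return "row-for-" + key;                 // pretend blocking storage read
    }

    public static void main(String[] args) throws InterruptedException
    {
        readAsync("league:nba").subscribe(System.out::println);
        Thread.sleep(500);                        // let the async read finish in this demo
    }
}
{code}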

[jira] [Commented] (CASSANDRA-11532) CqlConfigHelper requires both truststore and keystore to work with SSL encryption

2016-04-11 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15235724#comment-15235724
 ] 

Jeremiah Jordan commented on CASSANDRA-11532:
-

[~iamaleksey] sorry, it looks like I messed up pasting in the commit message.  
The code is right.

> CqlConfigHelper requires both truststore and keystore to work with SSL 
> encryption
> -
>
> Key: CASSANDRA-11532
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11532
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jacek Lewandowski
>Assignee: Jacek Lewandowski
> Attachments: CASSANDRA_11532.patch
>
>
> {{CqlConfigHelper}} configures SSL in the following way:
> {code:java}
> public static Optional<SSLOptions> getSSLOptions(Configuration conf)
> {
>     Optional<String> truststorePath = getInputNativeSSLTruststorePath(conf);
>     Optional<String> keystorePath = getInputNativeSSLKeystorePath(conf);
>     Optional<String> truststorePassword = getInputNativeSSLTruststorePassword(conf);
>     Optional<String> keystorePassword = getInputNativeSSLKeystorePassword(conf);
>     Optional<String> cipherSuites = getInputNativeSSLCipherSuites(conf);
>
>     if (truststorePath.isPresent() && keystorePath.isPresent() &&
>         truststorePassword.isPresent() && keystorePassword.isPresent())
>     {
>         SSLContext context;
>         try
>         {
>             context = getSSLContext(truststorePath.get(), truststorePassword.get(),
>                                     keystorePath.get(), keystorePassword.get());
>         }
>         catch (UnrecoverableKeyException | KeyManagementException |
>                NoSuchAlgorithmException | KeyStoreException |
>                CertificateException | IOException e)
>         {
>             throw new RuntimeException(e);
>         }
>         String[] css = null;
>         if (cipherSuites.isPresent())
>             css = cipherSuites.get().split(",");
>         return Optional.of(JdkSSLOptions.builder()
>                                         .withSSLContext(context)
>                                         .withCipherSuites(css)
>                                         .build());
>     }
>     return Optional.absent();
> }
> {code}
> which forces you to configure both a truststore and a keystore, i.e. both server 
> and client authentication. This should be made more flexible so that at least 
> client authentication is optional. 
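
A sketch (not the attached patch) of how the SSL context could be built with only 
a truststore required and key managers added only when a keystore is also 
configured; the class, method, and parameter names below are illustrative and use 
plain JSSE APIs:

{code:java}
import java.io.FileInputStream;
import java.io.InputStream;
import java.security.KeyStore;
import javax.net.ssl.KeyManager;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

public final class FlexibleSslContextSketch
{
    public static SSLContext build(String truststorePath, String truststorePassword,
                                   String keystorePath, String keystorePassword) throws Exception
    {
        // Truststore (server authentication) is always required.
        TrustManagerFactory tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        try (InputStream ts = new FileInputStream(truststorePath))
        {
            KeyStore truststore = KeyStore.getInstance("JKS");
            truststore.load(ts, truststorePassword.toCharArray());
            tmf.init(truststore);
        }

        // Keystore (client authentication) is optional.
        KeyManager[] keyManagers = null; // null => no client certificate is presented
        if (keystorePath != null)
        {
            KeyManagerFactory kmf = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
            try (InputStream ks = new FileInputStream(keystorePath))
            {
                KeyStore keystore = KeyStore.getInstance("JKS");
                keystore.load(ks, keystorePassword.toCharArray());
                kmf.init(keystore, keystorePassword.toCharArray());
            }
            keyManagers = kmf.getKeyManagers();
        }

        SSLContext context = SSLContext.getInstance("TLS");
        context.init(keyManagers, tmf.getTrustManagers(), null);
        return context;
    }
}
{code}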



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11542) Create a benchmark to compare HDFS and Cassandra bulk read times

2016-04-11 Thread vincent.poncet (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15235701#comment-15235701
 ] 

vincent.poncet commented on CASSANDRA-11542:


If you plan to do Parquet tests, make sure you are not only doing count or 
min/max tests.
Parquet is columnar, so it only reads the fields relevant to the query, and it 
keeps per-row-group statistics (min, max) on fields that Spark uses to skip row 
groups which cannot match the WHERE predicates.
https://mail-archives.apache.org/mod_mbox/spark-user/201508.mbox/%3c55cc562c.6050...@gmail.com%3E

Databricks published a TPC-DS performance test suite for Spark SQL:
https://github.com/databricks/spark-sql-perf
TPC-DS is meant to provide realistic, data-warehouse-style queries, and most 
SQL-on-Hadoop benchmark articles are built around it.

If you want to do pure bulk reading, make sure to disable predicate pushdown 
using the spark.sql.parquet.filterPushdown setting, e.g. as in the sketch below.
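
A minimal sketch of a bulk-read run with pushdown disabled, assuming Spark 
1.6-era Java APIs; the application name and the path argument are illustrative:

{code:java}
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.SQLContext;

public class ParquetBulkReadSketch
{
    public static void main(String[] args)
    {
        // Turn off Parquet filter pushdown so every row group is actually read.
        SparkConf conf = new SparkConf()
                .setAppName("parquet-bulk-read-benchmark")
                .set("spark.sql.parquet.filterPushdown", "false");

        JavaSparkContext sc = new JavaSparkContext(conf);
        SQLContext sql = new SQLContext(sc.sc());

        // A full-scan style query (count over all rows) rather than a selective filter.
        long rows = sql.read().parquet(args[0]).count();
        System.out.println("rows = " + rows);

        sc.stop();
    }
}
{code}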


> Create a benchmark to compare HDFS and Cassandra bulk read times
> 
>
> Key: CASSANDRA-11542
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11542
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Testing
>Reporter: Stefania
>Assignee: Stefania
> Fix For: 3.x
>
>
> I propose creating a benchmark for comparing Cassandra and HDFS bulk reading 
> performance. Simple Spark queries will be performed on data stored in HDFS or 
> Cassandra, and the entire duration will be measured. An example query would 
> be the max or min of a column or a count\(*\).
> This benchmark should allow determining the impact of:
> * partition size
> * number of clustering columns
> * number of value columns (cells)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10624) Support UDT in CQLSSTableWriter

2016-04-11 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15235699#comment-15235699
 ] 

Alex Petrov commented on CASSANDRA-10624:
-

Thank you [~iamaleksey]! Verified it with a unit test; I'm on my way to fixing it. 
Great catch, I didn't think about the nested UDTs...

> Support UDT in CQLSSTableWriter
> ---
>
> Key: CASSANDRA-10624
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10624
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Sylvain Lebresne
>Assignee: Alex Petrov
> Fix For: 3.x
>
> Attachments: 0001-Add-support-for-UDTs-to-CQLSStableWriter.patch, 
> 0001-Support-UDTs-in-CQLSStableWriterV2.patch
>
>
> As far as I can tell, there is no way to use a UDT with {{CQLSSTableWriter}} 
> since there is no way to declare it and thus {{CQLSSTableWriter.Builder}} 
> knows of no UDT when parsing the {{CREATE TABLE}} statement passed.
> In terms of API, I think the simplest would be to allow to pass types to the 
> builder in the same way we pass the table definition. So something like:
> {noformat}
> String type = "CREATE TYPE myKs.vertex (x int, y int, z int)";
> String schema = "CREATE TABLE myKs.myTable ("
>   + "  k int PRIMARY KEY,"
>   + "  s set"
>   + ")";
> String insert = ...;
> CQLSSTableWriter writer = CQLSSTableWriter.builder()
>   .inDirectory("path/to/directory")
>   .withType(type)
>   .forTable(schema)
>   .using(insert).build();
> {noformat}
> I'll note that implementation wise, this might be a bit simpler after the 
> changes of CASSANDRA-10365 (as it makes it easy to pass specific types 
> during the preparation of the create statement).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11532) CqlConfigHelper requires both truststore and keystore to work with SSL encryption

2016-04-11 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15235687#comment-15235687
 ] 

Aleksey Yeschenko commented on CASSANDRA-11532:
---

[~jjordan] You sure everything's alright in that branch? The commit message 
looks fishy.

> CqlConfigHelper requires both truststore and keystore to work with SSL 
> encryption
> -
>
> Key: CASSANDRA-11532
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11532
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jacek Lewandowski
>Assignee: Jacek Lewandowski
> Attachments: CASSANDRA_11532.patch
>
>
> {{CqlConfigHelper}} configures SSL in the following way:
> {code:java}
> public static Optional<SSLOptions> getSSLOptions(Configuration conf)
> {
>     Optional<String> truststorePath = getInputNativeSSLTruststorePath(conf);
>     Optional<String> keystorePath = getInputNativeSSLKeystorePath(conf);
>     Optional<String> truststorePassword = getInputNativeSSLTruststorePassword(conf);
>     Optional<String> keystorePassword = getInputNativeSSLKeystorePassword(conf);
>     Optional<String> cipherSuites = getInputNativeSSLCipherSuites(conf);
>
>     if (truststorePath.isPresent() && keystorePath.isPresent() &&
>         truststorePassword.isPresent() && keystorePassword.isPresent())
>     {
>         SSLContext context;
>         try
>         {
>             context = getSSLContext(truststorePath.get(), truststorePassword.get(),
>                                     keystorePath.get(), keystorePassword.get());
>         }
>         catch (UnrecoverableKeyException | KeyManagementException |
>                NoSuchAlgorithmException | KeyStoreException |
>                CertificateException | IOException e)
>         {
>             throw new RuntimeException(e);
>         }
>         String[] css = null;
>         if (cipherSuites.isPresent())
>             css = cipherSuites.get().split(",");
>         return Optional.of(JdkSSLOptions.builder()
>                                         .withSSLContext(context)
>                                         .withCipherSuites(css)
>                                         .build());
>     }
>     return Optional.absent();
> }
> {code}
> which forces you to configure both a truststore and a keystore, i.e. both server 
> and client authentication. This should be made more flexible so that at least 
> client authentication is optional. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11529) Checking if an unlogged batch is local is inefficient

2016-04-11 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-11529:
--
   Resolution: Fixed
Fix Version/s: (was: 3.0.x)
   (was: 2.2.x)
   (was: 2.1.x)
   (was: 3.x)
   3.0.6
   3.6
   2.2.6
   2.1.14
   Status: Resolved  (was: Ready to Commit)

> Checking if an unlogged batch is local is inefficient
> -
>
> Key: CASSANDRA-11529
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11529
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Paulo Motta
>Assignee: Stefania
>Priority: Critical
> Fix For: 2.1.14, 2.2.6, 3.6, 3.0.6
>
>
> Based on CASSANDRA-11363 report I noticed that on CASSANDRA-9303 we 
> introduced the following check to avoid printing a {{WARN}} in case an 
> unlogged batch statement is local:
> {noformat}
>  for (IMutation im : mutations)
>  {
>  keySet.add(im.key());
>  for (ColumnFamily cf : im.getColumnFamilies())
>  ksCfPairs.add(String.format("%s.%s", 
> cf.metadata().ksName, cf.metadata().cfName));
> +
> +if (localMutationsOnly)
> +localMutationsOnly &= isMutationLocal(localTokensByKs, 
> im);
>  }
>  
> +// CASSANDRA-9303: If we only have local mutations we do not warn
> +if (localMutationsOnly)
> +return;
> +
>  NoSpamLogger.log(logger, NoSpamLogger.Level.WARN, 1, 
> TimeUnit.MINUTES, unloggedBatchWarning,
>   keySet.size(), keySet.size() == 1 ? "" : "s",
>   ksCfPairs.size() == 1 ? "" : "s", ksCfPairs);
> {noformat}
> The {{isMutationLocal}} check uses 
> {{StorageService.instance.getLocalRanges(mutation.getKeyspaceName())}}, which 
> underneath uses {{AbstractReplication.getAddressRanges}} to calculate local 
> ranges. 
> Recalculating this at every unlogged batch can be pretty inefficient, so we 
> should at the very least cache it every time the ring changes.
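
A sketch (not the committed fix) of caching local ranges per keyspace and 
invalidating the cache when ring membership changes; the class and field names 
below are illustrative, and the loader stands in for the getLocalRanges call 
quoted above:

{code:java}
import java.util.Collection;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public final class LocalRangeCache<R>
{
    private final Map<String, Collection<R>> rangesByKeyspace = new ConcurrentHashMap<>();
    private final Function<String, Collection<R>> loader;

    public LocalRangeCache(Function<String, Collection<R>> loader)
    {
        // e.g. ks -> StorageService.instance.getLocalRanges(ks)
        this.loader = loader;
    }

    public Collection<R> localRanges(String keyspace)
    {
        // Computed once per keyspace instead of on every unlogged batch.
        return rangesByKeyspace.computeIfAbsent(keyspace, loader);
    }

    public void onRingChange()
    {
        // Drop everything; the next lookup recomputes against the new ring.
        rangesByKeyspace.clear();
    }
}
{code}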



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11529) Checking if an unlogged batch is local is inefficient

2016-04-11 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15235660#comment-15235660
 ] 

Aleksey Yeschenko commented on CASSANDRA-11529:
---

Committed as 
[c1b1d3bccf30a7ee1deb633d2bc2dfbd7b9c542f|https://github.com/apache/cassandra/commit/c1b1d3bccf30a7ee1deb633d2bc2dfbd7b9c542f]
 to 2.1 and merged upwards into 2.2, 3.0, and trunk, thanks.

dtest PR merged.

> Checking if an unlogged batch is local is inefficient
> -
>
> Key: CASSANDRA-11529
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11529
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Paulo Motta
>Assignee: Stefania
>Priority: Critical
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
>
> Based on CASSANDRA-11363 report I noticed that on CASSANDRA-9303 we 
> introduced the following check to avoid printing a {{WARN}} in case an 
> unlogged batch statement is local:
> {noformat}
>  for (IMutation im : mutations)
>  {
>  keySet.add(im.key());
>  for (ColumnFamily cf : im.getColumnFamilies())
>  ksCfPairs.add(String.format("%s.%s", 
> cf.metadata().ksName, cf.metadata().cfName));
> +
> +if (localMutationsOnly)
> +localMutationsOnly &= isMutationLocal(localTokensByKs, 
> im);
>  }
>  
> +// CASSANDRA-9303: If we only have local mutations we do not warn
> +if (localMutationsOnly)
> +return;
> +
>  NoSpamLogger.log(logger, NoSpamLogger.Level.WARN, 1, 
> TimeUnit.MINUTES, unloggedBatchWarning,
>   keySet.size(), keySet.size() == 1 ? "" : "s",
>   ksCfPairs.size() == 1 ? "" : "s", ksCfPairs);
> {noformat}
> The {{isMutationLocal}} check uses 
> {{StorageService.instance.getLocalRanges(mutation.getKeyspaceName())}}, which 
> underneath uses {{AbstractReplication.getAddressRanges}} to calculate local 
> ranges. 
> Recalculating this at every unlogged batch can be pretty inefficient, so we 
> should at the very least cache it every time the ring changes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[01/10] cassandra git commit: Checking if an unlogged batch is local is inefficient

2016-04-11 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 2dd244b43 -> c1b1d3bcc
  refs/heads/cassandra-2.2 3557d2e05 -> ab2b8a60c
  refs/heads/cassandra-3.0 f0cd3261b -> 5dbeef3f5
  refs/heads/trunk c2acf4716 -> cb1a63474


Checking if an unlogged batch is local is inefficient

patch by Stefania Alborghetti; reviewed by Paulo Motta for CASSANDRA-11529


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c1b1d3bc
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c1b1d3bc
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c1b1d3bc

Branch: refs/heads/cassandra-2.1
Commit: c1b1d3bccf30a7ee1deb633d2bc2dfbd7b9c542f
Parents: 2dd244b
Author: Stefania Alborghetti 
Authored: Fri Apr 8 11:52:17 2016 +0800
Committer: Aleksey Yeschenko 
Committed: Mon Apr 11 19:12:25 2016 +0100

--
 CHANGES.txt |  1 +
 conf/cassandra.yaml |  4 +++
 .../org/apache/cassandra/config/Config.java |  1 +
 .../cassandra/config/DatabaseDescriptor.java|  5 +++
 .../cql3/statements/BatchStatement.java | 38 ++--
 5 files changed, 21 insertions(+), 28 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c1b1d3bc/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 113da17..6385509 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.14
+ * Checking if an unlogged batch is local is inefficient (CASSANDRA-11529)
  * Fix paging for COMPACT tables without clustering columns (CASSANDRA-11467)
  * Fix out-of-space error treatment in memtable flushing (CASSANDRA-11448)
  * Backport CASSANDRA-10859 (CASSANDRA-11415)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c1b1d3bc/conf/cassandra.yaml
--
diff --git a/conf/cassandra.yaml b/conf/cassandra.yaml
index 0da4800..90c5be4 100644
--- a/conf/cassandra.yaml
+++ b/conf/cassandra.yaml
@@ -555,6 +555,10 @@ column_index_size_in_kb: 64
 # Caution should be taken on increasing the size of this threshold as it can 
lead to node instability.
 batch_size_warn_threshold_in_kb: 5
 
+
+# Log WARN on any batches not of type LOGGED than span across more partitions 
than this limit
+unlogged_batch_across_partitions_warn_threshold: 10
+
 # Number of simultaneous compactions to allow, NOT including
 # validation "compactions" for anti-entropy repair.  Simultaneous
 # compactions can help preserve read performance in a mixed read/write

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c1b1d3bc/src/java/org/apache/cassandra/config/Config.java
--
diff --git a/src/java/org/apache/cassandra/config/Config.java 
b/src/java/org/apache/cassandra/config/Config.java
index 63bbf96..9ff7096 100644
--- a/src/java/org/apache/cassandra/config/Config.java
+++ b/src/java/org/apache/cassandra/config/Config.java
@@ -144,6 +144,7 @@ public class Config
 /* if the size of columns or super-columns are more than this, indexing 
will kick in */
 public Integer column_index_size_in_kb = 64;
 public Integer batch_size_warn_threshold_in_kb = 5;
+public Integer unlogged_batch_across_partitions_warn_threshold = 10;
 public Integer concurrent_compactors;
 public volatile Integer compaction_throughput_mb_per_sec = 16;
 public volatile Integer compaction_large_partition_warning_threshold_mb = 
100;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c1b1d3bc/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--
diff --git a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java 
b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
index 84381a0..166ce7e 100644
--- a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
+++ b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
@@ -860,6 +860,11 @@ public class DatabaseDescriptor
 return conf.batch_size_warn_threshold_in_kb * 1024;
 }
 
+public static int getUnloggedBatchAcrossPartitionsWarnThreshold()
+{
+return conf.unlogged_batch_across_partitions_warn_threshold;
+}
+
 public static Collection getInitialTokens()
 {
 return tokensFromString(System.getProperty("cassandra.initial_token", 
conf.initial_token));

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c1b1d3bc/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/BatchStateme

[04/10] cassandra git commit: Checking if an unlogged batch is local is inefficient

2016-04-11 Thread aleksey
Checking if an unlogged batch is local is inefficient

patch by Stefania Alborghetti; reviewed by Paulo Motta for CASSANDRA-11529


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c1b1d3bc
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c1b1d3bc
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c1b1d3bc

Branch: refs/heads/trunk
Commit: c1b1d3bccf30a7ee1deb633d2bc2dfbd7b9c542f
Parents: 2dd244b
Author: Stefania Alborghetti 
Authored: Fri Apr 8 11:52:17 2016 +0800
Committer: Aleksey Yeschenko 
Committed: Mon Apr 11 19:12:25 2016 +0100

--
 CHANGES.txt |  1 +
 conf/cassandra.yaml |  4 +++
 .../org/apache/cassandra/config/Config.java |  1 +
 .../cassandra/config/DatabaseDescriptor.java|  5 +++
 .../cql3/statements/BatchStatement.java | 38 ++--
 5 files changed, 21 insertions(+), 28 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c1b1d3bc/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 113da17..6385509 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.14
+ * Checking if an unlogged batch is local is inefficient (CASSANDRA-11529)
  * Fix paging for COMPACT tables without clustering columns (CASSANDRA-11467)
  * Fix out-of-space error treatment in memtable flushing (CASSANDRA-11448)
  * Backport CASSANDRA-10859 (CASSANDRA-11415)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c1b1d3bc/conf/cassandra.yaml
--
diff --git a/conf/cassandra.yaml b/conf/cassandra.yaml
index 0da4800..90c5be4 100644
--- a/conf/cassandra.yaml
+++ b/conf/cassandra.yaml
@@ -555,6 +555,10 @@ column_index_size_in_kb: 64
 # Caution should be taken on increasing the size of this threshold as it can 
lead to node instability.
 batch_size_warn_threshold_in_kb: 5
 
+
+# Log WARN on any batches not of type LOGGED than span across more partitions 
than this limit
+unlogged_batch_across_partitions_warn_threshold: 10
+
 # Number of simultaneous compactions to allow, NOT including
 # validation "compactions" for anti-entropy repair.  Simultaneous
 # compactions can help preserve read performance in a mixed read/write

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c1b1d3bc/src/java/org/apache/cassandra/config/Config.java
--
diff --git a/src/java/org/apache/cassandra/config/Config.java 
b/src/java/org/apache/cassandra/config/Config.java
index 63bbf96..9ff7096 100644
--- a/src/java/org/apache/cassandra/config/Config.java
+++ b/src/java/org/apache/cassandra/config/Config.java
@@ -144,6 +144,7 @@ public class Config
 /* if the size of columns or super-columns are more than this, indexing 
will kick in */
 public Integer column_index_size_in_kb = 64;
 public Integer batch_size_warn_threshold_in_kb = 5;
+public Integer unlogged_batch_across_partitions_warn_threshold = 10;
 public Integer concurrent_compactors;
 public volatile Integer compaction_throughput_mb_per_sec = 16;
 public volatile Integer compaction_large_partition_warning_threshold_mb = 
100;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c1b1d3bc/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--
diff --git a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java 
b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
index 84381a0..166ce7e 100644
--- a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
+++ b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
@@ -860,6 +860,11 @@ public class DatabaseDescriptor
 return conf.batch_size_warn_threshold_in_kb * 1024;
 }
 
+public static int getUnloggedBatchAcrossPartitionsWarnThreshold()
+{
+return conf.unlogged_batch_across_partitions_warn_threshold;
+}
+
 public static Collection getInitialTokens()
 {
 return tokensFromString(System.getProperty("cassandra.initial_token", 
conf.initial_token));

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c1b1d3bc/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
index fb76c8d..ada8d91 100644
--- a/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
@@ -33,13 +33,10 @@ import org.apache.cassandra.config.Colum

[06/10] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2016-04-11 Thread aleksey
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ab2b8a60
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ab2b8a60
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ab2b8a60

Branch: refs/heads/cassandra-2.2
Commit: ab2b8a60c4b6d27081d632fefa0e19ee13816e2c
Parents: 3557d2e c1b1d3b
Author: Aleksey Yeschenko 
Authored: Mon Apr 11 19:14:41 2016 +0100
Committer: Aleksey Yeschenko 
Committed: Mon Apr 11 19:15:47 2016 +0100

--
 CHANGES.txt |  1 +
 conf/cassandra.yaml |  3 ++
 .../org/apache/cassandra/config/Config.java |  1 +
 .../cassandra/config/DatabaseDescriptor.java|  5 +++
 .../cql3/statements/BatchStatement.java | 42 +---
 5 files changed, 21 insertions(+), 31 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ab2b8a60/CHANGES.txt
--
diff --cc CHANGES.txt
index e935e57,6385509..419ed21
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,51 -1,9 +1,52 @@@
 -2.1.14
 +2.2.6
 + * Make deprecated repair methods backward-compatible with previous 
notification service (CASSANDRA-11430)
 + * IncomingStreamingConnection version check message wrong (CASSANDRA-11462)
 + * DatabaseDescriptor should log stacktrace in case of Eception during seed 
provider creation (CASSANDRA-11312)
 + * Use canonical path for directory in SSTable descriptor (CASSANDRA-10587)
 + * Add cassandra-stress keystore option (CASSANDRA-9325)
 + * Fix out-of-space error treatment in memtable flushing (CASSANDRA-11448).
 + * Dont mark sstables as repairing with sub range repairs (CASSANDRA-11451)
 + * Fix use of NullUpdater for 2i during compaction (CASSANDRA-11450)
 + * Notify when sstables change after cancelling compaction (CASSANDRA-11373)
 + * cqlsh: COPY FROM should check that explicit column names are valid 
(CASSANDRA-11333)
 + * Add -Dcassandra.start_gossip startup option (CASSANDRA-10809)
 + * Fix UTF8Validator.validate() for modified UTF-8 (CASSANDRA-10748)
 + * Clarify that now() function is calculated on the coordinator node in CQL 
documentation (CASSANDRA-10900)
 + * Fix bloom filter sizing with LCS (CASSANDRA-11344)
 + * (cqlsh) Fix error when result is 0 rows with EXPAND ON (CASSANDRA-11092)
 + * Fix intra-node serialization issue for multicolumn-restrictions 
(CASSANDRA-11196)
 + * Non-obsoleting compaction operations over compressed files can impose rate 
limit on normal reads (CASSANDRA-11301)
 + * Add missing newline at end of bin/cqlsh (CASSANDRA-11325)
 + * Fix AE in nodetool cfstats (backport CASSANDRA-10859) (CASSANDRA-11297)
 + * Unresolved hostname leads to replace being ignored (CASSANDRA-11210)
 + * Fix filtering on non-primary key columns for thrift static column families
 +   (CASSANDRA-6377)
 + * Only log yaml config once, at startup (CASSANDRA-11217)
 + * Preserve order for preferred SSL cipher suites (CASSANDRA-11164)
 + * Reference leak with parallel repairs on the same table (CASSANDRA-11215)
 + * Range.compareTo() violates the contract of Comparable (CASSANDRA-11216)
 + * Avoid NPE when serializing ErrorMessage with null message (CASSANDRA-11167)
 + * Replacing an aggregate with a new version doesn't reset INITCOND 
(CASSANDRA-10840)
 + * (cqlsh) cqlsh cannot be called through symlink (CASSANDRA-11037)
 + * fix ohc and java-driver pom dependencies in build.xml (CASSANDRA-10793)
 + * Protect from keyspace dropped during repair (CASSANDRA-11065)
 + * Handle adding fields to a UDT in SELECT JSON and toJson() (CASSANDRA-11146)
 + * Better error message for cleanup (CASSANDRA-10991)
 + * cqlsh pg-style-strings broken if line ends with ';' (CASSANDRA-11123)
 + * Use cloned TokenMetadata in size estimates to avoid race against 
membership check
 +   (CASSANDRA-10736)
 + * Always persist upsampled index summaries (CASSANDRA-10512)
 + * (cqlsh) Fix inconsistent auto-complete (CASSANDRA-10733)
 + * Make SELECT JSON and toJson() threadsafe (CASSANDRA-11048)
 + * Fix SELECT on tuple relations for mixed ASC/DESC clustering order 
(CASSANDRA-7281)
 + * (cqlsh) Support utf-8/cp65001 encoding on Windows (CASSANDRA-11030)
 + * Fix paging on DISTINCT queries repeats result when first row in partition 
changes
 +   (CASSANDRA-10010)
 +Merged from 2.1:
+  * Checking if an unlogged batch is local is inefficient (CASSANDRA-11529)
   * Fix paging for COMPACT tables without clustering columns (CASSANDRA-11467)
 - * Fix out-of-space error treatment in memtable flushing (CASSANDRA-11448)
 - * Backport CASSANDRA-10859 (CASSANDRA-11415)
 - * COPY FROM fails when importing blob (CASSANDRA-11375)
 + * Add a -j parameter to scrub/cleanup/upgradesstables to state how
 +   many threads t

[09/10] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-04-11 Thread aleksey
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5dbeef3f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5dbeef3f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5dbeef3f

Branch: refs/heads/trunk
Commit: 5dbeef3f51e61525c90de35c521142f7db340fe5
Parents: f0cd326 ab2b8a6
Author: Aleksey Yeschenko 
Authored: Mon Apr 11 19:16:03 2016 +0100
Committer: Aleksey Yeschenko 
Committed: Mon Apr 11 19:16:39 2016 +0100

--
 CHANGES.txt |  1 +
 conf/cassandra.yaml |  3 ++
 .../org/apache/cassandra/config/Config.java |  1 +
 .../cassandra/config/DatabaseDescriptor.java|  5 +++
 .../cql3/statements/BatchStatement.java | 37 ++--
 5 files changed, 20 insertions(+), 27 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5dbeef3f/CHANGES.txt
--
diff --cc CHANGES.txt
index 8c40e63,419ed21..76c9d99
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -89,9 -42,12 +89,10 @@@ Merged from 2.2
   * (cqlsh) Support utf-8/cp65001 encoding on Windows (CASSANDRA-11030)
   * Fix paging on DISTINCT queries repeats result when first row in partition 
changes
 (CASSANDRA-10010)
 + * cqlsh: change default encoding to UTF-8 (CASSANDRA-11124)
  Merged from 2.1:
+  * Checking if an unlogged batch is local is inefficient (CASSANDRA-11529)
 - * Fix paging for COMPACT tables without clustering columns (CASSANDRA-11467)
 - * Add a -j parameter to scrub/cleanup/upgradesstables to state how
 -   many threads to use (CASSANDRA-11179)
 - * Backport CASSANDRA-10679 (CASSANDRA-9598)
 + * Fix out-of-space error treatment in memtable flushing (CASSANDRA-11448).
   * Don't do defragmentation if reading from repaired sstables 
(CASSANDRA-10342)
   * Fix streaming_socket_timeout_in_ms not enforced (CASSANDRA-11286)
   * Avoid dropping message too quickly due to missing unit conversion 
(CASSANDRA-11302)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5dbeef3f/conf/cassandra.yaml
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5dbeef3f/src/java/org/apache/cassandra/config/Config.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5dbeef3f/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5dbeef3f/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
--
diff --cc src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
index 47396fb,76e389b..1c395a5
--- a/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
@@@ -32,12 -32,12 +32,10 @@@ import org.apache.cassandra.config.Colu
  import org.apache.cassandra.config.DatabaseDescriptor;
  import org.apache.cassandra.cql3.*;
  import org.apache.cassandra.db.*;
 -import org.apache.cassandra.db.composites.Composite;
 +import org.apache.cassandra.db.partitions.PartitionUpdate;
 +import org.apache.cassandra.db.rows.RowIterator;
- import org.apache.cassandra.dht.Range;
- import org.apache.cassandra.dht.Token;
  import org.apache.cassandra.exceptions.*;
 -import org.apache.cassandra.service.ClientState;
 -import org.apache.cassandra.service.ClientWarn;
 -import org.apache.cassandra.service.QueryState;
 -import org.apache.cassandra.service.StorageProxy;
 +import org.apache.cassandra.service.*;
  import org.apache.cassandra.tracing.Tracing;
  import org.apache.cassandra.transport.messages.ResultMessage;
  import org.apache.cassandra.utils.NoSpamLogger;
@@@ -69,16 -59,8 +67,16 @@@ public class BatchStatement implements 
  private final Attributes attrs;
  private final boolean hasConditions;
  private static final Logger logger = 
LoggerFactory.getLogger(BatchStatement.class);
 -private static final String unloggedBatchWarning = "Unlogged batch 
covering {} partitions detected against table{} {}. " +
 -   "You should use a 
logged batch for atomicity, or asynchronous writes for performance.";
 +
- private static final String UNLOGGED_BATCH_WARNING = "Unlogged batch 
covering {} partition{} detected " +
++private static final String UNLOGGED_BATCH_WARNING = "Unlogged batch 
covering {} partitions detected " +
 + "against table{} {}. 
You should use a logged batch for " +
 +  

[07/10] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2016-04-11 Thread aleksey
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ab2b8a60
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ab2b8a60
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ab2b8a60

Branch: refs/heads/trunk
Commit: ab2b8a60c4b6d27081d632fefa0e19ee13816e2c
Parents: 3557d2e c1b1d3b
Author: Aleksey Yeschenko 
Authored: Mon Apr 11 19:14:41 2016 +0100
Committer: Aleksey Yeschenko 
Committed: Mon Apr 11 19:15:47 2016 +0100

--
 CHANGES.txt |  1 +
 conf/cassandra.yaml |  3 ++
 .../org/apache/cassandra/config/Config.java |  1 +
 .../cassandra/config/DatabaseDescriptor.java|  5 +++
 .../cql3/statements/BatchStatement.java | 42 +---
 5 files changed, 21 insertions(+), 31 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ab2b8a60/CHANGES.txt
--
diff --cc CHANGES.txt
index e935e57,6385509..419ed21
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,51 -1,9 +1,52 @@@
 -2.1.14
 +2.2.6
 + * Make deprecated repair methods backward-compatible with previous 
notification service (CASSANDRA-11430)
 + * IncomingStreamingConnection version check message wrong (CASSANDRA-11462)
 + * DatabaseDescriptor should log stacktrace in case of Eception during seed 
provider creation (CASSANDRA-11312)
 + * Use canonical path for directory in SSTable descriptor (CASSANDRA-10587)
 + * Add cassandra-stress keystore option (CASSANDRA-9325)
 + * Fix out-of-space error treatment in memtable flushing (CASSANDRA-11448).
 + * Dont mark sstables as repairing with sub range repairs (CASSANDRA-11451)
 + * Fix use of NullUpdater for 2i during compaction (CASSANDRA-11450)
 + * Notify when sstables change after cancelling compaction (CASSANDRA-11373)
 + * cqlsh: COPY FROM should check that explicit column names are valid 
(CASSANDRA-11333)
 + * Add -Dcassandra.start_gossip startup option (CASSANDRA-10809)
 + * Fix UTF8Validator.validate() for modified UTF-8 (CASSANDRA-10748)
 + * Clarify that now() function is calculated on the coordinator node in CQL 
documentation (CASSANDRA-10900)
 + * Fix bloom filter sizing with LCS (CASSANDRA-11344)
 + * (cqlsh) Fix error when result is 0 rows with EXPAND ON (CASSANDRA-11092)
 + * Fix intra-node serialization issue for multicolumn-restrictions 
(CASSANDRA-11196)
 + * Non-obsoleting compaction operations over compressed files can impose rate 
limit on normal reads (CASSANDRA-11301)
 + * Add missing newline at end of bin/cqlsh (CASSANDRA-11325)
 + * Fix AE in nodetool cfstats (backport CASSANDRA-10859) (CASSANDRA-11297)
 + * Unresolved hostname leads to replace being ignored (CASSANDRA-11210)
 + * Fix filtering on non-primary key columns for thrift static column families
 +   (CASSANDRA-6377)
 + * Only log yaml config once, at startup (CASSANDRA-11217)
 + * Preserve order for preferred SSL cipher suites (CASSANDRA-11164)
 + * Reference leak with parallel repairs on the same table (CASSANDRA-11215)
 + * Range.compareTo() violates the contract of Comparable (CASSANDRA-11216)
 + * Avoid NPE when serializing ErrorMessage with null message (CASSANDRA-11167)
 + * Replacing an aggregate with a new version doesn't reset INITCOND 
(CASSANDRA-10840)
 + * (cqlsh) cqlsh cannot be called through symlink (CASSANDRA-11037)
 + * fix ohc and java-driver pom dependencies in build.xml (CASSANDRA-10793)
 + * Protect from keyspace dropped during repair (CASSANDRA-11065)
 + * Handle adding fields to a UDT in SELECT JSON and toJson() (CASSANDRA-11146)
 + * Better error message for cleanup (CASSANDRA-10991)
 + * cqlsh pg-style-strings broken if line ends with ';' (CASSANDRA-11123)
 + * Use cloned TokenMetadata in size estimates to avoid race against 
membership check
 +   (CASSANDRA-10736)
 + * Always persist upsampled index summaries (CASSANDRA-10512)
 + * (cqlsh) Fix inconsistent auto-complete (CASSANDRA-10733)
 + * Make SELECT JSON and toJson() threadsafe (CASSANDRA-11048)
 + * Fix SELECT on tuple relations for mixed ASC/DESC clustering order 
(CASSANDRA-7281)
 + * (cqlsh) Support utf-8/cp65001 encoding on Windows (CASSANDRA-11030)
 + * Fix paging on DISTINCT queries repeats result when first row in partition 
changes
 +   (CASSANDRA-10010)
 +Merged from 2.1:
+  * Checking if an unlogged batch is local is inefficient (CASSANDRA-11529)
   * Fix paging for COMPACT tables without clustering columns (CASSANDRA-11467)
 - * Fix out-of-space error treatment in memtable flushing (CASSANDRA-11448)
 - * Backport CASSANDRA-10859 (CASSANDRA-11415)
 - * COPY FROM fails when importing blob (CASSANDRA-11375)
 + * Add a -j parameter to scrub/cleanup/upgradesstables to state how
 +   many threads to use (C

[03/10] cassandra git commit: Checking if an unlogged batch is local is inefficient

2016-04-11 Thread aleksey
Checking if an unlogged batch is local is inefficient

patch by Stefania Alborghetti; reviewed by Paulo Motta for CASSANDRA-11529


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c1b1d3bc
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c1b1d3bc
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c1b1d3bc

Branch: refs/heads/cassandra-3.0
Commit: c1b1d3bccf30a7ee1deb633d2bc2dfbd7b9c542f
Parents: 2dd244b
Author: Stefania Alborghetti 
Authored: Fri Apr 8 11:52:17 2016 +0800
Committer: Aleksey Yeschenko 
Committed: Mon Apr 11 19:12:25 2016 +0100

--
 CHANGES.txt |  1 +
 conf/cassandra.yaml |  4 +++
 .../org/apache/cassandra/config/Config.java |  1 +
 .../cassandra/config/DatabaseDescriptor.java|  5 +++
 .../cql3/statements/BatchStatement.java | 38 ++--
 5 files changed, 21 insertions(+), 28 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c1b1d3bc/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 113da17..6385509 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.14
+ * Checking if an unlogged batch is local is inefficient (CASSANDRA-11529)
  * Fix paging for COMPACT tables without clustering columns (CASSANDRA-11467)
  * Fix out-of-space error treatment in memtable flushing (CASSANDRA-11448)
  * Backport CASSANDRA-10859 (CASSANDRA-11415)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c1b1d3bc/conf/cassandra.yaml
--
diff --git a/conf/cassandra.yaml b/conf/cassandra.yaml
index 0da4800..90c5be4 100644
--- a/conf/cassandra.yaml
+++ b/conf/cassandra.yaml
@@ -555,6 +555,10 @@ column_index_size_in_kb: 64
 # Caution should be taken on increasing the size of this threshold as it can 
lead to node instability.
 batch_size_warn_threshold_in_kb: 5
 
+
+# Log WARN on any batches not of type LOGGED than span across more partitions 
than this limit
+unlogged_batch_across_partitions_warn_threshold: 10
+
 # Number of simultaneous compactions to allow, NOT including
 # validation "compactions" for anti-entropy repair.  Simultaneous
 # compactions can help preserve read performance in a mixed read/write

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c1b1d3bc/src/java/org/apache/cassandra/config/Config.java
--
diff --git a/src/java/org/apache/cassandra/config/Config.java 
b/src/java/org/apache/cassandra/config/Config.java
index 63bbf96..9ff7096 100644
--- a/src/java/org/apache/cassandra/config/Config.java
+++ b/src/java/org/apache/cassandra/config/Config.java
@@ -144,6 +144,7 @@ public class Config
 /* if the size of columns or super-columns are more than this, indexing 
will kick in */
 public Integer column_index_size_in_kb = 64;
 public Integer batch_size_warn_threshold_in_kb = 5;
+public Integer unlogged_batch_across_partitions_warn_threshold = 10;
 public Integer concurrent_compactors;
 public volatile Integer compaction_throughput_mb_per_sec = 16;
 public volatile Integer compaction_large_partition_warning_threshold_mb = 
100;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c1b1d3bc/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--
diff --git a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java 
b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
index 84381a0..166ce7e 100644
--- a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
+++ b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
@@ -860,6 +860,11 @@ public class DatabaseDescriptor
 return conf.batch_size_warn_threshold_in_kb * 1024;
 }
 
+public static int getUnloggedBatchAcrossPartitionsWarnThreshold()
+{
+return conf.unlogged_batch_across_partitions_warn_threshold;
+}
+
 public static Collection getInitialTokens()
 {
 return tokensFromString(System.getProperty("cassandra.initial_token", 
conf.initial_token));

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c1b1d3bc/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
index fb76c8d..ada8d91 100644
--- a/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
@@ -33,13 +33,10 @@ import org.apache.cassandra.conf

[02/10] cassandra git commit: Checking if an unlogged batch is local is inefficient

2016-04-11 Thread aleksey
Checking if an unlogged batch is local is inefficient

patch by Stefania Alborghetti; reviewed by Paulo Motta for CASSANDRA-11529


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c1b1d3bc
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c1b1d3bc
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c1b1d3bc

Branch: refs/heads/cassandra-2.2
Commit: c1b1d3bccf30a7ee1deb633d2bc2dfbd7b9c542f
Parents: 2dd244b
Author: Stefania Alborghetti 
Authored: Fri Apr 8 11:52:17 2016 +0800
Committer: Aleksey Yeschenko 
Committed: Mon Apr 11 19:12:25 2016 +0100

--
 CHANGES.txt |  1 +
 conf/cassandra.yaml |  4 +++
 .../org/apache/cassandra/config/Config.java |  1 +
 .../cassandra/config/DatabaseDescriptor.java|  5 +++
 .../cql3/statements/BatchStatement.java | 38 ++--
 5 files changed, 21 insertions(+), 28 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c1b1d3bc/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 113da17..6385509 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.14
+ * Checking if an unlogged batch is local is inefficient (CASSANDRA-11529)
  * Fix paging for COMPACT tables without clustering columns (CASSANDRA-11467)
  * Fix out-of-space error treatment in memtable flushing (CASSANDRA-11448)
  * Backport CASSANDRA-10859 (CASSANDRA-11415)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c1b1d3bc/conf/cassandra.yaml
--
diff --git a/conf/cassandra.yaml b/conf/cassandra.yaml
index 0da4800..90c5be4 100644
--- a/conf/cassandra.yaml
+++ b/conf/cassandra.yaml
@@ -555,6 +555,10 @@ column_index_size_in_kb: 64
 # Caution should be taken on increasing the size of this threshold as it can 
lead to node instability.
 batch_size_warn_threshold_in_kb: 5
 
+
+# Log WARN on any batches not of type LOGGED than span across more partitions 
than this limit
+unlogged_batch_across_partitions_warn_threshold: 10
+
 # Number of simultaneous compactions to allow, NOT including
 # validation "compactions" for anti-entropy repair.  Simultaneous
 # compactions can help preserve read performance in a mixed read/write

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c1b1d3bc/src/java/org/apache/cassandra/config/Config.java
--
diff --git a/src/java/org/apache/cassandra/config/Config.java 
b/src/java/org/apache/cassandra/config/Config.java
index 63bbf96..9ff7096 100644
--- a/src/java/org/apache/cassandra/config/Config.java
+++ b/src/java/org/apache/cassandra/config/Config.java
@@ -144,6 +144,7 @@ public class Config
 /* if the size of columns or super-columns are more than this, indexing 
will kick in */
 public Integer column_index_size_in_kb = 64;
 public Integer batch_size_warn_threshold_in_kb = 5;
+public Integer unlogged_batch_across_partitions_warn_threshold = 10;
 public Integer concurrent_compactors;
 public volatile Integer compaction_throughput_mb_per_sec = 16;
 public volatile Integer compaction_large_partition_warning_threshold_mb = 
100;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c1b1d3bc/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--
diff --git a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java 
b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
index 84381a0..166ce7e 100644
--- a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
+++ b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
@@ -860,6 +860,11 @@ public class DatabaseDescriptor
 return conf.batch_size_warn_threshold_in_kb * 1024;
 }
 
+public static int getUnloggedBatchAcrossPartitionsWarnThreshold()
+{
+return conf.unlogged_batch_across_partitions_warn_threshold;
+}
+
 public static Collection getInitialTokens()
 {
 return tokensFromString(System.getProperty("cassandra.initial_token", 
conf.initial_token));

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c1b1d3bc/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
index fb76c8d..ada8d91 100644
--- a/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
@@ -33,13 +33,10 @@ import org.apache.cassandra.conf

[10/10] cassandra git commit: Merge branch 'cassandra-3.0' into trunk

2016-04-11 Thread aleksey
Merge branch 'cassandra-3.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/cb1a6347
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/cb1a6347
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/cb1a6347

Branch: refs/heads/trunk
Commit: cb1a63474d37d9bf0525d4c5be7c30ddd2ec6965
Parents: c2acf47 5dbeef3
Author: Aleksey Yeschenko 
Authored: Mon Apr 11 19:18:21 2016 +0100
Committer: Aleksey Yeschenko 
Committed: Mon Apr 11 19:18:42 2016 +0100

--
 CHANGES.txt |  1 +
 conf/cassandra.yaml | 62 +++-
 .../org/apache/cassandra/config/Config.java |  1 +
 .../cassandra/config/DatabaseDescriptor.java|  5 ++
 .../cql3/statements/BatchStatement.java | 39 
 5 files changed, 51 insertions(+), 57 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/cb1a6347/CHANGES.txt
--
diff --cc CHANGES.txt
index f399fd9,76c9d99..69ad7da
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -167,9 -89,9 +167,10 @@@ Merged from 2.2
   * (cqlsh) Support utf-8/cp65001 encoding on Windows (CASSANDRA-11030)
   * Fix paging on DISTINCT queries repeats result when first row in partition 
changes
 (CASSANDRA-10010)
 + * (cqlsh) Support timezone conversion using pytz (CASSANDRA-10397)
   * cqlsh: change default encoding to UTF-8 (CASSANDRA-11124)
  Merged from 2.1:
+  * Checking if an unlogged batch is local is inefficient (CASSANDRA-11529)
   * Fix out-of-space error treatment in memtable flushing (CASSANDRA-11448).
   * Don't do defragmentation if reading from repaired sstables 
(CASSANDRA-10342)
   * Fix streaming_socket_timeout_in_ms not enforced (CASSANDRA-11286)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/cb1a6347/conf/cassandra.yaml
--
diff --cc conf/cassandra.yaml
index 58bd1b6,f81c1e5..f9be453
--- a/conf/cassandra.yaml
+++ b/conf/cassandra.yaml
@@@ -649,18 -630,18 +649,6 @@@ snapshot_before_compaction: fals
  # lose data on truncation or drop.
  auto_snapshot: true
  
--# When executing a scan, within or across a partition, we need to keep the
--# tombstones seen in memory so we can return them to the coordinator, which
--# will use them to make sure other replicas also know about the deleted rows.
--# With workloads that generate a lot of tombstones, this can cause performance
--# problems and even exaust the server heap.
--# 
(http://www.datastax.com/dev/blog/cassandra-anti-patterns-queues-and-queue-like-datasets)
--# Adjust the thresholds here if you understand the dangers and want to
--# scan more tombstones anyway.  These thresholds may also be adjusted at 
runtime
--# using the StorageService mbean.
--tombstone_warn_threshold: 1000
--tombstone_failure_threshold: 10
--
  # Granularity of the collation index of rows within a partition.
  # Increase if your rows are large, or if you have a very large
  # number of rows per partition.  The competing goals are these:
@@@ -672,14 -653,17 +660,6 @@@
  #  you can cache more hot rows
  column_index_size_in_kb: 64
  
--
--# Log WARN on any batch size exceeding this value. 5kb per batch by default.
--# Caution should be taken on increasing the size of this threshold as it can 
lead to node instability.
--batch_size_warn_threshold_in_kb: 5
--
--# Fail any batch exceeding this value. 50kb (10x warn threshold) by default.
--batch_size_fail_threshold_in_kb: 50
 -
 -# Log WARN on any batches not of type LOGGED than span across more partitions 
than this limit
 -unlogged_batch_across_partitions_warn_threshold: 10
--
  # Number of simultaneous compactions to allow, NOT including
  # validation "compactions" for anti-entropy repair.  Simultaneous
  # compactions can help preserve read performance in a mixed read/write
@@@ -704,9 -688,9 +684,6 @@@
  # of compaction, including validation compaction.
  compaction_throughput_mb_per_sec: 16
  
--# Log a warning when compacting partitions larger than this value
--compaction_large_partition_warning_threshold_mb: 100
--
  # When compacting, the replacement sstable(s) can be opened before they
  # are completely written, and used in place of the prior sstables for
  # any range that has been written. This helps to smoothly transfer reads 
@@@ -942,11 -921,11 +919,6 @@@ inter_dc_tcp_nodelay: fals
  tracetype_query_ttl: 86400
  tracetype_repair_ttl: 604800
  
--# GC Pauses greater than gc_warn_threshold_in_ms will be logged at WARN level
--# Adjust the threshold based on your application throughput requirement
--# By default, Cassandra logs GC Pauses greater than 200 ms at INFO level
--gc_warn_threshold_in_ms: 1000
--
  # UDFs (user defined function

[05/10] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2016-04-11 Thread aleksey
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ab2b8a60
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ab2b8a60
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ab2b8a60

Branch: refs/heads/cassandra-3.0
Commit: ab2b8a60c4b6d27081d632fefa0e19ee13816e2c
Parents: 3557d2e c1b1d3b
Author: Aleksey Yeschenko 
Authored: Mon Apr 11 19:14:41 2016 +0100
Committer: Aleksey Yeschenko 
Committed: Mon Apr 11 19:15:47 2016 +0100

--
 CHANGES.txt |  1 +
 conf/cassandra.yaml |  3 ++
 .../org/apache/cassandra/config/Config.java |  1 +
 .../cassandra/config/DatabaseDescriptor.java|  5 +++
 .../cql3/statements/BatchStatement.java | 42 +---
 5 files changed, 21 insertions(+), 31 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ab2b8a60/CHANGES.txt
--
diff --cc CHANGES.txt
index e935e57,6385509..419ed21
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,51 -1,9 +1,52 @@@
 -2.1.14
 +2.2.6
 + * Make deprecated repair methods backward-compatible with previous 
notification service (CASSANDRA-11430)
 + * IncomingStreamingConnection version check message wrong (CASSANDRA-11462)
 + * DatabaseDescriptor should log stacktrace in case of Eception during seed 
provider creation (CASSANDRA-11312)
 + * Use canonical path for directory in SSTable descriptor (CASSANDRA-10587)
 + * Add cassandra-stress keystore option (CASSANDRA-9325)
 + * Fix out-of-space error treatment in memtable flushing (CASSANDRA-11448).
 + * Dont mark sstables as repairing with sub range repairs (CASSANDRA-11451)
 + * Fix use of NullUpdater for 2i during compaction (CASSANDRA-11450)
 + * Notify when sstables change after cancelling compaction (CASSANDRA-11373)
 + * cqlsh: COPY FROM should check that explicit column names are valid 
(CASSANDRA-11333)
 + * Add -Dcassandra.start_gossip startup option (CASSANDRA-10809)
 + * Fix UTF8Validator.validate() for modified UTF-8 (CASSANDRA-10748)
 + * Clarify that now() function is calculated on the coordinator node in CQL 
documentation (CASSANDRA-10900)
 + * Fix bloom filter sizing with LCS (CASSANDRA-11344)
 + * (cqlsh) Fix error when result is 0 rows with EXPAND ON (CASSANDRA-11092)
 + * Fix intra-node serialization issue for multicolumn-restrictions 
(CASSANDRA-11196)
 + * Non-obsoleting compaction operations over compressed files can impose rate 
limit on normal reads (CASSANDRA-11301)
 + * Add missing newline at end of bin/cqlsh (CASSANDRA-11325)
 + * Fix AE in nodetool cfstats (backport CASSANDRA-10859) (CASSANDRA-11297)
 + * Unresolved hostname leads to replace being ignored (CASSANDRA-11210)
 + * Fix filtering on non-primary key columns for thrift static column families
 +   (CASSANDRA-6377)
 + * Only log yaml config once, at startup (CASSANDRA-11217)
 + * Preserve order for preferred SSL cipher suites (CASSANDRA-11164)
 + * Reference leak with parallel repairs on the same table (CASSANDRA-11215)
 + * Range.compareTo() violates the contract of Comparable (CASSANDRA-11216)
 + * Avoid NPE when serializing ErrorMessage with null message (CASSANDRA-11167)
 + * Replacing an aggregate with a new version doesn't reset INITCOND 
(CASSANDRA-10840)
 + * (cqlsh) cqlsh cannot be called through symlink (CASSANDRA-11037)
 + * fix ohc and java-driver pom dependencies in build.xml (CASSANDRA-10793)
 + * Protect from keyspace dropped during repair (CASSANDRA-11065)
 + * Handle adding fields to a UDT in SELECT JSON and toJson() (CASSANDRA-11146)
 + * Better error message for cleanup (CASSANDRA-10991)
 + * cqlsh pg-style-strings broken if line ends with ';' (CASSANDRA-11123)
 + * Use cloned TokenMetadata in size estimates to avoid race against 
membership check
 +   (CASSANDRA-10736)
 + * Always persist upsampled index summaries (CASSANDRA-10512)
 + * (cqlsh) Fix inconsistent auto-complete (CASSANDRA-10733)
 + * Make SELECT JSON and toJson() threadsafe (CASSANDRA-11048)
 + * Fix SELECT on tuple relations for mixed ASC/DESC clustering order 
(CASSANDRA-7281)
 + * (cqlsh) Support utf-8/cp65001 encoding on Windows (CASSANDRA-11030)
 + * Fix paging on DISTINCT queries repeats result when first row in partition 
changes
 +   (CASSANDRA-10010)
 +Merged from 2.1:
+  * Checking if an unlogged batch is local is inefficient (CASSANDRA-11529)
   * Fix paging for COMPACT tables without clustering columns (CASSANDRA-11467)
 - * Fix out-of-space error treatment in memtable flushing (CASSANDRA-11448)
 - * Backport CASSANDRA-10859 (CASSANDRA-11415)
 - * COPY FROM fails when importing blob (CASSANDRA-11375)
 + * Add a -j parameter to scrub/cleanup/upgradesstables to state how
 +   many threads t

[08/10] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-04-11 Thread aleksey
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5dbeef3f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5dbeef3f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5dbeef3f

Branch: refs/heads/cassandra-3.0
Commit: 5dbeef3f51e61525c90de35c521142f7db340fe5
Parents: f0cd326 ab2b8a6
Author: Aleksey Yeschenko 
Authored: Mon Apr 11 19:16:03 2016 +0100
Committer: Aleksey Yeschenko 
Committed: Mon Apr 11 19:16:39 2016 +0100

--
 CHANGES.txt |  1 +
 conf/cassandra.yaml |  3 ++
 .../org/apache/cassandra/config/Config.java |  1 +
 .../cassandra/config/DatabaseDescriptor.java|  5 +++
 .../cql3/statements/BatchStatement.java | 37 ++--
 5 files changed, 20 insertions(+), 27 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5dbeef3f/CHANGES.txt
--
diff --cc CHANGES.txt
index 8c40e63,419ed21..76c9d99
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -89,9 -42,12 +89,10 @@@ Merged from 2.2
   * (cqlsh) Support utf-8/cp65001 encoding on Windows (CASSANDRA-11030)
   * Fix paging on DISTINCT queries repeats result when first row in partition 
changes
 (CASSANDRA-10010)
 + * cqlsh: change default encoding to UTF-8 (CASSANDRA-11124)
  Merged from 2.1:
+  * Checking if an unlogged batch is local is inefficient (CASSANDRA-11529)
 - * Fix paging for COMPACT tables without clustering columns (CASSANDRA-11467)
 - * Add a -j parameter to scrub/cleanup/upgradesstables to state how
 -   many threads to use (CASSANDRA-11179)
 - * Backport CASSANDRA-10679 (CASSANDRA-9598)
 + * Fix out-of-space error treatment in memtable flushing (CASSANDRA-11448).
   * Don't do defragmentation if reading from repaired sstables 
(CASSANDRA-10342)
   * Fix streaming_socket_timeout_in_ms not enforced (CASSANDRA-11286)
   * Avoid dropping message too quickly due to missing unit conversion 
(CASSANDRA-11302)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5dbeef3f/conf/cassandra.yaml
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5dbeef3f/src/java/org/apache/cassandra/config/Config.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5dbeef3f/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5dbeef3f/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
--
diff --cc src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
index 47396fb,76e389b..1c395a5
--- a/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
@@@ -32,12 -32,12 +32,10 @@@ import org.apache.cassandra.config.Colu
  import org.apache.cassandra.config.DatabaseDescriptor;
  import org.apache.cassandra.cql3.*;
  import org.apache.cassandra.db.*;
 -import org.apache.cassandra.db.composites.Composite;
 +import org.apache.cassandra.db.partitions.PartitionUpdate;
 +import org.apache.cassandra.db.rows.RowIterator;
- import org.apache.cassandra.dht.Range;
- import org.apache.cassandra.dht.Token;
  import org.apache.cassandra.exceptions.*;
 -import org.apache.cassandra.service.ClientState;
 -import org.apache.cassandra.service.ClientWarn;
 -import org.apache.cassandra.service.QueryState;
 -import org.apache.cassandra.service.StorageProxy;
 +import org.apache.cassandra.service.*;
  import org.apache.cassandra.tracing.Tracing;
  import org.apache.cassandra.transport.messages.ResultMessage;
  import org.apache.cassandra.utils.NoSpamLogger;
@@@ -69,16 -59,8 +67,16 @@@ public class BatchStatement implements 
  private final Attributes attrs;
  private final boolean hasConditions;
  private static final Logger logger = 
LoggerFactory.getLogger(BatchStatement.class);
 -private static final String unloggedBatchWarning = "Unlogged batch 
covering {} partitions detected against table{} {}. " +
 -   "You should use a 
logged batch for atomicity, or asynchronous writes for performance.";
 +
- private static final String UNLOGGED_BATCH_WARNING = "Unlogged batch 
covering {} partition{} detected " +
++private static final String UNLOGGED_BATCH_WARNING = "Unlogged batch 
covering {} partitions detected " +
 + "against table{} {}. 
You should use a logged batch for 

[jira] [Commented] (CASSANDRA-11521) Implement streaming for bulk read requests

2016-04-11 Thread Brian Hess (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15235653#comment-15235653
 ] 

 Brian Hess commented on CASSANDRA-11521:
-

This is configurable, but the default for reads is LOCAL_ONE and for writes is 
LOCAL_QUORUM. 
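
For illustration only, a minimal sketch of how a Spark job could override those 
defaults, assuming the spark-cassandra-connector's 
{{spark.cassandra.input.consistency.level}} and 
{{spark.cassandra.output.consistency.level}} settings (property names are my 
recollection of the connector, not something taken from this ticket):

{code}
import org.apache.spark.SparkConf;

public class ConnectorConsistencyExample
{
    public static void main(String[] args)
    {
        // Override the connector defaults quoted above: LOCAL_ONE for reads,
        // LOCAL_QUORUM for writes.
        SparkConf conf = new SparkConf()
            .setAppName("bulk-read-example")
            .set("spark.cassandra.connection.host", "127.0.0.1")
            .set("spark.cassandra.input.consistency.level", "LOCAL_ONE")
            .set("spark.cassandra.output.consistency.level", "LOCAL_QUORUM");

        System.out.println(conf.toDebugString());
    }
}
{code}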

> Implement streaming for bulk read requests
> --
>
> Key: CASSANDRA-11521
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11521
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Local Write-Read Paths
>Reporter: Stefania
>Assignee: Stefania
> Fix For: 3.x
>
>
> Allow clients to stream data from a C* host, bypassing the coordination layer 
> and eliminating the need to query individual pages one by one.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11505) dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_reading_max_parse_errors

2016-04-11 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15235641#comment-15235641
 ] 

Michael Shuler commented on CASSANDRA-11505:


+1 for backporting to 2.2.

I was unable to get 2.2 to hang on test_reading_max_parse_errors, even when 
looping over the test for a long time.

3.0 passes when running in a loop. I think we're good here with your patch!

> dtest failure in 
> cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_reading_max_parse_errors
> -
>
> Key: CASSANDRA-11505
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11505
> Project: Cassandra
>  Issue Type: Test
>Reporter: Michael Shuler
>Assignee: Stefania
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.0_novnode_dtest/197/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_reading_max_parse_errors
> Failed on CassCI build cassandra-3.0_novnode_dtest #197
> {noformat}
> Error Message
> False is not true
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /mnt/tmp/dtest-c2AJlu
> dtest: DEBUG: Custom init_config not found. Setting defaults.
> dtest: DEBUG: Done setting configuration options:
> {   'num_tokens': None,
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> dtest: DEBUG: Importing csv file /mnt/tmp/tmp2O43PH with 10 max parse errors
> - >> end captured logging << -
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/cqlsh_tests/cqlsh_copy_tests.py", 
> line 943, in test_reading_max_parse_errors
> self.assertTrue(num_rows_imported < (num_rows / 2))  # less than the 
> maximum number of valid rows in the csv
>   File "/usr/lib/python2.7/unittest/case.py", line 422, in assertTrue
> raise self.failureException(msg)
> "False is not true\n >> begin captured logging << 
> \ndtest: DEBUG: cluster ccm directory: 
> /mnt/tmp/dtest-c2AJlu\ndtest: DEBUG: Custom init_config not found. Setting 
> defaults.\ndtest: DEBUG: Done setting configuration options:\n{   
> 'num_tokens': None,\n'phi_convict_threshold': 5,\n
> 'range_request_timeout_in_ms': 1,\n'read_request_timeout_in_ms': 
> 1,\n'request_timeout_in_ms': 1,\n
> 'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
> 1}\ndtest: DEBUG: Importing csv file /mnt/tmp/tmp2O43PH with 10 max parse 
> errors\n- >> end captured logging << 
> -"
> Standard Output
> (EE)  Using CQL driver:  '/home/automaton/cassandra/bin/../lib/cassandra-driver-internal-only-3.0.0-6af642d.zip/cassandra-driver-3.0.0-6af642d/cassandra/__init__.py'>(EE)
>   Using connect timeout: 5 seconds(EE)  Using 'utf-8' encoding(EE)  
> :2:Failed to import 2500 rows: ParseError - could not convert string 
> to float: abc,  given up without retries(EE)  :2:Exceeded maximum 
> number of parse errors 10(EE)  :2:Failed to process 2500 rows; failed 
> rows written to import_ks_testmaxparseerrors.err(EE)  :2:Exceeded 
> maximum number of parse errors 10(EE)  
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10528) Proposal: Integrate RxJava

2016-04-11 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15235634#comment-15235634
 ] 

T Jake Luciani commented on CASSANDRA-10528:


That's useful to know, thanks. I guess that makes all the requests run immediately. 
I'll try that next time I run it.

As an aside, the latest code is on this branch 
https://github.com/tjake/cassandra/tree/rxjava2-trunk
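
For anyone following along, a small standalone RxJava 1.x sketch of why the 
requests end up running immediately (illustrative only, not code from the branch 
above): {{flatMap}} subscribes to every inner Observable as soon as the source 
emits, so once each inner Observable is shifted onto the io() scheduler they all 
start without waiting for one another.

{code}
import java.util.Arrays;
import java.util.List;

import rx.Observable;
import rx.schedulers.Schedulers;

public class EagerFlatMapDemo
{
    public static void main(String[] args)
    {
        // All three "requests" are kicked off concurrently on the io() scheduler;
        // toBlocking().single() just waits for the merged results.
        List<Integer> results =
            Observable.from(Arrays.asList(1, 2, 3))
                      .flatMap(i -> Observable.fromCallable(() -> {
                                        System.out.println("request " + i + " on " + Thread.currentThread().getName());
                                        return i * 10;
                                    })
                                    .subscribeOn(Schedulers.io()))
                      .toList()
                      .toBlocking()
                      .single();

        System.out.println(results);
    }
}
{code}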




> Proposal: Integrate RxJava
> --
>
> Key: CASSANDRA-10528
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10528
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: T Jake Luciani
>Assignee: T Jake Luciani
> Fix For: 3.x
>
> Attachments: rxjava-stress.png
>
>
> The purpose of this ticket is to discuss the merits of integrating the 
> [RxJava|https://github.com/ReactiveX/RxJava] framework into C*.  Enabling us 
> to incrementally make the internals of C* async and move away from SEDA to a 
> more modern thread per core architecture. 
> Related tickets:
>* CASSANDRA-8520
>* CASSANDRA-8457
>* CASSANDRA-5239
>* CASSANDRA-7040
>* CASSANDRA-5863
>* CASSANDRA-6696
>* CASSANDRA-7392
> My *primary* goals in raising this issue are to provide a way of:
> *  *Incrementally* making the backend async
> *  Avoiding code complexity/readability issues
> *  Avoiding NIH where possible
> *  Building on an extendable library
> My *non*-goals in raising this issue are:
> 
>* Rewrite the entire database in one big bang
>* Write our own async api/framework
> 
> -
> I've attempted to integrate RxJava a while back and found it not ready mainly 
> due to our lack of lambda support.  Now with Java 8 I've found it very 
> enjoyable and have not hit any performance issues. A gentle introduction to 
> RxJava is [here|http://blog.danlew.net/2014/09/15/grokking-rxjava-part-1/] as 
> well as their 
> [wiki|https://github.com/ReactiveX/RxJava/wiki/Additional-Reading].  The 
> primary concept of RX is the 
> [Obervable|http://reactivex.io/documentation/observable.html] which is 
> essentially a stream of stuff you can subscribe to and act on, chain, etc. 
> This is quite similar to [Java 8 streams 
> api|http://www.oracle.com/technetwork/articles/java/ma14-java-se-8-streams-2177646.html]
>  (or I should say streams api is similar to it).  The difference is java 8 
> streams can't be used for asynchronous events while RxJava can.
> Another improvement since I last tried integrating RxJava is the completion 
> of CASSANDRA-8099, which provides a very iterable/incremental approach to 
> our storage engine.  *Iterators and Observables are well paired conceptually, 
> so morphing our current storage engine to be async is much simpler now.*
> In an effort to show how one can incrementally change our backend I've done a 
> quick POC with RxJava and replaced our non-paging read requests to become 
> non-blocking.
> https://github.com/apache/cassandra/compare/trunk...tjake:rxjava-3.0
> As you can probably see the code is straight-forward and sometimes quite nice!
> *Old*
> {code}
> private static PartitionIterator 
> fetchRows(List> commands, ConsistencyLevel 
> consistencyLevel)
> throws UnavailableException, ReadFailureException, ReadTimeoutException
> {
> int cmdCount = commands.size();
> SinglePartitionReadLifecycle[] reads = new 
> SinglePartitionReadLifecycle[cmdCount];
> for (int i = 0; i < cmdCount; i++)
> reads[i] = new SinglePartitionReadLifecycle(commands.get(i), 
> consistencyLevel);
> for (int i = 0; i < cmdCount; i++)
> reads[i].doInitialQueries();
> for (int i = 0; i < cmdCount; i++)
> reads[i].maybeTryAdditionalReplicas();
> for (int i = 0; i < cmdCount; i++)
> reads[i].awaitResultsAndRetryOnDigestMismatch();
> for (int i = 0; i < cmdCount; i++)
> if (!reads[i].isDone())
> reads[i].maybeAwaitFullDataRead();
> List results = new ArrayList<>(cmdCount);
> for (int i = 0; i < cmdCount; i++)
> {
> assert reads[i].isDone();
> results.add(reads[i].getResult());
> }
> return PartitionIterators.concat(results);
> }
> {code}
>  *New*
> {code}
> private static Observable 
> fetchRows(List> commands, ConsistencyLevel 
> consistencyLevel)
> throws UnavailableException, ReadFailureException, ReadTimeoutException
> {
> return Observable.from(commands)
>  .map(command -> new 
> SinglePartitionReadLifecycle(command, consistencyLevel))
>  .flatMap(read -> read.getPartitionIterator())
>  .toList()
>  

[jira] [Comment Edited] (CASSANDRA-9842) Inconsistent behavior for '= null' conditions on static columns

2016-04-11 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15232315#comment-15232315
 ] 

Alex Petrov edited comment on CASSANDRA-9842 at 4/11/16 6:04 PM:
-

To sum up, there is no distinction between a non-existing row and a static 
column containing a {{null}} value, so both an update to a non-existing row and 
an update to a row with a null static column will succeed. 

The inconsistent behaviour exists only in {{2.1}} and {{2.2}}, although I've added 
the same tests to {{3.0}} and {{trunk}}. 

|| ||2.1||2.2||3.0||trunk|
||code|[2.1|https://github.com/ifesdjeen/cassandra/tree/9842-2.1]|[2.2|https://github.com/ifesdjeen/cassandra/tree/9842-2.2]|[3.0|https://github.com/ifesdjeen/cassandra/tree/9842-3.0]|[trunk|https://github.com/ifesdjeen/cassandra/tree/9842-trunk]|
||utest|[2.1|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-2.1-testall/]|[2.2|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-2.2-testall/]|[3.0|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-3.0-testall/]|[trunk|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-trunk-testall/]|
||dtest|[2.1|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-2.1-dtest/]|[2.2|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-2.2-dtest/]|[3.0|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-3.0-dtest/]|[trunk|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-trunk-dtest/]|


was (Author: ifesdjeen):
To sum up, there's no distinction between the non-existing row and a static 
column containing {{null}} value, so both an update to non-existing row and row 
with null in static column will succeed. 

Inconsistent behaviour is only in {{2.1}} and {{2.2}}, although I've added same 
tests to {{3.0}} and {{trunk}}. 

|| ||2.1||2.2||3.0||trunk|
||code|[2.1|https://github.com/ifesdjeen/cassandra/tree/9842-2.1]|[2.2|https://github.com/ifesdjeen/cassandra/tree/9842-2.2]|[3.0|https://github.com/ifesdjeen/cassandra/tree/9842-3.0]|[trunk|https://github.com/ifesdjeen/cassandra/tree/9842-trunk]|
||utest|[2.1|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-2.1-testall/]|[2.2|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-2.2-testall/]|[3.0|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-3.0-testall/]|[trunk|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-trunk-testall/]|
||dtest|[2.1|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-2.1-dtest/]|[2.2|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-2.2-dtest/]|[3.0|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-3.0-dtest/]|[trunk|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-9842-trunk-dtest/]|

Waiting for CI results.

> Inconsistent behavior for '= null' conditions on static columns
> ---
>
> Key: CASSANDRA-9842
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9842
> Project: Cassandra
>  Issue Type: Bug
> Environment: cassandra-2.1.8 on Ubuntu 15.04
>Reporter: Chandra Sekar
>Assignee: Alex Petrov
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
>
> Both inserting a row (in a non-existent partition) and updating a static 
> column in the same LWT fails. Creating the partition before performing the 
> LWT works.
> h3. Table Definition
> {code}
> create table txtable(pcol bigint, ccol bigint, scol bigint static, ncol text, 
> primary key((pcol), ccol));
> {code}
> h3. Inserting row in non-existent partition and updating static column in one 
> LWT
> {code}
> begin batch
> insert into txtable (pcol, ccol, ncol) values (1, 1, 'A');
> update txtable set scol = 1 where pcol = 1 if scol = null;
> apply batch;
> [applied]
> ---
>  False
> {code}
> h3. Creating partition before LWT
> {code}
> insert into txtable (pcol, scol) values (1, null) if not exists;
> begin batch
> insert into txtable (pcol, ccol, ncol) values (1, 1, 'A');
> update txtable set scol = 1 where pcol = 1 if scol = null;
> apply batch;
> [applied]
> ---
>  True
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11521) Implement streaming for bulk read requests

2016-04-11 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15235605#comment-15235605
 ] 

Aleksey Yeschenko commented on CASSANDRA-11521:
---

bq. I had this remark a long time ago back in 2014 and people told me that 
thanks to network compression there is no much wasted bandwidth indeed.

Not much wasted bandwidth, no. But a lot of wasted work on both ser and deser.

> Implement streaming for bulk read requests
> --
>
> Key: CASSANDRA-11521
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11521
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Local Write-Read Paths
>Reporter: Stefania
>Assignee: Stefania
> Fix For: 3.x
>
>
> Allow clients to stream data from a C* host, bypassing the coordination layer 
> and eliminating the need to query individual pages one by one.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11521) Implement streaming for bulk read requests

2016-04-11 Thread DOAN DuyHai (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15235601#comment-15235601
 ] 

DOAN DuyHai commented on CASSANDRA-11521:
-

bq. with each row, we both repeat all the clustering columns - even if many 
rows share them - and the partition key columns. Could get rid of it, and all 
related redundant serialisation, if not building on top of ResultSet.

I had this remark a long time ago, back in 2014, and people told me that thanks 
to network compression there is not much wasted bandwidth.

What I had in mind back then was to send **raw** data to the driver, and the 
driver would be responsible for de-serializing and re-formatting the data into a 
proper _CQL row representation_.

But it means putting a bunch of extra logic and overhead on the client side; I'm 
not sure the core team agrees on this point.

> Implement streaming for bulk read requests
> --
>
> Key: CASSANDRA-11521
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11521
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Local Write-Read Paths
>Reporter: Stefania
>Assignee: Stefania
> Fix For: 3.x
>
>
> Allow clients to stream data from a C* host, bypassing the coordination layer 
> and eliminating the need to query individual pages one by one.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-11521) Implement streaming for bulk read requests

2016-04-11 Thread DOAN DuyHai (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15235601#comment-15235601
 ] 

DOAN DuyHai edited comment on CASSANDRA-11521 at 4/11/16 5:57 PM:
--

bq. with each row, we both repeat all the clustering columns - even if many 
rows share them - and the partition key columns. Could get rid of it, and all 
related redundant serialisation, if not building on top of ResultSet.

I had this remark a long time ago, back in 2014, and people told me that thanks 
to network compression there is not much wasted bandwidth.

What I had in mind back then was to send *raw* data to the driver, and the 
driver would be responsible for de-serializing and re-formatting the data into a 
proper _CQL row representation_.

But it means putting a bunch of extra logic and overhead on the client side; I'm 
not sure the core team agrees on this point.


was (Author: doanduyhai):
bq. with each row, we both repeat all the clustering columns - even if many 
rows share them - and the partition key columns. Could get rid of it, and all 
related redundant serialisation, if not building on top of ResultSet.

I had this remark a long time ago back in 2014 and people told me that thanks 
to network compression there is no much wasted bandwidth indeed.

What I had in mind back then was to send **raw** data to the driver and the 
driver will be responsible to de-serialize and re-format the data to have a 
proper _CQL row representation_

But it means putting a bunch of extra-logic and overhead on the client side, 
not sure the core team agrees on this point

> Implement streaming for bulk read requests
> --
>
> Key: CASSANDRA-11521
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11521
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Local Write-Read Paths
>Reporter: Stefania
>Assignee: Stefania
> Fix For: 3.x
>
>
> Allow clients to stream data from a C* host, bypassing the coordination layer 
> and eliminating the need to query individual pages one by one.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10624) Support UDT in CQLSSTableWriter

2016-04-11 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-10624:
--
Status: Awaiting Feedback  (was: Open)

> Support UDT in CQLSSTableWriter
> ---
>
> Key: CASSANDRA-10624
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10624
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Sylvain Lebresne
>Assignee: Alex Petrov
> Fix For: 3.x
>
> Attachments: 0001-Add-support-for-UDTs-to-CQLSStableWriter.patch, 
> 0001-Support-UDTs-in-CQLSStableWriterV2.patch
>
>
> As far as I can tell, there is no way to use a UDT with {{CQLSSTableWriter}} 
> since there is no way to declare it and thus {{CQLSSTableWriter.Builder}} 
> knows of no UDT when parsing the {{CREATE TABLE}} statement passed.
> In terms of API, I think the simplest would be to allow to pass types to the 
> builder in the same way we pass the table definition. So something like:
> {noformat}
> String type = "CREATE TYPE myKs.vertex (x int, y int, z int)";
> String schema = "CREATE TABLE myKs.myTable ("
>   + "  k int PRIMARY KEY,"
>   + "  s set"
>   + ")";
> String insert = ...;
> CQLSSTableWriter writer = CQLSSTableWriter.builder()
>   .inDirectory("path/to/directory")
>   .withType(type)
>   .forTable(schema)
>   .using(insert).build();
> {noformat}
> I'll note that, implementation-wise, this might be a bit simpler after the 
> changes of CASSANDRA-10365 (as it makes it easy to pass specific types 
> during the preparation of the create statement).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10624) Support UDT in CQLSSTableWriter

2016-04-11 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-10624:
--
Status: Open  (was: Ready to Commit)

> Support UDT in CQLSSTableWriter
> ---
>
> Key: CASSANDRA-10624
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10624
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Sylvain Lebresne
>Assignee: Alex Petrov
> Fix For: 3.x
>
> Attachments: 0001-Add-support-for-UDTs-to-CQLSStableWriter.patch, 
> 0001-Support-UDTs-in-CQLSStableWriterV2.patch
>
>
> As far as I can tell, there is no way to use a UDT with {{CQLSSTableWriter}} 
> since there is no way to declare it and thus {{CQLSSTableWriter.Builder}} 
> knows of no UDT when parsing the {{CREATE TABLE}} statement passed.
> In terms of API, I think the simplest would be to allow to pass types to the 
> builder in the same way we pass the table definition. So something like:
> {noformat}
> String type = "CREATE TYPE myKs.vertex (x int, y int, z int)";
> String schema = "CREATE TABLE myKs.myTable ("
>   + "  k int PRIMARY KEY,"
>   + "  s set"
>   + ")";
> String insert = ...;
> CQLSSTableWriter writer = CQLSSTableWriter.builder()
>   .inDirectory("path/to/directory")
>   .withType(type)
>   .forTable(schema)
>   .using(insert).build();
> {noformat}
> I'll note that, implementation-wise, this might be a bit simpler after the 
> changes of CASSANDRA-10365 (as it makes it easy to pass specific types 
> during the preparation of the create statement).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10624) Support UDT in CQLSSTableWriter

2016-04-11 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15235583#comment-15235583
 ] 

Aleksey Yeschenko commented on CASSANDRA-10624:
---

While skimming quickly before commit, I realised that this won't work with 
multiple types that depend on each other, unless you are really careful with the 
order in which you define them (and if you aren't, it'll fail with a cryptic error).

Ideally we should delay type parsing and resolution until the {{build()}} call (and 
by necessity do the same for table parsing).

Also, is there a good reason to slap {{synchronized}} there?
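
To make the ordering problem concrete, a hypothetical sketch using the 
{{withType()}} API proposed in the description (type and table names are 
invented): {{ks.address}} references {{ks.vertex}}, so with eager type parsing 
the inner type has to be registered first, and swapping the two {{withType()}} 
calls fails with the cryptic error mentioned above.

{code}
import org.apache.cassandra.io.sstable.CQLSSTableWriter;

public class DependentTypesExample
{
    public static CQLSSTableWriter buildWriter()
    {
        String vertex  = "CREATE TYPE ks.vertex (x int, y int, z int)";
        // ks.address depends on ks.vertex, so it must be registered second.
        String address = "CREATE TYPE ks.address (home frozen<vertex>, work frozen<vertex>)";

        return CQLSSTableWriter.builder()
                               .inDirectory("path/to/directory")
                               .withType(vertex)
                               .withType(address)
                               .forTable("CREATE TABLE ks.t (k int PRIMARY KEY, a frozen<address>)")
                               .using("INSERT INTO ks.t (k, a) VALUES (?, ?)")
                               .build();
    }
}
{code}

Delaying parsing until {{build()}} would let the builder resolve the definitions 
regardless of the order the caller passes them in.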

> Support UDT in CQLSSTableWriter
> ---
>
> Key: CASSANDRA-10624
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10624
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Sylvain Lebresne
>Assignee: Alex Petrov
> Fix For: 3.x
>
> Attachments: 0001-Add-support-for-UDTs-to-CQLSStableWriter.patch, 
> 0001-Support-UDTs-in-CQLSStableWriterV2.patch
>
>
> As far as I can tell, there is no way to use a UDT with {{CQLSSTableWriter}} 
> since there is no way to declare it and thus {{CQLSSTableWriter.Builder}} 
> knows of no UDT when parsing the {{CREATE TABLE}} statement passed.
> In terms of API, I think the simplest would be to allow to pass types to the 
> builder in the same way we pass the table definition. So something like:
> {noformat}
> String type = "CREATE TYPE myKs.vertex (x int, y int, z int)";
> String schema = "CREATE TABLE myKs.myTable ("
>   + "  k int PRIMARY KEY,"
>   + "  s set"
>   + ")";
> String insert = ...;
> CQLSSTableWriter writer = CQLSSTableWriter.builder()
>   .inDirectory("path/to/directory")
>   .withType(type)
>   .forTable(schema)
>   .using(insert).build();
> {noformat}
> I'll note that, implementation-wise, this might be a bit simpler after the 
> changes of CASSANDRA-10365 (as it makes it easy to pass specific types 
> during the preparation of the create statement).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8216) Select Count with Limit returns wrong value

2016-04-11 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15235570#comment-15235570
 ] 

Philip Thompson commented on CASSANDRA-8216:


Yes, those docs are wrong. However, those docs belong to DataStax, not the 
Apache Cassandra project. They have already been contacted about this specific 
inaccuracy, and there's nothing more we can do on our end. I'm sorry that you 
have been misled about how this feature works.

> Select Count with Limit returns wrong value
> ---
>
> Key: CASSANDRA-8216
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8216
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Philip Thompson
>Assignee: Benjamin Lerer
>  Labels: qa-resolved
> Fix For: 2.2.0 beta 1
>
>
> The dtest cql_tests.py:TestCQL.select_count_paging_test is failing on trunk 
> HEAD but not 2.1-HEAD.
> The query {code} select count(*) from test where field3 = false limit 1; 
> {code} is returning 2, where obviously it should only return 1 because of the 
> limit. This may end up having the same root cause of #8214, I will be 
> bisecting them both soon.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-8216) Select Count with Limit returns wrong value

2016-04-11 Thread Vadim TSes'ko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15235564#comment-15235564
 ] 

Vadim TSes'ko edited comment on CASSANDRA-8216 at 4/11/16 5:39 PM:
---

I'm sorry. Then [the 
docs|http://docs.datastax.com/en/cql/3.3/cql/cql_reference/select_r.html] 
should be fixed, because they say:
{code}
Specifying rows returned using LIMIT 
Using the LIMIT option, you can specify that the query return a limited number 
of rows.

SELECT COUNT(*) FROM big_table LIMIT 5;
SELECT COUNT(*) FROM big_table LIMIT 20;
The output of these statements if you had 105,291 rows in the database would 
be: 5, and 105,291. The cqlsh shell has a default row limit of 10,000. The 
Cassandra server and native protocol do not limit the number of rows that can 
be returned, although a timeout stops running queries to protect against 
running malformed queries that would cause system instability.
{code}


was (Author: incubos):
I'm sorry.

> Select Count with Limit returns wrong value
> ---
>
> Key: CASSANDRA-8216
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8216
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Philip Thompson
>Assignee: Benjamin Lerer
>  Labels: qa-resolved
> Fix For: 2.2.0 beta 1
>
>
> The dtest cql_tests.py:TestCQL.select_count_paging_test is failing on trunk 
> HEAD but not 2.1-HEAD.
> The query {code} select count(*) from test where field3 = false limit 1; 
> {code} is returning 2, where obviously it should only return 1 because of the 
> limit. This may end up having the same root cause of #8214, I will be 
> bisecting them both soon.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8216) Select Count with Limit returns wrong value

2016-04-11 Thread Vadim TSes'ko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15235564#comment-15235564
 ] 

Vadim TSes'ko commented on CASSANDRA-8216:
--

I'm sorry.

> Select Count with Limit returns wrong value
> ---
>
> Key: CASSANDRA-8216
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8216
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Philip Thompson
>Assignee: Benjamin Lerer
>  Labels: qa-resolved
> Fix For: 2.2.0 beta 1
>
>
> The dtest cql_tests.py:TestCQL.select_count_paging_test is failing on trunk 
> HEAD but not 2.1-HEAD.
> The query {code} select count(*) from test where field3 = false limit 1; 
> {code} is returning 2, where obviously it should only return 1 because of the 
> limit. This may end up having the same root cause of #8214, I will be 
> bisecting them both soon.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11526) Make ResultSetBuilder.rowToJson public

2016-04-11 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-11526:
--
   Resolution: Fixed
Fix Version/s: (was: 3.x)
   3.6
   Status: Resolved  (was: Ready to Commit)

> Make ResultSetBuilder.rowToJson public
> --
>
> Key: CASSANDRA-11526
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11526
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Jeremiah Jordan
>Assignee: Berenguer Blasi
> Fix For: 3.6
>
> Attachments: CASSANDRA-11526.txt
>
>
> Make ResultSetBuilder.rowToJson public.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11526) Make ResultSetBuilder.rowToJson public

2016-04-11 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15235561#comment-15235561
 ] 

Aleksey Yeschenko commented on CASSANDRA-11526:
---

CI went fine. Committed as 
[c2acf47168d3f03af0cd68cbd5570c84a321d713|https://github.com/apache/cassandra/commit/c2acf47168d3f03af0cd68cbd5570c84a321d713]
 to trunk, thanks.

> Make ResultSetBuilder.rowToJson public
> --
>
> Key: CASSANDRA-11526
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11526
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Jeremiah Jordan
>Assignee: Berenguer Blasi
> Fix For: 3.6
>
> Attachments: CASSANDRA-11526.txt
>
>
> Make ResultSetBuilder.rowToJson public.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: Make ResultSetBuilder.rowToJson public [Forced Update!]

2016-04-11 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/trunk 41c11262a -> c2acf4716 (forced update)


Make ResultSetBuilder.rowToJson public

patch by Berenguer Blasi; reviewed by Aleksey Yeschenko for
CASSANDRA-11526


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c2acf471
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c2acf471
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c2acf471

Branch: refs/heads/trunk
Commit: c2acf47168d3f03af0cd68cbd5570c84a321d713
Parents: 1aeeff4
Author: Bereng 
Authored: Thu Apr 7 17:52:16 2016 +0100
Committer: Aleksey Yeschenko 
Committed: Mon Apr 11 18:32:37 2016 +0100

--
 .../cassandra/cql3/selection/Selection.java | 54 ++--
 1 file changed, 27 insertions(+), 27 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c2acf471/src/java/org/apache/cassandra/cql3/selection/Selection.java
--
diff --git a/src/java/org/apache/cassandra/cql3/selection/Selection.java 
b/src/java/org/apache/cassandra/cql3/selection/Selection.java
index e0e1bd8..3bee743 100644
--- a/src/java/org/apache/cassandra/cql3/selection/Selection.java
+++ b/src/java/org/apache/cassandra/cql3/selection/Selection.java
@@ -258,6 +258,32 @@ public abstract class Selection
 .toString();
 }
 
+public static List rowToJson(List row, int 
protocolVersion, ResultSet.ResultMetadata metadata)
+{
+StringBuilder sb = new StringBuilder("{");
+for (int i = 0; i < metadata.names.size(); i++)
+{
+if (i > 0)
+sb.append(", ");
+
+ColumnSpecification spec = metadata.names.get(i);
+String columnName = spec.name.toString();
+if (!columnName.equals(columnName.toLowerCase(Locale.US)))
+columnName = "\"" + columnName + "\"";
+
+ByteBuffer buffer = row.get(i);
+sb.append('"');
+sb.append(Json.quoteAsJsonString(columnName));
+sb.append("\": ");
+if (buffer == null)
+sb.append("null");
+else
+sb.append(spec.type.toJSONString(buffer, protocolVersion));
+}
+sb.append("}");
+return 
Collections.singletonList(UTF8Type.instance.getSerializer().serialize(sb.toString()));
+}
+
 public class ResultSetBuilder
 {
 private final ResultSet resultSet;
@@ -367,35 +393,9 @@ public abstract class Selection
 private List getOutputRow(int protocolVersion)
 {
 List outputRow = 
selectors.getOutputRow(protocolVersion);
-return isJson ? rowToJson(outputRow, protocolVersion)
+return isJson ? rowToJson(outputRow, protocolVersion, metadata)
   : outputRow;
 }
-
-private List rowToJson(List row, int 
protocolVersion)
-{
-StringBuilder sb = new StringBuilder("{");
-for (int i = 0; i < metadata.names.size(); i++)
-{
-if (i > 0)
-sb.append(", ");
-
-ColumnSpecification spec = metadata.names.get(i);
-String columnName = spec.name.toString();
-if (!columnName.equals(columnName.toLowerCase(Locale.US)))
-columnName = "\"" + columnName + "\"";
-
-ByteBuffer buffer = row.get(i);
-sb.append('"');
-sb.append(Json.quoteAsJsonString(columnName));
-sb.append("\": ");
-if (buffer == null)
-sb.append("null");
-else
-sb.append(spec.type.toJSONString(buffer, protocolVersion));
-}
-sb.append("}");
-return 
Collections.singletonList(UTF8Type.instance.getSerializer().serialize(sb.toString()));
-}
 }
 
 private static interface Selectors



cassandra git commit: Make ResultSetBuilder.rowToJson public

2016-04-11 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/trunk 1aeeff47a -> 41c11262a


Make ResultSetBuilder.rowToJson public


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/41c11262
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/41c11262
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/41c11262

Branch: refs/heads/trunk
Commit: 41c11262a41151e6ebcd9fbe94d30619adfa1a24
Parents: 1aeeff4
Author: Bereng 
Authored: Thu Apr 7 17:52:16 2016 +0100
Committer: Aleksey Yeschenko 
Committed: Mon Apr 11 18:30:35 2016 +0100

--
 .../cassandra/cql3/selection/Selection.java | 54 ++--
 1 file changed, 27 insertions(+), 27 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/41c11262/src/java/org/apache/cassandra/cql3/selection/Selection.java
--
diff --git a/src/java/org/apache/cassandra/cql3/selection/Selection.java 
b/src/java/org/apache/cassandra/cql3/selection/Selection.java
index e0e1bd8..3bee743 100644
--- a/src/java/org/apache/cassandra/cql3/selection/Selection.java
+++ b/src/java/org/apache/cassandra/cql3/selection/Selection.java
@@ -258,6 +258,32 @@ public abstract class Selection
 .toString();
 }
 
+public static List rowToJson(List row, int 
protocolVersion, ResultSet.ResultMetadata metadata)
+{
+StringBuilder sb = new StringBuilder("{");
+for (int i = 0; i < metadata.names.size(); i++)
+{
+if (i > 0)
+sb.append(", ");
+
+ColumnSpecification spec = metadata.names.get(i);
+String columnName = spec.name.toString();
+if (!columnName.equals(columnName.toLowerCase(Locale.US)))
+columnName = "\"" + columnName + "\"";
+
+ByteBuffer buffer = row.get(i);
+sb.append('"');
+sb.append(Json.quoteAsJsonString(columnName));
+sb.append("\": ");
+if (buffer == null)
+sb.append("null");
+else
+sb.append(spec.type.toJSONString(buffer, protocolVersion));
+}
+sb.append("}");
+return 
Collections.singletonList(UTF8Type.instance.getSerializer().serialize(sb.toString()));
+}
+
 public class ResultSetBuilder
 {
 private final ResultSet resultSet;
@@ -367,35 +393,9 @@ public abstract class Selection
 private List getOutputRow(int protocolVersion)
 {
 List outputRow = 
selectors.getOutputRow(protocolVersion);
-return isJson ? rowToJson(outputRow, protocolVersion)
+return isJson ? rowToJson(outputRow, protocolVersion, metadata)
   : outputRow;
 }
-
-private List rowToJson(List row, int 
protocolVersion)
-{
-StringBuilder sb = new StringBuilder("{");
-for (int i = 0; i < metadata.names.size(); i++)
-{
-if (i > 0)
-sb.append(", ");
-
-ColumnSpecification spec = metadata.names.get(i);
-String columnName = spec.name.toString();
-if (!columnName.equals(columnName.toLowerCase(Locale.US)))
-columnName = "\"" + columnName + "\"";
-
-ByteBuffer buffer = row.get(i);
-sb.append('"');
-sb.append(Json.quoteAsJsonString(columnName));
-sb.append("\": ");
-if (buffer == null)
-sb.append("null");
-else
-sb.append(spec.type.toJSONString(buffer, protocolVersion));
-}
-sb.append("}");
-return 
Collections.singletonList(UTF8Type.instance.getSerializer().serialize(sb.toString()));
-}
 }
 
 private static interface Selectors



[jira] [Commented] (CASSANDRA-8216) Select Count with Limit returns wrong value

2016-04-11 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15235543#comment-15235543
 ] 

Philip Thompson commented on CASSANDRA-8216:


[~incubos], this is not a bug, as explained in this ticket.

> Select Count with Limit returns wrong value
> ---
>
> Key: CASSANDRA-8216
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8216
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Philip Thompson
>Assignee: Benjamin Lerer
>  Labels: qa-resolved
> Fix For: 2.2.0 beta 1
>
>
> The dtest cql_tests.py:TestCQL.select_count_paging_test is failing on trunk 
> HEAD but not 2.1-HEAD.
> The query {code} select count(*) from test where field3 = false limit 1; 
> {code} is returning 2, where obviously it should only return 1 because of the 
> limit. This may end up having the same root cause of #8214, I will be 
> bisecting them both soon.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10380) SELECT count within a partition does not respect LIMIT

2016-04-11 Thread Vadim TSes'ko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15235544#comment-15235544
 ] 

Vadim TSes'ko commented on CASSANDRA-10380:
---

I managed to reproduce the bug using Cassandra 2.2.5.
The table schema is:
{code:sql}
CREATE TABLE my_table (
u text,
t timeuuid,
PRIMARY KEY (u, t)
) WITH CLUSTERING ORDER BY (t DESC)
AND bloom_filter_fp_chance = 0.01
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = ''
AND compaction = {'min_threshold': '2', 'class': 
'org.apache.cassandra.db.compaction.DateTieredCompactionStrategy', 
'base_time_seconds': '1'}
AND compression = {'sstable_compression': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND dclocal_read_repair_chance = 0.0
AND default_time_to_live = 10
AND gc_grace_seconds = 0
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99.0PERCENTILE';
{code}
A query with wrong result:
{code:sql}
> select count(*) from my_table where u = 'user-0' limit 1;

 count
---
 8

(1 rows)
{code}

> SELECT count within a partition does not respect LIMIT
> --
>
> Key: CASSANDRA-10380
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10380
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Adam Holmberg
>Assignee: Benjamin Lerer
>Priority: Minor
> Attachments: 10380.txt
>
>
> {code}
> cassandra@cqlsh> create KEYSPACE test WITH replication = {'class': 
> 'SimpleStrategy', 'replication_factor': '1'};
> cassandra@cqlsh> use test;
> cassandra@cqlsh:test> create table t (k int, c int, v int, primary key (k, 
> c));
> cassandra@cqlsh:test> INSERT INTO t (k, c, v) VALUES (0, 0, 0);
> cassandra@cqlsh:test> INSERT INTO t (k, c, v) VALUES (0, 1, 0);
> cassandra@cqlsh:test> INSERT INTO t (k, c, v) VALUES (0, 2, 0);
> cassandra@cqlsh:test> select * from t where k = 0;
>  k | c | v
> ---+---+---
>  0 | 0 | 0
>  0 | 1 | 0
>  0 | 2 | 0
> (3 rows)
> cassandra@cqlsh:test> select count(*) from t where k = 0 limit 2;
>  count
> ---
>  3
> (1 rows)
> {code}
> Expected: count should return 2, according to limit.
> Actual: count of all rows in partition
> This manifests in 3.0, does not appear in 2.2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8216) Select Count with Limit returns wrong value

2016-04-11 Thread Vadim TSes'ko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15235535#comment-15235535
 ] 

Vadim TSes'ko commented on CASSANDRA-8216:
--

I managed to reproduce the bug using Cassandra 2.2.5.
The table schema is:
{code:sql}
CREATE TABLE my_table (
u text,
t timeuuid,
PRIMARY KEY (u, t)
) WITH CLUSTERING ORDER BY (t DESC)
AND bloom_filter_fp_chance = 0.01
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = ''
AND compaction = {'min_threshold': '2', 'class': 
'org.apache.cassandra.db.compaction.DateTieredCompactionStrategy', 
'base_time_seconds': '1'}
AND compression = {'sstable_compression': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND dclocal_read_repair_chance = 0.0
AND default_time_to_live = 10
AND gc_grace_seconds = 0
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99.0PERCENTILE';
{code}
A query with wrong result:
{code:sql}
> select count(*) from my_table where u = 'user-0' limit 1;

 count
---
 8

(1 rows)
{code}

> Select Count with Limit returns wrong value
> ---
>
> Key: CASSANDRA-8216
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8216
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Philip Thompson
>Assignee: Benjamin Lerer
>  Labels: qa-resolved
> Fix For: 2.2.0 beta 1
>
>
> The dtest cql_tests.py:TestCQL.select_count_paging_test is failing on trunk 
> HEAD but not 2.1-HEAD.
> The query {code} select count(*) from test where field3 = false limit 1; 
> {code} is returning 2, where obviously it should only return 1 because of the 
> limit. This may end up having the same root cause of #8214, I will be 
> bisecting them both soon.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11402) Alignment wrong in tpstats output for PerDiskMemtableFlushWriter

2016-04-11 Thread Joel Knighton (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Knighton updated CASSANDRA-11402:
--
Fix Version/s: 3.x

> Alignment wrong in tpstats output for PerDiskMemtableFlushWriter
> 
>
> Key: CASSANDRA-11402
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11402
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Joel Knighton
>Assignee: Ryan Magnusson
>Priority: Trivial
>  Labels: lhf
> Fix For: 3.x
>
> Attachments: 11402-trunk.txt
>
>
> With the accompanying designation of which memtableflushwriter it is, this 
> threadpool name is too long for the hardcoded padding in tpstats output.
> We should dynamically calculate padding so that we don't need to check this 
> every time we add a threadpool.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11402) Alignment wrong in tpstats output for PerDiskMemtableFlushWriter

2016-04-11 Thread Joel Knighton (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15235522#comment-15235522
 ] 

Joel Knighton commented on CASSANDRA-11402:
---

Hey Ryan, this definitely looks like the right idea, but my main reservation is 
that it duplicates some functionality we almost already have.

Inside the org.apache.cassandra.tools.nodetool.formatter package, there's a 
TableBuilder used to output tables for some other nodetool commands. I think it 
would be great to reuse this in printing the table for tpstats.

We should also fix the output of this table in 
org.apache.cassandra.utils.StatusLogger; to do so, we should probably move the 
TableBuilder out of the nodetool-descended packages and into something like 
FBUtilities instead.

Would you be interested in switching your patch to this approach?
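
For reference, a minimal standalone sketch of the dynamic-width idea (this is not 
the existing TableBuilder, just an illustration): measure the widest cell per 
column first, then pad every cell to that width, so a long pool name like 
PerDiskMemtableFlushWriter_0 cannot break the alignment.

{code}
import java.util.ArrayList;
import java.util.List;

public class PaddedTable
{
    private final List<String[]> rows = new ArrayList<>();

    public void add(String... columns)
    {
        rows.add(columns);
    }

    public String build()
    {
        // First pass: find the widest cell in each column.
        int columnCount = rows.get(0).length;
        int[] widths = new int[columnCount];
        for (String[] row : rows)
            for (int i = 0; i < columnCount; i++)
                widths[i] = Math.max(widths[i], row[i].length());

        // Second pass: left-align every cell to its column width plus two spaces.
        StringBuilder sb = new StringBuilder();
        for (String[] row : rows)
        {
            for (int i = 0; i < columnCount; i++)
                sb.append(String.format("%-" + (widths[i] + 2) + "s", row[i]));
            sb.append('\n');
        }
        return sb.toString();
    }
}
{code}

The existing TableBuilder presumably does something equivalent, which is why 
reusing it (and moving it somewhere shared) beats duplicating the logic here.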

> Alignment wrong in tpstats output for PerDiskMemtableFlushWriter
> 
>
> Key: CASSANDRA-11402
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11402
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Joel Knighton
>Assignee: Ryan Magnusson
>Priority: Trivial
>  Labels: lhf
> Attachments: 11402-trunk.txt
>
>
> With the accompanying designation of which memtableflushwriter it is, this 
> threadpool name is too long for the hardcoded padding in tpstats output.
> We should dynamically calculate padding so that we don't need to check this 
> every time we add a threadpool.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11402) Alignment wrong in tpstats output for PerDiskMemtableFlushWriter

2016-04-11 Thread Joel Knighton (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Knighton updated CASSANDRA-11402:
--
Assignee: Ryan Magnusson

> Alignment wrong in tpstats output for PerDiskMemtableFlushWriter
> 
>
> Key: CASSANDRA-11402
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11402
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Joel Knighton
>Assignee: Ryan Magnusson
>Priority: Trivial
>  Labels: lhf
> Attachments: 11402-trunk.txt
>
>
> With the accompanying designation of which memtableflushwriter it is, this 
> threadpool name is too long for the hardcoded padding in tpstats output.
> We should dynamically calculate padding so that we don't need to check this 
> every time we add a threadpool.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11526) Make ResultSetBuilder.rowToJson public

2016-04-11 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-11526:
--
Status: Ready to Commit  (was: Patch Available)

> Make ResultSetBuilder.rowToJson public
> --
>
> Key: CASSANDRA-11526
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11526
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Jeremiah Jordan
>Assignee: Berenguer Blasi
> Fix For: 3.x
>
> Attachments: CASSANDRA-11526.txt
>
>
> Make ResultSetBuilder.rowToJson public.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11097) Idle session timeout for secure environments

2016-04-11 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15235498#comment-15235498
 ] 

Jason Brown commented on CASSANDRA-11097:
-

bq. This is probably better handled by the client

I tend to disagree with this statement, as it then leaves the behaviour up to 
whatever the driver/connection agent chooses to implement. That being said, if 
we could find out what other databases/systems do, it might be instructive for 
how to proceed.
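
For reference, a minimal netty sketch of the {{IdleStateHandler}} approach from 
the description (handler names and the timeout value are invented; a real patch 
would presumably gate this behind a cassandra.yaml option and leave it off by 
default):

{code}
import java.util.concurrent.TimeUnit;

import io.netty.channel.ChannelDuplexHandler;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.timeout.IdleStateEvent;
import io.netty.handler.timeout.IdleStateHandler;

public class IdleSessionInitializer extends ChannelInitializer<SocketChannel>
{
    // Assumed value for illustration; would come from configuration in practice.
    private static final long IDLE_TIMEOUT_MINUTES = 30;

    @Override
    protected void initChannel(SocketChannel ch)
    {
        // Fires an IdleStateEvent when no reads or writes happen for the timeout.
        ch.pipeline().addLast("idleState",
                              new IdleStateHandler(0, 0, IDLE_TIMEOUT_MINUTES, TimeUnit.MINUTES));
        // Close the idle connection, forcing the client to reconnect and re-authenticate.
        ch.pipeline().addLast("idleCloser", new ChannelDuplexHandler()
        {
            @Override
            public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception
            {
                if (evt instanceof IdleStateEvent)
                    ctx.close();
                else
                    super.userEventTriggered(ctx, evt);
            }
        });
    }
}
{code}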

> Idle session timeout for secure environments
> 
>
> Key: CASSANDRA-11097
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11097
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jeff Jirsa
>Priority: Minor
>  Labels: lhf, ponies
>
> A thread on the user list pointed out that some use cases may prefer to have 
> a database disconnect sessions after some idle timeout. An example would be 
> an administrator who connected via ssh+cqlsh and then walked away. 
> Disconnecting that user and forcing it to re-authenticate could protect 
> against unauthorized access.
> It seems like it may be possible to do this using a netty 
> {{IdleStateHandler}} in a way that's low risk and perhaps off by default.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11521) Implement streaming for bulk read requests

2016-04-11 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15235496#comment-15235496
 ] 

Aleksey Yeschenko commented on CASSANDRA-11521:
---

[~brianmhess] Does C*-Spark integration use CL.LOCAL_ONE for reads? I know we 
do use QUORUM for writes, as a method for overload control.

A small hint on top of regular {{SELECT}} is a decent first step, but there is 
so much more we can do, in general, to make streaming faster, if we go for 
something purpose-built instead (even if built on top of Native protocol) - 
with proper support from the driver.

Among other things, the protocol is very wasteful for the cases where you 
stream all the data, especially if you have big partitions and a few clustering 
columns. While clustering column repetition as part of cell names is now fully 
gone from sstables and in-memory representation, in the protocol itself, with 
each row, we both repeat all the clustering columns - even if many rows share 
them - and the partition key columns. Could get rid of it, and all related 
redundant serialisation, if not building on top of ResultSet.

Secondly, it's not common at all to multiplex a single session between 
transactional and analytical workloads. So a single Spark Java driver session 
is going to be dealing only with streaming itself (maybe even only a single 
stream at a time?). We could add a new command ({{STREAM}}) with a query and, 
say, a throughput limit or a maximum # of unacknowledged rows/bytes, and just 
push server-side as much as we can without violating the limits. The stream 
would be cancellable.

Also, ideally, once we switch to the user-space page cache, these queries 
should not be polluting it.
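
To make the "maximum # of unacknowledged rows/bytes" idea concrete, a toy sketch 
of the flow control it implies (names and shape are invented, not a proposal for 
the actual command):

{code}
public final class StreamWindow
{
    private long permits;

    public StreamWindow(long initialPermits)
    {
        this.permits = initialPermits;
    }

    // Server side: take one permit per pushed row; stop pushing when the window is empty.
    public synchronized boolean tryAcquire()
    {
        if (permits == 0)
            return false;
        permits--;
        return true;
    }

    // Client side: acknowledge consumed rows to reopen the window.
    public synchronized void ack(long rows)
    {
        permits += rows;
    }
}
{code}

The server would push rows while {{tryAcquire()}} succeeds and pause otherwise, 
which also gives a natural point to notice an explicit cancel from the client.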

> Implement streaming for bulk read requests
> --
>
> Key: CASSANDRA-11521
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11521
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Local Write-Read Paths
>Reporter: Stefania
>Assignee: Stefania
> Fix For: 3.x
>
>
> Allow clients to stream data from a C* host, bypassing the coordination layer 
> and eliminating the need to query individual pages one by one.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11430) Add legacy notifications backward-support on deprecated repair methods

2016-04-11 Thread Yuki Morishita (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Morishita updated CASSANDRA-11430:
---
Fix Version/s: (was: 3.0.x)
   (was: 2.2.x)
   (was: 3.x)
   3.0.6
   3.6
   2.2.6

> Add legacy notifications backward-support on deprecated repair methods
> --
>
> Key: CASSANDRA-11430
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11430
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Nick Bailey
>Assignee: Paulo Motta
> Fix For: 2.2.6, 3.6, 3.0.6
>
>
> forceRepairRangeAsync is deprecated in 2.2/3.x series. It's still available 
> for older clients though. Unfortunately it sometimes hangs when you call it. 
> It looks like it completes fine but the notification to the client that the 
> operation is done is never sent. This is easiest to see by using nodetool 
> from 2.1 against a 3.x cluster.
> {noformat}
> [Nicks-MacBook-Pro:16:06:21 cassandra-2.1] cassandra$ ./bin/nodetool repair 
> -st 0 -et 1 OpsCenter
> [2016-03-24 16:06:50,165] Nothing to repair for keyspace 'OpsCenter'
> [Nicks-MacBook-Pro:16:06:50 cassandra-2.1] cassandra$
> [Nicks-MacBook-Pro:16:06:55 cassandra-2.1] cassandra$
> [Nicks-MacBook-Pro:16:06:55 cassandra-2.1] cassandra$ ./bin/nodetool repair 
> -st 0 -et 1 system_distributed
> ...
> ...
> {noformat}
> (I added the ellipses)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11430) Add legacy notifications backward-support on deprecated repair methods

2016-04-11 Thread Yuki Morishita (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Morishita updated CASSANDRA-11430:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Update and test look good.
Committed as {{3557d2e05c8d1059562de2a91c1b33b4fcfcc6eb}}.
Thanks!

> Add legacy notifications backward-support on deprecated repair methods
> --
>
> Key: CASSANDRA-11430
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11430
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Nick Bailey
>Assignee: Paulo Motta
> Fix For: 2.2.x, 3.0.x, 3.x
>
>
> forceRepairRangeAsync is deprecated in 2.2/3.x series. It's still available 
> for older clients though. Unfortunately it sometimes hangs when you call it. 
> It looks like it completes fine but the notification to the client that the 
> operation is done is never sent. This is easiest to see by using nodetool 
> from 2.1 against a 3.x cluster.
> {noformat}
> [Nicks-MacBook-Pro:16:06:21 cassandra-2.1] cassandra$ ./bin/nodetool repair 
> -st 0 -et 1 OpsCenter
> [2016-03-24 16:06:50,165] Nothing to repair for keyspace 'OpsCenter'
> [Nicks-MacBook-Pro:16:06:50 cassandra-2.1] cassandra$
> [Nicks-MacBook-Pro:16:06:55 cassandra-2.1] cassandra$
> [Nicks-MacBook-Pro:16:06:55 cassandra-2.1] cassandra$ ./bin/nodetool repair 
> -st 0 -et 1 system_distributed
> ...
> ...
> {noformat}
> (I added the ellipses)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[6/6] cassandra git commit: Merge branch 'cassandra-3.0' into trunk

2016-04-11 Thread yukim
Merge branch 'cassandra-3.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1aeeff47
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1aeeff47
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1aeeff47

Branch: refs/heads/trunk
Commit: 1aeeff47ada8a81965d5fd70f8a40c6511dd0cb8
Parents: 2ae587f f0cd326
Author: Yuki Morishita 
Authored: Mon Apr 11 11:31:56 2016 -0500
Committer: Yuki Morishita 
Committed: Mon Apr 11 11:31:56 2016 -0500

--
 CHANGES.txt |   1 +
 .../apache/cassandra/repair/RepairRunnable.java |  10 ++
 .../cassandra/service/ActiveRepairService.java  |  11 ++
 .../cassandra/service/StorageService.java   |  28 -
 .../progress/jmx/LegacyJMXProgressSupport.java  | 107 +
 .../jmx/LegacyJMXProgressSupportTest.java   | 118 +++
 6 files changed, 269 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1aeeff47/CHANGES.txt
--
diff --cc CHANGES.txt
index 3adde47,8c40e63..f399fd9
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -46,15 -5,10 +46,16 @@@ Merged from 3.0
 header is received (CASSANDRA-11464)
   * Validate that num_tokens and initial_token are consistent with one another 
(CASSANDRA-10120)
  Merged from 2.2:
+  * Make deprecated repair methods backward-compatible with previous 
notification service (CASSANDRA-11430)
   * IncomingStreamingConnection version check message wrong (CASSANDRA-11462)
  
 -3.0.5
 +
 +3.5
 + * StaticTokenTreeBuilder should respect posibility of duplicate tokens 
(CASSANDRA-11525)
 + * Correctly fix potential assertion error during compaction (CASSANDRA-11353)
 + * Avoid index segment stitching in RAM which lead to OOM on big SSTable 
files (CASSANDRA-11383)
 + * Fix clustering and row filters for LIKE queries on clustering columns 
(CASSANDRA-11397)
 +Merged from 3.0:
   * Fix rare NPE on schema upgrade from 2.x to 3.x (CASSANDRA-10943)
   * Improve backoff policy for cqlsh COPY FROM (CASSANDRA-11320)
   * Improve IF NOT EXISTS check in CREATE INDEX (CASSANDRA-11131)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1aeeff47/src/java/org/apache/cassandra/repair/RepairRunnable.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1aeeff47/src/java/org/apache/cassandra/service/StorageService.java
--



[4/6] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-04-11 Thread yukim
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f0cd3261
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f0cd3261
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f0cd3261

Branch: refs/heads/trunk
Commit: f0cd3261be946fd9835e8c978841bc930d1e07d9
Parents: 063b376 3557d2e
Author: Yuki Morishita 
Authored: Mon Apr 11 11:30:28 2016 -0500
Committer: Yuki Morishita 
Committed: Mon Apr 11 11:30:28 2016 -0500

--
 CHANGES.txt |   1 +
 .../apache/cassandra/repair/RepairRunnable.java |  10 ++
 .../cassandra/service/ActiveRepairService.java  |  11 ++
 .../cassandra/service/StorageService.java   |  28 -
 .../progress/jmx/LegacyJMXProgressSupport.java  | 107 +
 .../jmx/LegacyJMXProgressSupportTest.java   | 118 +++
 6 files changed, 269 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f0cd3261/CHANGES.txt
--
diff --cc CHANGES.txt
index 47e6105,e935e57..8c40e63
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,29 -1,6 +1,30 @@@
 -2.2.6
 +3.0.6
 + * LogAwareFileLister should only use OLD sstable files in current folder to 
determine disk consistency (CASSANDRA-11470)
 + * Notify indexers of expired rows during compaction (CASSANDRA-11329)
 + * Properly respond with ProtocolError when a v1/v2 native protocol
 +   header is received (CASSANDRA-11464)
 + * Validate that num_tokens and initial_token are consistent with one another 
(CASSANDRA-10120)
 +Merged from 2.2:
+  * Make deprecated repair methods backward-compatible with previous 
notification service (CASSANDRA-11430)
   * IncomingStreamingConnection version check message wrong (CASSANDRA-11462)
 +
 +3.0.5
 + * Fix rare NPE on schema upgrade from 2.x to 3.x (CASSANDRA-10943)
 + * Improve backoff policy for cqlsh COPY FROM (CASSANDRA-11320)
 + * Improve IF NOT EXISTS check in CREATE INDEX (CASSANDRA-11131)
 + * Upgrade ohc to 0.4.3
 + * Enable SO_REUSEADDR for JMX RMI server sockets (CASSANDRA-11093)
 + * Allocate merkletrees with the correct size (CASSANDRA-11390)
 + * Support streaming pre-3.0 sstables (CASSANDRA-10990)
 + * Add backpressure to compressed commit log (CASSANDRA-10971)
 + * SSTableExport supports secondary index tables (CASSANDRA-11330)
 + * Fix sstabledump to include missing info in debug output (CASSANDRA-11321)
 + * Establish and implement canonical bulk reading workload(s) 
(CASSANDRA-10331)
 + * Fix paging for IN queries on tables without clustering columns 
(CASSANDRA-11208)
 + * Remove recursive call from CompositesSearcher (CASSANDRA-11304)
 + * Fix filtering on non-primary key columns for queries without index 
(CASSANDRA-6377)
 + * Fix sstableloader fail when using materialized view (CASSANDRA-11275)
 +Merged from 2.2:
   * DatabaseDescriptor should log stacktrace in case of Eception during seed 
provider creation (CASSANDRA-11312)
   * Use canonical path for directory in SSTable descriptor (CASSANDRA-10587)
   * Add cassandra-stress keystore option (CASSANDRA-9325)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f0cd3261/src/java/org/apache/cassandra/repair/RepairRunnable.java
--
diff --cc src/java/org/apache/cassandra/repair/RepairRunnable.java
index eb25457,d2b6ab6..354cb2a
--- a/src/java/org/apache/cassandra/repair/RepairRunnable.java
+++ b/src/java/org/apache/cassandra/repair/RepairRunnable.java
@@@ -230,8 -227,13 +230,13 @@@ public class RepairRunnable extends Wra
  {
  public void onSuccess(RepairSessionResult result)
  {
+ /**
+  * If the success message below is modified, it must also 
be updated on
+  * {@link 
org.apache.cassandra.utils.progress.jmx.LegacyJMXProgressSupport}
+  * for backward-compatibility support.
+  */
  String message = String.format("Repair session %s for 
range %s finished", session.getId(),
 -   
session.getRange().toString());
 +   
session.getRanges().toString());
  logger.info(message);
  fireProgressEvent(tag, new 
ProgressEvent(ProgressEventType.PROGRESS,
   
progress.incrementAndGet(),
@@@ -241,8 -243,13 +246,13 @@@
  
  public void onFailure(Throwable t)
  {
+ /**
+  * If the failure message below is modified, it must also 
be updated on
+ 

[5/6] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-04-11 Thread yukim
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f0cd3261
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f0cd3261
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f0cd3261

Branch: refs/heads/cassandra-3.0
Commit: f0cd3261be946fd9835e8c978841bc930d1e07d9
Parents: 063b376 3557d2e
Author: Yuki Morishita 
Authored: Mon Apr 11 11:30:28 2016 -0500
Committer: Yuki Morishita 
Committed: Mon Apr 11 11:30:28 2016 -0500

--
 CHANGES.txt |   1 +
 .../apache/cassandra/repair/RepairRunnable.java |  10 ++
 .../cassandra/service/ActiveRepairService.java  |  11 ++
 .../cassandra/service/StorageService.java   |  28 -
 .../progress/jmx/LegacyJMXProgressSupport.java  | 107 +
 .../jmx/LegacyJMXProgressSupportTest.java   | 118 +++
 6 files changed, 269 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f0cd3261/CHANGES.txt
--
diff --cc CHANGES.txt
index 47e6105,e935e57..8c40e63
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,29 -1,6 +1,30 @@@
 -2.2.6
 +3.0.6
 + * LogAwareFileLister should only use OLD sstable files in current folder to 
determine disk consistency (CASSANDRA-11470)
 + * Notify indexers of expired rows during compaction (CASSANDRA-11329)
 + * Properly respond with ProtocolError when a v1/v2 native protocol
 +   header is received (CASSANDRA-11464)
 + * Validate that num_tokens and initial_token are consistent with one another 
(CASSANDRA-10120)
 +Merged from 2.2:
+  * Make deprecated repair methods backward-compatible with previous 
notification service (CASSANDRA-11430)
   * IncomingStreamingConnection version check message wrong (CASSANDRA-11462)
 +
 +3.0.5
 + * Fix rare NPE on schema upgrade from 2.x to 3.x (CASSANDRA-10943)
 + * Improve backoff policy for cqlsh COPY FROM (CASSANDRA-11320)
 + * Improve IF NOT EXISTS check in CREATE INDEX (CASSANDRA-11131)
 + * Upgrade ohc to 0.4.3
 + * Enable SO_REUSEADDR for JMX RMI server sockets (CASSANDRA-11093)
 + * Allocate merkletrees with the correct size (CASSANDRA-11390)
 + * Support streaming pre-3.0 sstables (CASSANDRA-10990)
 + * Add backpressure to compressed commit log (CASSANDRA-10971)
 + * SSTableExport supports secondary index tables (CASSANDRA-11330)
 + * Fix sstabledump to include missing info in debug output (CASSANDRA-11321)
 + * Establish and implement canonical bulk reading workload(s) 
(CASSANDRA-10331)
 + * Fix paging for IN queries on tables without clustering columns 
(CASSANDRA-11208)
 + * Remove recursive call from CompositesSearcher (CASSANDRA-11304)
 + * Fix filtering on non-primary key columns for queries without index 
(CASSANDRA-6377)
 + * Fix sstableloader fail when using materialized view (CASSANDRA-11275)
 +Merged from 2.2:
   * DatabaseDescriptor should log stacktrace in case of Eception during seed 
provider creation (CASSANDRA-11312)
   * Use canonical path for directory in SSTable descriptor (CASSANDRA-10587)
   * Add cassandra-stress keystore option (CASSANDRA-9325)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f0cd3261/src/java/org/apache/cassandra/repair/RepairRunnable.java
--
diff --cc src/java/org/apache/cassandra/repair/RepairRunnable.java
index eb25457,d2b6ab6..354cb2a
--- a/src/java/org/apache/cassandra/repair/RepairRunnable.java
+++ b/src/java/org/apache/cassandra/repair/RepairRunnable.java
@@@ -230,8 -227,13 +230,13 @@@ public class RepairRunnable extends Wra
  {
  public void onSuccess(RepairSessionResult result)
  {
+ /**
+  * If the success message below is modified, it must also 
be updated on
+  * {@link 
org.apache.cassandra.utils.progress.jmx.LegacyJMXProgressSupport}
+  * for backward-compatibility support.
+  */
  String message = String.format("Repair session %s for 
range %s finished", session.getId(),
 -   
session.getRange().toString());
 +   
session.getRanges().toString());
  logger.info(message);
  fireProgressEvent(tag, new 
ProgressEvent(ProgressEventType.PROGRESS,
   
progress.incrementAndGet(),
@@@ -241,8 -243,13 +246,13 @@@
  
  public void onFailure(Throwable t)
  {
+ /**
+  * If the failure message below is modified, it must also 
be updated o

[2/6] cassandra git commit: Make deprecated repair methods backward-compatible with previous notification service

2016-04-11 Thread yukim
Make deprecated repair methods backward-compatible with previous notification 
service

patch by Paulo Motta; reviewed by Yuki Morishita for CASSANDRA-11430


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3557d2e0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3557d2e0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3557d2e0

Branch: refs/heads/cassandra-3.0
Commit: 3557d2e05c8d1059562de2a91c1b33b4fcfcc6eb
Parents: e22faeb
Author: Paulo Motta 
Authored: Tue Apr 5 16:58:06 2016 -0300
Committer: Yuki Morishita 
Committed: Mon Apr 11 11:28:44 2016 -0500

--
 CHANGES.txt |   1 +
 .../apache/cassandra/repair/RepairRunnable.java |  10 ++
 .../cassandra/service/ActiveRepairService.java  |  11 ++
 .../cassandra/service/StorageService.java   |  28 -
 .../progress/jmx/LegacyJMXProgressSupport.java  | 108 +
 .../jmx/LegacyJMXProgressSupportTest.java   | 118 +++
 6 files changed, 270 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3557d2e0/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index b6438b8..e935e57 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.6
+ * Make deprecated repair methods backward-compatible with previous 
notification service (CASSANDRA-11430)
  * IncomingStreamingConnection version check message wrong (CASSANDRA-11462)
  * DatabaseDescriptor should log stacktrace in case of Eception during seed 
provider creation (CASSANDRA-11312)
  * Use canonical path for directory in SSTable descriptor (CASSANDRA-10587)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3557d2e0/src/java/org/apache/cassandra/repair/RepairRunnable.java
--
diff --git a/src/java/org/apache/cassandra/repair/RepairRunnable.java 
b/src/java/org/apache/cassandra/repair/RepairRunnable.java
index 91ac82a..d2b6ab6 100644
--- a/src/java/org/apache/cassandra/repair/RepairRunnable.java
+++ b/src/java/org/apache/cassandra/repair/RepairRunnable.java
@@ -227,6 +227,11 @@ public class RepairRunnable extends WrappedRunnable 
implements ProgressEventNoti
 {
 public void onSuccess(RepairSessionResult result)
 {
+/**
+ * If the success message below is modified, it must also 
be updated on
+ * {@link 
org.apache.cassandra.utils.progress.jmx.LegacyJMXProgressSupport}
+ * for backward-compatibility support.
+ */
 String message = String.format("Repair session %s for 
range %s finished", session.getId(),

session.getRange().toString());
 logger.info(message);
@@ -238,6 +243,11 @@ public class RepairRunnable extends WrappedRunnable 
implements ProgressEventNoti
 
 public void onFailure(Throwable t)
 {
+/**
+ * If the failure message below is modified, it must also 
be updated on
+ * {@link 
org.apache.cassandra.utils.progress.jmx.LegacyJMXProgressSupport}
+ * for backward-compatibility support.
+ */
 String message = String.format("Repair session %s for 
range %s failed with error %s",
session.getId(), 
session.getRange().toString(), t.getMessage());
 logger.error(message, t);

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3557d2e0/src/java/org/apache/cassandra/service/ActiveRepairService.java
--
diff --git a/src/java/org/apache/cassandra/service/ActiveRepairService.java 
b/src/java/org/apache/cassandra/service/ActiveRepairService.java
index 39be051..21cdeae 100644
--- a/src/java/org/apache/cassandra/service/ActiveRepairService.java
+++ b/src/java/org/apache/cassandra/service/ActiveRepairService.java
@@ -77,6 +77,17 @@ import org.apache.cassandra.utils.concurrent.Refs;
  */
 public class ActiveRepairService
 {
+/**
+ * @deprecated this statuses are from the previous JMX notification 
service,
+ * which will be deprecated on 4.0. For statuses of the new notification
+ * service, see {@link 
org.apache.cassandra.streaming.StreamEvent.ProgressEvent}
+ */
+@Deprecated
+public static enum Status
+{
+STARTED, SESSION_SUCCESS, SESSION_FAILED, FINISHED
+}
+
 public static CassandraVersion SUPPORTS_GLOBAL_PREPARE_FLAG_VERSION = new 
CassandraVersion("2.2.1");
 
 priva
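
For context on the change above: the commit adds a @Deprecated ActiveRepairService.Status enum and javadoc warnings that the "Repair session ... finished/failed" wording must stay in sync with LegacyJMXProgressSupport. Below is a minimal sketch of how such a compatibility shim could map the new progress messages back onto the legacy statuses; apart from the Status values taken from the diff, the class and method names are assumptions for illustration, not the actual LegacyJMXProgressSupport code.

// Hedged sketch: how a legacy progress adapter *might* map new-style repair
// progress messages back onto the deprecated Status enum. Names are
// illustrative assumptions, not project code.
import java.util.Optional;
import java.util.regex.Pattern;

public final class LegacyRepairStatusMapper
{
    // Mirrors the deprecated ActiveRepairService.Status enum added in the commit.
    enum Status { STARTED, SESSION_SUCCESS, SESSION_FAILED, FINISHED }

    // The commit's javadoc warns that the "Repair session ... finished/failed"
    // wording must stay in sync with the legacy JMX support class; these
    // patterns assume that exact wording.
    private static final Pattern SESSION_SUCCESS =
            Pattern.compile("Repair session .+ for range .+ finished");
    private static final Pattern SESSION_FAILED =
            Pattern.compile("Repair session .+ for range .+ failed with error .+");

    // Maps a repair progress message to the legacy status, if it matches.
    public static Optional<Status> map(String message)
    {
        if (SESSION_FAILED.matcher(message).matches())
            return Optional.of(Status.SESSION_FAILED);
        if (SESSION_SUCCESS.matcher(message).matches())
            return Optional.of(Status.SESSION_SUCCESS);
        return Optional.empty();
    }

    public static void main(String[] args)
    {
        System.out.println(map("Repair session 42 for range (0,100] finished"));
        // -> Optional[SESSION_SUCCESS]
    }
}

The point of a message-matching adapter of this shape is that legacy JMX consumers only ever saw the old Status values, so the deprecated notification path can keep emitting them without changing the new progress event API.
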

[1/6] cassandra git commit: Make deprecated repair methods backward-compatible with previous notification service

2016-04-11 Thread yukim
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 e22faeb8c -> 3557d2e05
  refs/heads/cassandra-3.0 063b37614 -> f0cd3261b
  refs/heads/trunk 2ae587f5c -> 1aeeff47a


Make deprecated repair methods backward-compatible with previous notification 
service

patch by Paulo Motta; reviewed by Yuki Morishita for CASSANDRA-11430


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3557d2e0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3557d2e0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3557d2e0

Branch: refs/heads/cassandra-2.2
Commit: 3557d2e05c8d1059562de2a91c1b33b4fcfcc6eb
Parents: e22faeb
Author: Paulo Motta 
Authored: Tue Apr 5 16:58:06 2016 -0300
Committer: Yuki Morishita 
Committed: Mon Apr 11 11:28:44 2016 -0500

--
 CHANGES.txt |   1 +
 .../apache/cassandra/repair/RepairRunnable.java |  10 ++
 .../cassandra/service/ActiveRepairService.java  |  11 ++
 .../cassandra/service/StorageService.java   |  28 -
 .../progress/jmx/LegacyJMXProgressSupport.java  | 108 +
 .../jmx/LegacyJMXProgressSupportTest.java   | 118 +++
 6 files changed, 270 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3557d2e0/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index b6438b8..e935e57 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.6
+ * Make deprecated repair methods backward-compatible with previous 
notification service (CASSANDRA-11430)
  * IncomingStreamingConnection version check message wrong (CASSANDRA-11462)
  * DatabaseDescriptor should log stacktrace in case of Eception during seed 
provider creation (CASSANDRA-11312)
  * Use canonical path for directory in SSTable descriptor (CASSANDRA-10587)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3557d2e0/src/java/org/apache/cassandra/repair/RepairRunnable.java
--
diff --git a/src/java/org/apache/cassandra/repair/RepairRunnable.java 
b/src/java/org/apache/cassandra/repair/RepairRunnable.java
index 91ac82a..d2b6ab6 100644
--- a/src/java/org/apache/cassandra/repair/RepairRunnable.java
+++ b/src/java/org/apache/cassandra/repair/RepairRunnable.java
@@ -227,6 +227,11 @@ public class RepairRunnable extends WrappedRunnable 
implements ProgressEventNoti
 {
 public void onSuccess(RepairSessionResult result)
 {
+/**
+ * If the success message below is modified, it must also 
be updated on
+ * {@link 
org.apache.cassandra.utils.progress.jmx.LegacyJMXProgressSupport}
+ * for backward-compatibility support.
+ */
 String message = String.format("Repair session %s for 
range %s finished", session.getId(),

session.getRange().toString());
 logger.info(message);
@@ -238,6 +243,11 @@ public class RepairRunnable extends WrappedRunnable 
implements ProgressEventNoti
 
 public void onFailure(Throwable t)
 {
+/**
+ * If the failure message below is modified, it must also 
be updated on
+ * {@link 
org.apache.cassandra.utils.progress.jmx.LegacyJMXProgressSupport}
+ * for backward-compatibility support.
+ */
 String message = String.format("Repair session %s for 
range %s failed with error %s",
session.getId(), 
session.getRange().toString(), t.getMessage());
 logger.error(message, t);

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3557d2e0/src/java/org/apache/cassandra/service/ActiveRepairService.java
--
diff --git a/src/java/org/apache/cassandra/service/ActiveRepairService.java 
b/src/java/org/apache/cassandra/service/ActiveRepairService.java
index 39be051..21cdeae 100644
--- a/src/java/org/apache/cassandra/service/ActiveRepairService.java
+++ b/src/java/org/apache/cassandra/service/ActiveRepairService.java
@@ -77,6 +77,17 @@ import org.apache.cassandra.utils.concurrent.Refs;
  */
 public class ActiveRepairService
 {
+/**
+ * @deprecated this statuses are from the previous JMX notification 
service,
+ * which will be deprecated on 4.0. For statuses of the new notification
+ * service, see {@link 
org.apache.cassandra.streaming.StreamEvent.ProgressEvent}
+ */
+@Deprecated
+public static enum Status
+{
+   

[3/6] cassandra git commit: Make deprecated repair methods backward-compatible with previous notification service

2016-04-11 Thread yukim
Make deprecated repair methods backward-compatible with previous notification 
service

patch by Paulo Motta; reviewed by Yuki Morishita for CASSANDRA-11430


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3557d2e0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3557d2e0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3557d2e0

Branch: refs/heads/trunk
Commit: 3557d2e05c8d1059562de2a91c1b33b4fcfcc6eb
Parents: e22faeb
Author: Paulo Motta 
Authored: Tue Apr 5 16:58:06 2016 -0300
Committer: Yuki Morishita 
Committed: Mon Apr 11 11:28:44 2016 -0500

--
 CHANGES.txt |   1 +
 .../apache/cassandra/repair/RepairRunnable.java |  10 ++
 .../cassandra/service/ActiveRepairService.java  |  11 ++
 .../cassandra/service/StorageService.java   |  28 -
 .../progress/jmx/LegacyJMXProgressSupport.java  | 108 +
 .../jmx/LegacyJMXProgressSupportTest.java   | 118 +++
 6 files changed, 270 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3557d2e0/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index b6438b8..e935e57 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.6
+ * Make deprecated repair methods backward-compatible with previous 
notification service (CASSANDRA-11430)
  * IncomingStreamingConnection version check message wrong (CASSANDRA-11462)
  * DatabaseDescriptor should log stacktrace in case of Eception during seed 
provider creation (CASSANDRA-11312)
  * Use canonical path for directory in SSTable descriptor (CASSANDRA-10587)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3557d2e0/src/java/org/apache/cassandra/repair/RepairRunnable.java
--
diff --git a/src/java/org/apache/cassandra/repair/RepairRunnable.java 
b/src/java/org/apache/cassandra/repair/RepairRunnable.java
index 91ac82a..d2b6ab6 100644
--- a/src/java/org/apache/cassandra/repair/RepairRunnable.java
+++ b/src/java/org/apache/cassandra/repair/RepairRunnable.java
@@ -227,6 +227,11 @@ public class RepairRunnable extends WrappedRunnable 
implements ProgressEventNoti
 {
 public void onSuccess(RepairSessionResult result)
 {
+/**
+ * If the success message below is modified, it must also 
be updated on
+ * {@link 
org.apache.cassandra.utils.progress.jmx.LegacyJMXProgressSupport}
+ * for backward-compatibility support.
+ */
 String message = String.format("Repair session %s for 
range %s finished", session.getId(),

session.getRange().toString());
 logger.info(message);
@@ -238,6 +243,11 @@ public class RepairRunnable extends WrappedRunnable 
implements ProgressEventNoti
 
 public void onFailure(Throwable t)
 {
+/**
+ * If the failure message below is modified, it must also 
be updated on
+ * {@link 
org.apache.cassandra.utils.progress.jmx.LegacyJMXProgressSupport}
+ * for backward-compatibility support.
+ */
 String message = String.format("Repair session %s for 
range %s failed with error %s",
session.getId(), 
session.getRange().toString(), t.getMessage());
 logger.error(message, t);

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3557d2e0/src/java/org/apache/cassandra/service/ActiveRepairService.java
--
diff --git a/src/java/org/apache/cassandra/service/ActiveRepairService.java 
b/src/java/org/apache/cassandra/service/ActiveRepairService.java
index 39be051..21cdeae 100644
--- a/src/java/org/apache/cassandra/service/ActiveRepairService.java
+++ b/src/java/org/apache/cassandra/service/ActiveRepairService.java
@@ -77,6 +77,17 @@ import org.apache.cassandra.utils.concurrent.Refs;
  */
 public class ActiveRepairService
 {
+/**
+ * @deprecated this statuses are from the previous JMX notification 
service,
+ * which will be deprecated on 4.0. For statuses of the new notification
+ * service, see {@link 
org.apache.cassandra.streaming.StreamEvent.ProgressEvent}
+ */
+@Deprecated
+public static enum Status
+{
+STARTED, SESSION_SUCCESS, SESSION_FAILED, FINISHED
+}
+
 public static CassandraVersion SUPPORTS_GLOBAL_PREPARE_FLAG_VERSION = new 
CassandraVersion("2.2.1");
 
 private stati

[jira] [Updated] (CASSANDRA-11546) Stress doesn't respect case-sensitive column names when building insert queries

2016-04-11 Thread Joel Knighton (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Knighton updated CASSANDRA-11546:
--
Reproduced In: 3.0.5, 2.2.5, 3.6  (was: 2.2.5, 3.0.5, 3.6)
   Labels: lhf  (was: )

> Stress doesn't respect case-sensitive column names when building insert 
> queries
> ---
>
> Key: CASSANDRA-11546
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11546
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Joel Knighton
>Priority: Trivial
>  Labels: lhf
>
> When using a custom stress profile, if the schema uses case sensitive column 
> names, stress doesn't respect case sensitivity when building insert/update 
> statements.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
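
A minimal sketch of the kind of fix CASSANDRA-11546 calls for, assuming the stress tool builds its insert/update CQL as plain strings: double-quote each identifier (escaping embedded quotes) so case-sensitive column names are preserved. The class and helper names below are illustrative, not cassandra-stress source.

// Hedged sketch: quoting column names when building an INSERT so that
// case-sensitive ("quoted") identifiers round-trip correctly.
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public final class QuotedInsertBuilder
{
    // Wrap the identifier in double quotes and escape embedded quotes,
    // per CQL identifier quoting rules.
    static String quote(String identifier)
    {
        return '"' + identifier.replace("\"", "\"\"") + '"';
    }

    static String buildInsert(String keyspace, String table, List<String> columns)
    {
        String cols = columns.stream().map(QuotedInsertBuilder::quote)
                             .collect(Collectors.joining(", "));
        String binds = columns.stream().map(c -> "?")
                              .collect(Collectors.joining(", "));
        return String.format("INSERT INTO %s.%s (%s) VALUES (%s)",
                             quote(keyspace), quote(table), cols, binds);
    }

    public static void main(String[] args)
    {
        // With quoting, "UserId" keeps its case instead of being folded to userid.
        System.out.println(buildInsert("ks", "MixedCase", Arrays.asList("UserId", "value")));
        // -> INSERT INTO "ks"."MixedCase" ("UserId", "value") VALUES (?, ?)
    }
}
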

