[jira] [Commented] (CASSANDRA-10848) Upgrade paging dtests involving deletion flap on CassCI

2016-08-05 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15409018#comment-15409018
 ] 

Sylvain Lebresne commented on CASSANDRA-10848:
--

bq. I created a dtest fix for the 2.2/3.0 problem

We should really do that for any 2.x -> 3.0 upgrade, so 2.1/3.0 too.

> Upgrade paging dtests involving deletion flap on CassCI
> ---
>
> Key: CASSANDRA-10848
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10848
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jim Witschey
>Assignee: DS Test Eng
>  Labels: dtest
> Fix For: 3.0.x, 3.x
>
>
> A number of dtests in {{upgrade_tests.paging_tests}} that involve 
> deletion flap with the following error:
> {code}
> Requested pages were not delivered before timeout.
> {code}
> This may just be an effect of CASSANDRA-10730, but it's worth having a look 
> at separately. Here are some examples of tests flapping in this way:



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12387) List Append order is wrong

2016-08-05 Thread ZhaoYang (JIRA)
ZhaoYang created CASSANDRA-12387:


 Summary: List Append order is wrong
 Key: CASSANDRA-12387
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12387
 Project: Cassandra
  Issue Type: Bug
 Environment: cassandra 2.1.13
Reporter: ZhaoYang
 Fix For: 2.1.16


"INSERT INTO collection_type(key,normal_column,list_column) VALUES 
('k','value',[ '#293847','#323442' ]);"

"UPDATE collection_type SET list_column = list_column + ['#611987'] WHERE key='k';"

Using the 2.1.7.1 java driver to run the UPDATE query, the output is: '#611987', 
'#293847','#323442'

Using DevCenter 1.3.1 to execute the UPDATE query, the result is in the correct 
order: '#293847','#323442', '#611987'

The error happened in a 3-node cluster. Locally, a single node works properly.
(all Cassandra 2.1.13; it also happened with the 3.0.x driver)

Is it related to internal message processing?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12379) CQLSH completion test broken by #12236

2016-08-05 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15409039#comment-15409039
 ] 

Sylvain Lebresne commented on CASSANDRA-12379:
--

bq. Is a {{cherry-pick}} plus {{merge -s ours}} the correct way to backport 
this commit to the 3.8 branch?

Yes. Can I let you do that and check it does fix the issue?
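The {{cherry-pick}} + {{merge -s ours}} flow can be sketched in a throwaway
repo (branch and file names here are illustrative, not the real Cassandra
tree; a local commit stands in for the cherry-pick):

```shell
# Sketch of backporting with cherry-pick + "merge -s ours": the fix lands on
# the older branch, then the merge connects history upward WITHOUT changing
# the newer branch's tree.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email editor@example.com
git config user.name editor
echo base > f.txt; git add f.txt; git commit -qm base
git branch cassandra-3.8
git checkout -qb cassandra-3.9
echo v39 > f.txt; git commit -qam "3.9-only change"
# Fix committed on 3.8 (stand-in for `git cherry-pick <sha>`):
git checkout -q cassandra-3.8
echo fix > fix.txt; git add fix.txt; git commit -qm "test fix for 3.8"
# Merge upward with "-s ours": 3.9 keeps its own tree, so the 3.8-only
# commit is recorded in history but does not alter 3.9's files.
git checkout -q cassandra-3.9
git merge -q -s ours cassandra-3.8 -m "Merge branch 'cassandra-3.8' into cassandra-3.9"
cat f.txt    # still the 3.9 content; fix.txt does not exist on 3.9
```

The point of {{-s ours}} is that future merges from 3.8 will no longer try to
re-apply the 3.8-only fix.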

> CQLSH completion test broken by #12236
> --
>
> Key: CASSANDRA-12379
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12379
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sylvain Lebresne
>Assignee: Stefania
>
> The commit of CASSANDRA-12236 appears to have broken [cqlsh completion 
> tests|http://cassci.datastax.com/job/cassandra-3.8_cqlsh_tests/6/cython=yes,label=ctool-lab/testReport/junit/cqlshlib.test.test_cqlsh_completion/TestCqlshCompletion/test_complete_in_create_columnfamily/].
>  From the error message, I suspect this may have to do with the 
> test comparing the completion output to what DESCRIBE shows, and the latter now 
> doesn't include the {{cdc}} option by default.
> Anyway, I'm not really familiar with cqlsh completion nor its tests, so I'm 
> not sure what the best option is. I don't think we want to remove {{cdc}} from 
> completion, so I suspect we want to either special-case the test somehow (no 
> clue how to do that), or make the test run with cdc enabled so it doesn't 
> complain (which I think mostly means applying a change to the CI environment, 
> since it seems the tests themselves don't spin up the cluster).
> Anyway, I'm pushing this fix to someone else as I'm not competent here and I 
> haven't even been able to run those cqlsh tests so far (getting stuck at the 
> test telling me "No appropriate python interpreter found", even though I 
> totally have an appropriate interpreter and cqlsh works perfectly if I 
> execute it directly). 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: Make it possible to compact a given token range

2016-08-05 Thread marcuse
Repository: cassandra
Updated Branches:
  refs/heads/trunk c49bc639f -> a5d095e62


Make it possible to compact a given token range

Patch by Vishy Kasar; reviewed by marcuse for CASSANDRA-10643


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a5d095e6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a5d095e6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a5d095e6

Branch: refs/heads/trunk
Commit: a5d095e62ed459aefbc8c25e2bbcd46969a48eec
Parents: c49bc63
Author: Vishy Kasar 
Authored: Thu Aug 4 11:06:48 2016 +0200
Committer: Marcus Eriksson 
Committed: Fri Aug 5 09:21:04 2016 +0200

--
 CHANGES.txt |  1 +
 doc/source/operating/compaction.rst |  6 ++
 .../apache/cassandra/db/ColumnFamilyStore.java  |  6 +-
 .../cassandra/db/ColumnFamilyStoreMBean.java|  8 +++
 .../db/compaction/CompactionManager.java| 53 ++
 .../compaction/CompactionStrategyManager.java   |  1 -
 .../compaction/LeveledCompactionStrategy.java   |  9 +--
 .../cassandra/service/StorageService.java   | 13 +++-
 .../cassandra/service/StorageServiceMBean.java  |  5 ++
 .../org/apache/cassandra/tools/NodeProbe.java   |  5 ++
 .../cassandra/tools/nodetool/Compact.java   | 30 ++--
 .../LeveledCompactionStrategyTest.java  | 75 
 12 files changed, 201 insertions(+), 11 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a5d095e6/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index db2e221..23a6eb0 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.10
+ * Make it possible to compact a given token range (CASSANDRA-10643)
  * Allow updating DynamicEndpointSnitch properties via JMX (CASSANDRA-12179)
  * Collect metrics on queries by consistency level (CASSANDRA-7384)
  * Add support for GROUP BY to SELECT statement (CASSANDRA-10707)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a5d095e6/doc/source/operating/compaction.rst
--
diff --git a/doc/source/operating/compaction.rst 
b/doc/source/operating/compaction.rst
index 8d70a41..b0f97c4 100644
--- a/doc/source/operating/compaction.rst
+++ b/doc/source/operating/compaction.rst
@@ -45,6 +45,12 @@ Secondary index rebuild
 rebuild the secondary indexes on the node.
 Anticompaction
 after repair the ranges that were actually repaired are split out of the 
sstables that existed when repair started.
+Sub range compaction
+It is possible to only compact a given sub range - this could be useful if 
you know a token that has been
+misbehaving - either gathering many updates or many deletes. (``nodetool 
compact -st x -et y``) will pick
+all sstables containing the range between x and y and issue a compaction 
for those sstables. For STCS this will
+most likely include all sstables but with LCS it can issue the compaction 
for a subset of the sstables. With LCS
+the resulting sstable will end up in L0.
 
 When is a minor compaction triggered?
 ^

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a5d095e6/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 53f5305..84fcb86 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -2109,12 +2109,16 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 forceMajorCompaction(false);
 }
 
-
 public void forceMajorCompaction(boolean splitOutput) throws 
InterruptedException, ExecutionException
 {
 CompactionManager.instance.performMaximal(this, splitOutput);
 }
 
+public void forceCompactionForTokenRange(Collection> 
tokenRanges) throws ExecutionException, InterruptedException
+{
+CompactionManager.instance.forceCompactionForTokenRange(this, 
tokenRanges);
+}
+
 public static Iterable all()
 {
 List> stores = new 
ArrayList>(Schema.instance.getKeyspaces().size());

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a5d095e6/src/java/org/apache/cassandra/db/ColumnFamilyStoreMBean.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStoreMBean.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStoreMBean.java
index 4df9f8d..ccaacf6 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStoreMBean.java
+++ b/src/java/org/apache/cassandra/

[3/3] cassandra git commit: Merge branch 'cassandra-3.8' into cassandra-3.9

2016-08-05 Thread stefania
Merge branch 'cassandra-3.8' into cassandra-3.9


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5e319bb6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5e319bb6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5e319bb6

Branch: refs/heads/cassandra-3.9
Commit: 5e319bb697e381e333b22d88ec6e445dd19c473d
Parents: c9df18c 18c357b
Author: Stefania Alborghetti 
Authored: Fri Aug 5 15:19:11 2016 +0800
Committer: Stefania Alborghetti 
Committed: Fri Aug 5 15:19:11 2016 +0800

--

--




[1/3] cassandra git commit: Ninja: update cqlsh completion tests for CASSANDRA-8844

2016-08-05 Thread stefania
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.8 26838063d -> 18c357b86
  refs/heads/cassandra-3.9 c9df18c2e -> 5e319bb69


Ninja: update cqlsh completion tests for CASSANDRA-8844

In CASSANDRA-8844, a new 'cdc' table option was added.  The python
driver added this as a recognized option, which caused it to show up in
cqlsh autocomplete suggestions.  However, the cqlsh tests were not
updated to match this.

This should fix the following failing tests:
 - test_complete_in_create_table
 - test_complete_in_create_columnfamily


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/18c357b8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/18c357b8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/18c357b8

Branch: refs/heads/cassandra-3.8
Commit: 18c357b8634fd5e846d96b674aa7d55071f29f9f
Parents: 2683806
Author: Tyler Hobbs 
Authored: Tue Jul 19 14:40:55 2016 -0500
Committer: Stefania Alborghetti 
Committed: Fri Aug 5 15:18:37 2016 +0800

--
 pylib/cqlshlib/test/test_cqlsh_completion.py | 7 +++
 1 file changed, 3 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/18c357b8/pylib/cqlshlib/test/test_cqlsh_completion.py
--
diff --git a/pylib/cqlshlib/test/test_cqlsh_completion.py 
b/pylib/cqlshlib/test/test_cqlsh_completion.py
index 8485ff0..21eb088 100644
--- a/pylib/cqlshlib/test/test_cqlsh_completion.py
+++ b/pylib/cqlshlib/test/test_cqlsh_completion.py
@@ -595,7 +595,7 @@ class TestCqlshCompletion(CqlshCompletionCase):
  'memtable_flush_period_in_ms',
  'read_repair_chance', 'CLUSTERING',
  'COMPACT', 'caching', 'comment',
- 'min_index_interval', 
'speculative_retry'])
+ 'min_index_interval', 
'speculative_retry', 'cdc'])
 self.trycompletions(prefix + ' new_table (col_a int PRIMARY KEY) WITH 
',
 choices=['bloom_filter_fp_chance', 'compaction',
  'compression',
@@ -605,7 +605,7 @@ class TestCqlshCompletion(CqlshCompletionCase):
  'memtable_flush_period_in_ms',
  'read_repair_chance', 'CLUSTERING',
  'COMPACT', 'caching', 'comment',
- 'min_index_interval', 
'speculative_retry'])
+ 'min_index_interval', 
'speculative_retry', 'cdc'])
 self.trycompletions(prefix + ' new_table (col_a int PRIMARY KEY) WITH 
bloom_filter_fp_chance ',
 immediate='= ')
 self.trycompletions(prefix + ' new_table (col_a int PRIMARY KEY) WITH 
bloom_filter_fp_chance = ',
@@ -653,7 +653,7 @@ class TestCqlshCompletion(CqlshCompletionCase):
  'memtable_flush_period_in_ms',
  'read_repair_chance', 'CLUSTERING',
  'COMPACT', 'caching', 'comment',
- 'min_index_interval', 
'speculative_retry'])
+ 'min_index_interval', 
'speculative_retry', 'cdc'])
 self.trycompletions(prefix + " new_table (col_a int PRIMARY KEY) WITH 
compaction = "
 + "{'class': 'DateTieredCompactionStrategy', '",
 choices=['base_time_seconds', 
'max_sstable_age_days',
@@ -669,7 +669,6 @@ class TestCqlshCompletion(CqlshCompletionCase):
  'enabled', 
'unchecked_tombstone_compaction',
  'only_purge_repaired_tombstones'])
 
-
 def test_complete_in_create_columnfamily(self):
 self.trycompletions('CREATE C', choices=['COLUMNFAMILY', 'CUSTOM'])
 self.trycompletions('CREATE CO', immediate='LUMNFAMILY ')



[2/3] cassandra git commit: Ninja: update cqlsh completion tests for CASSANDRA-8844

2016-08-05 Thread stefania
Ninja: update cqlsh completion tests for CASSANDRA-8844

In CASSANDRA-8844, a new 'cdc' table option was added.  The python
driver added this as a recognized option, which caused it to show up in
cqlsh autocomplete suggestions.  However, the cqlsh tests were not
updated to match this.

This should fix the following failing tests:
 - test_complete_in_create_table
 - test_complete_in_create_columnfamily


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/18c357b8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/18c357b8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/18c357b8

Branch: refs/heads/cassandra-3.9
Commit: 18c357b8634fd5e846d96b674aa7d55071f29f9f
Parents: 2683806
Author: Tyler Hobbs 
Authored: Tue Jul 19 14:40:55 2016 -0500
Committer: Stefania Alborghetti 
Committed: Fri Aug 5 15:18:37 2016 +0800

--
 pylib/cqlshlib/test/test_cqlsh_completion.py | 7 +++
 1 file changed, 3 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/18c357b8/pylib/cqlshlib/test/test_cqlsh_completion.py
--
diff --git a/pylib/cqlshlib/test/test_cqlsh_completion.py 
b/pylib/cqlshlib/test/test_cqlsh_completion.py
index 8485ff0..21eb088 100644
--- a/pylib/cqlshlib/test/test_cqlsh_completion.py
+++ b/pylib/cqlshlib/test/test_cqlsh_completion.py
@@ -595,7 +595,7 @@ class TestCqlshCompletion(CqlshCompletionCase):
  'memtable_flush_period_in_ms',
  'read_repair_chance', 'CLUSTERING',
  'COMPACT', 'caching', 'comment',
- 'min_index_interval', 
'speculative_retry'])
+ 'min_index_interval', 
'speculative_retry', 'cdc'])
 self.trycompletions(prefix + ' new_table (col_a int PRIMARY KEY) WITH 
',
 choices=['bloom_filter_fp_chance', 'compaction',
  'compression',
@@ -605,7 +605,7 @@ class TestCqlshCompletion(CqlshCompletionCase):
  'memtable_flush_period_in_ms',
  'read_repair_chance', 'CLUSTERING',
  'COMPACT', 'caching', 'comment',
- 'min_index_interval', 
'speculative_retry'])
+ 'min_index_interval', 
'speculative_retry', 'cdc'])
 self.trycompletions(prefix + ' new_table (col_a int PRIMARY KEY) WITH 
bloom_filter_fp_chance ',
 immediate='= ')
 self.trycompletions(prefix + ' new_table (col_a int PRIMARY KEY) WITH 
bloom_filter_fp_chance = ',
@@ -653,7 +653,7 @@ class TestCqlshCompletion(CqlshCompletionCase):
  'memtable_flush_period_in_ms',
  'read_repair_chance', 'CLUSTERING',
  'COMPACT', 'caching', 'comment',
- 'min_index_interval', 
'speculative_retry'])
+ 'min_index_interval', 
'speculative_retry', 'cdc'])
 self.trycompletions(prefix + " new_table (col_a int PRIMARY KEY) WITH 
compaction = "
 + "{'class': 'DateTieredCompactionStrategy', '",
 choices=['base_time_seconds', 
'max_sstable_age_days',
@@ -669,7 +669,6 @@ class TestCqlshCompletion(CqlshCompletionCase):
  'enabled', 
'unchecked_tombstone_compaction',
  'only_purge_repaired_tombstones'])
 
-
 def test_complete_in_create_columnfamily(self):
 self.trycompletions('CREATE C', choices=['COLUMNFAMILY', 'CUSTOM'])
 self.trycompletions('CREATE CO', immediate='LUMNFAMILY ')



[jira] [Updated] (CASSANDRA-10643) Implement compaction for a specific token range

2016-08-05 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-10643:

   Resolution: Fixed
Fix Version/s: 3.10
   Status: Resolved  (was: Ready to Commit)

Committed with a documentation entry about this, thanks!

> Implement compaction for a specific token range
> ---
>
> Key: CASSANDRA-10643
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10643
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Vishy Kasar
>Assignee: Vishy Kasar
>  Labels: lcs
> Fix For: 3.10
>
> Attachments: 10643-trunk-REV01.txt, 10643-trunk-REV02.txt, 
> 10643-trunk-REV03.txt
>
>
> We see repeated cases in production (using LCS) where a small number of users 
> generate a large number of repeated updates or tombstones. Reading the data of 
> such users brings large amounts of data into the Java process. Apart from the 
> read itself being slow for the user, the excessive GC affects other users as 
> well. Our solution so far is to move from LCS to STCS and back. This takes 
> long and is overkill if the number of outliers is small. For such cases, we 
> can implement point compaction of a token range. We make nodetool compact 
> take a starting and ending token and compact all the SSTables that fall 
> within that range. We can refuse to compact if the number of sstables is 
> beyond a max_limit.
> Example: 
> nodetool -st 3948291562518219268 -et 3948291562518219269 compact keyspace 
> table
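The candidate selection the ticket describes (pick every sstable whose token
span intersects the requested start/end tokens) can be sketched as a toy
model; `SSTable` and `overlapping` below are illustrative names, not
Cassandra's actual CompactionManager code:

```python
# Toy sketch of sub-range compaction candidate selection: choose every
# sstable whose [first_token, last_token] span intersects [st, et].
from collections import namedtuple

SSTable = namedtuple("SSTable", "name first_token last_token")

def overlapping(sstables, st, et):
    """Return the sstables whose token span intersects the closed range [st, et]."""
    return [s for s in sstables if s.first_token <= et and s.last_token >= st]

sstables = [
    SSTable("a", -100, 50),
    SSTable("b", 60, 200),
    SSTable("c", 150, 400),
]

# Only "b" and "c" contain tokens in [100, 300]:
print([s.name for s in overlapping(sstables, 100, 300)])  # ['b', 'c']
```

With STCS most sstables span most of the token range, so this tends to select
nearly everything; with LCS the non-overlapping levels make a small selection
possible, which is why the feature is most useful there.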



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[3/3] cassandra git commit: Merge branch 'cassandra-3.9' into trunk

2016-08-05 Thread stefania
Merge branch 'cassandra-3.9' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b5dfa309
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b5dfa309
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b5dfa309

Branch: refs/heads/trunk
Commit: b5dfa30969880391b9acec7c35d357537471da39
Parents: a5d095e 5e319bb
Author: Stefania Alborghetti 
Authored: Fri Aug 5 15:28:42 2016 +0800
Committer: Stefania Alborghetti 
Committed: Fri Aug 5 15:28:42 2016 +0800

--

--




[2/3] cassandra git commit: Merge branch 'cassandra-3.8' into cassandra-3.9

2016-08-05 Thread stefania
Merge branch 'cassandra-3.8' into cassandra-3.9


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5e319bb6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5e319bb6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5e319bb6

Branch: refs/heads/trunk
Commit: 5e319bb697e381e333b22d88ec6e445dd19c473d
Parents: c9df18c 18c357b
Author: Stefania Alborghetti 
Authored: Fri Aug 5 15:19:11 2016 +0800
Committer: Stefania Alborghetti 
Committed: Fri Aug 5 15:19:11 2016 +0800

--

--




[1/3] cassandra git commit: Ninja: update cqlsh completion tests for CASSANDRA-8844

2016-08-05 Thread stefania
Repository: cassandra
Updated Branches:
  refs/heads/trunk a5d095e62 -> b5dfa3096


Ninja: update cqlsh completion tests for CASSANDRA-8844

In CASSANDRA-8844, a new 'cdc' table option was added.  The python
driver added this as a recognized option, which caused it to show up in
cqlsh autocomplete suggestions.  However, the cqlsh tests were not
updated to match this.

This should fix the following failing tests:
 - test_complete_in_create_table
 - test_complete_in_create_columnfamily


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/18c357b8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/18c357b8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/18c357b8

Branch: refs/heads/trunk
Commit: 18c357b8634fd5e846d96b674aa7d55071f29f9f
Parents: 2683806
Author: Tyler Hobbs 
Authored: Tue Jul 19 14:40:55 2016 -0500
Committer: Stefania Alborghetti 
Committed: Fri Aug 5 15:18:37 2016 +0800

--
 pylib/cqlshlib/test/test_cqlsh_completion.py | 7 +++
 1 file changed, 3 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/18c357b8/pylib/cqlshlib/test/test_cqlsh_completion.py
--
diff --git a/pylib/cqlshlib/test/test_cqlsh_completion.py 
b/pylib/cqlshlib/test/test_cqlsh_completion.py
index 8485ff0..21eb088 100644
--- a/pylib/cqlshlib/test/test_cqlsh_completion.py
+++ b/pylib/cqlshlib/test/test_cqlsh_completion.py
@@ -595,7 +595,7 @@ class TestCqlshCompletion(CqlshCompletionCase):
  'memtable_flush_period_in_ms',
  'read_repair_chance', 'CLUSTERING',
  'COMPACT', 'caching', 'comment',
- 'min_index_interval', 
'speculative_retry'])
+ 'min_index_interval', 
'speculative_retry', 'cdc'])
 self.trycompletions(prefix + ' new_table (col_a int PRIMARY KEY) WITH 
',
 choices=['bloom_filter_fp_chance', 'compaction',
  'compression',
@@ -605,7 +605,7 @@ class TestCqlshCompletion(CqlshCompletionCase):
  'memtable_flush_period_in_ms',
  'read_repair_chance', 'CLUSTERING',
  'COMPACT', 'caching', 'comment',
- 'min_index_interval', 
'speculative_retry'])
+ 'min_index_interval', 
'speculative_retry', 'cdc'])
 self.trycompletions(prefix + ' new_table (col_a int PRIMARY KEY) WITH 
bloom_filter_fp_chance ',
 immediate='= ')
 self.trycompletions(prefix + ' new_table (col_a int PRIMARY KEY) WITH 
bloom_filter_fp_chance = ',
@@ -653,7 +653,7 @@ class TestCqlshCompletion(CqlshCompletionCase):
  'memtable_flush_period_in_ms',
  'read_repair_chance', 'CLUSTERING',
  'COMPACT', 'caching', 'comment',
- 'min_index_interval', 
'speculative_retry'])
+ 'min_index_interval', 
'speculative_retry', 'cdc'])
 self.trycompletions(prefix + " new_table (col_a int PRIMARY KEY) WITH 
compaction = "
 + "{'class': 'DateTieredCompactionStrategy', '",
 choices=['base_time_seconds', 
'max_sstable_age_days',
@@ -669,7 +669,6 @@ class TestCqlshCompletion(CqlshCompletionCase):
  'enabled', 
'unchecked_tombstone_compaction',
  'only_purge_repaired_tombstones'])
 
-
 def test_complete_in_create_columnfamily(self):
 self.trycompletions('CREATE C', choices=['COLUMNFAMILY', 'CUSTOM'])
 self.trycompletions('CREATE CO', immediate='LUMNFAMILY ')



[jira] [Commented] (CASSANDRA-12379) CQLSH completion test broken by #12236

2016-08-05 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15409070#comment-15409070
 ] 

Stefania commented on CASSANDRA-12379:
--

Thanks. Committed to 3.8 as 18c357b8634fd5e846d96b674aa7d55071f29f9f and merged 
upwards with -s ours. 

I've launched the [3.8 
tests|http://cassci.datastax.com/view/cassandra-3.9/job/cassandra-3.8_cqlsh_tests/7/]
 and will verify the result later.

There's a new 
[failure|http://cassci.datastax.com/view/cassandra-3.9/job/cassandra-3.9_cqlsh_tests/lastCompletedBuild/cython=yes,label=ctool-lab/testReport/cqlshlib.test.test_cqlsh_output/TestCqlshOutput/test_describe_columnfamily_output/]
 in 3.9, and it will appear on trunk as well. This is indeed caused by #12236, 
and the solution is to edit the 3.9 and trunk cqlsh jobs to set 
{{cdc_enabled=true}} in cassandra.yaml, cc [~philipthompson]. It does not apply 
to the 3.8 branch because the CDC change related to this failure was not added 
to the cqlshlib tests for 3.8.
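For reference, the CI job change would amount to a fragment like this in
cassandra.yaml (assuming the option is spelled exactly {{cdc_enabled}}, as
above):

```yaml
# cassandra.yaml fragment for the 3.9/trunk cqlsh CI jobs: enable CDC so
# DESCRIBE output includes the cdc table option the tests expect.
cdc_enabled: true
```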


> CQLSH completion test broken by #12236
> --
>
> Key: CASSANDRA-12379
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12379
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sylvain Lebresne
>Assignee: Stefania
>
> The commit of CASSANDRA-12236 appears to have broken [cqlsh completion 
> tests|http://cassci.datastax.com/job/cassandra-3.8_cqlsh_tests/6/cython=yes,label=ctool-lab/testReport/junit/cqlshlib.test.test_cqlsh_completion/TestCqlshCompletion/test_complete_in_create_columnfamily/].
>  From the error message, I suspect this may have to do with the 
> test comparing the completion output to what DESCRIBE shows, and the latter now 
> doesn't include the {{cdc}} option by default.
> Anyway, I'm not really familiar with cqlsh completion nor its tests, so I'm 
> not sure what the best option is. I don't think we want to remove {{cdc}} from 
> completion, so I suspect we want to either special-case the test somehow (no 
> clue how to do that), or make the test run with cdc enabled so it doesn't 
> complain (which I think mostly means applying a change to the CI environment, 
> since it seems the tests themselves don't spin up the cluster).
> Anyway, I'm pushing this fix to someone else as I'm not competent here and I 
> haven't even been able to run those cqlsh tests so far (getting stuck at the 
> test telling me "No appropriate python interpreter found", even though I 
> totally have an appropriate interpreter and cqlsh works perfectly if I 
> execute it directly). 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: Ninja: added blank line following recent commit for CASSANDRA-10707 that broke TestCqlsh.test_pep8_compliance

2016-08-05 Thread stefania
Repository: cassandra
Updated Branches:
  refs/heads/trunk b5dfa3096 -> 78e918024


Ninja: added blank line following recent commit for CASSANDRA-10707 that broke 
TestCqlsh.test_pep8_compliance


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/78e91802
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/78e91802
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/78e91802

Branch: refs/heads/trunk
Commit: 78e9180243731098fe269abcf0549c49277143f5
Parents: b5dfa30
Author: Stefania Alborghetti 
Authored: Fri Aug 5 15:43:43 2016 +0800
Committer: Stefania Alborghetti 
Committed: Fri Aug 5 15:43:43 2016 +0800

--
 pylib/cqlshlib/cql3handling.py | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/78e91802/pylib/cqlshlib/cql3handling.py
--
diff --git a/pylib/cqlshlib/cql3handling.py b/pylib/cqlshlib/cql3handling.py
index f9bf028..f388f4c 100644
--- a/pylib/cqlshlib/cql3handling.py
+++ b/pylib/cqlshlib/cql3handling.py
@@ -793,6 +793,7 @@ def select_order_column_completer(ctxt, cass):
 return [maybe_escape_name(order_by_candidates[len(prev_order_cols)])]
 return [Hint('No more orderable columns here.')]
 
+
 @completer_for('groupByClause', 'groupcol')
 def select_group_column_completer(ctxt, cass):
 prev_group_cols = ctxt.get_binding('groupcol', ())



[jira] [Commented] (CASSANDRA-12387) List Append order is wrong

2016-08-05 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15409088#comment-15409088
 ] 

Sylvain Lebresne commented on CASSANDRA-12387:
--

I suspect this is simply due to the update being done too quickly after the 
insert. Append is actually based on time, and given potential clock differences 
between nodes, you unfortunately can't entirely predict the real order of 
updates to lists if they are done at pretty much the same time and on 
different coordinators. If I'm right, adding a small sleep between the insert 
and the update would fix that. Another solution would be to make sure both the 
insert and the update go to the same coordinator (but that's really a java 
driver question, so please email the driver mailing list if you have questions 
on that).

It might work with DevCenter either because requests are sent more slowly with 
it, or maybe because DevCenter only connects to a single node and sends all 
updates to it (unlike the driver, which round-robins requests by default). That 
said, I know next to nothing about DevCenter, so consider that last part about 
DevCenter speculation.

Again, assuming I'm right about the cause, that's unfortunately a current 
limitation of the design of lists, and unless someone has a simple clever idea 
to fix this, it may just have to be counted as one of the numerous limitations 
of lists (which we certainly need to document, so I'm happy to at least use 
this ticket for that documentation addition).
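A toy simulation of this mechanism (illustrative only: list cells ordered by a
server-assigned timestamp, with each coordinator's clock skewed differently;
the node names and skew values are made up):

```python
# Toy model of timestamp-ordered list appends: each append is stamped by
# whichever coordinator handled it, and reads sort cells by timestamp.
# With skewed clocks and near-simultaneous writes, the order can invert.

def read_list(cells):
    """Cells are (server_timestamp_micros, value) pairs; reads sort by timestamp."""
    return [value for _, value in sorted(cells)]

true_time = 1_000_000                    # "real" time, in microseconds
skew = {"node1": 0, "node2": -5_000}     # node2's clock runs 5 ms behind

cells = []
# INSERT handled by node1 at true_time:
cells.append((true_time + skew["node1"], "#293847"))
cells.append((true_time + skew["node1"] + 1, "#323442"))
# UPDATE ... list + ['#611987'] handled 2 ms later by node2, whose clock
# is 5 ms slow, so its timestamp sorts BEFORE the insert's:
cells.append((true_time + 2_000 + skew["node2"], "#611987"))

print(read_list(cells))  # ['#611987', '#293847', '#323442']
```

With zero skew (or a sleep longer than the worst-case skew between the two
statements), the appended element sorts last, which matches the suggested
workaround.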

> List Append order is wrong
> --
>
> Key: CASSANDRA-12387
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12387
> Project: Cassandra
>  Issue Type: Bug
> Environment: cassandra 2.1.13
>Reporter: ZhaoYang
> Fix For: 2.1.16
>
>
> "INSERT INTO collection_type(key,normal_column,list_column) VALUES 
> ('k','value',[ '#293847','#323442' ]);"
> "UPDATE collection_type SET list_column = list_column + ['#611987'] WHERE key='k';"
> Using 2.1.7.1 java driver to run Update query, the output is: '#611987', 
> '#293847','#323442'
> Using DevCenter 1.3.1 to execute Update query, result is in correct order: 
> '#293847','#323442', '#611987'
> The error happened in 3 node cluster. In local, one node is working properly.
> (all Cassandra 2.1.13. It also happened to 3.0.x driver)
> Is it related to internal message processing?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12387) List Append order is wrong

2016-08-05 Thread ZhaoYang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhaoYang updated CASSANDRA-12387:
-
Description: 
"INSERT INTO collection_type(key,normal_column,list_column) VALUES 
('k','value',[ '#293847','#323442' ]);"

"UPDATE collection_type SET list_column = list_column + ['#611987'] WHERE key='k';"

Using 2.1.7.1 java driver to run Update query, the output is: '#611987', 
'#293847','#323442'

Using DevCenter 1.3.1 to execute Update query, result is in correct order: 
'#293847','#323442', '#611987'

The error happened in 3 node cluster. In local, one node is working properly.
(all Cassandra 2.1.13. )

Is it related to internal message processing?

  was:
"INSERT INTO collection_type(key,normal_column,list_column) VALUES 
('k','value',[ '#293847','#323442' ]);"

"UPDATE collection_type SET list_column = list_column + ['#611987'] WHERE key='k';"

Using 2.1.7.1 java driver to run Update query, the output is: '#611987', 
'#293847','#323442'

Using DevCenter 1.3.1 to execute Update query, result is in correct order: 
'#293847','#323442', '#611987'

The error happened in 3 node cluster. In local, one node is working properly.
(all Cassandra 2.1.13. It also happened to 3.0.x driver)

Is it related to internal message processing?


> List Append order is wrong
> --
>
> Key: CASSANDRA-12387
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12387
> Project: Cassandra
>  Issue Type: Bug
> Environment: cassandra 2.1.13
>Reporter: ZhaoYang
> Fix For: 2.1.16
>
>
> "INSERT INTO collection_type(key,normal_column,list_column) VALUES 
> ('k','value',[ '#293847','#323442' ]);"
> "UPDATE collection_type SET list_column = list_column + ['#611987'] WHERE key='k';"
> Using 2.1.7.1 java driver to run Update query, the output is: '#611987', 
> '#293847','#323442'
> Using DevCenter 1.3.1 to execute Update query, result is in correct order: 
> '#293847','#323442', '#611987'
> The error happened in 3 node cluster. In local, one node is working properly.
> (all Cassandra 2.1.13. )
> Is it related to internal message processing?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12387) List Append order is wrong

2016-08-05 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15409092#comment-15409092
 ] 

ZhaoYang commented on CASSANDRA-12387:
--

Hi Sylvain, thanks for the reply. I am using a client-side timestamp generator; 
could it cause the timestamp skew of the list elements?

> List Append order is wrong
> --
>
> Key: CASSANDRA-12387
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12387
> Project: Cassandra
>  Issue Type: Bug
> Environment: cassandra 2.1.13
>Reporter: ZhaoYang
> Fix For: 2.1.16
>
>
> "INSERT INTO collection_type(key,normal_column,list_column) VALUES 
> ('k','value',[ '#293847','#323442' ]);"
> "UPDATE collection_type SET list_column=list_column+'#611987' WHERE key='k`;"
> Using 2.1.7.1 java driver to run Update query, the output is: '#611987', 
> '#293847','#323442'
> Using DevCenter 1.3.1 to execute Update query, result is in correct order: 
> '#293847','#323442', '#611987'
> The error happened in 3 node cluster. In local, one node is working properly.
> (all Cassandra 2.1.13. )
> Is it related to internal message processing?





[jira] [Commented] (CASSANDRA-12387) List Append order is wrong

2016-08-05 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15409110#comment-15409110
 ] 

Sylvain Lebresne commented on CASSANDRA-12387:
--

No. List append is always based on the server timestamp and does not depend on 
the client-provided timestamp. Making it depend on the client timestamp when one 
is provided would be nice in principle, but it's unfortunately more complicated 
than that when a given insert/update adds more than one element, and I worry 
doing so would create more problems than it solves.
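The effect described above can be illustrated with a toy model (this is not Cassandra's actual storage code; the clock values and function names are invented for illustration). Each appended element is stored under a position key taken from the coordinating server's clock, so an append coordinated by a node with a lagging clock sorts before earlier elements, regardless of the client timestamps:

```python
# Toy model of list-append ordering: the position key comes from the
# coordinator's clock at write time, and reads sort elements by that key.

def append(cells, element, server_clock_micros):
    """Store an element under a position key from the coordinator's clock."""
    cells.append((server_clock_micros, element))

def read_list(cells):
    """Reads return elements sorted by their position key."""
    return [elem for _, elem in sorted(cells)]

cells = []
# INSERT coordinated by a node whose clock reads t=2000 microseconds.
append(cells, '#293847', 2000)
append(cells, '#323442', 2001)
# UPDATE ... list_column + ['#611987'] coordinated by a node whose clock
# lags at t=1000: the new element sorts FIRST, even though the client
# issued this statement later.
append(cells, '#611987', 1000)

assert read_list(cells) == ['#611987', '#293847', '#323442']
```

This matches the symptom in the report: the order depends on which node coordinates the write, which is why a single local node behaves correctly while a 3-node cluster with clock skew does not.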

> List Append order is wrong
> --
>
> Key: CASSANDRA-12387
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12387
> Project: Cassandra
>  Issue Type: Bug
> Environment: cassandra 2.1.13
>Reporter: ZhaoYang
> Fix For: 2.1.16
>
>
> "INSERT INTO collection_type(key,normal_column,list_column) VALUES 
> ('k','value',[ '#293847','#323442' ]);"
> "UPDATE collection_type SET list_column=list_column+'#611987' WHERE key='k`;"
> Using 2.1.7.1 java driver to run Update query, the output is: '#611987', 
> '#293847','#323442'
> Using DevCenter 1.3.1 to execute Update query, result is in correct order: 
> '#293847','#323442', '#611987'
> The error happened in 3 node cluster. In local, one node is working properly.
> (all Cassandra 2.1.13. )
> Is it related to internal message processing?





[jira] [Commented] (CASSANDRA-12127) Queries with empty ByteBuffer values in clustering column restrictions fail for non-composite compact tables

2016-08-05 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15409135#comment-15409135
 ] 

Benjamin Lerer commented on CASSANDRA-12127:


I have pushed a new commit to all the branches. The new commit modifies 
{{scrub}} to allow it to correct the ordering problem and adds an upgrade 
section to {{NEWS.txt}}.  

||Branch||utests||dtests||
|[2.1|https://github.com/apache/cassandra/compare/trunk...blerer:12127-2.1]|[2.1|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-12127-2.1-testall/]|[2.1|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-12127-2.1-dtest/]|
|[2.2|https://github.com/apache/cassandra/compare/trunk...blerer:12127-2.2]|[2.2|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-12127-2.2-testall/]|[2.2|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-12127-2.2-dtest/]|
|[3.0|https://github.com/apache/cassandra/compare/trunk...blerer:12127-3.0]|[3.0|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-12127-3.0-testall/]|[3.0|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-12127-3.0-dtest/]|
|[3.9|https://github.com/apache/cassandra/compare/trunk...blerer:12127-3.9]|[3.9|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-12127-3.9-testall/]|[3.9|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-12127-3.9-dtest/]|

> Queries with empty ByteBuffer values in clustering column restrictions fail 
> for non-composite compact tables
> 
>
> Key: CASSANDRA-12127
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12127
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
> Attachments: 12127.txt
>
>
> For the following table:
> {code}
> CREATE TABLE myTable (pk int,
>   c blob,
>   value int,
>   PRIMARY KEY (pk, c)) WITH COMPACT STORAGE;
> INSERT INTO myTable (pk, c, value) VALUES (1, textAsBlob('1'), 1);
> INSERT INTO myTable (pk, c, value) VALUES (1, textAsBlob('2'), 2);
> {code}
> The query: {{SELECT * FROM myTable WHERE pk = 1 AND c > textAsBlob('');}}
> Will result in the following Exception:
> {code}
> java.lang.ClassCastException: 
> org.apache.cassandra.db.composites.Composites$EmptyComposite cannot be cast 
> to org.apache.cassandra.db.composites.CellName
>   at 
> org.apache.cassandra.db.composites.AbstractCellNameType.cellFromByteBuffer(AbstractCellNameType.java:188)
>   at 
> org.apache.cassandra.db.composites.AbstractSimpleCellNameType.makeCellName(AbstractSimpleCellNameType.java:125)
>   at 
> org.apache.cassandra.db.composites.AbstractCellNameType.makeCellName(AbstractCellNameType.java:254)
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.makeExclusiveSliceBound(SelectStatement.java:1206)
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.applySliceRestriction(SelectStatement.java:1214)
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.processColumnFamily(SelectStatement.java:1292)
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.process(SelectStatement.java:1259)
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.processResults(SelectStatement.java:299)
>   [...]
> {code}
> The query: {{SELECT * FROM myTable WHERE pk = 1 AND c < textAsBlob('');}}
> Will return 2 rows instead of 0.
> The query: {{SELECT * FROM myTable WHERE pk = 1 AND c = textAsBlob('');}}
> {code}
> java.lang.AssertionError
>   at 
> org.apache.cassandra.db.composites.SimpleDenseCellNameType.create(SimpleDenseCellNameType.java:60)
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.addSelectedColumns(SelectStatement.java:853)
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.getRequestedColumns(SelectStatement.java:846)
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.makeFilter(SelectStatement.java:583)
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.getSliceCommands(SelectStatement.java:383)
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.getPageableCommand(SelectStatement.java:253)
>   [...]
> {code}
> I checked 2.0: {{SELECT * FROM myTable WHERE pk = 1 AND c > 
> textAsBlob('');}} works properly, but {{SELECT * FROM myTable WHERE pk = 1 AND 
> c < textAsBlob('');}} returns the same wrong results as in 2.1.
> The query {{SELECT * FROM myTable WHERE pk = 1 AND c = textAsBlob('');}} is 
> rejected with a clear error message: {{Invalid empty value for clustering 
> column of COMPACT TABLE}}.
> As it is not possible to insert an empty ByteBuffer value within the 
> clu
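The expected semantics of these boundary queries can be sketched with plain byte ordering (a model of the comparison, not Cassandra's comparator code): the empty blob sorts before every non-empty value, so `c > textAsBlob('')` should match all rows and `c < textAsBlob('')` should match none.

```python
# Model of the clustering-column comparisons from the report, using
# Python's lexicographic byte ordering as a stand-in for the comparator.
rows = [b'1', b'2']   # the two clustering values inserted above

greater = [c for c in rows if c > b'']   # c > textAsBlob('')
less = [c for c in rows if c < b'']      # c < textAsBlob('')

assert greater == [b'1', b'2']   # expected: both rows, not a ClassCastException
assert less == []                # expected: zero rows, not 2
```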

[jira] [Commented] (CASSANDRA-12124) dtest failure in upgrade_tests.cql_tests.TestCQLNodes2RF1_Upgrade_next_2_1_x_To_head_trunk.select_with_alias_test

2016-08-05 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15409144#comment-15409144
 ] 

Sylvain Lebresne commented on CASSANDRA-12124:
--

As we're trying to reduce the noise of the upgrade tests, it's probably a good 
idea to fix failures that are really easy to fix, like this one, so I created a 
simple [dtest pull request|https://github.com/riptano/cassandra-dtest/pull/1178]. 
Mind having a quick look?
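A hedged sketch of what such a dtest fix typically looks like (the helper name and the error-message wordings below are assumptions for illustration, not the actual pull request's code): make the test tolerant of error messages that differ across Cassandra versions by accepting any of the known variants.

```python
import re

def matches_any_version_message(actual, expected_patterns):
    """True if the server error matches at least one version's wording.

    Hypothetical helper: real dtests use assert_invalid-style utilities,
    but the filtering idea is the same.
    """
    return any(re.search(p, actual) for p in expected_patterns)

# Assumed per-version wordings for an alias used in a WHERE clause.
patterns = [
    r"Aliases aren't allowed in the where clause",   # older versions (assumed)
    r"Undefined column name",                        # newer versions (assumed)
]

assert matches_any_version_message("Undefined column name user_id", patterns)
assert not matches_any_version_message("unrelated server error", patterns)
```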

> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes2RF1_Upgrade_next_2_1_x_To_head_trunk.select_with_alias_test
> -
>
> Key: CASSANDRA-12124
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12124
> Project: Cassandra
>  Issue Type: Test
>Reporter: Philip Thompson
>Assignee: Philip Thompson
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/upgrade_tests-all-custom_branch_runs/37/testReport/upgrade_tests.cql_tests/TestCQLNodes2RF1_Upgrade_next_2_1_x_To_head_trunk/select_with_alias_test
> Failed on CassCI build upgrade_tests-all-custom_branch_runs #37
> This is just a problem with different error messages across C* versions. 
> Someone needs to do the legwork of figuring out what is required where, and 
> filtering. The query is failing correctly.





[jira] [Updated] (CASSANDRA-12124) dtest failure in upgrade_tests.cql_tests.TestCQLNodes2RF1_Upgrade_next_2_1_x_To_head_trunk.select_with_alias_test

2016-08-05 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-12124:
-
Assignee: Sylvain Lebresne  (was: Philip Thompson)
Reviewer: Philip Thompson
  Status: Patch Available  (was: Open)

> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes2RF1_Upgrade_next_2_1_x_To_head_trunk.select_with_alias_test
> -
>
> Key: CASSANDRA-12124
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12124
> Project: Cassandra
>  Issue Type: Test
>Reporter: Philip Thompson
>Assignee: Sylvain Lebresne
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/upgrade_tests-all-custom_branch_runs/37/testReport/upgrade_tests.cql_tests/TestCQLNodes2RF1_Upgrade_next_2_1_x_To_head_trunk/select_with_alias_test
> Failed on CassCI build upgrade_tests-all-custom_branch_runs #37
> This is just a problem with different error messages across C* versions. 
> Someone needs to do the legwork of figuring out what is required where, and 
> filtering. The query is failing correctly.





[jira] [Commented] (CASSANDRA-11726) IndexOutOfBoundsException when selecting (distinct) row ids from counter table.

2016-08-05 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15409147#comment-15409147
 ] 

Sylvain Lebresne commented on CASSANDRA-11726:
--

Fyi, the test run looks good (there is a weird dtest bootstrap failure, but the 
output makes it pretty clear that it's a test issue, probably due to a recent 
stress change).

> IndexOutOfBoundsException when selecting (distinct) row ids from counter 
> table.
> ---
>
> Key: CASSANDRA-11726
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11726
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: C* 3.5, cluster of 4 nodes.
>Reporter: Jaroslav Kamenik
>Assignee: Sylvain Lebresne
> Fix For: 3.x
>
>
> I have simple table containing counters:
> {code}
> CREATE TABLE tablename (
> object_id ascii,
> counter_id ascii,
> count counter,
> PRIMARY KEY (object_id, counter_id)
> ) WITH CLUSTERING ORDER BY (counter_id ASC)
> AND bloom_filter_fp_chance = 0.01
> AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
> 'max_threshold': '32', 'min_threshold': '4'}
> AND compression = {'enabled': 'false'}
> AND crc_check_chance = 1.0
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99PERCENTILE';
> {code}
> Counters are often inc/decreased, whole rows are queried, deleted sometimes.
> After some time I tried to query all object_ids, but it failed with:
> {code}
> cqlsh:woc> consistency quorum;
> cqlsh:woc> select object_id from tablename;
> ServerError:  message="java.lang.IndexOutOfBoundsException">
> {code}
> select * from ..., select where .., updates works well..
> With consistency one it works sometimes, so it seems something is broken at 
> one server, but I tried to repair table there and it did not help. 
> Whole exception from server log:
> {code}
> java.lang.IndexOutOfBoundsException: null
> at java.nio.Buffer.checkIndex(Buffer.java:546) ~[na:1.8.0_73]
> at java.nio.HeapByteBuffer.getShort(HeapByteBuffer.java:314) 
> ~[na:1.8.0_73]
> at 
> org.apache.cassandra.db.context.CounterContext.headerLength(CounterContext.java:141)
>  ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.db.context.CounterContext.access$100(CounterContext.java:76)
>  ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.db.context.CounterContext$ContextState.(CounterContext.java:758)
>  ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.db.context.CounterContext$ContextState.wrap(CounterContext.java:765)
>  ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.db.context.CounterContext.merge(CounterContext.java:271) 
> ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.db.Conflicts.mergeCounterValues(Conflicts.java:76) 
> ~[apache-cassandra-3.5.jar:3.5]
> at org.apache.cassandra.db.rows.Cells.reconcile(Cells.java:143) 
> ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.db.rows.Row$Merger$ColumnDataReducer.getReduced(Row.java:591)
>  ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.db.rows.Row$Merger$ColumnDataReducer.getReduced(Row.java:549)
>  ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:217)
>  ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:156)
>  ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.5.jar:3.5]
> at org.apache.cassandra.db.rows.Row$Merger.merge(Row.java:526) 
> ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator$MergeReducer.getReduced(UnfilteredRowIterators.java:473)
>  ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator$MergeReducer.getReduced(UnfilteredRowIterators.java:437)
>  ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:217)
>  ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:156)
>  ~[apache-cassandra-3.5.jar:3.5]
>

[jira] [Commented] (CASSANDRA-12387) List Append order is wrong

2016-08-05 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15409149#comment-15409149
 ] 

ZhaoYang commented on CASSANDRA-12387:
--

In this case, if the INSERT is "later" than the UPDATE, will the inserted 
elements overwrite the updated elements?

> List Append order is wrong
> --
>
> Key: CASSANDRA-12387
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12387
> Project: Cassandra
>  Issue Type: Bug
> Environment: cassandra 2.1.13
>Reporter: ZhaoYang
> Fix For: 2.1.16
>
>
> "INSERT INTO collection_type(key,normal_column,list_column) VALUES 
> ('k','value',[ '#293847','#323442' ]);"
> "UPDATE collection_type SET list_column=list_column+'#611987' WHERE key='k`;"
> Using 2.1.7.1 java driver to run Update query, the output is: '#611987', 
> '#293847','#323442'
> Using DevCenter 1.3.1 to execute Update query, result is in correct order: 
> '#293847','#323442', '#611987'
> The error happened in 3 node cluster. In local, one node is working properly.
> (all Cassandra 2.1.13. )
> Is it related to internal message processing?





[jira] [Commented] (CASSANDRA-12387) List Append order is wrong

2016-08-05 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15409157#comment-15409157
 ] 

Sylvain Lebresne commented on CASSANDRA-12387:
--

I'm afraid not. When I say that append is based on the server-side time, I'm 
talking about what identifies the order of the list elements, not about the 
internal Cassandra timestamps used for reconciliation. Those (the Cassandra 
timestamps) will still use the client timestamp that you provide through 
client-side timestamp generation, so in terms of "overwrites" the update *is* 
after the insert. But in terms of the order of the elements in the list, it is 
not.
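This distinction can be sketched with a toy cell model (the field names are invented; this is not Cassandra's internal representation): the client-provided write timestamp decides which cell wins reconciliation, while the server-assigned position decides where the element sorts in the list.

```python
# Each list cell carries two independent pieces of time information:
#   position  - taken from the coordinator's clock; sorts the elements.
#   write_ts  - the client-provided timestamp; wins reconciliation.

def reconcile(a, b):
    """The cell with the larger write timestamp overwrites the other."""
    return a if a['write_ts'] >= b['write_ts'] else b

insert_cell = {'value': '#293847', 'position': 2000, 'write_ts': 10}
update_cell = {'value': '#611987', 'position': 1000, 'write_ts': 20}

# In terms of overwrites, the update (write_ts=20) IS after the insert...
assert reconcile(insert_cell, update_cell) is update_cell
# ...but its lower server-assigned position sorts it earlier in the list.
assert update_cell['position'] < insert_cell['position']
```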

> List Append order is wrong
> --
>
> Key: CASSANDRA-12387
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12387
> Project: Cassandra
>  Issue Type: Bug
> Environment: cassandra 2.1.13
>Reporter: ZhaoYang
> Fix For: 2.1.16
>
>
> "INSERT INTO collection_type(key,normal_column,list_column) VALUES 
> ('k','value',[ '#293847','#323442' ]);"
> "UPDATE collection_type SET list_column=list_column+'#611987' WHERE key='k`;"
> Using 2.1.7.1 java driver to run Update query, the output is: '#611987', 
> '#293847','#323442'
> Using DevCenter 1.3.1 to execute Update query, result is in correct order: 
> '#293847','#323442', '#611987'
> The error happened in 3 node cluster. In local, one node is working properly.
> (all Cassandra 2.1.13. )
> Is it related to internal message processing?





[jira] [Updated] (CASSANDRA-11126) select_distinct_with_deletions_test failing on non-vnode environments

2016-08-05 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-11126:
-
Reviewer: Benjamin Lerer

> select_distinct_with_deletions_test failing on non-vnode environments
> -
>
> Key: CASSANDRA-11126
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11126
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Ryan McGuire
>Assignee: Sylvain Lebresne
>  Labels: dtest
> Fix For: 3.0.x
>
>
> Looks like this was fixed in CASSANDRA-10762, but not for non-vnode 
> environments:
> {code}
> $ DISABLE_VNODES=yes KEEP_TEST_DIR=yes CASSANDRA_VERSION=git:cassandra-3.0 
> PRINT_DEBUG=true nosetests -s -v 
> upgrade_tests/cql_tests.py:TestCQLNodes2RF1.select_distinct_with_deletions_test
> select_distinct_with_deletions_test 
> (upgrade_tests.cql_tests.TestCQLNodes2RF1) ... cluster ccm directory: 
> /tmp/dtest-UXb0un
> http://git-wip-us.apache.org/repos/asf/cassandra.git git:cassandra-3.0
> Custom init_config not found. Setting defaults.
> Done setting configuration options:
> {   'num_tokens': None,
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> getting default job version for 3.0.3
> UpgradePath(starting_version='binary:2.2.3', upgrade_version=None)
> starting from 2.2.3
> upgrading to {'install_dir': 
> '/home/ryan/.ccm/repository/gitCOLONcassandra-3.0'}
> Querying upgraded node
> FAIL
> ==
> FAIL: select_distinct_with_deletions_test 
> (upgrade_tests.cql_tests.TestCQLNodes2RF1)
> --
> Traceback (most recent call last):
>   File "/home/ryan/git/datastax/cassandra-dtest/upgrade_tests/cql_tests.py", 
> line 3360, in select_distinct_with_deletions_test
> self.assertEqual(9, len(rows))
> AssertionError: 9 != 8
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /tmp/dtest-UXb0un
> dtest: DEBUG: Custom init_config not found. Setting defaults.
> dtest: DEBUG: Done setting configuration options:
> {   'num_tokens': None,
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> dtest: DEBUG: getting default job version for 3.0.3
> dtest: DEBUG: UpgradePath(starting_version='binary:2.2.3', 
> upgrade_version=None)
> dtest: DEBUG: starting from 2.2.3
> dtest: DEBUG: upgrading to {'install_dir': 
> '/home/ryan/.ccm/repository/gitCOLONcassandra-3.0'}
> dtest: DEBUG: Querying upgraded node
> - >> end captured logging << -
> --
> Ran 1 test in 56.022s
> FAILED (failures=1)
> {code}





[jira] [Updated] (CASSANDRA-12388) For LeveledCompactionStrategy, provide a JMX interface to trigger printing out LeveledManifest

2016-08-05 Thread Wei Deng (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Deng updated CASSANDRA-12388:
-
Description: For LCS, it appears that we have a private toString() method 
in LeveledManifest, but it's only used from within the class, even though it 
has capability of printing all SSTables on all levels. We used to be able to 
get this information from the manifest file in data directory, but after 
CASSANDRA-4872 this is no longer available. It will be useful for 
troubleshooting if we can have a JMX MBean method to trigger printing out the 
full generations list of a particular manifest instance.  (was: It appears that 
we have a private toString() method in LeveledManifest, but it's only used from 
within the class, even though it has capability of printing all SSTables on all 
levels. We used to be able to get this information from the manifest file in 
data directory, but after CASSANDRA-4872 this is no longer available. It will 
be useful for troubleshooting if we can have a JMX MBean method to trigger 
printing out the full generations list of a particular manifest instance.)

> For LeveledCompactionStrategy, provide a JMX interface to trigger printing 
> out LeveledManifest
> --
>
> Key: CASSANDRA-12388
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12388
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Wei Deng
>Priority: Minor
>  Labels: lcs
>
> For LCS, it appears that we have a private toString() method in 
> LeveledManifest, but it's only used from within the class, even though it has 
> capability of printing all SSTables on all levels. We used to be able to get 
> this information from the manifest file in data directory, but after 
> CASSANDRA-4872 this is no longer available. It will be useful for 
> troubleshooting if we can have a JMX MBean method to trigger printing out the 
> full generations list of a particular manifest instance.





[jira] [Created] (CASSANDRA-12388) For LeveledCompactionStrategy, provide a JMX interface to trigger printing out LeveledManifest

2016-08-05 Thread Wei Deng (JIRA)
Wei Deng created CASSANDRA-12388:


 Summary: For LeveledCompactionStrategy, provide a JMX interface to 
trigger printing out LeveledManifest
 Key: CASSANDRA-12388
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12388
 Project: Cassandra
  Issue Type: Improvement
  Components: Compaction
Reporter: Wei Deng
Priority: Minor


It appears that we have a private toString() method in LeveledManifest, but 
it's only used from within the class, even though it has capability of printing 
all SSTables on all levels. We used to be able to get this information from the 
manifest file in data directory, but after CASSANDRA-4872 this is no longer 
available. It will be useful for troubleshooting if we can have a JMX MBean 
method to trigger printing out the full generations list of a particular 
manifest instance.





[jira] [Commented] (CASSANDRA-12208) Estimated droppable tombstones given by sstablemetadata counts tombstones that aren't actually "droppable"

2016-08-05 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15409169#comment-15409169
 ] 

Marcus Eriksson commented on CASSANDRA-12208:
-

The reason is that we don't really know {{gc_grace_seconds}} when calling 
sstablemetadata, since it is stored in the schema. We want to support calling 
sstablemetadata on a stand-alone sstable; it would be painful if you had to 
have access to the schema.

And since it is an estimate, I don't think it matters much?

> Estimated droppable tombstones given by sstablemetadata counts tombstones 
> that aren't actually "droppable"
> --
>
> Key: CASSANDRA-12208
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12208
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Thanh
>Assignee: Marcus Eriksson
>Priority: Minor
>
> => "Estimated droppable tombstones" given by *sstablemetadata* counts 
> tombstones that aren't actually "droppable"
> To be clear, the "Estimated droppable tombstones" calculation counts 
> tombstones that have not yet passed gc_grace_seconds as droppable tombstones, 
> which is unexpected, since such tombstones aren't droppable.
> To observe the problem:
> Create a table using the default gc_grace_seconds (the default is 86400, 
> i.e. 1 day).
> Populate the table with a couple of records.
> Do a delete.
> Do a "nodetool flush" to flush the memtable to disk.
> Do an "sstablemetadata " on the sstable you just created by doing the 
> flush, and observe that the estimated droppable tombstones value is greater 
> than 0.0 (the actual value depends on the total number of 
> inserts/updates/deletes you did before triggering the flush).
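The estimate under discussion can be sketched as follows (an assumed simplification, not the actual sstablemetadata code): a tombstone counts as droppable once its local deletion time plus gc_grace has passed, and since the tool has no schema access, gc_grace is effectively treated as zero, which is why freshly written tombstones already show up in the ratio.

```python
def droppable_tombstone_ratio(deletion_times, now, gc_grace_seconds):
    """Fraction of tombstones whose gc_grace window has elapsed at `now`.

    Simplified model: real sstablemetadata estimates this from per-sstable
    histograms, not from a list of exact deletion times.
    """
    if not deletion_times:
        return 0.0
    droppable = sum(1 for t in deletion_times if t + gc_grace_seconds <= now)
    return droppable / len(deletion_times)

deletions = [1000]   # one tombstone written at t=1000 (seconds)
now = 1060           # one minute later

# With the table's real gc_grace (86400s) nothing is droppable yet...
assert droppable_tombstone_ratio(deletions, now, 86400) == 0.0
# ...but without schema access gc_grace is unknown (treated as 0), and
# the fresh tombstone is counted immediately, as the report observes.
assert droppable_tombstone_ratio(deletions, now, 0) == 1.0
```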





[jira] [Resolved] (CASSANDRA-12387) List Append order is wrong

2016-08-05 Thread ZhaoYang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhaoYang resolved CASSANDRA-12387.
--
Resolution: Won't Fix

> List Append order is wrong
> --
>
> Key: CASSANDRA-12387
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12387
> Project: Cassandra
>  Issue Type: Bug
> Environment: cassandra 2.1.13
>Reporter: ZhaoYang
> Fix For: 2.1.16
>
>
> "INSERT INTO collection_type(key,normal_column,list_column) VALUES 
> ('k','value',[ '#293847','#323442' ]);"
> "UPDATE collection_type SET list_column=list_column+'#611987' WHERE key='k`;"
> Using 2.1.7.1 java driver to run Update query, the output is: '#611987', 
> '#293847','#323442'
> Using DevCenter 1.3.1 to execute Update query, result is in correct order: 
> '#293847','#323442', '#611987'
> The error happened in 3 node cluster. In local, one node is working properly.
> (all Cassandra 2.1.13. )
> Is it related to internal message processing?





[jira] [Commented] (CASSANDRA-12387) List Append order is wrong

2016-08-05 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15409171#comment-15409171
 ] 

ZhaoYang commented on CASSANDRA-12387:
--

Thank you. I will close this issue. There are always surprises with collection 
types.

> List Append order is wrong
> --
>
> Key: CASSANDRA-12387
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12387
> Project: Cassandra
>  Issue Type: Bug
> Environment: cassandra 2.1.13
>Reporter: ZhaoYang
> Fix For: 2.1.16
>
>
> "INSERT INTO collection_type(key,normal_column,list_column) VALUES 
> ('k','value',[ '#293847','#323442' ]);"
> "UPDATE collection_type SET list_column=list_column+'#611987' WHERE key='k`;"
> Using 2.1.7.1 java driver to run Update query, the output is: '#611987', 
> '#293847','#323442'
> Using DevCenter 1.3.1 to execute Update query, result is in correct order: 
> '#293847','#323442', '#611987'
> The error happened in 3 node cluster. In local, one node is working properly.
> (all Cassandra 2.1.13. )
> Is it related to internal message processing?





[jira] [Reopened] (CASSANDRA-12387) List Append order is wrong

2016-08-05 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne reopened CASSANDRA-12387:
--
  Assignee: Sylvain Lebresne

Allow me to re-open. As said above, I think we should at least document this, 
and I'd like to keep this open as a reminder of that.

I'm also having second thoughts about using the client-provided timestamp, so 
I'll need to think a bit about how feasible that is.

I'd be careful with lists as a general rule in any case. 

> List Append order is wrong
> --
>
> Key: CASSANDRA-12387
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12387
> Project: Cassandra
>  Issue Type: Bug
> Environment: cassandra 2.1.13
>Reporter: ZhaoYang
>Assignee: Sylvain Lebresne
> Fix For: 2.1.16
>
>
> "INSERT INTO collection_type(key,normal_column,list_column) VALUES 
> ('k','value',[ '#293847','#323442' ]);"
> "UPDATE collection_type SET list_column=list_column+'#611987' WHERE key='k`;"
> Using 2.1.7.1 java driver to run Update query, the output is: '#611987', 
> '#293847','#323442'
> Using DevCenter 1.3.1 to execute Update query, result is in correct order: 
> '#293847','#323442', '#611987'
> The error happened in 3 node cluster. In local, one node is working properly.
> (all Cassandra 2.1.13. )
> Is it related to internal message processing?





[jira] [Updated] (CASSANDRA-11960) Hints are not seekable

2016-08-05 Thread Stefan Podkowinski (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Podkowinski updated CASSANDRA-11960:
---
Attachment: (was: 11960-trunk.patch)

> Hints are not seekable
> --
>
> Key: CASSANDRA-11960
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11960
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Robert Stupp
>Assignee: Stefan Podkowinski
>
> Got the following error message on trunk. No idea how to reproduce. But the 
> only thing the (not overridden) seek method does is throwing this exception.
> {code}
> ERROR [HintsDispatcher:2] 2016-06-05 18:51:09,397 CassandraDaemon.java:222 - 
> Exception in thread Thread[HintsDispatcher:2,1,main]
> java.lang.UnsupportedOperationException: Hints are not seekable.
>   at org.apache.cassandra.hints.HintsReader.seek(HintsReader.java:114) 
> ~[main/:na]
>   at 
> org.apache.cassandra.hints.HintsDispatcher.seek(HintsDispatcher.java:79) 
> ~[main/:na]
>   at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.deliver(HintsDispatchExecutor.java:257)
>  ~[main/:na]
>   at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:242)
>  ~[main/:na]
>   at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:220)
>  ~[main/:na]
>   at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.run(HintsDispatchExecutor.java:199)
>  ~[main/:na]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_91]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_91]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_91]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_91]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
> {code}





[jira] [Commented] (CASSANDRA-12371) INSERT JSON - numbers not accepted for smallint and tinyint

2016-08-05 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15409178#comment-15409178
 ] 

Paweł Rychlik commented on CASSANDRA-12371:
---

[~thobbs] I'm looking at the jenkins reports, but I'm not sure what to make of 
them :) Some builds are green, some aborted, and most of them yellow with 
random (?) test failures. I have a feeling that the failures have very little 
to do with tinyints and smallints in JSON, but I only checked out the 
cassandra repo yesterday, so that's pretty much my whole knowledge of the 
codebase ;).

> INSERT JSON - numbers not accepted for smallint and tinyint
> ---
>
> Key: CASSANDRA-12371
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12371
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: Apache Cassandra 3.7 (provisioned by instaclustr.com, 
> running on AWS)
>Reporter: Paweł Rychlik
>Priority: Minor
>  Labels: lhf
> Fix For: 2.2.x, 3.0.x, 3.x
>
> Attachments: 12371-2.2.txt
>
>
> Contrary to what is written down on 
> http://cassandra.apache.org/doc/latest/cql/json.html#json-encoding-of-cassandra-data-types,
>  numbers are not an accepted format for tinyints and smallints.
> Steps to reproduce on CQLSH:
> > create table default.test(id text PRIMARY KEY, small smallint, tiny 
> > tinyint);
> > INSERT INTO default.test JSON '{"id":"123","small":11}';
> InvalidRequest: Error from server: code=2200 [Invalid query] message="Error 
> decoding JSON value for small: Expected a short value, but got a Integer: 11"
> > INSERT INTO default.test JSON '{"id":"123","tiny":11}';
> InvalidRequest: Error from server: code=2200 [Invalid query] message="Error 
> decoding JSON value for tiny: Expected a byte value, but got a Integer: 11"
> The good news is that when you wrap the numeric values into strings - it 
> works like a charm.
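Until a server-side fix lands, client code can apply the string-wrapping workaround described above before issuing {{INSERT ... JSON}}. A minimal Python sketch (the function name and column set are hypothetical, not part of any driver API):

```python
import json

def encode_for_insert_json(row, short_columns):
    # Workaround sketch for the behaviour reported above: serialize smallint/
    # tinyint values as JSON strings, which the affected server versions accept.
    encoded = {k: (str(v) if k in short_columns and isinstance(v, int) else v)
               for k, v in row.items()}
    return json.dumps(encoded)

payload = encode_for_insert_json({"id": "123", "small": 11, "tiny": 11},
                                 short_columns={"small", "tiny"})
print(payload)  # {"id": "123", "small": "11", "tiny": "11"}
```

The resulting payload can then be embedded in a statement such as {{INSERT INTO default.test JSON '...'}}.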





[jira] [Updated] (CASSANDRA-11960) Hints are not seekable

2016-08-05 Thread Stefan Podkowinski (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Podkowinski updated CASSANDRA-11960:
---
Attachment: 11960-trunk.patch

> Hints are not seekable
> --
>
> Key: CASSANDRA-11960
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11960
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Robert Stupp
>Assignee: Stefan Podkowinski
> Attachments: 11960-trunk.patch
>
>
> Got the following error message on trunk. No idea how to reproduce. But the 
> only thing the (not overridden) seek method does is throw this exception.
> {code}
> ERROR [HintsDispatcher:2] 2016-06-05 18:51:09,397 CassandraDaemon.java:222 - 
> Exception in thread Thread[HintsDispatcher:2,1,main]
> java.lang.UnsupportedOperationException: Hints are not seekable.
>   at org.apache.cassandra.hints.HintsReader.seek(HintsReader.java:114) 
> ~[main/:na]
>   at 
> org.apache.cassandra.hints.HintsDispatcher.seek(HintsDispatcher.java:79) 
> ~[main/:na]
>   at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.deliver(HintsDispatchExecutor.java:257)
>  ~[main/:na]
>   at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:242)
>  ~[main/:na]
>   at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:220)
>  ~[main/:na]
>   at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.run(HintsDispatchExecutor.java:199)
>  ~[main/:na]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_91]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_91]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_91]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_91]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
> {code}





[jira] [Updated] (CASSANDRA-11635) test-clientutil-jar unit test fails

2016-08-05 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-11635:
-
Status: Patch Available  (was: Awaiting Feedback)

With no current response to my previous comment, I'm going to proceed with the 
assumption that we're ok dropping that clientutil jar from the 3.x branch, so 
I'm attaching a branch on trunk to do that (the other 2 branches haven't 
changed). That new branch includes a news entry explaining that you can rely on 
older versions of the jar if you really need to, but should consider moving to 
a true client driver for that functionality instead. With all branches 
covered, I'm calling this "patch available".

| [11635-2.2|https://github.com/pcmanus/cassandra/commits/11635-2.2] | 
[utests|http://cassci.datastax.com/job/pcmanus-11635-2.2-testall] | 
[dtests|http://cassci.datastax.com/job/pcmanus-11635-2.2-dtest] |
| [11635-3.0|https://github.com/pcmanus/cassandra/commits/11635-3.0] | 
[utests|http://cassci.datastax.com/job/pcmanus-11635-3.0-testall] | 
[dtests|http://cassci.datastax.com/job/pcmanus-11635-3.0-dtest] |
| [11635-trunk|https://github.com/pcmanus/cassandra/commits/11635-trunk] | 
[utests|http://cassci.datastax.com/job/pcmanus-11635-trunk-testall] | 
[dtests|http://cassci.datastax.com/job/pcmanus-11635-trunk-dtest] |


> test-clientutil-jar unit test fails
> ---
>
> Key: CASSANDRA-11635
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11635
> Project: Cassandra
>  Issue Type: Test
>  Components: Testing
>Reporter: Michael Shuler
>Assignee: Sylvain Lebresne
>  Labels: unittest
> Fix For: 2.2.x, 3.0.x, 3.x
>
>
> {noformat}
> test-clientutil-jar:
> [junit] Testsuite: org.apache.cassandra.serializers.ClientUtilsTest
> [junit] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 0.314 sec
> [junit] 
> [junit] Testcase: test(org.apache.cassandra.serializers.ClientUtilsTest): 
>   Caused an ERROR
> [junit] org/apache/cassandra/utils/SigarLibrary
> [junit] java.lang.NoClassDefFoundError: 
> org/apache/cassandra/utils/SigarLibrary
> [junit] at org.apache.cassandra.utils.UUIDGen.hash(UUIDGen.java:328)
> [junit] at 
> org.apache.cassandra.utils.UUIDGen.makeNode(UUIDGen.java:307)
> [junit] at 
> org.apache.cassandra.utils.UUIDGen.makeClockSeqAndNode(UUIDGen.java:256)
> [junit] at 
> org.apache.cassandra.utils.UUIDGen.&lt;init&gt;(UUIDGen.java:39)
> [junit] at 
> org.apache.cassandra.serializers.ClientUtilsTest.test(ClientUtilsTest.java:56)
> [junit] Caused by: java.lang.ClassNotFoundException: 
> org.apache.cassandra.utils.SigarLibrary
> [junit] at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
> [junit] at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> [junit] at 
> sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
> [junit] at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> [junit] 
> [junit] 
> [junit] Test org.apache.cassandra.serializers.ClientUtilsTest FAILED
> BUILD FAILED
> {noformat}
> I'll see if I can find a spot where this passes, but it appears to have been 
> failing for a long time.





[jira] [Resolved] (CASSANDRA-12375) dtest failure in read_repair_test.TestReadRepair.test_gcable_tombstone_resurrection_on_range_slice_query

2016-08-05 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-12375.
--
   Resolution: Fixed
 Assignee: Sylvain Lebresne  (was: DS Test Eng)
 Reviewer: Joel Knighton
Fix Version/s: 2.2.8

Ok, committed that simple change then (as dtest commit 
70c802a0f8cb1efab24ae879ba5572cc1daaf44b).

> dtest failure in 
> read_repair_test.TestReadRepair.test_gcable_tombstone_resurrection_on_range_slice_query
> 
>
> Key: CASSANDRA-12375
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12375
> Project: Cassandra
>  Issue Type: Test
>Reporter: Craig Kodman
>Assignee: Sylvain Lebresne
>  Labels: dtest
> Fix For: 2.2.8
>
> Attachments: node1.log, node1_gc.log, node2.log, node2_debug.log, 
> node2_gc.log, node3.log, node3_debug.log, node3_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_dtest_jdk8/291/testReport/read_repair_test/TestReadRepair/test_gcable_tombstone_resurrection_on_range_slice_query





[jira] [Commented] (CASSANDRA-12387) List Append order is wrong

2016-08-05 Thread Alexandre Dutra (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15409193#comment-15409193
 ] 

Alexandre Dutra commented on CASSANDRA-12387:
-

Also logged as [JAVA-1259|https://datastax-oss.atlassian.net/browse/JAVA-1259].

> List Append order is wrong
> --
>
> Key: CASSANDRA-12387
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12387
> Project: Cassandra
>  Issue Type: Bug
> Environment: cassandra 2.1.13
>Reporter: ZhaoYang
>Assignee: Sylvain Lebresne
> Fix For: 2.1.16
>
>
> "INSERT INTO collection_type(key,normal_column,list_column) VALUES 
> ('k','value',[ '#293847','#323442' ]);"
> "UPDATE collection_type SET list_column = list_column + ['#611987'] WHERE 
> key='k';"
> Using the 2.1.7.1 java driver to run the UPDATE query, the output is: 
> '#611987', '#293847','#323442'
> Using DevCenter 1.3.1 to execute the UPDATE query, the result is in the 
> correct order: '#293847','#323442', '#611987'
> The error happened in a 3-node cluster. Locally, a single node works properly.
> (all Cassandra 2.1.13.)
> Is it related to internal message processing?
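For context on why ordering can differ per node: Cassandra stores each list element in its own cell keyed by a time-based UUID, so the read order follows the cells' write timestamps. A rough Python simulation (not Cassandra code; the timestamps are invented) of how clock skew between coordinators can make an appended element sort first:

```python
def read_list(cells):
    # cells: (write_timestamp_micros, value) pairs, one per list element;
    # on read, elements come back sorted by their time-based cell key
    return [value for _, value in sorted(cells)]

insert_ts = 2_000_000         # timestamp assigned to the original INSERT
append_ts = 1_000_000         # UPDATE coordinated by a node with a slow clock
cells = [(insert_ts, '#293847'), (insert_ts + 1, '#323442'),
         (append_ts, '#611987')]
print(read_list(cells))       # ['#611987', '#293847', '#323442'] - the reported order
```

On a single local node there is only one clock, which would be consistent with the reporter seeing the correct order there.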





[jira] [Commented] (CASSANDRA-12379) CQLSH completion test broken by #12236

2016-08-05 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15409199#comment-15409199
 ] 

Stefania commented on CASSANDRA-12379:
--

3.8 tests have completed successfully.

Once we've edited the 3.9 and trunk jobs and verified they complete 
successfully we can resolve this ticket.

> CQLSH completion test broken by #12236
> --
>
> Key: CASSANDRA-12379
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12379
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sylvain Lebresne
>Assignee: Stefania
>
> The commit of CASSANDRA-12236 appears to have broken [cqlsh completion 
> tests|http://cassci.datastax.com/job/cassandra-3.8_cqlsh_tests/6/cython=yes,label=ctool-lab/testReport/junit/cqlshlib.test.test_cqlsh_completion/TestCqlshCompletion/test_complete_in_create_columnfamily/].
>  From the error message I suspect this may have to do with something like the 
> test comparing the completion output to what DESCRIBE shows, and the latter now 
> doesn't include the {{cdc}} option by default.
> Anyway, I'm not really familiar with cqlsh completion nor its tests, so I'm 
> not sure what the best option is. I don't think we want to remove {{cdc}} from 
> completion, so I suspect we want to either special-case the test somehow (no 
> clue how to do that), or make the test run with cdc enabled so it doesn't 
> complain (which I think mostly means a change to the CI environment, since it 
> seems the tests themselves don't spin up the cluster).
> Anyway, I'm pushing this fix to someone else, as I'm not competent here and I 
> haven't even been able to run those cqlsh tests so far (getting stuck at the 
> test telling me that "No appropriate python interpreter found", even though I 
> totally have an appropriate interpreter and cqlsh works perfectly if I 
> execute it directly).





[jira] [Commented] (CASSANDRA-11726) IndexOutOfBoundsException when selecting (distinct) row ids from counter table.

2016-08-05 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15409416#comment-15409416
 ] 

Aleksey Yeschenko commented on CASSANDRA-11726:
---

I think we can preserve the CASSANDRA-10657 optimisation overall even for 
counters, with a bit more work, in a follow-up ticket? That said, the patch 
solves the immediate issue. LGTM.

> IndexOutOfBoundsException when selecting (distinct) row ids from counter 
> table.
> ---
>
> Key: CASSANDRA-11726
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11726
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: C* 3.5, cluster of 4 nodes.
>Reporter: Jaroslav Kamenik
>Assignee: Sylvain Lebresne
> Fix For: 3.x
>
>
> I have simple table containing counters:
> {code}
> CREATE TABLE tablename (
> object_id ascii,
> counter_id ascii,
> count counter,
> PRIMARY KEY (object_id, counter_id)
> ) WITH CLUSTERING ORDER BY (counter_id ASC)
> AND bloom_filter_fp_chance = 0.01
> AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
> 'max_threshold': '32', 'min_threshold': '4'}
> AND compression = {'enabled': 'false'}
> AND crc_check_chance = 1.0
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99PERCENTILE';
> {code}
> Counters are often incremented/decremented; whole rows are queried, and sometimes deleted.
> After some time I tried to query all object_ids, but it failed with:
> {code}
> cqlsh:woc> consistency quorum;
> cqlsh:woc> select object_id from tablename;
> ServerError:  message="java.lang.IndexOutOfBoundsException">
> {code}
> select * from ..., select where .., updates works well..
> With consistency ONE it works sometimes, so it seems something is broken on 
> one server, but I tried to repair the table there and it did not help. 
> Whole exception from server log:
> {code}
> java.lang.IndexOutOfBoundsException: null
> at java.nio.Buffer.checkIndex(Buffer.java:546) ~[na:1.8.0_73]
> at java.nio.HeapByteBuffer.getShort(HeapByteBuffer.java:314) 
> ~[na:1.8.0_73]
> at 
> org.apache.cassandra.db.context.CounterContext.headerLength(CounterContext.java:141)
>  ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.db.context.CounterContext.access$100(CounterContext.java:76)
>  ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.db.context.CounterContext$ContextState.&lt;init&gt;(CounterContext.java:758)
>  ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.db.context.CounterContext$ContextState.wrap(CounterContext.java:765)
>  ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.db.context.CounterContext.merge(CounterContext.java:271) 
> ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.db.Conflicts.mergeCounterValues(Conflicts.java:76) 
> ~[apache-cassandra-3.5.jar:3.5]
> at org.apache.cassandra.db.rows.Cells.reconcile(Cells.java:143) 
> ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.db.rows.Row$Merger$ColumnDataReducer.getReduced(Row.java:591)
>  ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.db.rows.Row$Merger$ColumnDataReducer.getReduced(Row.java:549)
>  ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:217)
>  ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:156)
>  ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.5.jar:3.5]
> at org.apache.cassandra.db.rows.Row$Merger.merge(Row.java:526) 
> ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator$MergeReducer.getReduced(UnfilteredRowIterators.java:473)
>  ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator$MergeReducer.getReduced(UnfilteredRowIterators.java:437)
>  ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:217)
>  ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:156)
>  ~[apache-cassandra-3.5.j
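For context, the header-parsing step that throws at the top of this trace can be sketched in Python (layout assumed from the 3.5-era CounterContext: a leading signed short counting the 2-byte header elements); an empty or truncated buffer makes the initial short read go out of bounds:

```python
import struct

def header_length(context: bytes) -> int:
    # Sketch of CounterContext.headerLength: read the leading signed short,
    # then size = 2-byte count field + |count| 2-byte header elements.
    if len(context) < 2:
        # java.nio throws IndexOutOfBoundsException here; we fail explicitly
        raise ValueError("truncated counter context")
    (count,) = struct.unpack_from(">h", context, 0)
    return 2 + 2 * abs(count)

print(header_length(struct.pack(">h", 3)))  # 8
```

This is only an illustration of where the bounds check fails, not the actual fix.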

[6/6] cassandra git commit: Merge branch 'cassandra-3.9' into trunk

2016-08-05 Thread slebresne
Merge branch 'cassandra-3.9' into trunk

* cassandra-3.9:
  NullPointerException during compaction on table with static columns


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7fe43094
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7fe43094
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7fe43094

Branch: refs/heads/trunk
Commit: 7fe4309430e22bd4d17c7fd91f281bb4d0878ffa
Parents: 78e9180 21c92ca
Author: Sylvain Lebresne 
Authored: Fri Aug 5 15:03:19 2016 +0200
Committer: Sylvain Lebresne 
Committed: Fri Aug 5 15:03:19 2016 +0200

--
 CHANGES.txt| 1 +
 .../cassandra/cql3/validation/entities/StaticColumnsTest.java  | 2 ++
 2 files changed, 3 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7fe43094/CHANGES.txt
--



[5/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.9

2016-08-05 Thread slebresne
Merge branch 'cassandra-3.0' into cassandra-3.9

* cassandra-3.0:
  NullPointerException during compaction on table with static columns


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/21c92cab
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/21c92cab
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/21c92cab

Branch: refs/heads/cassandra-3.9
Commit: 21c92cab872d9dcbc2722c73555c9dddc4c30ece
Parents: 5e319bb b66e5a1
Author: Sylvain Lebresne 
Authored: Fri Aug 5 15:03:07 2016 +0200
Committer: Sylvain Lebresne 
Committed: Fri Aug 5 15:03:07 2016 +0200

--
 CHANGES.txt| 1 +
 .../cassandra/cql3/validation/entities/StaticColumnsTest.java  | 2 ++
 2 files changed, 3 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/21c92cab/CHANGES.txt
--
diff --cc CHANGES.txt
index bcfbdc9,046c8b3..289f370
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,10 -1,5 +1,11 @@@
 -3.0.9
 +3.9
 + * Fix nodetool tablestats miss SSTable count (CASSANDRA-12205)
 + * Fixed flacky SSTablesIteratedTest (CASSANDRA-12282)
 + * Fixed flacky SSTableRewriterTest: check file counts before calling 
validateCFS (CASSANDRA-12348)
 + * cqlsh: Fix handling of $$-escaped strings (CASSANDRA-12189)
 + * Fix SSL JMX requiring truststore containing server cert (CASSANDRA-12109)
 +Merged from 3.0:
+  * NullPointerException during compaction on table with static columns 
(CASSANDRA-12336)
   * Fixed ConcurrentModificationException when reading metrics in 
GraphiteReporter (CASSANDRA-11823)
   * Fix upgrade of super columns on thrift (CASSANDRA-12335)
   * Fixed flacky BlacklistingCompactionsTest, switched to fixed size types and 
increased corruption size (CASSANDRA-12359)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/21c92cab/test/unit/org/apache/cassandra/cql3/validation/entities/StaticColumnsTest.java
--



[2/6] cassandra git commit: NullPointerException during compaction on table with static columns

2016-08-05 Thread slebresne
NullPointerException during compaction on table with static columns

patch by Sylvain Lebresne; reviewed by Carl Yeksigian for CASSANDRA-12336


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b66e5a18
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b66e5a18
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b66e5a18

Branch: refs/heads/cassandra-3.9
Commit: b66e5a189674536903638b2028eaac23af85266b
Parents: cc8f6cc
Author: Sylvain Lebresne 
Authored: Fri Jul 29 12:36:40 2016 +0200
Committer: Sylvain Lebresne 
Committed: Fri Aug 5 15:02:20 2016 +0200

--
 CHANGES.txt   | 1 +
 src/java/org/apache/cassandra/db/transform/BaseRows.java  | 3 ++-
 .../cassandra/cql3/validation/entities/StaticColumnsTest.java | 2 ++
 3 files changed, 5 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b66e5a18/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 49733d3..046c8b3 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.9
+ * NullPointerException during compaction on table with static columns 
(CASSANDRA-12336)
  * Fixed ConcurrentModificationException when reading metrics in 
GraphiteReporter (CASSANDRA-11823)
  * Fix upgrade of super columns on thrift (CASSANDRA-12335)
  * Fixed flacky BlacklistingCompactionsTest, switched to fixed size types and 
increased corruption size (CASSANDRA-12359)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b66e5a18/src/java/org/apache/cassandra/db/transform/BaseRows.java
--
diff --git a/src/java/org/apache/cassandra/db/transform/BaseRows.java 
b/src/java/org/apache/cassandra/db/transform/BaseRows.java
index 7b0bb99..0586840 100644
--- a/src/java/org/apache/cassandra/db/transform/BaseRows.java
+++ b/src/java/org/apache/cassandra/db/transform/BaseRows.java
@@ -102,7 +102,8 @@ implements BaseRowIterator
 super.add(transformation);
 
 // transform any existing data
-staticRow = transformation.applyToStatic(staticRow);
+if (staticRow != null)
+staticRow = transformation.applyToStatic(staticRow);
 next = applyOne(next, transformation);
 }
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b66e5a18/test/unit/org/apache/cassandra/cql3/validation/entities/StaticColumnsTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/cql3/validation/entities/StaticColumnsTest.java
 
b/test/unit/org/apache/cassandra/cql3/validation/entities/StaticColumnsTest.java
index 75cbcc7..efa48ae 100644
--- 
a/test/unit/org/apache/cassandra/cql3/validation/entities/StaticColumnsTest.java
+++ 
b/test/unit/org/apache/cassandra/cql3/validation/entities/StaticColumnsTest.java
@@ -286,6 +286,8 @@ public class StaticColumnsTest extends CQLTester
 
 flush();
 
+Thread.sleep(1000);
+
 compact();
 
 assertRows(execute("SELECT * FROM %s"), row("k1", "c1", null, "v1"));
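The guard in the BaseRows.java hunk above can be illustrated with a small Python rendering (hypothetical names, not the actual Java classes): a transformation that dereferences the static row must be skipped for partitions that carry none.

```python
# Python rendering of the CASSANDRA-12336 fix: the metric-recording
# transformation touches the static row, so adding it to a partition
# with no static row must not call apply_to_static at all.
class MetricRecordingTransformation:
    def apply_to_static(self, static_row):
        static_row["rows_seen"] = static_row.get("rows_seen", 0) + 1
        return static_row

def add_transformation(static_row, transformation):
    # Before the patch this called apply_to_static unconditionally and
    # blew up on a missing static row (NullPointerException in the Java code).
    if static_row is not None:
        static_row = transformation.apply_to_static(static_row)
    return static_row

print(add_transformation(None, MetricRecordingTransformation()))  # None
print(add_transformation({}, MetricRecordingTransformation()))    # {'rows_seen': 1}
```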



[4/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.9

2016-08-05 Thread slebresne
Merge branch 'cassandra-3.0' into cassandra-3.9

* cassandra-3.0:
  NullPointerException during compaction on table with static columns


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/21c92cab
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/21c92cab
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/21c92cab

Branch: refs/heads/trunk
Commit: 21c92cab872d9dcbc2722c73555c9dddc4c30ece
Parents: 5e319bb b66e5a1
Author: Sylvain Lebresne 
Authored: Fri Aug 5 15:03:07 2016 +0200
Committer: Sylvain Lebresne 
Committed: Fri Aug 5 15:03:07 2016 +0200

--
 CHANGES.txt| 1 +
 .../cassandra/cql3/validation/entities/StaticColumnsTest.java  | 2 ++
 2 files changed, 3 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/21c92cab/CHANGES.txt
--
diff --cc CHANGES.txt
index bcfbdc9,046c8b3..289f370
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,10 -1,5 +1,11 @@@
 -3.0.9
 +3.9
 + * Fix nodetool tablestats miss SSTable count (CASSANDRA-12205)
 + * Fixed flacky SSTablesIteratedTest (CASSANDRA-12282)
 + * Fixed flacky SSTableRewriterTest: check file counts before calling 
validateCFS (CASSANDRA-12348)
 + * cqlsh: Fix handling of $$-escaped strings (CASSANDRA-12189)
 + * Fix SSL JMX requiring truststore containing server cert (CASSANDRA-12109)
 +Merged from 3.0:
+  * NullPointerException during compaction on table with static columns 
(CASSANDRA-12336)
   * Fixed ConcurrentModificationException when reading metrics in 
GraphiteReporter (CASSANDRA-11823)
   * Fix upgrade of super columns on thrift (CASSANDRA-12335)
   * Fixed flacky BlacklistingCompactionsTest, switched to fixed size types and 
increased corruption size (CASSANDRA-12359)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/21c92cab/test/unit/org/apache/cassandra/cql3/validation/entities/StaticColumnsTest.java
--



[3/6] cassandra git commit: NullPointerException during compaction on table with static columns

2016-08-05 Thread slebresne
NullPointerException during compaction on table with static columns

patch by Sylvain Lebresne; reviewed by Carl Yeksigian for CASSANDRA-12336


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b66e5a18
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b66e5a18
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b66e5a18

Branch: refs/heads/trunk
Commit: b66e5a189674536903638b2028eaac23af85266b
Parents: cc8f6cc
Author: Sylvain Lebresne 
Authored: Fri Jul 29 12:36:40 2016 +0200
Committer: Sylvain Lebresne 
Committed: Fri Aug 5 15:02:20 2016 +0200

--
 CHANGES.txt   | 1 +
 src/java/org/apache/cassandra/db/transform/BaseRows.java  | 3 ++-
 .../cassandra/cql3/validation/entities/StaticColumnsTest.java | 2 ++
 3 files changed, 5 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b66e5a18/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 49733d3..046c8b3 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.9
+ * NullPointerException during compaction on table with static columns 
(CASSANDRA-12336)
  * Fixed ConcurrentModificationException when reading metrics in 
GraphiteReporter (CASSANDRA-11823)
  * Fix upgrade of super columns on thrift (CASSANDRA-12335)
  * Fixed flacky BlacklistingCompactionsTest, switched to fixed size types and 
increased corruption size (CASSANDRA-12359)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b66e5a18/src/java/org/apache/cassandra/db/transform/BaseRows.java
--
diff --git a/src/java/org/apache/cassandra/db/transform/BaseRows.java 
b/src/java/org/apache/cassandra/db/transform/BaseRows.java
index 7b0bb99..0586840 100644
--- a/src/java/org/apache/cassandra/db/transform/BaseRows.java
+++ b/src/java/org/apache/cassandra/db/transform/BaseRows.java
@@ -102,7 +102,8 @@ implements BaseRowIterator
 super.add(transformation);
 
 // transform any existing data
-staticRow = transformation.applyToStatic(staticRow);
+if (staticRow != null)
+staticRow = transformation.applyToStatic(staticRow);
 next = applyOne(next, transformation);
 }
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b66e5a18/test/unit/org/apache/cassandra/cql3/validation/entities/StaticColumnsTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/cql3/validation/entities/StaticColumnsTest.java
 
b/test/unit/org/apache/cassandra/cql3/validation/entities/StaticColumnsTest.java
index 75cbcc7..efa48ae 100644
--- 
a/test/unit/org/apache/cassandra/cql3/validation/entities/StaticColumnsTest.java
+++ 
b/test/unit/org/apache/cassandra/cql3/validation/entities/StaticColumnsTest.java
@@ -286,6 +286,8 @@ public class StaticColumnsTest extends CQLTester
 
 flush();
 
+Thread.sleep(1000);
+
 compact();
 
 assertRows(execute("SELECT * FROM %s"), row("k1", "c1", null, "v1"));



[1/6] cassandra git commit: NullPointerException during compaction on table with static columns

2016-08-05 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 cc8f6cc51 -> b66e5a189
  refs/heads/cassandra-3.9 5e319bb69 -> 21c92cab8
  refs/heads/trunk 78e918024 -> 7fe430943


NullPointerException during compaction on table with static columns

patch by Sylvain Lebresne; reviewed by Carl Yeksigian for CASSANDRA-12336


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b66e5a18
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b66e5a18
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b66e5a18

Branch: refs/heads/cassandra-3.0
Commit: b66e5a189674536903638b2028eaac23af85266b
Parents: cc8f6cc
Author: Sylvain Lebresne 
Authored: Fri Jul 29 12:36:40 2016 +0200
Committer: Sylvain Lebresne 
Committed: Fri Aug 5 15:02:20 2016 +0200

--
 CHANGES.txt   | 1 +
 src/java/org/apache/cassandra/db/transform/BaseRows.java  | 3 ++-
 .../cassandra/cql3/validation/entities/StaticColumnsTest.java | 2 ++
 3 files changed, 5 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b66e5a18/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 49733d3..046c8b3 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.9
+ * NullPointerException during compaction on table with static columns 
(CASSANDRA-12336)
  * Fixed ConcurrentModificationException when reading metrics in 
GraphiteReporter (CASSANDRA-11823)
  * Fix upgrade of super columns on thrift (CASSANDRA-12335)
  * Fixed flacky BlacklistingCompactionsTest, switched to fixed size types and 
increased corruption size (CASSANDRA-12359)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b66e5a18/src/java/org/apache/cassandra/db/transform/BaseRows.java
--
diff --git a/src/java/org/apache/cassandra/db/transform/BaseRows.java 
b/src/java/org/apache/cassandra/db/transform/BaseRows.java
index 7b0bb99..0586840 100644
--- a/src/java/org/apache/cassandra/db/transform/BaseRows.java
+++ b/src/java/org/apache/cassandra/db/transform/BaseRows.java
@@ -102,7 +102,8 @@ implements BaseRowIterator
 super.add(transformation);
 
 // transform any existing data
-staticRow = transformation.applyToStatic(staticRow);
+if (staticRow != null)
+staticRow = transformation.applyToStatic(staticRow);
 next = applyOne(next, transformation);
 }
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b66e5a18/test/unit/org/apache/cassandra/cql3/validation/entities/StaticColumnsTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/cql3/validation/entities/StaticColumnsTest.java
 
b/test/unit/org/apache/cassandra/cql3/validation/entities/StaticColumnsTest.java
index 75cbcc7..efa48ae 100644
--- 
a/test/unit/org/apache/cassandra/cql3/validation/entities/StaticColumnsTest.java
+++ 
b/test/unit/org/apache/cassandra/cql3/validation/entities/StaticColumnsTest.java
@@ -286,6 +286,8 @@ public class StaticColumnsTest extends CQLTester
 
 flush();
 
+Thread.sleep(1000);
+
 compact();
 
 assertRows(execute("SELECT * FROM %s"), row("k1", "c1", null, "v1"));



[jira] [Updated] (CASSANDRA-12336) NullPointerException during compaction on table with static columns

2016-08-05 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-12336:
-
   Resolution: Fixed
Fix Version/s: 3.9
   Status: Resolved  (was: Patch Available)

Tests looked good so committed, thanks.

> NullPointerException during compaction on table with static columns
> ---
>
> Key: CASSANDRA-12336
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12336
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: cqlsh 5.0.1
> Cassandra 3.0.8-SNAPSHOT (3.0.x dev - a5cbb0)
>Reporter: Evan Prothro
>Assignee: Sylvain Lebresne
> Fix For: 3.0.9, 3.9
>
>
> After being affected by 
> https://issues.apache.org/jira/browse/CASSANDRA-11988, we built a5cbb0. 
> Compaction still fails with the following trace:
> {code}
> WARN  [SharedPool-Worker-2] 2016-07-28 10:51:56,111 
> AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-2,5,main]: {}
> java.lang.RuntimeException: java.lang.NullPointerException
>   at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2453)
>  ~[main/:na]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_72]
>   at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  ~[main/:na]
>   at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
>  [main/:na]
>   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [main/:na]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_72]
> Caused by: java.lang.NullPointerException: null
>   at 
> org.apache.cassandra.db.ReadCommand$1MetricRecording.applyToRow(ReadCommand.java:466)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadCommand$1MetricRecording.applyToStatic(ReadCommand.java:460)
>  ~[main/:na]
>   at org.apache.cassandra.db.transform.BaseRows.add(BaseRows.java:105) 
> ~[main/:na]
>   at 
> org.apache.cassandra.db.transform.UnfilteredRows.add(UnfilteredRows.java:41) 
> ~[main/:na]
>   at 
> org.apache.cassandra.db.transform.Transformation.add(Transformation.java:156) 
> ~[main/:na]
>   at 
> org.apache.cassandra.db.transform.Transformation.apply(Transformation.java:122)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadCommand$1MetricRecording.applyToPartition(ReadCommand.java:454)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadCommand$1MetricRecording.applyToPartition(ReadCommand.java:438)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:96)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:295)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:145)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:138)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:134)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:76) 
> ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:320) 
> ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1796)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2449)
>  ~[main/:na]
>   ... 5 common frames omitted
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12389) Make SASI work with RandomPartitioner

2016-08-05 Thread Alex Petrov (JIRA)
Alex Petrov created CASSANDRA-12389:
---

 Summary: Make SASI work with RandomPartitioner
 Key: CASSANDRA-12389
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12389
 Project: Cassandra
  Issue Type: Improvement
  Components: sasi
Reporter: Alex Petrov


Currently, SASI works only with the Murmur3Partitioner. In order to improve 
adoption in existing clusters, we need to enable it to be used with other 
partitioners. 

RandomPartitioner is the simplest case, since its tokens are fixed-size. I've 
run several tests and it works with RandomPartitioner quite well, with the 
exception of the test suite, which may require more work in order to cover it 
fairly well. 

During [CASSANDRA-11990], some work has been done to ease this transition. 
Namely, the Token class is used everywhere instead of {{long}} tokens, and the 
serialisation logic is abstracted into a single place. 
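For context on why fixed-size tokens are the easy case: both Murmur3Partitioner and RandomPartitioner tokens serialize to a constant width (8 bytes for a long, at most 16 bytes for RandomPartitioner's 127-bit integer), so an on-disk token block can still be addressed by simple offset arithmetic. A minimal, hypothetical sketch (illustrative names, not the actual SASI code):

```java
import java.math.BigInteger;
import java.nio.ByteBuffer;

/** Hypothetical sketch of fixed-size token encoding; not actual SASI code. */
public class FixedSizeTokens
{
    // Murmur3Partitioner tokens are 8-byte signed longs.
    static ByteBuffer encodeMurmur3(long token)
    {
        return (ByteBuffer) ByteBuffer.allocate(8).putLong(token).flip();
    }

    // RandomPartitioner tokens are non-negative 127-bit integers; left-padding
    // to a constant 16 bytes keeps every entry the same width.
    static ByteBuffer encodeRandom(BigInteger token)
    {
        byte[] raw = token.toByteArray();
        ByteBuffer out = ByteBuffer.allocate(16);
        out.position(16 - raw.length);
        out.put(raw);
        out.flip();
        return out;
    }

    // With a fixed width, the i-th token lives at a computable offset, which
    // is what allows binary search over the block without skip pointers.
    static int offsetOf(int index, int tokenSize)
    {
        return index * tokenSize;
    }
}
```

The point of the sketch is the last method: as long as every token occupies the same number of bytes, lookups stay O(log n) over a flat buffer regardless of which partitioner produced the tokens.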



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12390) Make SASI work with partitioners that have variable-size tokens

2016-08-05 Thread Alex Petrov (JIRA)
Alex Petrov created CASSANDRA-12390:
---

 Summary: Make SASI work with partitioners that have variable-size 
tokens
 Key: CASSANDRA-12390
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12390
 Project: Cassandra
  Issue Type: Improvement
  Components: sasi
Reporter: Alex Petrov


At the moment, SASI indexes can work only with Murmur3Partitioner. 
[CASSANDRA-12389] was created to enable support for one more partitioner with 
fixed-size tokens. Enabling variable-size tokens will need more work, namely 
token skipping, since in that case we can no longer rely on fixed-size 
multiplication to calculate offsets.

This change won't require on-disk format changes, although supporting 
ByteOrderedPartitioner is not a very high priority, and performance will be 
worse because of the "manual" skipping. 
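The offset problem described above can be sketched as follows: with length-prefixed variable-size tokens there is no {{index * TOKEN_SIZE}} shortcut, so reaching the n-th entry means walking and skipping the entries before it (a hypothetical layout for illustration, not the actual SASI on-disk format):

```java
import java.nio.ByteBuffer;

/** Hypothetical sketch of length-prefixed variable-size tokens (e.g. for
 *  ByteOrderedPartitioner); not the actual SASI on-disk format. */
public class VariableSizeTokens
{
    // Serialize tokens as [2-byte length][payload] records, back to back.
    static ByteBuffer write(byte[][] tokens)
    {
        int size = 0;
        for (byte[] t : tokens)
            size += 2 + t.length;
        ByteBuffer out = ByteBuffer.allocate(size);
        for (byte[] t : tokens)
        {
            out.putShort((short) t.length);
            out.put(t);
        }
        out.flip();
        return out;
    }

    // Without a fixed width, finding the n-th token requires "manually"
    // skipping over the n preceding records, one length prefix at a time.
    static int offsetOf(ByteBuffer buf, int n)
    {
        int pos = 0;
        for (int i = 0; i < n; i++)
            pos += 2 + buf.getShort(pos);
        return pos;
    }
}
```

This linear skipping is exactly why the ticket expects worse performance than the fixed-size case, unless additional skip structures are introduced.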



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11726) IndexOutOfBoundsException when selecting (distinct) row ids from counter table.

2016-08-05 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15409449#comment-15409449
 ] 

Sylvain Lebresne commented on CASSANDRA-11726:
--

bq. I think we can preserve the CASSANDRA-10657 optimisation overall even for 
counters, with a bit more work, in a follow-up ticket?

Yes, we can make the merge function happy with empty byte buffers, and it's 
not really more work. The only reason I didn't do that was that during an 
upgrade from 2.x we'll have a mix of results with values and others without, 
and having a merge that returns an empty byte buffer if any of its arguments 
is one, even when another isn't, felt possibly a bit dangerous (as in, is 
there a risk it will silently discard things in a case we didn't intend?).

That said, it's not a very objectively founded fear, and doing that is 
probably fine. And if we do it, it's probably worth doing right now rather 
than delaying, so I'm attaching an alternative that does just that. I don't 
have a strong preference for one version over the other, so feel free to let 
me know which one you're happier with. 

| 
[11726-3.9-alternative|https://github.com/pcmanus/cassandra/commits/11726-3.9-alternative]
 | 
[utests|http://cassci.datastax.com/job/pcmanus-11726-3.9-alternative-testall] | 
[dtests|http://cassci.datastax.com/job/pcmanus-11726-3.9-alternative-dtest] |
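The "empty wins" merge semantics being weighed above can be illustrated with a minimal sketch (hypothetical code, not the actual Cells/CounterContext reconciliation; the fallback branch is a placeholder):

```java
import java.nio.ByteBuffer;

/** Hypothetical illustration of the merge semantics discussed above: an empty
 *  buffer (a value-stripped result) poisons the merge, so mixed results from
 *  replicas with and without values stay empty rather than half-resurrected. */
public class EmptyAwareMerge
{
    static final ByteBuffer EMPTY = ByteBuffer.allocate(0);

    static ByteBuffer merge(ByteBuffer left, ByteBuffer right)
    {
        // If either side had its value stripped, the merged result must not
        // reintroduce a value; propagate the empty marker instead.
        if (!left.hasRemaining() || !right.hasRemaining())
            return EMPTY;
        // Placeholder for the real reconciliation (counter-context merging is
        // far more involved); here we just pick the lexically larger buffer.
        return left.compareTo(right) >= 0 ? left : right;
    }
}
```

The risk the comment describes is visible in the first branch: one empty argument silently discards the other side's value, which is intended here but could surprise if an empty buffer ever appears where it wasn't meant to.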


> IndexOutOfBoundsException when selecting (distinct) row ids from counter 
> table.
> ---
>
> Key: CASSANDRA-11726
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11726
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: C* 3.5, cluster of 4 nodes.
>Reporter: Jaroslav Kamenik
>Assignee: Sylvain Lebresne
> Fix For: 3.x
>
>
> I have simple table containing counters:
> {code}
> CREATE TABLE tablename (
> object_id ascii,
> counter_id ascii,
> count counter,
> PRIMARY KEY (object_id, counter_id)
> ) WITH CLUSTERING ORDER BY (counter_id ASC)
> AND bloom_filter_fp_chance = 0.01
> AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
> 'max_threshold': '32', 'min_threshold': '4'}
> AND compression = {'enabled': 'false'}
> AND crc_check_chance = 1.0
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99PERCENTILE';
> {code}
> Counters are often inc/decreased, whole rows are queried, deleted sometimes.
> After some time I tried to query all object_ids, but it failed with:
> {code}
> cqlsh:woc> consistency quorum;
> cqlsh:woc> select object_id from tablename;
> ServerError:  message="java.lang.IndexOutOfBoundsException">
> {code}
> select * from ..., select where .., updates works well..
> With consistency one it works sometimes, so it seems something is broken at 
> one server, but I tried to repair table there and it did not help. 
> Whole exception from server log:
> {code}
> java.lang.IndexOutOfBoundsException: null
> at java.nio.Buffer.checkIndex(Buffer.java:546) ~[na:1.8.0_73]
> at java.nio.HeapByteBuffer.getShort(HeapByteBuffer.java:314) 
> ~[na:1.8.0_73]
> at 
> org.apache.cassandra.db.context.CounterContext.headerLength(CounterContext.java:141)
>  ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.db.context.CounterContext.access$100(CounterContext.java:76)
>  ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.db.context.CounterContext$ContextState.<init>(CounterContext.java:758)
>  ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.db.context.CounterContext$ContextState.wrap(CounterContext.java:765)
>  ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.db.context.CounterContext.merge(CounterContext.java:271) 
> ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.db.Conflicts.mergeCounterValues(Conflicts.java:76) 
> ~[apache-cassandra-3.5.jar:3.5]
> at org.apache.cassandra.db.rows.Cells.reconcile(Cells.java:143) 
> ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.db.rows.Row$Merger$ColumnDataReducer.getReduced(Row.java:591)
>  ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.db.rows.Row$Merger$ColumnDataReducer.getReduced(Row.java:549)
>  ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:217)
>  ~[apache-cassandra-3.5.jar:3.5]
>   

[jira] [Comment Edited] (CASSANDRA-11990) Address rows rather than partitions in SASI

2016-08-05 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15408322#comment-15408322
 ] 

Alex Petrov edited comment on CASSANDRA-11990 at 8/5/16 1:29 PM:
-

I've tested the random partitioner and it (mostly) works, although, as 
suggested, I'd highly advise bringing in support for different partitioners in 
a follow-up patch. The reason for leaving the random partitioner out is that 
the test suite will require quite some refactoring, and the current patch is 
already quite big. I've implemented a serialization helper that abstracts all 
the logic required to implement fixed-size token reading. For variable-size 
token reading we'll have to invest some more work and implement skipping in 
some reasonable way.

I've implemented support for reading from the old format (ab). To make the 
testing more complete, I'm going to be working on upgrade tests for SASI. 
dtests are also missing for SASI, so I'll create a ticket for that.

|[trunk|https://github.com/ifesdjeen/cassandra/tree/11990-trunk]|[testall|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-11182-trunk-testall/]|[dtest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-11182-trunk-dtest/]|

[CASSANDRA-12389] and [CASSANDRA-12390] to track the follow-up work on 
supporting different partitioners.


was (Author: ifesdjeen):
I've tested random partitioner and it (mostly) works, although as suggested I'd 
highly advise bringing in support for different partitioners for the next 
patch. Reason for leaving the random partitioner out is that the test suite 
will require quite some refactoring, and current patch is already quite big. 
I've implemented a serialization helper that abstracts all the logic that will 
be required to implement a fixed-size token reading. For variable-size token 
reading we'll have to invest some more work and implement skipping in some 
reasonable way.

I've implemented the support for reading from old format (ab). Although to make 
the testing more complete, I'm going to be working on the upgrade tests for 
SASI. Also, dtests are missing for SASI, so I'll create a ticket for that.

|[trunk|https://github.com/ifesdjeen/cassandra/tree/11990-trunk]|[testall|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-11182-trunk-testall/]|[dtest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-11182-trunk-dtest/]|

> Address rows rather than partitions in SASI
> ---
>
> Key: CASSANDRA-11990
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11990
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Alex Petrov
>Assignee: Alex Petrov
> Attachments: perf.pdf, size_comparison.png
>
>
> Currently, the lookup in SASI index would return the key position of the 
> partition. After the partition lookup, the rows are iterated and the 
> operators are applied in order to filter out ones that do not match.
> bq. TokenTree which accepts variable size keys (such would enable different 
> partitioners, collections support, primary key indexing etc.), 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12391) Create a dtest and upgrade test suites for SASI

2016-08-05 Thread Alex Petrov (JIRA)
Alex Petrov created CASSANDRA-12391:
---

 Summary: Create a dtest and upgrade test suites for SASI
 Key: CASSANDRA-12391
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12391
 Project: Cassandra
  Issue Type: Test
Reporter: Alex Petrov


Right now, SASI is covered only with unit tests. In order to improve coverage, 
we need to cover it with dtests and possibly add upgrade tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12251) dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_3_x_To_indev_3_x.whole_list_conditional_test

2016-08-05 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-12251:

Status: Patch Available  (was: Open)

|[trunk|https://github.com/ifesdjeen/cassandra/tree/12251-upgrade-trunk]|[dtest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12251-upgrade-trunk-dtest/]|[testall|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12251-upgrade-trunk-testall/]|[upgrade|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12251-upgrade-trunk-upgrade/]|

> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_3_x_To_indev_3_x.whole_list_conditional_test
> --
>
> Key: CASSANDRA-12251
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12251
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Philip Thompson
>Assignee: Alex Petrov
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.8_dtest_upgrade/1/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_3_x_To_indev_3_x/whole_list_conditional_test
> Failed on CassCI build cassandra-3.8_dtest_upgrade #1
> Relevant error in logs is
> {code}
> Unexpected error in node1 log, error: 
> ERROR [InternalResponseStage:2] 2016-07-20 04:58:45,876 
> CassandraDaemon.java:217 - Exception in thread 
> Thread[InternalResponseStage:2,5,main]
> java.util.concurrent.RejectedExecutionException: ThreadPoolExecutor has shut 
> down
>   at 
> org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor$1.rejectedExecution(DebuggableThreadPoolExecutor.java:61)
>  ~[apache-cassandra-3.7.jar:3.7]
>   at 
> java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:823) 
> ~[na:1.8.0_51]
>   at 
> java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1369) 
> ~[na:1.8.0_51]
>   at 
> org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor.execute(DebuggableThreadPoolExecutor.java:165)
>  ~[apache-cassandra-3.7.jar:3.7]
>   at 
> java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112)
>  ~[na:1.8.0_51]
>   at 
> org.apache.cassandra.db.ColumnFamilyStore.switchMemtable(ColumnFamilyStore.java:842)
>  ~[apache-cassandra-3.7.jar:3.7]
>   at 
> org.apache.cassandra.db.ColumnFamilyStore.switchMemtableIfCurrent(ColumnFamilyStore.java:822)
>  ~[apache-cassandra-3.7.jar:3.7]
>   at 
> org.apache.cassandra.db.ColumnFamilyStore.forceFlush(ColumnFamilyStore.java:891)
>  ~[apache-cassandra-3.7.jar:3.7]
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.lambda$flush$1(SchemaKeyspace.java:279)
>  ~[apache-cassandra-3.7.jar:3.7]
>   at 
> org.apache.cassandra.schema.SchemaKeyspace$$Lambda$200/1129213153.accept(Unknown
>  Source) ~[na:na]
>   at java.lang.Iterable.forEach(Iterable.java:75) ~[na:1.8.0_51]
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.flush(SchemaKeyspace.java:279) 
> ~[apache-cassandra-3.7.jar:3.7]
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.mergeSchema(SchemaKeyspace.java:1271)
>  ~[apache-cassandra-3.7.jar:3.7]
>   at 
> org.apache.cassandra.schema.SchemaKeyspace.mergeSchemaAndAnnounceVersion(SchemaKeyspace.java:1253)
>  ~[apache-cassandra-3.7.jar:3.7]
>   at 
> org.apache.cassandra.service.MigrationTask$1.response(MigrationTask.java:92) 
> ~[apache-cassandra-3.7.jar:3.7]
>   at 
> org.apache.cassandra.net.ResponseVerbHandler.doVerb(ResponseVerbHandler.java:53)
>  ~[apache-cassandra-3.7.jar:3.7]
>   at 
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:64) 
> ~[apache-cassandra-3.7.jar:3.7]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_51]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_51]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_51]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_51]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_51]
> {code}
> This is on a mixed 3.0.8, 3.8-tentative cluster



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11990) Address rows rather than partitions in SASI

2016-08-05 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-11990:

Component/s: sasi

> Address rows rather than partitions in SASI
> ---
>
> Key: CASSANDRA-11990
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11990
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL, sasi
>Reporter: Alex Petrov
>Assignee: Alex Petrov
> Attachments: perf.pdf, size_comparison.png
>
>
> Currently, the lookup in SASI index would return the key position of the 
> partition. After the partition lookup, the rows are iterated and the 
> operators are applied in order to filter out ones that do not match.
> bq. TokenTree which accepts variable size keys (such would enable different 
> partitioners, collections support, primary key indexing etc.), 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-12223) SASI Indexes querying incorrectly return 0 rows

2016-08-05 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov reassigned CASSANDRA-12223:
---

Assignee: Alex Petrov

> SASI Indexes querying incorrectly return 0 rows
> ---
>
> Key: CASSANDRA-12223
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12223
> Project: Cassandra
>  Issue Type: Bug
>  Components: sasi
> Environment: Windows, DataStax Distribution
>Reporter: Qiu Zhida
>Assignee: Alex Petrov
> Fix For: 3.7
>
>
> I just started working with the SASI index on Cassandra 3.7.0 and 
> encountered a problem which, as I suspected, was a bug. It took some effort 
> to track down the situation in which the bug shows up; here is what I found:
> When querying with a SASI index, *it may incorrectly return 0 rows*, and 
> after changing the conditions slightly it works again, as in the following 
> CQL code:
> {code:title=CQL|borderStyle=solid}
> CREATE TABLE IF NOT EXISTS roles (
> name text,
> a int,
> b int,
> PRIMARY KEY ((name, a), b)
> ) WITH CLUSTERING ORDER BY (b DESC);
> 
> insert into roles (name,a,b) values ('Joe',1,1);
> insert into roles (name,a,b) values ('Joe',2,2);
> insert into roles (name,a,b) values ('Joe',3,3);
> insert into roles (name,a,b) values ('Joe',4,4);
> CREATE TABLE IF NOT EXISTS roles2 (
> name text,
> a int,
> b int,
> PRIMARY KEY ((name, a), b)
> ) WITH CLUSTERING ORDER BY (b ASC);
> 
> insert into roles2 (name,a,b) values ('Joe',1,1);
> insert into roles2 (name,a,b) values ('Joe',2,2);
> insert into roles2 (name,a,b) values ('Joe',3,3);
> insert into roles2 (name,a,b) values ('Joe',4,4);
> CREATE CUSTOM INDEX ON roles (b) USING 
> 'org.apache.cassandra.index.sasi.SASIIndex' 
> WITH OPTIONS = { 'mode': 'SPARSE' };
> CREATE CUSTOM INDEX ON roles2 (b) USING 
> 'org.apache.cassandra.index.sasi.SASIIndex' 
> WITH OPTIONS = { 'mode': 'SPARSE' };
> {code}
> Note that table *roles2* differs from table *roles* only in changing 
> '*CLUSTERING ORDER BY (b DESC)*' to '*CLUSTERING ORDER BY (b ASC)*'.
> When querying with the statement +select * from roles2 where b<3+, the 
> result is two rows:
> {code:title=CQL|borderStyle=solid}
>  name | a | b
> --+---+---
>   Joe | 1 | 1
>   Joe | 2 | 2
> (2 rows)
> {code}
> However, when querying with +select * from roles where b<3+, it returns no 
> rows at all:
> {code:title=CQL|borderStyle=solid}
>  name | a | b
> --+---+---
> (0 rows)
> {code}
> This is not the only situation where the bug shows up. One time I created a 
> SASI index with a specific name like 'end_idx' on column 'end' and the bug 
> appeared; when I didn't specify the index name, it went away.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[08/23] cassandra git commit: Merge commit 'bd6654733dded3513c2c7acf96df2c364b0c043e' into cassandra-2.2

2016-08-05 Thread slebresne
Merge commit 'bd6654733dded3513c2c7acf96df2c364b0c043e' into cassandra-2.2

* commit 'bd6654733dded3513c2c7acf96df2c364b0c043e':
  Disable passing control to post-flush after flush failure to prevent data 
loss.


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6dc1745e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6dc1745e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6dc1745e

Branch: refs/heads/cassandra-2.2
Commit: 6dc1745edd8d3861d853ee56f49ac67633a753b0
Parents: 0398521 bd66547
Author: Sylvain Lebresne 
Authored: Fri Aug 5 15:36:29 2016 +0200
Committer: Sylvain Lebresne 
Committed: Fri Aug 5 15:37:11 2016 +0200

--
 CHANGES.txt |   3 +
 .../apache/cassandra/db/ColumnFamilyStore.java  |  67 +---
 .../apache/cassandra/cql3/OutOfSpaceBase.java   |  95 +++
 .../cassandra/cql3/OutOfSpaceDieTest.java   |  68 
 .../cassandra/cql3/OutOfSpaceIgnoreTest.java|  60 +++
 .../cassandra/cql3/OutOfSpaceStopTest.java  |  63 
 .../apache/cassandra/cql3/OutOfSpaceTest.java   | 157 ---
 7 files changed, 336 insertions(+), 177 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6dc1745e/CHANGES.txt
--
diff --cc CHANGES.txt
index 87228d3,1275631..7fcf373
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,55 -1,13 +1,58 @@@
 +2.2.8
 + * Release sstables of failed stream sessions only when outgoing transfers 
are finished (CASSANDRA-11345)
 + * Revert CASSANDRA-11427 (CASSANDRA-12351)
 + * Wait for tracing events before returning response and query at same 
consistency level client side (CASSANDRA-11465)
 + * cqlsh copyutil should get host metadata by connected address 
(CASSANDRA-11979)
 + * Fixed cqlshlib.test.remove_test_db (CASSANDRA-12214)
 + * Synchronize ThriftServer::stop() (CASSANDRA-12105)
 + * Use dedicated thread for JMX notifications (CASSANDRA-12146)
 + * NPE when trying to remove purgable tombstones from result (CASSANDRA-12143)
 + * Improve streaming synchronization and fault tolerance (CASSANDRA-11414)
 + * MemoryUtil.getShort() should return an unsigned short also for 
architectures not supporting unaligned memory accesses (CASSANDRA-11973)
 + * Don't write shadowed range tombstone (CASSANDRA-12030)
 +Merged from 2.1:
++===
+ 2.1.16
+  * Disable passing control to post-flush after flush failure to prevent data 
loss (CASSANDRA-11828)
   * Allow STCS-in-L0 compactions to reduce scope with LCS (CASSANDRA-12040)
   * cannot use cql since upgrading python to 2.7.11+ (CASSANDRA-11850)
 - * Fix filtering on clustering columns when 2i is used (CASSANDRA-11907)
   * Improve digest calculation in the presence of overlapping tombstones 
(CASSANDRA-11349)
 + * Fix filtering on clustering columns when 2i is used (CASSANDRA-11907)
 + * Account for partition deletions in tombstone histogram (CASSANDRA-12112)
  
  
 -2.1.15
 - * Account for partition deletions in tombstone histogram (CASSANDRA-12112)
 +2.2.7
 + * Allow nodetool info to run with readonly JMX access (CASSANDRA-11755)
 + * Validate bloom_filter_fp_chance against lowest supported
 +   value when the table is created (CASSANDRA-11920)
 + * RandomAccessReader: call isEOF() only when rebuffering, not for every read 
operation (CASSANDRA-12013)
 + * Don't send erroneous NEW_NODE notifications on restart (CASSANDRA-11038)
 + * StorageService shutdown hook should use a volatile variable 
(CASSANDRA-11984)
 + * Persist local metadata earlier in startup sequence (CASSANDRA-11742)
 + * Run CommitLog tests with different compression settings (CASSANDRA-9039)
 + * cqlsh: fix tab completion for case-sensitive identifiers (CASSANDRA-11664)
 + * Avoid showing estimated key as -1 in tablestats (CASSANDRA-11587)
 + * Fix possible race condition in CommitLog.recover (CASSANDRA-11743)
 + * Enable client encryption in sstableloader with cli options 
(CASSANDRA-11708)
 + * Possible memory leak in NIODataInputStream (CASSANDRA-11867)
 + * Fix commit log replay after out-of-order flush completion (CASSANDRA-9669)
 + * Add seconds to cqlsh tracing session duration (CASSANDRA-11753)
 + * Prohibit Reverse Counter type as part of the PK (CASSANDRA-9395)
 + * cqlsh: correctly handle non-ascii chars in error messages (CASSANDRA-11626)
 + * Exit JVM if JMX server fails to startup (CASSANDRA-11540)
 + * Produce a heap dump when exiting on OOM (CASSANDRA-9861)
 + * Avoid read repairing purgeable tombstones on range slices (CASSANDRA-11427)
 + * Restore ability to filter on clustering columns when using a 2i 
(CASSANDRA-11510)
 + * JSON datetime formatting needs timezone (CASSANDRA-11137)
 + * Fix is_dense recalculation for Thrift-updated tables (CASSANDRA-11502)
 + * Remove

[13/23] cassandra git commit: Change commitlog and sstables to track dirty and clean intervals.

2016-08-05 Thread slebresne
http://git-wip-us.apache.org/repos/asf/cassandra/blob/904cb5d1/src/java/org/apache/cassandra/utils/IntegerInterval.java
--
diff --git a/src/java/org/apache/cassandra/utils/IntegerInterval.java b/src/java/org/apache/cassandra/utils/IntegerInterval.java
new file mode 100644
index 000..03ad6e0
--- /dev/null
+++ b/src/java/org/apache/cassandra/utils/IntegerInterval.java
@@ -0,0 +1,227 @@
+package org.apache.cassandra.utils;
+
+import java.util.*;
+import java.util.concurrent.atomic.AtomicLongFieldUpdater;
+import java.util.stream.Collectors;
+
+import com.google.common.collect.Lists;
+import com.google.common.primitives.Longs;
+
+/**
+ * Mutable integer interval class, thread-safe.
+ * Represents the interval [lower,upper].
+ */
+public class IntegerInterval
+{
+volatile long interval;
+private static AtomicLongFieldUpdater<IntegerInterval> intervalUpdater =
+AtomicLongFieldUpdater.newUpdater(IntegerInterval.class, "interval");
+
+private IntegerInterval(long interval)
+{
+this.interval = interval;
+}
+
+public IntegerInterval(int lower, int upper)
+{
+this(make(lower, upper));
+}
+
+public IntegerInterval(IntegerInterval src)
+{
+this(src.interval);
+}
+
+public int lower()
+{
+return lower(interval);
+}
+
+public int upper()
+{
+return upper(interval);
+}
+
+/**
+ * Expands the interval to cover the given value by extending one of its 
sides if necessary.
+ * Mutates this. Thread-safe.
+ */
+public void expandToCover(int value)
+{
+long prev;
+int lower;
+int upper;
+do
+{
+prev = interval;
+upper = upper(prev);
+lower = lower(prev);
+if (value > upper) // common case
+upper = value;
+else if (value < lower)
+lower = value;
+}
+while (!intervalUpdater.compareAndSet(this, prev, make(lower, upper)));
+}
+
+@Override
+public int hashCode()
+{
+return Long.hashCode(interval);
+}
+
+@Override
+public boolean equals(Object obj)
+{
+if (getClass() != obj.getClass())
+return false;
+IntegerInterval other = (IntegerInterval) obj;
+return interval == other.interval;
+}
+
+public String toString()
+{
+long interval = this.interval;
+return "[" + lower(interval) + "," + upper(interval) + "]";
+}
+
+private static long make(int lower, int upper)
+{
+assert lower <= upper;
+return ((lower & 0xFFFFFFFFL) << 32) | upper & 0xFFFFFFFFL;
+}
+
+private static int lower(long interval)
+{
+return (int) (interval >>> 32);
+}
+
+private static int upper(long interval)
+{
+return (int) interval;
+}
+
+
+/**
+ * A mutable set of closed integer intervals, stored in normalized form 
(i.e. where overlapping intervals are
+ * converted to a single interval covering both). Thread-safe.
+ */
+public static class Set
+{
+static long[] EMPTY = new long[0];
+
+private volatile long[] ranges = EMPTY;
+
+/**
+ * Adds an interval to the set, performing the necessary normalization.
+ */
+public synchronized void add(int start, int end)
+{
+assert start <= end;
+long[] ranges, newRanges;
+{
+ranges = this.ranges; // take local copy to avoid risk of it 
changing in the midst of operation
+
+// extend ourselves to cover any ranges we overlap
+// record directly preceding our end may extend past us, so 
take the max of our end and its
+int rpos = Arrays.binarySearch(ranges, ((end & 0xFFFFFFFFL) 
<< 32) | 0xFFFFFFFFL); // floor (i.e. greatest <=) of the end position
+if (rpos < 0)
+rpos = (-1 - rpos) - 1;
+if (rpos >= 0)
+{
+int extend = upper(ranges[rpos]);
+if (extend > end)
+end = extend;
+}
+
+// record directly preceding our start may extend into us; if 
it does, we take it as our start
+int lpos = Arrays.binarySearch(ranges, ((start & 
0xFFFFFFFFL) << 32) | 0); // lower (i.e. greatest <) of the start position
+if (lpos < 0)
+lpos = -1 - lpos;
+lpos -= 1;
+if (lpos >= 0)
+{
+if (upper(ranges[lpos]) >= start)
+{
+start = lower(ranges[lpos]);
+--lpos;
+}
+}
+
+newRanges = new long[ranges.length - (rpos - lpos) + 1];
+int dest = 0;
+ 

[09/23] cassandra git commit: Merge commit 'bd6654733dded3513c2c7acf96df2c364b0c043e' into cassandra-2.2

2016-08-05 Thread slebresne
Merge commit 'bd6654733dded3513c2c7acf96df2c364b0c043e' into cassandra-2.2

* commit 'bd6654733dded3513c2c7acf96df2c364b0c043e':
  Disable passing control to post-flush after flush failure to prevent data 
loss.


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6dc1745e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6dc1745e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6dc1745e

Branch: refs/heads/trunk
Commit: 6dc1745edd8d3861d853ee56f49ac67633a753b0
Parents: 0398521 bd66547
Author: Sylvain Lebresne 
Authored: Fri Aug 5 15:36:29 2016 +0200
Committer: Sylvain Lebresne 
Committed: Fri Aug 5 15:37:11 2016 +0200

--
 CHANGES.txt |   3 +
 .../apache/cassandra/db/ColumnFamilyStore.java  |  67 +---
 .../apache/cassandra/cql3/OutOfSpaceBase.java   |  95 +++
 .../cassandra/cql3/OutOfSpaceDieTest.java   |  68 
 .../cassandra/cql3/OutOfSpaceIgnoreTest.java|  60 +++
 .../cassandra/cql3/OutOfSpaceStopTest.java  |  63 
 .../apache/cassandra/cql3/OutOfSpaceTest.java   | 157 ---
 7 files changed, 336 insertions(+), 177 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6dc1745e/CHANGES.txt
--
diff --cc CHANGES.txt
index 87228d3,1275631..7fcf373
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,55 -1,13 +1,58 @@@
 +2.2.8
 + * Release sstables of failed stream sessions only when outgoing transfers 
are finished (CASSANDRA-11345)
 + * Revert CASSANDRA-11427 (CASSANDRA-12351)
 + * Wait for tracing events before returning response and query at same 
consistency level client side (CASSANDRA-11465)
 + * cqlsh copyutil should get host metadata by connected address 
(CASSANDRA-11979)
 + * Fixed cqlshlib.test.remove_test_db (CASSANDRA-12214)
 + * Synchronize ThriftServer::stop() (CASSANDRA-12105)
 + * Use dedicated thread for JMX notifications (CASSANDRA-12146)
 + * NPE when trying to remove purgable tombstones from result (CASSANDRA-12143)
 + * Improve streaming synchronization and fault tolerance (CASSANDRA-11414)
 + * MemoryUtil.getShort() should return an unsigned short also for 
architectures not supporting unaligned memory accesses (CASSANDRA-11973)
 + * Don't write shadowed range tombstone (CASSANDRA-12030)
 +Merged from 2.1:
++===
+ 2.1.16
+  * Disable passing control to post-flush after flush failure to prevent data 
loss (CASSANDRA-11828)
   * Allow STCS-in-L0 compactions to reduce scope with LCS (CASSANDRA-12040)
   * cannot use cql since upgrading python to 2.7.11+ (CASSANDRA-11850)
 - * Fix filtering on clustering columns when 2i is used (CASSANDRA-11907)
   * Improve digest calculation in the presence of overlapping tombstones 
(CASSANDRA-11349)
 + * Fix filtering on clustering columns when 2i is used (CASSANDRA-11907)
 + * Account for partition deletions in tombstone histogram (CASSANDRA-12112)
  
  
 -2.1.15
 - * Account for partition deletions in tombstone histogram (CASSANDRA-12112)
 +2.2.7
 + * Allow nodetool info to run with readonly JMX access (CASSANDRA-11755)
 + * Validate bloom_filter_fp_chance against lowest supported
 +   value when the table is created (CASSANDRA-11920)
 + * RandomAccessReader: call isEOF() only when rebuffering, not for every read 
operation (CASSANDRA-12013)
 + * Don't send erroneous NEW_NODE notifications on restart (CASSANDRA-11038)
 + * StorageService shutdown hook should use a volatile variable 
(CASSANDRA-11984)
 + * Persist local metadata earlier in startup sequence (CASSANDRA-11742)
 + * Run CommitLog tests with different compression settings (CASSANDRA-9039)
 + * cqlsh: fix tab completion for case-sensitive identifiers (CASSANDRA-11664)
 + * Avoid showing estimated key as -1 in tablestats (CASSANDRA-11587)
 + * Fix possible race condition in CommitLog.recover (CASSANDRA-11743)
 + * Enable client encryption in sstableloader with cli options 
(CASSANDRA-11708)
 + * Possible memory leak in NIODataInputStream (CASSANDRA-11867)
 + * Fix commit log replay after out-of-order flush completion (CASSANDRA-9669)
 + * Add seconds to cqlsh tracing session duration (CASSANDRA-11753)
 + * Prohibit Reverse Counter type as part of the PK (CASSANDRA-9395)
 + * cqlsh: correctly handle non-ascii chars in error messages (CASSANDRA-11626)
 + * Exit JVM if JMX server fails to startup (CASSANDRA-11540)
 + * Produce a heap dump when exiting on OOM (CASSANDRA-9861)
 + * Avoid read repairing purgeable tombstones on range slices (CASSANDRA-11427)
 + * Restore ability to filter on clustering columns when using a 2i 
(CASSANDRA-11510)
 + * JSON datetime formatting needs timezone (CASSANDRA-11137)
 + * Fix is_dense recalculation for Thrift-updated tables (CASSANDRA-11502)
 + * Remove unnesce

[14/23] cassandra git commit: Change commitlog and sstables to track dirty and clean intervals.

2016-08-05 Thread slebresne
Change commitlog and sstables to track dirty and clean intervals.

patch by Branimir Lambov; reviewed by Sylvain Lebresne for
CASSANDRA-11828


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/904cb5d1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/904cb5d1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/904cb5d1

Branch: refs/heads/trunk
Commit: 904cb5d10e0de1a6ca89249be8c257ed38a80ef0
Parents: cf85f52
Author: Branimir Lambov 
Authored: Sat May 14 11:31:16 2016 +0300
Committer: Sylvain Lebresne 
Committed: Fri Aug 5 15:38:37 2016 +0200

--
 CHANGES.txt |   1 +
 .../cassandra/db/BlacklistedDirectories.java|  13 +
 .../apache/cassandra/db/ColumnFamilyStore.java  |  66 +---
 .../org/apache/cassandra/db/Directories.java|   2 +-
 src/java/org/apache/cassandra/db/Memtable.java  |  18 +-
 .../cassandra/db/commitlog/CommitLog.java   |  11 +-
 .../db/commitlog/CommitLogReplayer.java |  59 +++-
 .../db/commitlog/CommitLogSegment.java  |  77 ++---
 .../db/commitlog/CommitLogSegmentManager.java   |   4 +-
 .../cassandra/db/commitlog/IntervalSet.java | 192 +++
 .../cassandra/db/commitlog/ReplayPosition.java  |  71 
 .../compaction/AbstractCompactionStrategy.java  |   3 +
 .../compaction/CompactionStrategyManager.java   |   3 +
 .../apache/cassandra/db/lifecycle/Tracker.java  |  44 +--
 .../org/apache/cassandra/db/lifecycle/View.java |  36 +-
 .../cassandra/io/sstable/format/Version.java|   2 +
 .../io/sstable/format/big/BigFormat.java|  14 +-
 .../metadata/LegacyMetadataSerializer.java  |  17 +-
 .../io/sstable/metadata/MetadataCollector.java  |  38 +--
 .../io/sstable/metadata/StatsMetadata.java  |  44 +--
 .../cassandra/tools/SSTableMetadataViewer.java  |   3 +-
 .../apache/cassandra/utils/IntegerInterval.java | 227 +
 .../legacy_mc_clust/mc-1-big-CompressionInfo.db | Bin 0 -> 83 bytes
 .../legacy_mc_clust/mc-1-big-Data.db| Bin 0 -> 5355 bytes
 .../legacy_mc_clust/mc-1-big-Digest.crc32   |   1 +
 .../legacy_mc_clust/mc-1-big-Filter.db  | Bin 0 -> 24 bytes
 .../legacy_mc_clust/mc-1-big-Index.db   | Bin 0 -> 157553 bytes
 .../legacy_mc_clust/mc-1-big-Statistics.db  | Bin 0 -> 7086 bytes
 .../legacy_mc_clust/mc-1-big-Summary.db | Bin 0 -> 47 bytes
 .../legacy_mc_clust/mc-1-big-TOC.txt|   8 +
 .../mc-1-big-CompressionInfo.db | Bin 0 -> 83 bytes
 .../legacy_mc_clust_compact/mc-1-big-Data.db| Bin 0 -> 5382 bytes
 .../mc-1-big-Digest.crc32   |   1 +
 .../legacy_mc_clust_compact/mc-1-big-Filter.db  | Bin 0 -> 24 bytes
 .../legacy_mc_clust_compact/mc-1-big-Index.db   | Bin 0 -> 157553 bytes
 .../mc-1-big-Statistics.db  | Bin 0 -> 7086 bytes
 .../legacy_mc_clust_compact/mc-1-big-Summary.db | Bin 0 -> 47 bytes
 .../legacy_mc_clust_compact/mc-1-big-TOC.txt|   8 +
 .../mc-1-big-CompressionInfo.db | Bin 0 -> 75 bytes
 .../legacy_mc_clust_counter/mc-1-big-Data.db| Bin 0 -> 4631 bytes
 .../mc-1-big-Digest.crc32   |   1 +
 .../legacy_mc_clust_counter/mc-1-big-Filter.db  | Bin 0 -> 24 bytes
 .../legacy_mc_clust_counter/mc-1-big-Index.db   | Bin 0 -> 157553 bytes
 .../mc-1-big-Statistics.db  | Bin 0 -> 7095 bytes
 .../legacy_mc_clust_counter/mc-1-big-Summary.db | Bin 0 -> 47 bytes
 .../legacy_mc_clust_counter/mc-1-big-TOC.txt|   8 +
 .../mc-1-big-CompressionInfo.db | Bin 0 -> 75 bytes
 .../mc-1-big-Data.db| Bin 0 -> 4625 bytes
 .../mc-1-big-Digest.crc32   |   1 +
 .../mc-1-big-Filter.db  | Bin 0 -> 24 bytes
 .../mc-1-big-Index.db   | Bin 0 -> 157553 bytes
 .../mc-1-big-Statistics.db  | Bin 0 -> 7095 bytes
 .../mc-1-big-Summary.db | Bin 0 -> 47 bytes
 .../mc-1-big-TOC.txt|   8 +
 .../mc-1-big-CompressionInfo.db | Bin 0 -> 43 bytes
 .../legacy_mc_simple/mc-1-big-Data.db   | Bin 0 -> 89 bytes
 .../legacy_mc_simple/mc-1-big-Digest.crc32  |   1 +
 .../legacy_mc_simple/mc-1-big-Filter.db | Bin 0 -> 24 bytes
 .../legacy_mc_simple/mc-1-big-Index.db  | Bin 0 -> 26 bytes
 .../legacy_mc_simple/mc-1-big-Statistics.db | Bin 0 -> 4639 bytes
 .../legacy_mc_simple/mc-1-big-Summary.db| Bin 0 -> 47 bytes
 .../legacy_mc_simple/mc-1-big-TOC.txt   |   8 +
 .../mc-1-big-CompressionInfo.db | Bin 0 -> 43 bytes
 .../legacy_mc_simple_compact/mc-1-big-Data.db   | Bin 0 -> 91 bytes
 .../mc-1-big-Digest.crc32   |   1 +
 .../legacy_mc_simple_compact/mc-1-big-Filter.db | Bin 0 -> 24 bytes
 .../legacy_mc_simple_compact/mc-1-big-I
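The commit above adds IntegerInterval and IntervalSet to track dirty and clean commit-log intervals per sstable. The core normalization idea, merging every overlapping stored interval into one covering range on insert, can be sketched single-threaded as follows (a hypothetical NormalizedIntervals class for illustration, not the committed code, which keeps a sorted long[] and binary-searches it):

```java
import java.util.ArrayList;
import java.util.List;

// Simplified, single-threaded sketch of the normalization that the
// committed IntegerInterval.Set performs: adding [start,end] absorbs
// every overlapping stored interval into a single covering range.
public class NormalizedIntervals {
    // each element is {lower, upper}, kept sorted and non-overlapping
    private final List<int[]> ranges = new ArrayList<>();

    public void add(int start, int end) {
        List<int[]> merged = new ArrayList<>();
        for (int[] r : ranges) {
            if (r[1] < start || r[0] > end) {
                merged.add(r);            // disjoint: keep as-is
            } else {                      // overlapping: absorb into [start,end]
                start = Math.min(start, r[0]);
                end = Math.max(end, r[1]);
            }
        }
        merged.add(new int[]{start, end});
        merged.sort((a, b) -> Integer.compare(a[0], b[0]));
        ranges.clear();
        ranges.addAll(merged);
    }

    public String dump() {
        StringBuilder sb = new StringBuilder();
        for (int[] r : ranges)
            sb.append('[').append(r[0]).append(',').append(r[1]).append(']');
        return sb.toString();
    }

    public static void main(String[] args) {
        NormalizedIntervals s = new NormalizedIntervals();
        s.add(1, 3);
        s.add(10, 12);
        s.add(2, 11);                  // bridges both existing ranges
        System.out.println(s.dump());  // prints [1,12]
    }
}
```

The committed version stores each closed interval packed into one long and uses Arrays.binarySearch to find the neighbors to merge, but the invariant maintained is the same: sorted, non-overlapping intervals after every add.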

[17/23] cassandra git commit: Change commitlog and sstables to track dirty and clean intervals.

2016-08-05 Thread slebresne
http://git-wip-us.apache.org/repos/asf/cassandra/blob/904cb5d1/src/java/org/apache/cassandra/utils/IntegerInterval.java
--
diff --git a/src/java/org/apache/cassandra/utils/IntegerInterval.java 
b/src/java/org/apache/cassandra/utils/IntegerInterval.java
new file mode 100644
index 000..03ad6e0
--- /dev/null
+++ b/src/java/org/apache/cassandra/utils/IntegerInterval.java
@@ -0,0 +1,227 @@
+package org.apache.cassandra.utils;
+
+import java.util.*;
+import java.util.concurrent.atomic.AtomicLongFieldUpdater;
+import java.util.stream.Collectors;
+
+import com.google.common.collect.Lists;
+import com.google.common.primitives.Longs;
+
+/**
+ * Mutable integer interval class, thread-safe.
+ * Represents the interval [lower,upper].
+ */
+public class IntegerInterval
+{
+volatile long interval;
+private static AtomicLongFieldUpdater<IntegerInterval> intervalUpdater =
+AtomicLongFieldUpdater.newUpdater(IntegerInterval.class, "interval");
+
+private IntegerInterval(long interval)
+{
+this.interval = interval;
+}
+
+public IntegerInterval(int lower, int upper)
+{
+this(make(lower, upper));
+}
+
+public IntegerInterval(IntegerInterval src)
+{
+this(src.interval);
+}
+
+public int lower()
+{
+return lower(interval);
+}
+
+public int upper()
+{
+return upper(interval);
+}
+
+/**
+ * Expands the interval to cover the given value by extending one of its 
sides if necessary.
+ * Mutates this. Thread-safe.
+ */
+public void expandToCover(int value)
+{
+long prev;
+int lower;
+int upper;
+do
+{
+prev = interval;
+upper = upper(prev);
+lower = lower(prev);
+if (value > upper) // common case
+upper = value;
+else if (value < lower)
+lower = value;
+}
+while (!intervalUpdater.compareAndSet(this, prev, make(lower, upper)));
+}
+
+@Override
+public int hashCode()
+{
+return Long.hashCode(interval);
+}
+
+@Override
+public boolean equals(Object obj)
+{
+if (getClass() != obj.getClass())
+return false;
+IntegerInterval other = (IntegerInterval) obj;
+return interval == other.interval;
+}
+
+public String toString()
+{
+long interval = this.interval;
+return "[" + lower(interval) + "," + upper(interval) + "]";
+}
+
+private static long make(int lower, int upper)
+{
+assert lower <= upper;
+return ((lower & 0xFFFFFFFFL) << 32) | upper & 0xFFFFFFFFL;
+}
+
+private static int lower(long interval)
+{
+return (int) (interval >>> 32);
+}
+
+private static int upper(long interval)
+{
+return (int) interval;
+}
+
+
+/**
+ * A mutable set of closed integer intervals, stored in normalized form 
(i.e. where overlapping intervals are
+ * converted to a single interval covering both). Thread-safe.
+ */
+public static class Set
+{
+static long[] EMPTY = new long[0];
+
+private volatile long[] ranges = EMPTY;
+
+/**
+ * Adds an interval to the set, performing the necessary normalization.
+ */
+public synchronized void add(int start, int end)
+{
+assert start <= end;
+long[] ranges, newRanges;
+{
+ranges = this.ranges; // take local copy to avoid risk of it 
changing in the midst of operation
+
+// extend ourselves to cover any ranges we overlap
+// record directly preceding our end may extend past us, so 
take the max of our end and its
+int rpos = Arrays.binarySearch(ranges, ((end & 0xFFFFFFFFL) << 32) | 0xFFFFFFFFL); // floor (i.e. greatest <=) of the end position
+if (rpos < 0)
+rpos = (-1 - rpos) - 1;
+if (rpos >= 0)
+{
+int extend = upper(ranges[rpos]);
+if (extend > end)
+end = extend;
+}
+
+// record directly preceding our start may extend into us; if 
it does, we take it as our start
+int lpos = Arrays.binarySearch(ranges, ((start & 0xFFFFFFFFL) << 32) | 0); // lower (i.e. greatest <) of the start position
+if (lpos < 0)
+lpos = -1 - lpos;
+lpos -= 1;
+if (lpos >= 0)
+{
+if (upper(ranges[lpos]) >= start)
+{
+start = lower(ranges[lpos]);
+--lpos;
+}
+}
+
+newRanges = new long[ranges.length - (rpos - lpos) + 1];
+int dest = 0;
+ 
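The interval type in this patch packs both 32-bit bounds into a single volatile long so that expandToCover can update lower and upper atomically with one compare-and-set. A hypothetical standalone reduction of that scheme (IntervalDemo, using AtomicLong rather than the committed field-updater, purely for illustration):

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical reduction of the IntegerInterval idea: both bounds live in
// one long so a single compare-and-set updates the interval atomically.
public class IntervalDemo {
    private final AtomicLong interval;

    public IntervalDemo(int lower, int upper) {
        interval = new AtomicLong(pack(lower, upper));
    }

    static long pack(int lower, int upper) {
        // lower bound in the high 32 bits, upper bound in the low 32 bits
        return ((lower & 0xFFFFFFFFL) << 32) | (upper & 0xFFFFFFFFL);
    }

    static int lower(long iv) { return (int) (iv >>> 32); }
    static int upper(long iv) { return (int) iv; }

    // Widen the interval to include value; retries on CAS failure, so a
    // concurrent caller's expansion is never lost.
    public void expandToCover(int value) {
        long prev;
        int lo, hi;
        do {
            prev = interval.get();
            lo = lower(prev);
            hi = upper(prev);
            if (value > hi)      hi = value; // common case: extend upper
            else if (value < lo) lo = value; // otherwise maybe extend lower
        } while (!interval.compareAndSet(prev, pack(lo, hi)));
    }

    public int lower() { return lower(interval.get()); }
    public int upper() { return upper(interval.get()); }

    public static void main(String[] args) {
        IntervalDemo iv = new IntervalDemo(10, 20);
        iv.expandToCover(5);
        iv.expandToCover(30);
        System.out.println("[" + iv.lower() + "," + iv.upper() + "]"); // prints [5,30]
    }
}
```

Because both bounds are read and written in one 64-bit word, readers never observe a half-updated interval, which is the property the commit relies on when flush and compaction threads race.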

[15/23] cassandra git commit: Change commitlog and sstables to track dirty and clean intervals.

2016-08-05 Thread slebresne
http://git-wip-us.apache.org/repos/asf/cassandra/blob/904cb5d1/src/java/org/apache/cassandra/utils/IntegerInterval.java
--
diff --git a/src/java/org/apache/cassandra/utils/IntegerInterval.java 
b/src/java/org/apache/cassandra/utils/IntegerInterval.java
new file mode 100644
index 000..03ad6e0
--- /dev/null
+++ b/src/java/org/apache/cassandra/utils/IntegerInterval.java
@@ -0,0 +1,227 @@
+package org.apache.cassandra.utils;
+
+import java.util.*;
+import java.util.concurrent.atomic.AtomicLongFieldUpdater;
+import java.util.stream.Collectors;
+
+import com.google.common.collect.Lists;
+import com.google.common.primitives.Longs;
+
+/**
+ * Mutable integer interval class, thread-safe.
+ * Represents the interval [lower,upper].
+ */
+public class IntegerInterval
+{
+volatile long interval;
+private static AtomicLongFieldUpdater<IntegerInterval> intervalUpdater =
+AtomicLongFieldUpdater.newUpdater(IntegerInterval.class, "interval");
+
+private IntegerInterval(long interval)
+{
+this.interval = interval;
+}
+
+public IntegerInterval(int lower, int upper)
+{
+this(make(lower, upper));
+}
+
+public IntegerInterval(IntegerInterval src)
+{
+this(src.interval);
+}
+
+public int lower()
+{
+return lower(interval);
+}
+
+public int upper()
+{
+return upper(interval);
+}
+
+/**
+ * Expands the interval to cover the given value by extending one of its 
sides if necessary.
+ * Mutates this. Thread-safe.
+ */
+public void expandToCover(int value)
+{
+long prev;
+int lower;
+int upper;
+do
+{
+prev = interval;
+upper = upper(prev);
+lower = lower(prev);
+if (value > upper) // common case
+upper = value;
+else if (value < lower)
+lower = value;
+}
+while (!intervalUpdater.compareAndSet(this, prev, make(lower, upper)));
+}
+
+@Override
+public int hashCode()
+{
+return Long.hashCode(interval);
+}
+
+@Override
+public boolean equals(Object obj)
+{
+if (getClass() != obj.getClass())
+return false;
+IntegerInterval other = (IntegerInterval) obj;
+return interval == other.interval;
+}
+
+public String toString()
+{
+long interval = this.interval;
+return "[" + lower(interval) + "," + upper(interval) + "]";
+}
+
+private static long make(int lower, int upper)
+{
+assert lower <= upper;
+return ((lower & 0xFFFFFFFFL) << 32) | upper & 0xFFFFFFFFL;
+}
+
+private static int lower(long interval)
+{
+return (int) (interval >>> 32);
+}
+
+private static int upper(long interval)
+{
+return (int) interval;
+}
+
+
+/**
+ * A mutable set of closed integer intervals, stored in normalized form 
(i.e. where overlapping intervals are
+ * converted to a single interval covering both). Thread-safe.
+ */
+public static class Set
+{
+static long[] EMPTY = new long[0];
+
+private volatile long[] ranges = EMPTY;
+
+/**
+ * Adds an interval to the set, performing the necessary normalization.
+ */
+public synchronized void add(int start, int end)
+{
+assert start <= end;
+long[] ranges, newRanges;
+{
+ranges = this.ranges; // take local copy to avoid risk of it 
changing in the midst of operation
+
+// extend ourselves to cover any ranges we overlap
+// record directly preceding our end may extend past us, so 
take the max of our end and its
+int rpos = Arrays.binarySearch(ranges, ((end & 0xFFFFFFFFL) << 32) | 0xFFFFFFFFL); // floor (i.e. greatest <=) of the end position
+if (rpos < 0)
+rpos = (-1 - rpos) - 1;
+if (rpos >= 0)
+{
+int extend = upper(ranges[rpos]);
+if (extend > end)
+end = extend;
+}
+
+// record directly preceding our start may extend into us; if 
it does, we take it as our start
+int lpos = Arrays.binarySearch(ranges, ((start & 0xFFFFFFFFL) << 32) | 0); // lower (i.e. greatest <) of the start position
+if (lpos < 0)
+lpos = -1 - lpos;
+lpos -= 1;
+if (lpos >= 0)
+{
+if (upper(ranges[lpos]) >= start)
+{
+start = lower(ranges[lpos]);
+--lpos;
+}
+}
+
+newRanges = new long[ranges.length - (rpos - lpos) + 1];
+int dest = 0;
+ 

[06/23] cassandra git commit: Merge commit 'bd6654733dded3513c2c7acf96df2c364b0c043e' into cassandra-2.2

2016-08-05 Thread slebresne
Merge commit 'bd6654733dded3513c2c7acf96df2c364b0c043e' into cassandra-2.2

* commit 'bd6654733dded3513c2c7acf96df2c364b0c043e':
  Disable passing control to post-flush after flush failure to prevent data 
loss.


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6dc1745e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6dc1745e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6dc1745e

Branch: refs/heads/cassandra-3.0
Commit: 6dc1745edd8d3861d853ee56f49ac67633a753b0
Parents: 0398521 bd66547
Author: Sylvain Lebresne 
Authored: Fri Aug 5 15:36:29 2016 +0200
Committer: Sylvain Lebresne 
Committed: Fri Aug 5 15:37:11 2016 +0200

--
 CHANGES.txt |   3 +
 .../apache/cassandra/db/ColumnFamilyStore.java  |  67 +---
 .../apache/cassandra/cql3/OutOfSpaceBase.java   |  95 +++
 .../cassandra/cql3/OutOfSpaceDieTest.java   |  68 
 .../cassandra/cql3/OutOfSpaceIgnoreTest.java|  60 +++
 .../cassandra/cql3/OutOfSpaceStopTest.java  |  63 
 .../apache/cassandra/cql3/OutOfSpaceTest.java   | 157 ---
 7 files changed, 336 insertions(+), 177 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6dc1745e/CHANGES.txt
--
diff --cc CHANGES.txt
index 87228d3,1275631..7fcf373
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,55 -1,13 +1,58 @@@
 +2.2.8
 + * Release sstables of failed stream sessions only when outgoing transfers 
are finished (CASSANDRA-11345)
 + * Revert CASSANDRA-11427 (CASSANDRA-12351)
 + * Wait for tracing events before returning response and query at same 
consistency level client side (CASSANDRA-11465)
 + * cqlsh copyutil should get host metadata by connected address 
(CASSANDRA-11979)
 + * Fixed cqlshlib.test.remove_test_db (CASSANDRA-12214)
 + * Synchronize ThriftServer::stop() (CASSANDRA-12105)
 + * Use dedicated thread for JMX notifications (CASSANDRA-12146)
 + * NPE when trying to remove purgable tombstones from result (CASSANDRA-12143)
 + * Improve streaming synchronization and fault tolerance (CASSANDRA-11414)
 + * MemoryUtil.getShort() should return an unsigned short also for 
architectures not supporting unaligned memory accesses (CASSANDRA-11973)
 + * Don't write shadowed range tombstone (CASSANDRA-12030)
 +Merged from 2.1:
++===
+ 2.1.16
+  * Disable passing control to post-flush after flush failure to prevent data 
loss (CASSANDRA-11828)
   * Allow STCS-in-L0 compactions to reduce scope with LCS (CASSANDRA-12040)
   * cannot use cql since upgrading python to 2.7.11+ (CASSANDRA-11850)
 - * Fix filtering on clustering columns when 2i is used (CASSANDRA-11907)
   * Improve digest calculation in the presence of overlapping tombstones 
(CASSANDRA-11349)
 + * Fix filtering on clustering columns when 2i is used (CASSANDRA-11907)
 + * Account for partition deletions in tombstone histogram (CASSANDRA-12112)
  
  
 -2.1.15
 - * Account for partition deletions in tombstone histogram (CASSANDRA-12112)
 +2.2.7
 + * Allow nodetool info to run with readonly JMX access (CASSANDRA-11755)
 + * Validate bloom_filter_fp_chance against lowest supported
 +   value when the table is created (CASSANDRA-11920)
 + * RandomAccessReader: call isEOF() only when rebuffering, not for every read 
operation (CASSANDRA-12013)
 + * Don't send erroneous NEW_NODE notifications on restart (CASSANDRA-11038)
 + * StorageService shutdown hook should use a volatile variable 
(CASSANDRA-11984)
 + * Persist local metadata earlier in startup sequence (CASSANDRA-11742)
 + * Run CommitLog tests with different compression settings (CASSANDRA-9039)
 + * cqlsh: fix tab completion for case-sensitive identifiers (CASSANDRA-11664)
 + * Avoid showing estimated key as -1 in tablestats (CASSANDRA-11587)
 + * Fix possible race condition in CommitLog.recover (CASSANDRA-11743)
 + * Enable client encryption in sstableloader with cli options 
(CASSANDRA-11708)
 + * Possible memory leak in NIODataInputStream (CASSANDRA-11867)
 + * Fix commit log replay after out-of-order flush completion (CASSANDRA-9669)
 + * Add seconds to cqlsh tracing session duration (CASSANDRA-11753)
 + * Prohibit Reverse Counter type as part of the PK (CASSANDRA-9395)
 + * cqlsh: correctly handle non-ascii chars in error messages (CASSANDRA-11626)
 + * Exit JVM if JMX server fails to startup (CASSANDRA-11540)
 + * Produce a heap dump when exiting on OOM (CASSANDRA-9861)
 + * Avoid read repairing purgeable tombstones on range slices (CASSANDRA-11427)
 + * Restore ability to filter on clustering columns when using a 2i 
(CASSANDRA-11510)
 + * JSON datetime formatting needs timezone (CASSANDRA-11137)
 + * Fix is_dense recalculation for Thrift-updated tables (CASSANDRA-11502)
 + * Remove

[19/23] cassandra git commit: Merge commit '904cb5d10e0de1a6ca89249be8c257ed38a80ef0' into cassandra-3.9

2016-08-05 Thread slebresne
http://git-wip-us.apache.org/repos/asf/cassandra/blob/7b102173/src/java/org/apache/cassandra/db/lifecycle/Tracker.java
--
diff --cc src/java/org/apache/cassandra/db/lifecycle/Tracker.java
index b1c706e,5a3d524..f464e08
--- a/src/java/org/apache/cassandra/db/lifecycle/Tracker.java
+++ b/src/java/org/apache/cassandra/db/lifecycle/Tracker.java
@@@ -353,35 -347,13 +349,16 @@@ public class Tracke
  
  Throwable fail;
  fail = updateSizeTracking(emptySet(), sstables, null);
 +
 +notifyDiscarded(memtable);
 +
- maybeFail(fail);
- }
- 
- /**
-  * permit compaction of the provided sstable; this translates to 
notifying compaction
-  * strategies of its existence, and potentially submitting a background 
task
-  */
- public void permitCompactionOfFlushed(Collection<SSTableReader> sstables)
- {
- if (sstables.isEmpty())
- return;
+ // TODO: if we're invalidated, should we notifyadded AND removed, or 
just skip both?
+ fail = notifyAdded(sstables, fail);
  
- apply(View.permitCompactionOfFlushed(sstables));
- 
- if (isDummy())
- return;
- 
- if (cfstore.isValid())
- {
- notifyAdded(sstables);
- CompactionManager.instance.submitBackground(cfstore);
- }
- else
- {
+ if (!isDummy() && !cfstore.isValid())
  dropSSTables();
- }
+ 
+ maybeFail(fail);
  }
  
  

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7b102173/src/java/org/apache/cassandra/db/lifecycle/View.java
--
diff --cc src/java/org/apache/cassandra/db/lifecycle/View.java
index a5c781d,4b3aae0..b26426d
--- a/src/java/org/apache/cassandra/db/lifecycle/View.java
+++ b/src/java/org/apache/cassandra/db/lifecycle/View.java
@@@ -40,7 -39,7 +39,6 @@@ import static com.google.common.collect
  import static com.google.common.collect.Iterables.all;
  import static com.google.common.collect.Iterables.concat;
  import static com.google.common.collect.Iterables.filter;
--import static com.google.common.collect.Iterables.transform;
  import static org.apache.cassandra.db.lifecycle.Helpers.emptySet;
  import static org.apache.cassandra.db.lifecycle.Helpers.filterOut;
  import static org.apache.cassandra.db.lifecycle.Helpers.replace;
@@@ -336,14 -333,12 +332,12 @@@ public class Vie
  List<Memtable> flushingMemtables = copyOf(filter(view.flushingMemtables, not(equalTo(memtable))));
  assert flushingMemtables.size() == 
view.flushingMemtables.size() - 1;
  
 -if (flushed == null || flushed.isEmpty())
 +if (flushed == null || Iterables.isEmpty(flushed))
  return new View(view.liveMemtables, flushingMemtables, 
view.sstablesMap,
- view.compactingMap, view.premature, 
view.intervalTree);
+ view.compactingMap, view.intervalTree);
  
  Map<SSTableReader, SSTableReader> sstableMap = replace(view.sstablesMap, emptySet(), flushed);
- Map<SSTableReader, SSTableReader> compactingMap = replace(view.compactingMap, emptySet(), flushed);
- Set<SSTableReader> premature = replace(view.premature, emptySet(), flushed);
- return new View(view.liveMemtables, flushingMemtables, 
sstableMap, compactingMap, premature,
+ return new View(view.liveMemtables, flushingMemtables, 
sstableMap, view.compactingMap,
  
SSTableIntervalTree.build(sstableMap.keySet()));
  }
  };

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7b102173/src/java/org/apache/cassandra/io/sstable/format/big/BigFormat.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7b102173/src/java/org/apache/cassandra/io/sstable/metadata/LegacyMetadataSerializer.java
--
diff --cc 
src/java/org/apache/cassandra/io/sstable/metadata/LegacyMetadataSerializer.java
index 505de49,a683513..14e391b
--- 
a/src/java/org/apache/cassandra/io/sstable/metadata/LegacyMetadataSerializer.java
+++ 
b/src/java/org/apache/cassandra/io/sstable/metadata/LegacyMetadataSerializer.java
@@@ -24,7 -24,8 +24,8 @@@ import java.util.*
  import com.google.common.collect.Maps;
  
  import org.apache.cassandra.db.TypeSizes;
 +import org.apache.cassandra.db.commitlog.CommitLogPosition;
+ import org.apache.cassandra.db.commitlog.IntervalSet;
 -import org.apache.cassandra.db.commitlog.ReplayPosition;
  import org.apache.cassandra.io.sstable.Component;
  import org.apache.cassandra.io.sstable.Descriptor;
  import org.apache.cassandra.io.sstable.format.Version;
@@@ -35,6 -36,8 +36,8 @@@ import org.apache.cassandra.utils.ByteB
  import org.apache.cassandra.utils.EstimatedHistogram;
  i

[03/23] cassandra git commit: Disable passing control to post-flush after flush failure to prevent data loss.

2016-08-05 Thread slebresne
Disable passing control to post-flush after flush failure to prevent
data loss.

patch by Branimir Lambov; reviewed by Sylvain Lebresne for
CASSANDRA-11828

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bd665473
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bd665473
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bd665473

Branch: refs/heads/cassandra-3.0
Commit: bd6654733dded3513c2c7acf96df2c364b0c043e
Parents: bc0d1da
Author: Branimir Lambov 
Authored: Wed Aug 3 11:32:48 2016 +0300
Committer: Sylvain Lebresne 
Committed: Fri Aug 5 15:35:25 2016 +0200

--
 CHANGES.txt |   1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  |  45 --
 .../apache/cassandra/cql3/OutOfSpaceBase.java   |  87 ++
 .../cassandra/cql3/OutOfSpaceDieTest.java   |  68 
 .../cassandra/cql3/OutOfSpaceIgnoreTest.java|  60 +++
 .../cassandra/cql3/OutOfSpaceStopTest.java  |  63 
 .../apache/cassandra/cql3/OutOfSpaceTest.java   | 157 ---
 7 files changed, 311 insertions(+), 170 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/bd665473/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 8ecc787..1275631 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.16
+ * Disable passing control to post-flush after flush failure to prevent data 
loss (CASSANDRA-11828)
  * Allow STCS-in-L0 compactions to reduce scope with LCS (CASSANDRA-12040)
  * cannot use cql since upgrading python to 2.7.11+ (CASSANDRA-11850)
  * Fix filtering on clustering columns when 2i is used (CASSANDRA-11907)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/bd665473/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index b64d5de..6e82745 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -99,6 +99,10 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean

   new NamedThreadFactory("MemtablePostFlush"),

   "internal");
 
+// If a flush fails with an error the post-flush is never allowed to 
continue. This stores the error that caused it
+// to be able to show an error on following flushes instead of blindly 
continuing.
+private static volatile FSWriteError previousFlushFailure = null;
+
 private static final ExecutorService reclaimExecutor = new 
JMXEnabledThreadPoolExecutor(1,

 StageManager.KEEPALIVE,

 TimeUnit.SECONDS,
@@ -869,12 +873,20 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 {
 synchronized (data)
 {
+if (previousFlushFailure != null)
+throw new IllegalStateException("A flush previously failed 
with the error below. To prevent data loss, "
+  + "no flushes can be carried out 
until the node is restarted.",
+previousFlushFailure);
 logFlush();
 Flush flush = new Flush(false);
-flushExecutor.execute(flush);
+ListenableFutureTask<?> flushTask = ListenableFutureTask.create(flush, null);
+flushExecutor.submit(flushTask);
 ListenableFutureTask<?> task = ListenableFutureTask.create(flush.postFlush, null);
 postFlushExecutor.submit(task);
-return task;
+
+@SuppressWarnings("unchecked")
+ListenableFuture<?> future = Futures.allAsList(flushTask, task);
+return future;
 }
 }
 
@@ -967,7 +979,6 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 final OpOrder.Barrier writeBarrier;
 final CountDownLatch latch = new CountDownLatch(1);
 final ReplayPosition lastReplayPosition;
-volatile FSWriteError flushFailure = null;
 
 private PostFlush(boolean flushSecondaryIndexes, OpOrder.Barrier 
writeBarrier, ReplayPosition lastReplayPosition)
 {
@@ -1010,16 +1021,12 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 
 // must check lastReplayPosition != null because Flush ma
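The fail-fast behavior this patch introduces, remember the first flush failure and refuse every later flush until restart rather than letting post-flush advance the commit log past unwritten data, can be reduced to a minimal sketch. FlushGuard below is a hypothetical illustration of the pattern, not the committed ColumnFamilyStore code:

```java
import java.util.concurrent.Callable;

// Hypothetical sketch of the fail-fast guard in CASSANDRA-11828: the first
// flush failure is recorded, and every subsequent flush attempt is refused
// with the original cause attached, so nothing downstream (e.g. commit log
// segment recycling) can act as if the failed flush had succeeded.
public class FlushGuard {
    private volatile Throwable previousFlushFailure = null;

    public <T> T runFlush(Callable<T> flush) throws Exception {
        if (previousFlushFailure != null)
            throw new IllegalStateException(
                "A flush previously failed; refusing further flushes until restart.",
                previousFlushFailure);
        try {
            return flush.call();
        } catch (Throwable t) {
            previousFlushFailure = t; // poison all subsequent flushes
            throw t;
        }
    }
}
```

The committed patch additionally chains the flush task and the post-flush task into one returned future (Futures.allAsList), so a caller waiting on the flush also observes the failure instead of a post-flush that silently "succeeded".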

[02/23] cassandra git commit: Disable passing control to post-flush after flush failure to prevent data loss.

2016-08-05 Thread slebresne
Disable passing control to post-flush after flush failure to prevent
data loss.

patch by Branimir Lambov; reviewed by Sylvain Lebresne for
CASSANDRA-11828

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bd665473
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bd665473
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bd665473

Branch: refs/heads/cassandra-2.2
Commit: bd6654733dded3513c2c7acf96df2c364b0c043e
Parents: bc0d1da
Author: Branimir Lambov 
Authored: Wed Aug 3 11:32:48 2016 +0300
Committer: Sylvain Lebresne 
Committed: Fri Aug 5 15:35:25 2016 +0200

--
 CHANGES.txt |   1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  |  45 --
 .../apache/cassandra/cql3/OutOfSpaceBase.java   |  87 ++
 .../cassandra/cql3/OutOfSpaceDieTest.java   |  68 
 .../cassandra/cql3/OutOfSpaceIgnoreTest.java|  60 +++
 .../cassandra/cql3/OutOfSpaceStopTest.java  |  63 
 .../apache/cassandra/cql3/OutOfSpaceTest.java   | 157 ---
 7 files changed, 311 insertions(+), 170 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/bd665473/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 8ecc787..1275631 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.16
+ * Disable passing control to post-flush after flush failure to prevent data 
loss (CASSANDRA-11828)
  * Allow STCS-in-L0 compactions to reduce scope with LCS (CASSANDRA-12040)
  * cannot use cql since upgrading python to 2.7.11+ (CASSANDRA-11850)
  * Fix filtering on clustering columns when 2i is used (CASSANDRA-11907)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/bd665473/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index b64d5de..6e82745 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -99,6 +99,10 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean

   new NamedThreadFactory("MemtablePostFlush"),

   "internal");
 
+// If a flush fails with an error the post-flush is never allowed to 
continue. This stores the error that caused it
+// to be able to show an error on following flushes instead of blindly 
continuing.
+private static volatile FSWriteError previousFlushFailure = null;
+
 private static final ExecutorService reclaimExecutor = new 
JMXEnabledThreadPoolExecutor(1,

 StageManager.KEEPALIVE,

 TimeUnit.SECONDS,
@@ -869,12 +873,20 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 {
 synchronized (data)
 {
+if (previousFlushFailure != null)
+throw new IllegalStateException("A flush previously failed 
with the error below. To prevent data loss, "
+  + "no flushes can be carried out 
until the node is restarted.",
+previousFlushFailure);
 logFlush();
 Flush flush = new Flush(false);
-flushExecutor.execute(flush);
+ListenableFutureTask<?> flushTask = ListenableFutureTask.create(flush, null);
+flushExecutor.submit(flushTask);
 ListenableFutureTask<?> task = ListenableFutureTask.create(flush.postFlush, null);
 postFlushExecutor.submit(task);
-return task;
+
+@SuppressWarnings("unchecked")
+ListenableFuture future = Futures.allAsList(flushTask, task);
+return future;
 }
 }
 
@@ -967,7 +979,6 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 final OpOrder.Barrier writeBarrier;
 final CountDownLatch latch = new CountDownLatch(1);
 final ReplayPosition lastReplayPosition;
-volatile FSWriteError flushFailure = null;
 
 private PostFlush(boolean flushSecondaryIndexes, OpOrder.Barrier 
writeBarrier, ReplayPosition lastReplayPosition)
 {
@@ -1010,16 +1021,12 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 
 // must check lastReplayPosition != null because Flush ma
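The diff above introduces a static volatile `previousFlushFailure` that is consulted at the start of every flush, so that a node which failed a flush refuses all later flushes instead of silently losing data. A minimal standalone sketch of that guard idiom (class and method names here are illustrative, not the actual Cassandra API):

```java
// Sketch of the fail-fast flush guard from the diff above (illustrative names).
class FlushFailureGuard {
    // volatile: the failure is recorded on a flush thread but read by callers
    private static volatile RuntimeException previousFlushFailure = null;

    /** Record the first failure; later calls keep the original cause. */
    static void recordFailure(RuntimeException cause) {
        if (previousFlushFailure == null)
            previousFlushFailure = cause;
    }

    /** Called at the start of every flush; throws once a failure was seen. */
    static void checkCanFlush() {
        if (previousFlushFailure != null)
            throw new IllegalStateException(
                "A flush previously failed; refusing to flush to prevent data loss.",
                previousFlushFailure);
    }
}
```

The point of keeping the original exception as the cause is that operators see *why* flushes are disabled on every subsequent attempt, not just that they are.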

[04/23] cassandra git commit: Disable passing control to post-flush after flush failure to prevent data loss.

2016-08-05 Thread slebresne
Disable passing control to post-flush after flush failure to prevent
data loss.

patch by Branimir Lambov; reviewed by Sylvain Lebresne for
CASSANDRA-11828

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bd665473
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bd665473
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bd665473

Branch: refs/heads/cassandra-3.9
Commit: bd6654733dded3513c2c7acf96df2c364b0c043e
Parents: bc0d1da
Author: Branimir Lambov 
Authored: Wed Aug 3 11:32:48 2016 +0300
Committer: Sylvain Lebresne 
Committed: Fri Aug 5 15:35:25 2016 +0200

--
 CHANGES.txt |   1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  |  45 --
 .../apache/cassandra/cql3/OutOfSpaceBase.java   |  87 ++
 .../cassandra/cql3/OutOfSpaceDieTest.java   |  68 
 .../cassandra/cql3/OutOfSpaceIgnoreTest.java|  60 +++
 .../cassandra/cql3/OutOfSpaceStopTest.java  |  63 
 .../apache/cassandra/cql3/OutOfSpaceTest.java   | 157 ---
 7 files changed, 311 insertions(+), 170 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/bd665473/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 8ecc787..1275631 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.16
+ * Disable passing control to post-flush after flush failure to prevent data loss (CASSANDRA-11828)
  * Allow STCS-in-L0 compactions to reduce scope with LCS (CASSANDRA-12040)
  * cannot use cql since upgrading python to 2.7.11+ (CASSANDRA-11850)
  * Fix filtering on clustering columns when 2i is used (CASSANDRA-11907)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/bd665473/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index b64d5de..6e82745 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -99,6 +99,10 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean

   new NamedThreadFactory("MemtablePostFlush"),

   "internal");
 
+// If a flush fails with an error the post-flush is never allowed to continue. This stores the error that caused it
+// to be able to show an error on following flushes instead of blindly continuing.
+private static volatile FSWriteError previousFlushFailure = null;
+
 private static final ExecutorService reclaimExecutor = new 
JMXEnabledThreadPoolExecutor(1,

 StageManager.KEEPALIVE,

 TimeUnit.SECONDS,
@@ -869,12 +873,20 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 {
 synchronized (data)
 {
+                if (previousFlushFailure != null)
+                    throw new IllegalStateException("A flush previously failed with the error below. To prevent data loss, "
+                                                    + "no flushes can be carried out until the node is restarted.",
+                                                    previousFlushFailure);
                 logFlush();
                 Flush flush = new Flush(false);
-                flushExecutor.execute(flush);
+                ListenableFutureTask<?> flushTask = ListenableFutureTask.create(flush, null);
+                flushExecutor.submit(flushTask);
                 ListenableFutureTask<?> task = ListenableFutureTask.create(flush.postFlush, null);
                 postFlushExecutor.submit(task);
-                return task;
+
+                @SuppressWarnings("unchecked")
+                ListenableFuture<?> future = Futures.allAsList(flushTask, task);
+                return future;
 }
 }
 
@@ -967,7 +979,6 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 final OpOrder.Barrier writeBarrier;
 final CountDownLatch latch = new CountDownLatch(1);
 final ReplayPosition lastReplayPosition;
-volatile FSWriteError flushFailure = null;
 
 private PostFlush(boolean flushSecondaryIndexes, OpOrder.Barrier 
writeBarrier, ReplayPosition lastReplayPosition)
 {
@@ -1010,16 +1021,12 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 
 // must check lastReplayPosition != null because Flush ma
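Besides the guard, the patch above changes what `forceFlush` returns: instead of handing back only the post-flush task, it returns `Futures.allAsList(flushTask, task)`, so a caller waiting on the flush observes a failure in *either* stage. A JDK-only analogue of that shape (assumption: `CompletableFuture.allOf` standing in for Guava's `Futures.allAsList`, and the two stages simplified to independent runnables):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;

// Combine a flush task and a post-flush task into one future that completes
// only when both finish, and fails if either fails (simplified sketch).
class CombinedFlush {
    static CompletableFuture<Void> submit(ExecutorService flushExec,
                                          ExecutorService postFlushExec,
                                          Runnable flush, Runnable postFlush) {
        CompletableFuture<Void> flushed = CompletableFuture.runAsync(flush, flushExec);
        CompletableFuture<Void> post = CompletableFuture.runAsync(postFlush, postFlushExec);
        // completes when both complete; propagates the first failure to waiters
        return CompletableFuture.allOf(flushed, post);
    }
}
```

In the real patch the post-flush additionally waits on a latch counted down by the flush; the sketch omits that ordering and only shows why the *returned* future must cover both tasks.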

[07/23] cassandra git commit: Merge commit 'bd6654733dded3513c2c7acf96df2c364b0c043e' into cassandra-2.2

2016-08-05 Thread slebresne
Merge commit 'bd6654733dded3513c2c7acf96df2c364b0c043e' into cassandra-2.2

* commit 'bd6654733dded3513c2c7acf96df2c364b0c043e':
  Disable passing control to post-flush after flush failure to prevent data 
loss.


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6dc1745e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6dc1745e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6dc1745e

Branch: refs/heads/cassandra-3.9
Commit: 6dc1745edd8d3861d853ee56f49ac67633a753b0
Parents: 0398521 bd66547
Author: Sylvain Lebresne 
Authored: Fri Aug 5 15:36:29 2016 +0200
Committer: Sylvain Lebresne 
Committed: Fri Aug 5 15:37:11 2016 +0200

--
 CHANGES.txt |   3 +
 .../apache/cassandra/db/ColumnFamilyStore.java  |  67 +---
 .../apache/cassandra/cql3/OutOfSpaceBase.java   |  95 +++
 .../cassandra/cql3/OutOfSpaceDieTest.java   |  68 
 .../cassandra/cql3/OutOfSpaceIgnoreTest.java|  60 +++
 .../cassandra/cql3/OutOfSpaceStopTest.java  |  63 
 .../apache/cassandra/cql3/OutOfSpaceTest.java   | 157 ---
 7 files changed, 336 insertions(+), 177 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6dc1745e/CHANGES.txt
--
diff --cc CHANGES.txt
index 87228d3,1275631..7fcf373
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,55 -1,13 +1,58 @@@
 +2.2.8
 + * Release sstables of failed stream sessions only when outgoing transfers are finished (CASSANDRA-11345)
 + * Revert CASSANDRA-11427 (CASSANDRA-12351)
 + * Wait for tracing events before returning response and query at same consistency level client side (CASSANDRA-11465)
 + * cqlsh copyutil should get host metadata by connected address (CASSANDRA-11979)
 + * Fixed cqlshlib.test.remove_test_db (CASSANDRA-12214)
 + * Synchronize ThriftServer::stop() (CASSANDRA-12105)
 + * Use dedicated thread for JMX notifications (CASSANDRA-12146)
 + * NPE when trying to remove purgable tombstones from result (CASSANDRA-12143)
 + * Improve streaming synchronization and fault tolerance (CASSANDRA-11414)
 + * MemoryUtil.getShort() should return an unsigned short also for architectures not supporting unaligned memory accesses (CASSANDRA-11973)
 + * Don't write shadowed range tombstone (CASSANDRA-12030)
 +Merged from 2.1:
++===
+ 2.1.16
+  * Disable passing control to post-flush after flush failure to prevent data loss (CASSANDRA-11828)
   * Allow STCS-in-L0 compactions to reduce scope with LCS (CASSANDRA-12040)
   * cannot use cql since upgrading python to 2.7.11+ (CASSANDRA-11850)
 - * Fix filtering on clustering columns when 2i is used (CASSANDRA-11907)
   * Improve digest calculation in the presence of overlapping tombstones (CASSANDRA-11349)
 + * Fix filtering on clustering columns when 2i is used (CASSANDRA-11907)
 + * Account for partition deletions in tombstone histogram (CASSANDRA-12112)
  
  
 -2.1.15
 - * Account for partition deletions in tombstone histogram (CASSANDRA-12112)
 +2.2.7
 + * Allow nodetool info to run with readonly JMX access (CASSANDRA-11755)
 + * Validate bloom_filter_fp_chance against lowest supported
 +   value when the table is created (CASSANDRA-11920)
 + * RandomAccessReader: call isEOF() only when rebuffering, not for every read operation (CASSANDRA-12013)
 + * Don't send erroneous NEW_NODE notifications on restart (CASSANDRA-11038)
 + * StorageService shutdown hook should use a volatile variable (CASSANDRA-11984)
 + * Persist local metadata earlier in startup sequence (CASSANDRA-11742)
 + * Run CommitLog tests with different compression settings (CASSANDRA-9039)
 + * cqlsh: fix tab completion for case-sensitive identifiers (CASSANDRA-11664)
 + * Avoid showing estimated key as -1 in tablestats (CASSANDRA-11587)
 + * Fix possible race condition in CommitLog.recover (CASSANDRA-11743)
 + * Enable client encryption in sstableloader with cli options (CASSANDRA-11708)
 + * Possible memory leak in NIODataInputStream (CASSANDRA-11867)
 + * Fix commit log replay after out-of-order flush completion (CASSANDRA-9669)
 + * Add seconds to cqlsh tracing session duration (CASSANDRA-11753)
 + * Prohibit Reverse Counter type as part of the PK (CASSANDRA-9395)
 + * cqlsh: correctly handle non-ascii chars in error messages (CASSANDRA-11626)
 + * Exit JVM if JMX server fails to startup (CASSANDRA-11540)
 + * Produce a heap dump when exiting on OOM (CASSANDRA-9861)
 + * Avoid read repairing purgeable tombstones on range slices (CASSANDRA-11427)
 + * Restore ability to filter on clustering columns when using a 2i (CASSANDRA-11510)
 + * JSON datetime formatting needs timezone (CASSANDRA-11137)
 + * Fix is_dense recalculation for Thrift-updated tables (CASSANDRA-11502)
 + * Remove

[01/23] cassandra git commit: Disable passing control to post-flush after flush failure to prevent data loss.

2016-08-05 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 bc0d1da0c -> bd6654733
  refs/heads/cassandra-2.2 039852126 -> 6dc1745ed
  refs/heads/cassandra-3.0 b66e5a189 -> 904cb5d10
  refs/heads/cassandra-3.9 21c92cab8 -> 7b1021733
  refs/heads/trunk 7fe430943 -> 624ed7838


Disable passing control to post-flush after flush failure to prevent
data loss.

patch by Branimir Lambov; reviewed by Sylvain Lebresne for
CASSANDRA-11828

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bd665473
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bd665473
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bd665473

Branch: refs/heads/cassandra-2.1
Commit: bd6654733dded3513c2c7acf96df2c364b0c043e
Parents: bc0d1da
Author: Branimir Lambov 
Authored: Wed Aug 3 11:32:48 2016 +0300
Committer: Sylvain Lebresne 
Committed: Fri Aug 5 15:35:25 2016 +0200

--
 CHANGES.txt |   1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  |  45 --
 .../apache/cassandra/cql3/OutOfSpaceBase.java   |  87 ++
 .../cassandra/cql3/OutOfSpaceDieTest.java   |  68 
 .../cassandra/cql3/OutOfSpaceIgnoreTest.java|  60 +++
 .../cassandra/cql3/OutOfSpaceStopTest.java  |  63 
 .../apache/cassandra/cql3/OutOfSpaceTest.java   | 157 ---
 7 files changed, 311 insertions(+), 170 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/bd665473/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 8ecc787..1275631 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.16
+ * Disable passing control to post-flush after flush failure to prevent data loss (CASSANDRA-11828)
  * Allow STCS-in-L0 compactions to reduce scope with LCS (CASSANDRA-12040)
  * cannot use cql since upgrading python to 2.7.11+ (CASSANDRA-11850)
  * Fix filtering on clustering columns when 2i is used (CASSANDRA-11907)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/bd665473/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index b64d5de..6e82745 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -99,6 +99,10 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean

   new NamedThreadFactory("MemtablePostFlush"),

   "internal");
 
+// If a flush fails with an error the post-flush is never allowed to continue. This stores the error that caused it
+// to be able to show an error on following flushes instead of blindly continuing.
+private static volatile FSWriteError previousFlushFailure = null;
+
 private static final ExecutorService reclaimExecutor = new 
JMXEnabledThreadPoolExecutor(1,

 StageManager.KEEPALIVE,

 TimeUnit.SECONDS,
@@ -869,12 +873,20 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 {
 synchronized (data)
 {
+                if (previousFlushFailure != null)
+                    throw new IllegalStateException("A flush previously failed with the error below. To prevent data loss, "
+                                                    + "no flushes can be carried out until the node is restarted.",
+                                                    previousFlushFailure);
                 logFlush();
                 Flush flush = new Flush(false);
-                flushExecutor.execute(flush);
+                ListenableFutureTask<?> flushTask = ListenableFutureTask.create(flush, null);
+                flushExecutor.submit(flushTask);
                 ListenableFutureTask<?> task = ListenableFutureTask.create(flush.postFlush, null);
                 postFlushExecutor.submit(task);
-                return task;
+
+                @SuppressWarnings("unchecked")
+                ListenableFuture<?> future = Futures.allAsList(flushTask, task);
+                return future;
 }
 }
 
@@ -967,7 +979,6 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 final OpOrder.Barrier writeBarrier;
 final CountDownLatch latch = new CountDownLatch(1);
 final ReplayPosition lastReplayPosition;
-volatile FSWriteError flushFailure = null;
 
 priv

[05/23] cassandra git commit: Disable passing control to post-flush after flush failure to prevent data loss.

2016-08-05 Thread slebresne
Disable passing control to post-flush after flush failure to prevent
data loss.

patch by Branimir Lambov; reviewed by Sylvain Lebresne for
CASSANDRA-11828

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bd665473
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bd665473
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bd665473

Branch: refs/heads/trunk
Commit: bd6654733dded3513c2c7acf96df2c364b0c043e
Parents: bc0d1da
Author: Branimir Lambov 
Authored: Wed Aug 3 11:32:48 2016 +0300
Committer: Sylvain Lebresne 
Committed: Fri Aug 5 15:35:25 2016 +0200

--
 CHANGES.txt |   1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  |  45 --
 .../apache/cassandra/cql3/OutOfSpaceBase.java   |  87 ++
 .../cassandra/cql3/OutOfSpaceDieTest.java   |  68 
 .../cassandra/cql3/OutOfSpaceIgnoreTest.java|  60 +++
 .../cassandra/cql3/OutOfSpaceStopTest.java  |  63 
 .../apache/cassandra/cql3/OutOfSpaceTest.java   | 157 ---
 7 files changed, 311 insertions(+), 170 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/bd665473/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 8ecc787..1275631 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.16
+ * Disable passing control to post-flush after flush failure to prevent data loss (CASSANDRA-11828)
  * Allow STCS-in-L0 compactions to reduce scope with LCS (CASSANDRA-12040)
  * cannot use cql since upgrading python to 2.7.11+ (CASSANDRA-11850)
  * Fix filtering on clustering columns when 2i is used (CASSANDRA-11907)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/bd665473/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index b64d5de..6e82745 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -99,6 +99,10 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean

   new NamedThreadFactory("MemtablePostFlush"),

   "internal");
 
+// If a flush fails with an error the post-flush is never allowed to continue. This stores the error that caused it
+// to be able to show an error on following flushes instead of blindly continuing.
+private static volatile FSWriteError previousFlushFailure = null;
+
 private static final ExecutorService reclaimExecutor = new 
JMXEnabledThreadPoolExecutor(1,

 StageManager.KEEPALIVE,

 TimeUnit.SECONDS,
@@ -869,12 +873,20 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 {
 synchronized (data)
 {
+                if (previousFlushFailure != null)
+                    throw new IllegalStateException("A flush previously failed with the error below. To prevent data loss, "
+                                                    + "no flushes can be carried out until the node is restarted.",
+                                                    previousFlushFailure);
                 logFlush();
                 Flush flush = new Flush(false);
-                flushExecutor.execute(flush);
+                ListenableFutureTask<?> flushTask = ListenableFutureTask.create(flush, null);
+                flushExecutor.submit(flushTask);
                 ListenableFutureTask<?> task = ListenableFutureTask.create(flush.postFlush, null);
                 postFlushExecutor.submit(task);
-                return task;
+
+                @SuppressWarnings("unchecked")
+                ListenableFuture<?> future = Futures.allAsList(flushTask, task);
+                return future;
 }
 }
 
@@ -967,7 +979,6 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 final OpOrder.Barrier writeBarrier;
 final CountDownLatch latch = new CountDownLatch(1);
 final ReplayPosition lastReplayPosition;
-volatile FSWriteError flushFailure = null;
 
 private PostFlush(boolean flushSecondaryIndexes, OpOrder.Barrier 
writeBarrier, ReplayPosition lastReplayPosition)
 {
@@ -1010,16 +1021,12 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 
 // must check lastReplayPosition != null because Flush may find 

[22/23] cassandra git commit: Merge commit '904cb5d10e0de1a6ca89249be8c257ed38a80ef0' into cassandra-3.9

2016-08-05 Thread slebresne
Merge commit '904cb5d10e0de1a6ca89249be8c257ed38a80ef0' into cassandra-3.9

* commit '904cb5d10e0de1a6ca89249be8c257ed38a80ef0':
  Change commitlog and sstables to track dirty and clean intervals.
  Disable passing control to post-flush after flush failure to prevent data 
loss.


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7b102173
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7b102173
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7b102173

Branch: refs/heads/cassandra-3.9
Commit: 7b1021733b55c8865f80e261697b4c079d086633
Parents: 21c92ca 904cb5d
Author: Sylvain Lebresne 
Authored: Fri Aug 5 15:39:15 2016 +0200
Committer: Sylvain Lebresne 
Committed: Fri Aug 5 15:39:56 2016 +0200

--
 CHANGES.txt |   1 +
 .../cassandra/db/BlacklistedDirectories.java|  13 +
 .../apache/cassandra/db/ColumnFamilyStore.java  |  70 +---
 .../org/apache/cassandra/db/Directories.java|   2 +-
 src/java/org/apache/cassandra/db/Memtable.java  |  21 +-
 .../AbstractCommitLogSegmentManager.java|   4 +-
 .../cassandra/db/commitlog/CommitLog.java   |  11 +-
 .../db/commitlog/CommitLogReplayer.java | 105 ++
 .../db/commitlog/CommitLogSegment.java  |  82 +++--
 .../cassandra/db/commitlog/IntervalSet.java | 192 +++
 .../compaction/AbstractCompactionStrategy.java  |   3 +
 .../compaction/CompactionStrategyManager.java   |   3 +
 .../apache/cassandra/db/lifecycle/Tracker.java  |  45 +--
 .../org/apache/cassandra/db/lifecycle/View.java |  37 +--
 .../cassandra/io/sstable/format/Version.java|   2 +
 .../io/sstable/format/big/BigFormat.java|  12 +-
 .../metadata/LegacyMetadataSerializer.java  |  17 +-
 .../io/sstable/metadata/MetadataCollector.java  |  37 +--
 .../io/sstable/metadata/StatsMetadata.java  |  44 +--
 .../cassandra/tools/SSTableMetadataViewer.java  |   3 +-
 .../apache/cassandra/utils/IntegerInterval.java | 227 +
 .../legacy_mc_clust/mc-1-big-CompressionInfo.db | Bin 0 -> 83 bytes
 .../legacy_mc_clust/mc-1-big-Data.db| Bin 0 -> 5355 bytes
 .../legacy_mc_clust/mc-1-big-Digest.crc32   |   1 +
 .../legacy_mc_clust/mc-1-big-Filter.db  | Bin 0 -> 24 bytes
 .../legacy_mc_clust/mc-1-big-Index.db   | Bin 0 -> 157553 bytes
 .../legacy_mc_clust/mc-1-big-Statistics.db  | Bin 0 -> 7086 bytes
 .../legacy_mc_clust/mc-1-big-Summary.db | Bin 0 -> 47 bytes
 .../legacy_mc_clust/mc-1-big-TOC.txt|   8 +
 .../mc-1-big-CompressionInfo.db | Bin 0 -> 83 bytes
 .../legacy_mc_clust_compact/mc-1-big-Data.db| Bin 0 -> 5382 bytes
 .../mc-1-big-Digest.crc32   |   1 +
 .../legacy_mc_clust_compact/mc-1-big-Filter.db  | Bin 0 -> 24 bytes
 .../legacy_mc_clust_compact/mc-1-big-Index.db   | Bin 0 -> 157553 bytes
 .../mc-1-big-Statistics.db  | Bin 0 -> 7086 bytes
 .../legacy_mc_clust_compact/mc-1-big-Summary.db | Bin 0 -> 47 bytes
 .../legacy_mc_clust_compact/mc-1-big-TOC.txt|   8 +
 .../mc-1-big-CompressionInfo.db | Bin 0 -> 75 bytes
 .../legacy_mc_clust_counter/mc-1-big-Data.db| Bin 0 -> 4631 bytes
 .../mc-1-big-Digest.crc32   |   1 +
 .../legacy_mc_clust_counter/mc-1-big-Filter.db  | Bin 0 -> 24 bytes
 .../legacy_mc_clust_counter/mc-1-big-Index.db   | Bin 0 -> 157553 bytes
 .../mc-1-big-Statistics.db  | Bin 0 -> 7095 bytes
 .../legacy_mc_clust_counter/mc-1-big-Summary.db | Bin 0 -> 47 bytes
 .../legacy_mc_clust_counter/mc-1-big-TOC.txt|   8 +
 .../mc-1-big-CompressionInfo.db | Bin 0 -> 75 bytes
 .../mc-1-big-Data.db| Bin 0 -> 4625 bytes
 .../mc-1-big-Digest.crc32   |   1 +
 .../mc-1-big-Filter.db  | Bin 0 -> 24 bytes
 .../mc-1-big-Index.db   | Bin 0 -> 157553 bytes
 .../mc-1-big-Statistics.db  | Bin 0 -> 7095 bytes
 .../mc-1-big-Summary.db | Bin 0 -> 47 bytes
 .../mc-1-big-TOC.txt|   8 +
 .../mc-1-big-CompressionInfo.db | Bin 0 -> 43 bytes
 .../legacy_mc_simple/mc-1-big-Data.db   | Bin 0 -> 89 bytes
 .../legacy_mc_simple/mc-1-big-Digest.crc32  |   1 +
 .../legacy_mc_simple/mc-1-big-Filter.db | Bin 0 -> 24 bytes
 .../legacy_mc_simple/mc-1-big-Index.db  | Bin 0 -> 26 bytes
 .../legacy_mc_simple/mc-1-big-Statistics.db | Bin 0 -> 4639 bytes
 .../legacy_mc_simple/mc-1-big-Summary.db| Bin 0 -> 47 bytes
 .../legacy_mc_simple/mc-1-big-TOC.txt   |   8 +
 .../mc-1-big-CompressionInfo.db | Bin 0 -> 43 bytes
 .../legacy_mc_simple_compact/mc-1-big-Data.db   | Bin 0 -> 91 bytes
 .../mc-1-big-Digest.crc32   |   1 +
 .../legacy_

[20/23] cassandra git commit: Merge commit '904cb5d10e0de1a6ca89249be8c257ed38a80ef0' into cassandra-3.9

2016-08-05 Thread slebresne
Merge commit '904cb5d10e0de1a6ca89249be8c257ed38a80ef0' into cassandra-3.9

* commit '904cb5d10e0de1a6ca89249be8c257ed38a80ef0':
  Change commitlog and sstables to track dirty and clean intervals.
  Disable passing control to post-flush after flush failure to prevent data 
loss.


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7b102173
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7b102173
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7b102173

Branch: refs/heads/trunk
Commit: 7b1021733b55c8865f80e261697b4c079d086633
Parents: 21c92ca 904cb5d
Author: Sylvain Lebresne 
Authored: Fri Aug 5 15:39:15 2016 +0200
Committer: Sylvain Lebresne 
Committed: Fri Aug 5 15:39:56 2016 +0200

--
 CHANGES.txt |   1 +
 .../cassandra/db/BlacklistedDirectories.java|  13 +
 .../apache/cassandra/db/ColumnFamilyStore.java  |  70 +---
 .../org/apache/cassandra/db/Directories.java|   2 +-
 src/java/org/apache/cassandra/db/Memtable.java  |  21 +-
 .../AbstractCommitLogSegmentManager.java|   4 +-
 .../cassandra/db/commitlog/CommitLog.java   |  11 +-
 .../db/commitlog/CommitLogReplayer.java | 105 ++
 .../db/commitlog/CommitLogSegment.java  |  82 +++--
 .../cassandra/db/commitlog/IntervalSet.java | 192 +++
 .../compaction/AbstractCompactionStrategy.java  |   3 +
 .../compaction/CompactionStrategyManager.java   |   3 +
 .../apache/cassandra/db/lifecycle/Tracker.java  |  45 +--
 .../org/apache/cassandra/db/lifecycle/View.java |  37 +--
 .../cassandra/io/sstable/format/Version.java|   2 +
 .../io/sstable/format/big/BigFormat.java|  12 +-
 .../metadata/LegacyMetadataSerializer.java  |  17 +-
 .../io/sstable/metadata/MetadataCollector.java  |  37 +--
 .../io/sstable/metadata/StatsMetadata.java  |  44 +--
 .../cassandra/tools/SSTableMetadataViewer.java  |   3 +-
 .../apache/cassandra/utils/IntegerInterval.java | 227 +
 .../legacy_mc_clust/mc-1-big-CompressionInfo.db | Bin 0 -> 83 bytes
 .../legacy_mc_clust/mc-1-big-Data.db| Bin 0 -> 5355 bytes
 .../legacy_mc_clust/mc-1-big-Digest.crc32   |   1 +
 .../legacy_mc_clust/mc-1-big-Filter.db  | Bin 0 -> 24 bytes
 .../legacy_mc_clust/mc-1-big-Index.db   | Bin 0 -> 157553 bytes
 .../legacy_mc_clust/mc-1-big-Statistics.db  | Bin 0 -> 7086 bytes
 .../legacy_mc_clust/mc-1-big-Summary.db | Bin 0 -> 47 bytes
 .../legacy_mc_clust/mc-1-big-TOC.txt|   8 +
 .../mc-1-big-CompressionInfo.db | Bin 0 -> 83 bytes
 .../legacy_mc_clust_compact/mc-1-big-Data.db| Bin 0 -> 5382 bytes
 .../mc-1-big-Digest.crc32   |   1 +
 .../legacy_mc_clust_compact/mc-1-big-Filter.db  | Bin 0 -> 24 bytes
 .../legacy_mc_clust_compact/mc-1-big-Index.db   | Bin 0 -> 157553 bytes
 .../mc-1-big-Statistics.db  | Bin 0 -> 7086 bytes
 .../legacy_mc_clust_compact/mc-1-big-Summary.db | Bin 0 -> 47 bytes
 .../legacy_mc_clust_compact/mc-1-big-TOC.txt|   8 +
 .../mc-1-big-CompressionInfo.db | Bin 0 -> 75 bytes
 .../legacy_mc_clust_counter/mc-1-big-Data.db| Bin 0 -> 4631 bytes
 .../mc-1-big-Digest.crc32   |   1 +
 .../legacy_mc_clust_counter/mc-1-big-Filter.db  | Bin 0 -> 24 bytes
 .../legacy_mc_clust_counter/mc-1-big-Index.db   | Bin 0 -> 157553 bytes
 .../mc-1-big-Statistics.db  | Bin 0 -> 7095 bytes
 .../legacy_mc_clust_counter/mc-1-big-Summary.db | Bin 0 -> 47 bytes
 .../legacy_mc_clust_counter/mc-1-big-TOC.txt|   8 +
 .../mc-1-big-CompressionInfo.db | Bin 0 -> 75 bytes
 .../mc-1-big-Data.db| Bin 0 -> 4625 bytes
 .../mc-1-big-Digest.crc32   |   1 +
 .../mc-1-big-Filter.db  | Bin 0 -> 24 bytes
 .../mc-1-big-Index.db   | Bin 0 -> 157553 bytes
 .../mc-1-big-Statistics.db  | Bin 0 -> 7095 bytes
 .../mc-1-big-Summary.db | Bin 0 -> 47 bytes
 .../mc-1-big-TOC.txt|   8 +
 .../mc-1-big-CompressionInfo.db | Bin 0 -> 43 bytes
 .../legacy_mc_simple/mc-1-big-Data.db   | Bin 0 -> 89 bytes
 .../legacy_mc_simple/mc-1-big-Digest.crc32  |   1 +
 .../legacy_mc_simple/mc-1-big-Filter.db | Bin 0 -> 24 bytes
 .../legacy_mc_simple/mc-1-big-Index.db  | Bin 0 -> 26 bytes
 .../legacy_mc_simple/mc-1-big-Statistics.db | Bin 0 -> 4639 bytes
 .../legacy_mc_simple/mc-1-big-Summary.db| Bin 0 -> 47 bytes
 .../legacy_mc_simple/mc-1-big-TOC.txt   |   8 +
 .../mc-1-big-CompressionInfo.db | Bin 0 -> 43 bytes
 .../legacy_mc_simple_compact/mc-1-big-Data.db   | Bin 0 -> 91 bytes
 .../mc-1-big-Digest.crc32   |   1 +
 .../legacy_mc_simpl

[12/23] cassandra git commit: Merge commit '6dc1745' into cassandra-3.0

2016-08-05 Thread slebresne
Merge commit '6dc1745' into cassandra-3.0

* commit '6dc1745':
  Disable passing control to post-flush after flush failure to prevent data 
loss.


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/cf85f520
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/cf85f520
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/cf85f520

Branch: refs/heads/trunk
Commit: cf85f520c768a6494281dd5e94fb12b0b07dd1b0
Parents: b66e5a1 6dc1745
Author: Sylvain Lebresne 
Authored: Fri Aug 5 15:37:43 2016 +0200
Committer: Sylvain Lebresne 
Committed: Fri Aug 5 15:37:43 2016 +0200

--

--




[10/23] cassandra git commit: Merge commit '6dc1745' into cassandra-3.0

2016-08-05 Thread slebresne
Merge commit '6dc1745' into cassandra-3.0

* commit '6dc1745':
  Disable passing control to post-flush after flush failure to prevent data 
loss.


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/cf85f520
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/cf85f520
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/cf85f520

Branch: refs/heads/cassandra-3.0
Commit: cf85f520c768a6494281dd5e94fb12b0b07dd1b0
Parents: b66e5a1 6dc1745
Author: Sylvain Lebresne 
Authored: Fri Aug 5 15:37:43 2016 +0200
Committer: Sylvain Lebresne 
Committed: Fri Aug 5 15:37:43 2016 +0200

--

--




[21/23] cassandra git commit: Merge commit '904cb5d10e0de1a6ca89249be8c257ed38a80ef0' into cassandra-3.9

2016-08-05 Thread slebresne
http://git-wip-us.apache.org/repos/asf/cassandra/blob/7b102173/src/java/org/apache/cassandra/db/lifecycle/Tracker.java
--
diff --cc src/java/org/apache/cassandra/db/lifecycle/Tracker.java
index b1c706e,5a3d524..f464e08
--- a/src/java/org/apache/cassandra/db/lifecycle/Tracker.java
+++ b/src/java/org/apache/cassandra/db/lifecycle/Tracker.java
@@@ -353,35 -347,13 +349,16 @@@ public class Tracke
  
  Throwable fail;
  fail = updateSizeTracking(emptySet(), sstables, null);
 +
 +notifyDiscarded(memtable);
 +
- maybeFail(fail);
- }
- 
- /**
-  * permit compaction of the provided sstable; this translates to notifying compaction
-  * strategies of its existence, and potentially submitting a background task
-  */
- public void permitCompactionOfFlushed(Collection<SSTableReader> sstables)
- {
- if (sstables.isEmpty())
- return;
+ // TODO: if we're invalidated, should we notifyadded AND removed, or just skip both?
+ fail = notifyAdded(sstables, fail);
  
- apply(View.permitCompactionOfFlushed(sstables));
- 
- if (isDummy())
- return;
- 
- if (cfstore.isValid())
- {
- notifyAdded(sstables);
- CompactionManager.instance.submitBackground(cfstore);
- }
- else
- {
+ if (!isDummy() && !cfstore.isValid())
  dropSSTables();
- }
+ 
+ maybeFail(fail);
  }
  
  

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7b102173/src/java/org/apache/cassandra/db/lifecycle/View.java
--
diff --cc src/java/org/apache/cassandra/db/lifecycle/View.java
index a5c781d,4b3aae0..b26426d
--- a/src/java/org/apache/cassandra/db/lifecycle/View.java
+++ b/src/java/org/apache/cassandra/db/lifecycle/View.java
@@@ -40,7 -39,7 +39,6 @@@ import static com.google.common.collect
  import static com.google.common.collect.Iterables.all;
  import static com.google.common.collect.Iterables.concat;
  import static com.google.common.collect.Iterables.filter;
--import static com.google.common.collect.Iterables.transform;
  import static org.apache.cassandra.db.lifecycle.Helpers.emptySet;
  import static org.apache.cassandra.db.lifecycle.Helpers.filterOut;
  import static org.apache.cassandra.db.lifecycle.Helpers.replace;
@@@ -336,14 -333,12 +332,12 @@@ public class Vie
              List<Memtable> flushingMemtables = copyOf(filter(view.flushingMemtables, not(equalTo(memtable))));
              assert flushingMemtables.size() == view.flushingMemtables.size() - 1;
  
 -            if (flushed == null || flushed.isEmpty())
 +            if (flushed == null || Iterables.isEmpty(flushed))
                  return new View(view.liveMemtables, flushingMemtables, view.sstablesMap,
-                                 view.compactingMap, view.premature, view.intervalTree);
+                                 view.compactingMap, view.intervalTree);
  
              Map<SSTableReader, Boolean> sstableMap = replace(view.sstablesMap, emptySet(), flushed);
-             Map<SSTableReader, Boolean> compactingMap = replace(view.compactingMap, emptySet(), flushed);
-             Set<SSTableReader> premature = replace(view.premature, emptySet(), flushed);
-             return new View(view.liveMemtables, flushingMemtables, sstableMap, compactingMap, premature,
+             return new View(view.liveMemtables, flushingMemtables, sstableMap, view.compactingMap,
                              SSTableIntervalTree.build(sstableMap.keySet()));
          }
      };
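The reworked `Tracker.commitFlush` above threads one `Throwable` through several steps (`updateSizeTracking`, `notifyAdded`, `dropSSTables`) and only throws at the end via `maybeFail`, so every cleanup step runs even after an earlier one fails. A minimal standalone sketch of that accumulate-then-throw idiom (helper names here are illustrative, not Cassandra's `Throwables` utility):

```java
// Accumulate failures across steps, throw once at the end (illustrative sketch).
class Accumulate {
    /** Run a step, keeping the first failure and suppressing later ones. */
    static Throwable perform(Throwable accumulate, Runnable step) {
        try {
            step.run();
        } catch (Throwable t) {
            if (accumulate == null) accumulate = t;
            else accumulate.addSuppressed(t);
        }
        return accumulate;
    }

    /** Throw the accumulated failure, if any, after all steps have run. */
    static void maybeFail(Throwable accumulate) {
        if (accumulate instanceof RuntimeException) throw (RuntimeException) accumulate;
        if (accumulate != null) throw new RuntimeException(accumulate);
    }
}
```

The benefit over a plain try/catch is that later steps (e.g. notifying compaction strategies) still execute, and their failures are attached as suppressed exceptions rather than masking the first one.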

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7b102173/src/java/org/apache/cassandra/io/sstable/format/big/BigFormat.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7b102173/src/java/org/apache/cassandra/io/sstable/metadata/LegacyMetadataSerializer.java
--
diff --cc 
src/java/org/apache/cassandra/io/sstable/metadata/LegacyMetadataSerializer.java
index 505de49,a683513..14e391b
--- 
a/src/java/org/apache/cassandra/io/sstable/metadata/LegacyMetadataSerializer.java
+++ 
b/src/java/org/apache/cassandra/io/sstable/metadata/LegacyMetadataSerializer.java
@@@ -24,7 -24,8 +24,8 @@@ import java.util.*
  import com.google.common.collect.Maps;
  
  import org.apache.cassandra.db.TypeSizes;
 +import org.apache.cassandra.db.commitlog.CommitLogPosition;
+ import org.apache.cassandra.db.commitlog.IntervalSet;
 -import org.apache.cassandra.db.commitlog.ReplayPosition;
  import org.apache.cassandra.io.sstable.Component;
  import org.apache.cassandra.io.sstable.Descriptor;
  import org.apache.cassandra.io.sstable.format.Version;
@@@ -35,6 -36,8 +36,8 @@@ import org.apache.cassandra.utils.ByteB
  import org.apache.cassandra.utils.EstimatedHistogram;
  i

[16/23] cassandra git commit: Change commitlog and sstables to track dirty and clean intervals.

2016-08-05 Thread slebresne
Change commitlog and sstables to track dirty and clean intervals.

patch by Branimir Lambov; reviewed by Sylvain Lebresne for
CASSANDRA-11828


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/904cb5d1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/904cb5d1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/904cb5d1

Branch: refs/heads/cassandra-3.0
Commit: 904cb5d10e0de1a6ca89249be8c257ed38a80ef0
Parents: cf85f52
Author: Branimir Lambov 
Authored: Sat May 14 11:31:16 2016 +0300
Committer: Sylvain Lebresne 
Committed: Fri Aug 5 15:38:37 2016 +0200

--
 CHANGES.txt |   1 +
 .../cassandra/db/BlacklistedDirectories.java|  13 +
 .../apache/cassandra/db/ColumnFamilyStore.java  |  66 +---
 .../org/apache/cassandra/db/Directories.java|   2 +-
 src/java/org/apache/cassandra/db/Memtable.java  |  18 +-
 .../cassandra/db/commitlog/CommitLog.java   |  11 +-
 .../db/commitlog/CommitLogReplayer.java |  59 +++-
 .../db/commitlog/CommitLogSegment.java  |  77 ++---
 .../db/commitlog/CommitLogSegmentManager.java   |   4 +-
 .../cassandra/db/commitlog/IntervalSet.java | 192 +++
 .../cassandra/db/commitlog/ReplayPosition.java  |  71 
 .../compaction/AbstractCompactionStrategy.java  |   3 +
 .../compaction/CompactionStrategyManager.java   |   3 +
 .../apache/cassandra/db/lifecycle/Tracker.java  |  44 +--
 .../org/apache/cassandra/db/lifecycle/View.java |  36 +-
 .../cassandra/io/sstable/format/Version.java|   2 +
 .../io/sstable/format/big/BigFormat.java|  14 +-
 .../metadata/LegacyMetadataSerializer.java  |  17 +-
 .../io/sstable/metadata/MetadataCollector.java  |  38 +--
 .../io/sstable/metadata/StatsMetadata.java  |  44 +--
 .../cassandra/tools/SSTableMetadataViewer.java  |   3 +-
 .../apache/cassandra/utils/IntegerInterval.java | 227 +
 .../legacy_mc_clust/mc-1-big-CompressionInfo.db | Bin 0 -> 83 bytes
 .../legacy_mc_clust/mc-1-big-Data.db| Bin 0 -> 5355 bytes
 .../legacy_mc_clust/mc-1-big-Digest.crc32   |   1 +
 .../legacy_mc_clust/mc-1-big-Filter.db  | Bin 0 -> 24 bytes
 .../legacy_mc_clust/mc-1-big-Index.db   | Bin 0 -> 157553 bytes
 .../legacy_mc_clust/mc-1-big-Statistics.db  | Bin 0 -> 7086 bytes
 .../legacy_mc_clust/mc-1-big-Summary.db | Bin 0 -> 47 bytes
 .../legacy_mc_clust/mc-1-big-TOC.txt|   8 +
 .../mc-1-big-CompressionInfo.db | Bin 0 -> 83 bytes
 .../legacy_mc_clust_compact/mc-1-big-Data.db| Bin 0 -> 5382 bytes
 .../mc-1-big-Digest.crc32   |   1 +
 .../legacy_mc_clust_compact/mc-1-big-Filter.db  | Bin 0 -> 24 bytes
 .../legacy_mc_clust_compact/mc-1-big-Index.db   | Bin 0 -> 157553 bytes
 .../mc-1-big-Statistics.db  | Bin 0 -> 7086 bytes
 .../legacy_mc_clust_compact/mc-1-big-Summary.db | Bin 0 -> 47 bytes
 .../legacy_mc_clust_compact/mc-1-big-TOC.txt|   8 +
 .../mc-1-big-CompressionInfo.db | Bin 0 -> 75 bytes
 .../legacy_mc_clust_counter/mc-1-big-Data.db| Bin 0 -> 4631 bytes
 .../mc-1-big-Digest.crc32   |   1 +
 .../legacy_mc_clust_counter/mc-1-big-Filter.db  | Bin 0 -> 24 bytes
 .../legacy_mc_clust_counter/mc-1-big-Index.db   | Bin 0 -> 157553 bytes
 .../mc-1-big-Statistics.db  | Bin 0 -> 7095 bytes
 .../legacy_mc_clust_counter/mc-1-big-Summary.db | Bin 0 -> 47 bytes
 .../legacy_mc_clust_counter/mc-1-big-TOC.txt|   8 +
 .../mc-1-big-CompressionInfo.db | Bin 0 -> 75 bytes
 .../mc-1-big-Data.db| Bin 0 -> 4625 bytes
 .../mc-1-big-Digest.crc32   |   1 +
 .../mc-1-big-Filter.db  | Bin 0 -> 24 bytes
 .../mc-1-big-Index.db   | Bin 0 -> 157553 bytes
 .../mc-1-big-Statistics.db  | Bin 0 -> 7095 bytes
 .../mc-1-big-Summary.db | Bin 0 -> 47 bytes
 .../mc-1-big-TOC.txt|   8 +
 .../mc-1-big-CompressionInfo.db | Bin 0 -> 43 bytes
 .../legacy_mc_simple/mc-1-big-Data.db   | Bin 0 -> 89 bytes
 .../legacy_mc_simple/mc-1-big-Digest.crc32  |   1 +
 .../legacy_mc_simple/mc-1-big-Filter.db | Bin 0 -> 24 bytes
 .../legacy_mc_simple/mc-1-big-Index.db  | Bin 0 -> 26 bytes
 .../legacy_mc_simple/mc-1-big-Statistics.db | Bin 0 -> 4639 bytes
 .../legacy_mc_simple/mc-1-big-Summary.db| Bin 0 -> 47 bytes
 .../legacy_mc_simple/mc-1-big-TOC.txt   |   8 +
 .../mc-1-big-CompressionInfo.db | Bin 0 -> 43 bytes
 .../legacy_mc_simple_compact/mc-1-big-Data.db   | Bin 0 -> 91 bytes
 .../mc-1-big-Digest.crc32   |   1 +
 .../legacy_mc_simple_compact/mc-1-big-Filter.db | Bin 0 -> 24 bytes
 .../legacy_mc_simple_compact/mc

[11/23] cassandra git commit: Merge commit '6dc1745' into cassandra-3.0

2016-08-05 Thread slebresne
Merge commit '6dc1745' into cassandra-3.0

* commit '6dc1745':
  Disable passing control to post-flush after flush failure to prevent data 
loss.


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/cf85f520
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/cf85f520
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/cf85f520

Branch: refs/heads/cassandra-3.9
Commit: cf85f520c768a6494281dd5e94fb12b0b07dd1b0
Parents: b66e5a1 6dc1745
Author: Sylvain Lebresne 
Authored: Fri Aug 5 15:37:43 2016 +0200
Committer: Sylvain Lebresne 
Committed: Fri Aug 5 15:37:43 2016 +0200

--

--




[23/23] cassandra git commit: Merge branch 'cassandra-3.9' into trunk

2016-08-05 Thread slebresne
Merge branch 'cassandra-3.9' into trunk

* cassandra-3.9:
  Change commitlog and sstables to track dirty and clean intervals.
  Disable passing control to post-flush after flush failure to prevent data 
loss.


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/624ed783
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/624ed783
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/624ed783

Branch: refs/heads/trunk
Commit: 624ed7838bafa96c2083d5a10ebe9ef44f12dcf8
Parents: 7fe4309 7b10217
Author: Sylvain Lebresne 
Authored: Fri Aug 5 15:43:46 2016 +0200
Committer: Sylvain Lebresne 
Committed: Fri Aug 5 15:48:18 2016 +0200

--
 CHANGES.txt |   1 +
 .../cassandra/db/BlacklistedDirectories.java|  13 +
 .../apache/cassandra/db/ColumnFamilyStore.java  |  70 +---
 .../org/apache/cassandra/db/Directories.java|   2 +-
 src/java/org/apache/cassandra/db/Memtable.java  |  21 +-
 .../AbstractCommitLogSegmentManager.java|   4 +-
 .../cassandra/db/commitlog/CommitLog.java   |  11 +-
 .../db/commitlog/CommitLogReplayer.java | 105 ++
 .../db/commitlog/CommitLogSegment.java  |  82 +++--
 .../cassandra/db/commitlog/IntervalSet.java | 192 +++
 .../compaction/AbstractCompactionStrategy.java  |   3 +
 .../compaction/CompactionStrategyManager.java   |   3 +
 .../apache/cassandra/db/lifecycle/Tracker.java  |  45 +--
 .../org/apache/cassandra/db/lifecycle/View.java |  35 +-
 .../cassandra/io/sstable/format/Version.java|   2 +
 .../io/sstable/format/big/BigFormat.java|  12 +-
 .../metadata/LegacyMetadataSerializer.java  |  17 +-
 .../io/sstable/metadata/MetadataCollector.java  |  38 +--
 .../io/sstable/metadata/StatsMetadata.java  |  44 +--
 .../cassandra/tools/SSTableMetadataViewer.java  |   3 +-
 .../apache/cassandra/utils/IntegerInterval.java | 227 +
 .../legacy_mc_clust/mc-1-big-CompressionInfo.db | Bin 0 -> 83 bytes
 .../legacy_mc_clust/mc-1-big-Data.db| Bin 0 -> 5355 bytes
 .../legacy_mc_clust/mc-1-big-Digest.crc32   |   1 +
 .../legacy_mc_clust/mc-1-big-Filter.db  | Bin 0 -> 24 bytes
 .../legacy_mc_clust/mc-1-big-Index.db   | Bin 0 -> 157553 bytes
 .../legacy_mc_clust/mc-1-big-Statistics.db  | Bin 0 -> 7086 bytes
 .../legacy_mc_clust/mc-1-big-Summary.db | Bin 0 -> 47 bytes
 .../legacy_mc_clust/mc-1-big-TOC.txt|   8 +
 .../mc-1-big-CompressionInfo.db | Bin 0 -> 83 bytes
 .../legacy_mc_clust_compact/mc-1-big-Data.db| Bin 0 -> 5382 bytes
 .../mc-1-big-Digest.crc32   |   1 +
 .../legacy_mc_clust_compact/mc-1-big-Filter.db  | Bin 0 -> 24 bytes
 .../legacy_mc_clust_compact/mc-1-big-Index.db   | Bin 0 -> 157553 bytes
 .../mc-1-big-Statistics.db  | Bin 0 -> 7086 bytes
 .../legacy_mc_clust_compact/mc-1-big-Summary.db | Bin 0 -> 47 bytes
 .../legacy_mc_clust_compact/mc-1-big-TOC.txt|   8 +
 .../mc-1-big-CompressionInfo.db | Bin 0 -> 75 bytes
 .../legacy_mc_clust_counter/mc-1-big-Data.db| Bin 0 -> 4631 bytes
 .../mc-1-big-Digest.crc32   |   1 +
 .../legacy_mc_clust_counter/mc-1-big-Filter.db  | Bin 0 -> 24 bytes
 .../legacy_mc_clust_counter/mc-1-big-Index.db   | Bin 0 -> 157553 bytes
 .../mc-1-big-Statistics.db  | Bin 0 -> 7095 bytes
 .../legacy_mc_clust_counter/mc-1-big-Summary.db | Bin 0 -> 47 bytes
 .../legacy_mc_clust_counter/mc-1-big-TOC.txt|   8 +
 .../mc-1-big-CompressionInfo.db | Bin 0 -> 75 bytes
 .../mc-1-big-Data.db| Bin 0 -> 4625 bytes
 .../mc-1-big-Digest.crc32   |   1 +
 .../mc-1-big-Filter.db  | Bin 0 -> 24 bytes
 .../mc-1-big-Index.db   | Bin 0 -> 157553 bytes
 .../mc-1-big-Statistics.db  | Bin 0 -> 7095 bytes
 .../mc-1-big-Summary.db | Bin 0 -> 47 bytes
 .../mc-1-big-TOC.txt|   8 +
 .../mc-1-big-CompressionInfo.db | Bin 0 -> 43 bytes
 .../legacy_mc_simple/mc-1-big-Data.db   | Bin 0 -> 89 bytes
 .../legacy_mc_simple/mc-1-big-Digest.crc32  |   1 +
 .../legacy_mc_simple/mc-1-big-Filter.db | Bin 0 -> 24 bytes
 .../legacy_mc_simple/mc-1-big-Index.db  | Bin 0 -> 26 bytes
 .../legacy_mc_simple/mc-1-big-Statistics.db | Bin 0 -> 4639 bytes
 .../legacy_mc_simple/mc-1-big-Summary.db| Bin 0 -> 47 bytes
 .../legacy_mc_simple/mc-1-big-TOC.txt   |   8 +
 .../mc-1-big-CompressionInfo.db | Bin 0 -> 43 bytes
 .../legacy_mc_simple_compact/mc-1-big-Data.db   | Bin 0 -> 91 bytes
 .../mc-1-big-Digest.crc32   |   1 +
 .../legacy_mc_simple_compact/mc-1-big-Filter.db | Bin 0 -> 24 bytes
 .../legacy_mc_simple_c

[jira] [Updated] (CASSANDRA-11828) Commit log needs to track unflushed intervals rather than positions

2016-08-05 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-11828:
-
   Resolution: Fixed
Fix Version/s: (was: 3.0.x)
   (was: 2.2.x)
   (was: 3.x)
   3.9
   3.0.9
   2.2.8
   2.1.16
   Status: Resolved  (was: Patch Available)

Committed (including to 2.1 since this can result in data loss, which feels 
critical enough). Thanks.

> Commit log needs to track unflushed intervals rather than positions
> ---
>
> Key: CASSANDRA-11828
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11828
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Branimir Lambov
>Assignee: Branimir Lambov
> Fix For: 2.1.16, 2.2.8, 3.0.9, 3.9
>
>
> In CASSANDRA-11448, in an effort to handle flush errors more thoroughly, I 
> introduced a possible correctness bug when the disk failure policy is set to 
> ignore and a flush fails with an error:
> - we report the error but continue
> - we correctly do not update the commit log with the flush position
> - but we allow the post-flush executor to resume
> - a successful later flush can thus move the log's clear position beyond the 
> data from the failed flush
> - the log will then delete segment(s) that contain unflushed data.
> After CASSANDRA-9669 it is relatively easy to fix this problem by making the 
> commit log track sets of intervals of unflushed data (as described in 
> CASSANDRA-8496).
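The interval-tracking idea described above can be sketched as follows. This is a minimal, hypothetical illustration in plain Java (class and method names such as {{DirtyIntervals}}, {{markDirty}}, and {{markClean}} are invented here, not Cassandra's actual {{IntervalSet}}/{{IntegerInterval}} code), assuming positions within a single commit log segment. Tracking a single "flushed up to" position would wrongly mark the segment clean if a later flush succeeds while an earlier one failed; tracking the set of dirty intervals does not:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: a segment is safe to delete only when no dirty (unflushed)
// interval remains, regardless of the order in which flushes complete.
public class DirtyIntervals {
    // each interval is [start, end), positions within one log segment
    private final List<int[]> dirty = new ArrayList<>();

    public void markDirty(int start, int end) {
        dirty.add(new int[]{start, end});
    }

    // a flush of [start, end) succeeded: remove the covered parts,
    // keeping any uncovered remainders on either side
    public void markClean(int start, int end) {
        List<int[]> remaining = new ArrayList<>();
        for (int[] iv : dirty) {
            if (iv[0] < start)                    // left remainder
                remaining.add(new int[]{iv[0], Math.min(iv[1], start)});
            if (iv[1] > end)                      // right remainder
                remaining.add(new int[]{Math.max(iv[0], end), iv[1]});
        }
        dirty.clear();
        dirty.addAll(remaining);
    }

    public boolean isClean() {
        return dirty.isEmpty();
    }

    public static void main(String[] args) {
        DirtyIntervals seg = new DirtyIntervals();
        seg.markDirty(0, 100);             // two memtables wrote here
        seg.markClean(50, 100);            // a later flush succeeds first
        // with interval tracking, [0, 50) is still known to be dirty,
        // so the segment is not eligible for deletion yet
        System.out.println(seg.isClean()); // prints "false"
        seg.markClean(0, 50);              // the earlier flush succeeds
        System.out.println(seg.isClean()); // prints "true"
    }
}
```

With a single position instead of intervals, the first {{markClean(50, 100)}} would have advanced the "clear" position past the still-dirty data, which is exactly the segment-deletion data loss the ticket describes.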



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11726) IndexOutOfBoundsException when selecting (distinct) row ids from counter table.

2016-08-05 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15409473#comment-15409473
 ] 

Aleksey Yeschenko commented on CASSANDRA-11726:
---

I do prefer CASSANDRA-10657 working w/ counters, which is to say the 
alternative one. +1 conditional on tests passing.

> IndexOutOfBoundsException when selecting (distinct) row ids from counter 
> table.
> ---
>
> Key: CASSANDRA-11726
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11726
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: C* 3.5, cluster of 4 nodes.
>Reporter: Jaroslav Kamenik
>Assignee: Sylvain Lebresne
> Fix For: 3.x
>
>
> I have simple table containing counters:
> {code}
> CREATE TABLE tablename (
> object_id ascii,
> counter_id ascii,
> count counter,
> PRIMARY KEY (object_id, counter_id)
> ) WITH CLUSTERING ORDER BY (counter_id ASC)
> AND bloom_filter_fp_chance = 0.01
> AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
> 'max_threshold': '32', 'min_threshold': '4'}
> AND compression = {'enabled': 'false'}
> AND crc_check_chance = 1.0
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99PERCENTILE';
> {code}
> Counters are often incremented/decremented; whole rows are queried and sometimes deleted.
> After some time I tried to query all object_ids, but it failed with:
> {code}
> cqlsh:woc> consistency quorum;
> cqlsh:woc> select object_id from tablename;
> ServerError:  message="java.lang.IndexOutOfBoundsException">
> {code}
> select * from ..., select where .., and updates all work well.
> With consistency one it works sometimes, so it seems something is broken on 
> one server, but I tried to repair the table there and it did not help. 
> Whole exception from server log:
> {code}
> java.lang.IndexOutOfBoundsException: null
> at java.nio.Buffer.checkIndex(Buffer.java:546) ~[na:1.8.0_73]
> at java.nio.HeapByteBuffer.getShort(HeapByteBuffer.java:314) 
> ~[na:1.8.0_73]
> at 
> org.apache.cassandra.db.context.CounterContext.headerLength(CounterContext.java:141)
>  ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.db.context.CounterContext.access$100(CounterContext.java:76)
>  ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.db.context.CounterContext$ContextState.<init>(CounterContext.java:758)
>  ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.db.context.CounterContext$ContextState.wrap(CounterContext.java:765)
>  ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.db.context.CounterContext.merge(CounterContext.java:271) 
> ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.db.Conflicts.mergeCounterValues(Conflicts.java:76) 
> ~[apache-cassandra-3.5.jar:3.5]
> at org.apache.cassandra.db.rows.Cells.reconcile(Cells.java:143) 
> ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.db.rows.Row$Merger$ColumnDataReducer.getReduced(Row.java:591)
>  ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.db.rows.Row$Merger$ColumnDataReducer.getReduced(Row.java:549)
>  ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:217)
>  ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:156)
>  ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.5.jar:3.5]
> at org.apache.cassandra.db.rows.Row$Merger.merge(Row.java:526) 
> ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator$MergeReducer.getReduced(UnfilteredRowIterators.java:473)
>  ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIterators$UnfilteredRowMergeIterator$MergeReducer.getReduced(UnfilteredRowIterators.java:437)
>  ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:217)
>  ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:156)
>  ~[apache-cassandra-3.5.jar:3.5]
> at 
> org.apache.cassandra.utils.AbstractIterato

[18/23] cassandra git commit: Change commitlog and sstables to track dirty and clean intervals.

2016-08-05 Thread slebresne
Change commitlog and sstables to track dirty and clean intervals.

patch by Branimir Lambov; reviewed by Sylvain Lebresne for
CASSANDRA-11828


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/904cb5d1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/904cb5d1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/904cb5d1

Branch: refs/heads/cassandra-3.9
Commit: 904cb5d10e0de1a6ca89249be8c257ed38a80ef0
Parents: cf85f52
Author: Branimir Lambov 
Authored: Sat May 14 11:31:16 2016 +0300
Committer: Sylvain Lebresne 
Committed: Fri Aug 5 15:38:37 2016 +0200

--
 CHANGES.txt |   1 +
 .../cassandra/db/BlacklistedDirectories.java|  13 +
 .../apache/cassandra/db/ColumnFamilyStore.java  |  66 +---
 .../org/apache/cassandra/db/Directories.java|   2 +-
 src/java/org/apache/cassandra/db/Memtable.java  |  18 +-
 .../cassandra/db/commitlog/CommitLog.java   |  11 +-
 .../db/commitlog/CommitLogReplayer.java |  59 +++-
 .../db/commitlog/CommitLogSegment.java  |  77 ++---
 .../db/commitlog/CommitLogSegmentManager.java   |   4 +-
 .../cassandra/db/commitlog/IntervalSet.java | 192 +++
 .../cassandra/db/commitlog/ReplayPosition.java  |  71 
 .../compaction/AbstractCompactionStrategy.java  |   3 +
 .../compaction/CompactionStrategyManager.java   |   3 +
 .../apache/cassandra/db/lifecycle/Tracker.java  |  44 +--
 .../org/apache/cassandra/db/lifecycle/View.java |  36 +-
 .../cassandra/io/sstable/format/Version.java|   2 +
 .../io/sstable/format/big/BigFormat.java|  14 +-
 .../metadata/LegacyMetadataSerializer.java  |  17 +-
 .../io/sstable/metadata/MetadataCollector.java  |  38 +--
 .../io/sstable/metadata/StatsMetadata.java  |  44 +--
 .../cassandra/tools/SSTableMetadataViewer.java  |   3 +-
 .../apache/cassandra/utils/IntegerInterval.java | 227 +
 .../legacy_mc_clust/mc-1-big-CompressionInfo.db | Bin 0 -> 83 bytes
 .../legacy_mc_clust/mc-1-big-Data.db| Bin 0 -> 5355 bytes
 .../legacy_mc_clust/mc-1-big-Digest.crc32   |   1 +
 .../legacy_mc_clust/mc-1-big-Filter.db  | Bin 0 -> 24 bytes
 .../legacy_mc_clust/mc-1-big-Index.db   | Bin 0 -> 157553 bytes
 .../legacy_mc_clust/mc-1-big-Statistics.db  | Bin 0 -> 7086 bytes
 .../legacy_mc_clust/mc-1-big-Summary.db | Bin 0 -> 47 bytes
 .../legacy_mc_clust/mc-1-big-TOC.txt|   8 +
 .../mc-1-big-CompressionInfo.db | Bin 0 -> 83 bytes
 .../legacy_mc_clust_compact/mc-1-big-Data.db| Bin 0 -> 5382 bytes
 .../mc-1-big-Digest.crc32   |   1 +
 .../legacy_mc_clust_compact/mc-1-big-Filter.db  | Bin 0 -> 24 bytes
 .../legacy_mc_clust_compact/mc-1-big-Index.db   | Bin 0 -> 157553 bytes
 .../mc-1-big-Statistics.db  | Bin 0 -> 7086 bytes
 .../legacy_mc_clust_compact/mc-1-big-Summary.db | Bin 0 -> 47 bytes
 .../legacy_mc_clust_compact/mc-1-big-TOC.txt|   8 +
 .../mc-1-big-CompressionInfo.db | Bin 0 -> 75 bytes
 .../legacy_mc_clust_counter/mc-1-big-Data.db| Bin 0 -> 4631 bytes
 .../mc-1-big-Digest.crc32   |   1 +
 .../legacy_mc_clust_counter/mc-1-big-Filter.db  | Bin 0 -> 24 bytes
 .../legacy_mc_clust_counter/mc-1-big-Index.db   | Bin 0 -> 157553 bytes
 .../mc-1-big-Statistics.db  | Bin 0 -> 7095 bytes
 .../legacy_mc_clust_counter/mc-1-big-Summary.db | Bin 0 -> 47 bytes
 .../legacy_mc_clust_counter/mc-1-big-TOC.txt|   8 +
 .../mc-1-big-CompressionInfo.db | Bin 0 -> 75 bytes
 .../mc-1-big-Data.db| Bin 0 -> 4625 bytes
 .../mc-1-big-Digest.crc32   |   1 +
 .../mc-1-big-Filter.db  | Bin 0 -> 24 bytes
 .../mc-1-big-Index.db   | Bin 0 -> 157553 bytes
 .../mc-1-big-Statistics.db  | Bin 0 -> 7095 bytes
 .../mc-1-big-Summary.db | Bin 0 -> 47 bytes
 .../mc-1-big-TOC.txt|   8 +
 .../mc-1-big-CompressionInfo.db | Bin 0 -> 43 bytes
 .../legacy_mc_simple/mc-1-big-Data.db   | Bin 0 -> 89 bytes
 .../legacy_mc_simple/mc-1-big-Digest.crc32  |   1 +
 .../legacy_mc_simple/mc-1-big-Filter.db | Bin 0 -> 24 bytes
 .../legacy_mc_simple/mc-1-big-Index.db  | Bin 0 -> 26 bytes
 .../legacy_mc_simple/mc-1-big-Statistics.db | Bin 0 -> 4639 bytes
 .../legacy_mc_simple/mc-1-big-Summary.db| Bin 0 -> 47 bytes
 .../legacy_mc_simple/mc-1-big-TOC.txt   |   8 +
 .../mc-1-big-CompressionInfo.db | Bin 0 -> 43 bytes
 .../legacy_mc_simple_compact/mc-1-big-Data.db   | Bin 0 -> 91 bytes
 .../mc-1-big-Digest.crc32   |   1 +
 .../legacy_mc_simple_compact/mc-1-big-Filter.db | Bin 0 -> 24 bytes
 .../legacy_mc_simple_compact/mc

[jira] [Updated] (CASSANDRA-12377) Add byteman support for 2.2

2016-08-05 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-12377:

Reviewer: Paulo Motta

> Add byteman support for 2.2
> ---
>
> Key: CASSANDRA-12377
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12377
> Project: Cassandra
>  Issue Type: Test
>  Components: Testing
>Reporter: Yuki Morishita
>Assignee: Yuki Morishita
>Priority: Trivial
> Fix For: 2.2.x
>
>
> Using byteman for dtest is useful to interrupt streaming reliably 
> (CASSANDRA-10810 / https://github.com/riptano/cassandra-dtest/pull/1145).
> Unfortunately, it is only available for 3.0+.
> This ticket backports it to 2.2.x.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12377) Add byteman support for 2.2

2016-08-05 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-12377:

Status: Ready to Commit  (was: Patch Available)

> Add byteman support for 2.2
> ---
>
> Key: CASSANDRA-12377
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12377
> Project: Cassandra
>  Issue Type: Test
>  Components: Testing
>Reporter: Yuki Morishita
>Assignee: Yuki Morishita
>Priority: Trivial
> Fix For: 2.2.x
>
>
> Using byteman for dtest is useful to interrupt streaming reliably 
> (CASSANDRA-10810 / https://github.com/riptano/cassandra-dtest/pull/1145).
> Unfortunately, it is only available for 3.0+.
> This ticket backports it to 2.2.x.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12377) Add byteman support for 2.2

2016-08-05 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15409482#comment-15409482
 ] 

Paulo Motta commented on CASSANDRA-12377:
-

+1

> Add byteman support for 2.2
> ---
>
> Key: CASSANDRA-12377
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12377
> Project: Cassandra
>  Issue Type: Test
>  Components: Testing
>Reporter: Yuki Morishita
>Assignee: Yuki Morishita
>Priority: Trivial
> Fix For: 2.2.x
>
>
> Using byteman for dtest is useful to interrupt streaming reliably 
> (CASSANDRA-10810 / https://github.com/riptano/cassandra-dtest/pull/1145).
> Unfortunately, it is only available for 3.0+.
> This ticket backports it to 2.2.x.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12223) SASI Indexes querying incorrectly return 0 rows

2016-08-05 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-12223:

Status: Patch Available  (was: Open)

This bug is caused by the fact that the reversed column is never checked. The fix 
is quite trivial.

|[trunk|https://github.com/ifesdjeen/cassandra/tree/12223-trunk]|[dtest|https://cassci.datastax.com/job/ifesdjeen-12223-trunk-dtest/]|[testall|https://cassci.datastax.com/job/ifesdjeen-12223-trunk-dtest/]|

> SASI Indexes querying incorrectly return 0 rows
> ---
>
> Key: CASSANDRA-12223
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12223
> Project: Cassandra
>  Issue Type: Bug
>  Components: sasi
> Environment: Windows, DataStax Distribution
>Reporter: Qiu Zhida
>Assignee: Alex Petrov
> Fix For: 3.7
>
>
> I just started working with SASI indexes on Cassandra 3.7.0 and encountered a 
> problem which, as I suspected, was a bug. It took some effort to track down the 
> situation in which the bug shows up; here is what I found:
> When querying with a SASI index, *it may incorrectly return 0 rows*, and 
> changing a little conditions, it works again, like the following CQL code:
> {code:title=CQL|borderStyle=solid}
> CREATE TABLE IF NOT EXISTS roles (
> name text,
> a int,
> b int,
> PRIMARY KEY ((name, a), b)
> ) WITH CLUSTERING ORDER BY (b DESC);
> 
> insert into roles (name,a,b) values ('Joe',1,1);
> insert into roles (name,a,b) values ('Joe',2,2);
> insert into roles (name,a,b) values ('Joe',3,3);
> insert into roles (name,a,b) values ('Joe',4,4);
> CREATE TABLE IF NOT EXISTS roles2 (
> name text,
> a int,
> b int,
> PRIMARY KEY ((name, a), b)
> ) WITH CLUSTERING ORDER BY (b ASC);
> 
> insert into roles2 (name,a,b) values ('Joe',1,1);
> insert into roles2 (name,a,b) values ('Joe',2,2);
> insert into roles2 (name,a,b) values ('Joe',3,3);
> insert into roles2 (name,a,b) values ('Joe',4,4);
> CREATE CUSTOM INDEX ON roles (b) USING 
> 'org.apache.cassandra.index.sasi.SASIIndex' 
> WITH OPTIONS = { 'mode': 'SPARSE' };
> CREATE CUSTOM INDEX ON roles2 (b) USING 
> 'org.apache.cassandra.index.sasi.SASIIndex' 
> WITH OPTIONS = { 'mode': 'SPARSE' };
> {code}
> Note that the only change from table *roles* to table *roles2* is '*CLUSTERING 
> ORDER BY (b DESC)*' becoming '*CLUSTERING ORDER BY (b ASC)*'.
> When querying with the statement +select * from roles2 where b<3+, the result is 
> two rows:
> {code:title=CQL|borderStyle=solid}
>  name | a | b
> --+---+---
>   Joe | 1 | 1
>   Joe | 2 | 2
> (2 rows)
> {code}
> However, if querying with +select * from roles where b<3+, it returned no 
> rows at all:
> {code:title=CQL|borderStyle=solid}
>  name | a | b
> --+---+---
> (0 rows)
> {code}
> This is not the only situation where the bug shows up. One time I created a 
> SASI index with a specific name like 'end_idx' on a column named 'end' and the 
> bug showed up; when I didn't specify the index name, it was gone.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12008) Make decommission operations resumable

2016-08-05 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15409486#comment-15409486
 ] 

Paulo Motta commented on CASSANDRA-12008:
-

It seems the CI dtests failed, I think due to some recent changes in dtest that 
probably require an update/rebase of your branch. 

Now that CASSANDRA-12377 will add byteman support to 2.2+, can you modify 
{{simple_decommission_test}} to use byteman to abort the stream session, since 
that's more reliable? Thanks!

> Make decommission operations resumable
> --
>
> Key: CASSANDRA-12008
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12008
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Streaming and Messaging
>Reporter: Tom van der Woerdt
>Assignee: Kaide Mu
>Priority: Minor
>
> We're dealing with large data sets (multiple terabytes per node) and 
> sometimes we need to add or remove nodes. These operations are very dependent 
> on the entire cluster being up, so while we're joining a new node (which 
> sometimes takes 6 hours or longer) a lot can go wrong and in a lot of cases 
> something does.
> It would be great if the ability to retry streams was implemented.
> Example to illustrate the problem :
> {code}
> 03:18 PM   ~ $ nodetool decommission
> error: Stream failed
> -- StackTrace --
> org.apache.cassandra.streaming.StreamException: Stream failed
> at 
> org.apache.cassandra.streaming.management.StreamEventJMXNotifier.onFailure(StreamEventJMXNotifier.java:85)
> at com.google.common.util.concurrent.Futures$6.run(Futures.java:1310)
> at 
> com.google.common.util.concurrent.MoreExecutors$DirectExecutor.execute(MoreExecutors.java:457)
> at 
> com.google.common.util.concurrent.ExecutionList.executeListener(ExecutionList.java:156)
> at 
> com.google.common.util.concurrent.ExecutionList.execute(ExecutionList.java:145)
> at 
> com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:202)
> at 
> org.apache.cassandra.streaming.StreamResultFuture.maybeComplete(StreamResultFuture.java:210)
> at 
> org.apache.cassandra.streaming.StreamResultFuture.handleSessionComplete(StreamResultFuture.java:186)
> at 
> org.apache.cassandra.streaming.StreamSession.closeSession(StreamSession.java:430)
> at 
> org.apache.cassandra.streaming.StreamSession.complete(StreamSession.java:622)
> at 
> org.apache.cassandra.streaming.StreamSession.messageReceived(StreamSession.java:486)
> at 
> org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:274)
> at java.lang.Thread.run(Thread.java:745)
> 08:04 PM   ~ $ nodetool decommission
> nodetool: Unsupported operation: Node in LEAVING state; wait for status to 
> become normal or restart
> See 'nodetool help' or 'nodetool help <command>'.
> {code}
> Streaming failed, probably due to load :
> {code}
> ERROR [STREAM-IN-/] 2016-06-14 18:05:47,275 StreamSession.java:520 - 
> [Stream #] Streaming error occurred
> java.net.SocketTimeoutException: null
> at 
> sun.nio.ch.SocketAdaptor$SocketInputStream.read(SocketAdaptor.java:211) 
> ~[na:1.8.0_77]
> at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:103) 
> ~[na:1.8.0_77]
> at 
> java.nio.channels.Channels$ReadableByteChannelImpl.read(Channels.java:385) 
> ~[na:1.8.0_77]
> at 
> org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:54)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> at 
> org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:268)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_77]
> {code}
> If implementing retries is not possible, can we have a 'nodetool decommission 
> resume'?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10810) Make rebuild operations resumable

2016-08-05 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15409494#comment-15409494
 ] 

Paulo Motta commented on CASSANDRA-10810:
-

The test is failing locally with:
{noformat}
==
ERROR: resumable_rebuild_test (rebuild_test.TestRebuild)
--
Traceback (most recent call last):
  File "/home/paulo/Workspace/cassandra/cassandra-dtest/tools.py", line 290, in 
wrapped
f(obj)
  File "/home/paulo/Workspace/cassandra/cassandra-dtest/rebuild_test.py", line 
194, in resumable_rebuild_test
node3.byteman_submit(script)
  File "/home/paulo/Workspace/cassandra/ccm/ccmlib/node.py", line 1845, in 
byteman_submit
byteman_cmd.append(os.path.join(os.environ['JAVA_HOME'],
  File "/usr/lib/python2.7/UserDict.py", line 23, in __getitem__
raise KeyError(key)
KeyError: 'JAVA_HOME'
{noformat}
which works if {{$JAVA_HOME}} is defined, so I think it's not a big deal.

I submitted a [CI 
run|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-2.2-10810-dtest/1/]
 to check if this will work, as well as a [multiplexer 50x 
run|https://cassci.datastax.com/view/Parameterized/job/parameterized_dtest_multiplexer/216/]
 to verify the test is not flaky.

> Make rebuild operations resumable
> -
>
> Key: CASSANDRA-10810
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10810
> Project: Cassandra
>  Issue Type: Wish
>  Components: Streaming and Messaging
>Reporter: Jeremy Hanna
>Assignee: Kaide Mu
> Fix For: 3.x
>
>
> Related to CASSANDRA-8942, now that we can resume bootstrap operations, this 
> could also be possible with rebuild operations, such as when you bootstrap 
> new nodes in a completely new datacenter in two steps.





[jira] [Updated] (CASSANDRA-12335) Super columns are broken after upgrading to 3.0 on thrift

2016-08-05 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-12335:
-
Status: Patch Available  (was: Reopened)

> Super columns are broken after upgrading to 3.0 on thrift
> -
>
> Key: CASSANDRA-12335
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12335
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Jeremiah Jordan
>Assignee: Sylvain Lebresne
> Fix For: 3.0.9, 3.9
>
> Attachments: 0001-Fix-encoding-of-cell-names-for-super-columns.txt, 
> 0001-Force-super-column-families-to-be-compound.txt
>
>
> Super Columns are broken after upgrading to cassandra-3.0 HEAD.  The below 
> script shows this.
> 2.1 cli output for get:
> {code}
> [default@test] get Sites[utf8('Bob')][utf8('attr')]['name'] as utf8;
> => (name=name, value=Bob, timestamp=1469724504357000)
> {code}
> cqlsh:
> {code}
> [default@test]
>  key  | blobAsText(column1)
> --+-
>  0x53696d6f6e |attr
>  0x426f62 |attr
> {code}
> 3.0 cli:
> {code}
> [default@unknown] use test;
> unconfigured table schema_columnfamilies
> [default@test] get Sites[utf8('Bob')][utf8('attr')]['name'] as utf8;
> null
> [default@test]
> {code}
> cqlsh:
> {code}
>  key  | system.blobastext(column1)
> --+--
>  0x53696d6f6e | \x00\x04attr\x00\x00\x04name\x00
>  0x426f62 | \x00\x04attr\x00\x00\x04name\x00
> {code}
> Run this from a directory with cassandra-3.0 checked out and compiled
> {code}
> ccm create -n 2 -v 2.1.14 testsuper
> echo "### Starting 2.1 ###"
> ccm start
> MYFILE=`mktemp`
> echo "create keyspace test with placement_strategy = 
> 'org.apache.cassandra.locator.SimpleStrategy' and strategy_options = 
> {replication_factor:2};
> use test;
> create column family Sites with column_type = 'Super' and comparator = 
> 'BytesType' and subcomparator='UTF8Type';
> set Sites[utf8('Simon')][utf8('attr')]['name'] = utf8('Simon');
> set Sites[utf8('Bob')][utf8('attr')]['name'] = utf8('Bob');
> get Sites[utf8('Bob')][utf8('attr')]['name'] as utf8;" > $MYFILE
> ~/.ccm/repository/2.1.14/bin/cassandra-cli < $MYFILE
> rm $MYFILE
> ~/.ccm/repository/2.1.14/bin/nodetool -p 7100 flush
> ~/.ccm/repository/2.1.14/bin/nodetool -p 7200 flush
> ccm stop
> # run from cassandra-3.0 checked out and compiled
> ccm setdir
> echo "### Starting Current Directory 
> ###"
> ccm start
> ./bin/nodetool -p 7100 upgradesstables
> ./bin/nodetool -p 7200 upgradesstables
> ./bin/nodetool -p 7100 enablethrift
> ./bin/nodetool -p 7200 enablethrift
> MYFILE=`mktemp`
> echo "use test;
> get Sites[utf8('Bob')][utf8('attr')]['name'] as utf8;" > $MYFILE
> ~/.ccm/repository/2.1.14/bin/cassandra-cli < $MYFILE
> rm $MYFILE
> {code}





[jira] [Commented] (CASSANDRA-12205) nodetool tablestats sstable count missing.

2016-08-05 Thread Cameron MacMinn (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15409500#comment-15409500
 ] 

Cameron MacMinn commented on CASSANDRA-12205:
-

As a user, it is not a duplicate.

When I first saw that, I assumed it meant a duplicate from a non-user perspective, in 
the source code, patches, etc. If you cannot find any such duplication, could 
you separate these into two issues? (Please let me know if I should update my 
Jira 12205 problem report.)

Thanks again to you and your colleagues for fixing the missing SSTable count.


> nodetool tablestats sstable count missing.
> --
>
> Key: CASSANDRA-12205
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12205
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: Cassandra 3.7
>Reporter: Cameron MacMinn
>Assignee: Edward Ribeiro
> Fix For: 3.9
>
> Attachments: CASSANDRA-12205.patch, bad.txt, good.txt
>
>
> As a user, I have used  nodetool cfstats  since v2.1. The most useful line is 
> the one like 'SSTable count: 12'.
> As a user, I want v3.7  nodetool tablestats  to continue showing the SSTable 
> count. At the moment, the SSTable count is missing from the output.
> Examples attached.





[jira] [Reopened] (CASSANDRA-12335) Super columns are broken after upgrading to 3.0 on thrift

2016-08-05 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne reopened CASSANDRA-12335:
--

Reopening because the committed patch wasn't entirely sufficient (and none of this 
has been released yet). The problem is that when encoding cell names for old 
nodes, we didn't take the super column layout into account. I'm attaching a 
fairly simple additional patch that handles that.

> Super columns are broken after upgrading to 3.0 on thrift
> -
>
> Key: CASSANDRA-12335
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12335
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Jeremiah Jordan
>Assignee: Sylvain Lebresne
> Fix For: 3.0.9, 3.9
>
> Attachments: 0001-Fix-encoding-of-cell-names-for-super-columns.txt, 
> 0001-Force-super-column-families-to-be-compound.txt
>
>
> Super Columns are broken after upgrading to cassandra-3.0 HEAD.  The below 
> script shows this.
> 2.1 cli output for get:
> {code}
> [default@test] get Sites[utf8('Bob')][utf8('attr')]['name'] as utf8;
> => (name=name, value=Bob, timestamp=1469724504357000)
> {code}
> cqlsh:
> {code}
> [default@test]
>  key  | blobAsText(column1)
> --+-
>  0x53696d6f6e |attr
>  0x426f62 |attr
> {code}
> 3.0 cli:
> {code}
> [default@unknown] use test;
> unconfigured table schema_columnfamilies
> [default@test] get Sites[utf8('Bob')][utf8('attr')]['name'] as utf8;
> null
> [default@test]
> {code}
> cqlsh:
> {code}
>  key  | system.blobastext(column1)
> --+--
>  0x53696d6f6e | \x00\x04attr\x00\x00\x04name\x00
>  0x426f62 | \x00\x04attr\x00\x00\x04name\x00
> {code}
> Run this from a directory with cassandra-3.0 checked out and compiled
> {code}
> ccm create -n 2 -v 2.1.14 testsuper
> echo "### Starting 2.1 ###"
> ccm start
> MYFILE=`mktemp`
> echo "create keyspace test with placement_strategy = 
> 'org.apache.cassandra.locator.SimpleStrategy' and strategy_options = 
> {replication_factor:2};
> use test;
> create column family Sites with column_type = 'Super' and comparator = 
> 'BytesType' and subcomparator='UTF8Type';
> set Sites[utf8('Simon')][utf8('attr')]['name'] = utf8('Simon');
> set Sites[utf8('Bob')][utf8('attr')]['name'] = utf8('Bob');
> get Sites[utf8('Bob')][utf8('attr')]['name'] as utf8;" > $MYFILE
> ~/.ccm/repository/2.1.14/bin/cassandra-cli < $MYFILE
> rm $MYFILE
> ~/.ccm/repository/2.1.14/bin/nodetool -p 7100 flush
> ~/.ccm/repository/2.1.14/bin/nodetool -p 7200 flush
> ccm stop
> # run from cassandra-3.0 checked out and compiled
> ccm setdir
> echo "### Starting Current Directory 
> ###"
> ccm start
> ./bin/nodetool -p 7100 upgradesstables
> ./bin/nodetool -p 7200 upgradesstables
> ./bin/nodetool -p 7100 enablethrift
> ./bin/nodetool -p 7200 enablethrift
> MYFILE=`mktemp`
> echo "use test;
> get Sites[utf8('Bob')][utf8('attr')]['name'] as utf8;" > $MYFILE
> ~/.ccm/repository/2.1.14/bin/cassandra-cli < $MYFILE
> rm $MYFILE
> {code}





[jira] [Assigned] (CASSANDRA-12378) Creating SASI index on clustering column in presence of static column breaks writes

2016-08-05 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov reassigned CASSANDRA-12378:
---

Assignee: Alex Petrov

> Creating SASI index on clustering column in presence of static column breaks 
> writes
> ---
>
> Key: CASSANDRA-12378
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12378
> Project: Cassandra
>  Issue Type: Bug
>  Components: sasi
>Reporter: Alex Petrov
>Assignee: Alex Petrov
>Priority: Critical
>
> Steps to reproduce:
> {code}
> String simpleTable = "simple_table";
> QueryProcessor.executeOnceInternal(String.format("CREATE TABLE IF NOT EXISTS 
> %s.%s (pk int, ck1 int, ck2 int, s1 int static, reg1 int, PRIMARY KEY (pk, 
> ck1));", KS_NAME, simpleTable));
> QueryProcessor.executeOnceInternal(String.format("CREATE CUSTOM INDEX ON 
> %s.%s (ck1) USING 'org.apache.cassandra.index.sasi.SASIIndex';", KS_NAME, 
> simpleTable));
> QueryProcessor.executeOnceInternal(String.format("INSERT INTO %s.%s (pk, ck1, 
> ck2, s1, reg1) VALUES (1,1,1,1,1);", KS_NAME, simpleTable));
> {code}
> {code}
> ERROR [MutationStage-2] 2016-08-04 09:59:08,054 StorageProxy.java:1351 - 
> Failed to apply mutation locally : {}
> java.lang.RuntimeException: 0 for ks: test, table: sasi
> at 
> org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:1371) 
> ~[main/:na]
> at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:555) 
> ~[main/:na]
> at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:425) 
> ~[main/:na]
> at org.apache.cassandra.db.Mutation.applyFuture(Mutation.java:215) 
> ~[main/:na]
> at org.apache.cassandra.db.Mutation.apply(Mutation.java:227) 
> ~[main/:na]
> at org.apache.cassandra.db.Mutation.apply(Mutation.java:241) 
> ~[main/:na]
> at 
> org.apache.cassandra.service.StorageProxy$8.runMayThrow(StorageProxy.java:1345)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.StorageProxy$LocalMutationRunnable.run(StorageProxy.java:2520)
>  [main/:na]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_91]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:162)
>  [main/:na]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:134)
>  [main/:na]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) 
> [main/:na]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 0
> at 
> org.apache.cassandra.db.AbstractBufferClusteringPrefix.get(AbstractBufferClusteringPrefix.java:55)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.conf.ColumnIndex.getValueOf(ColumnIndex.java:235)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.conf.ColumnIndex.index(ColumnIndex.java:104) 
> ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.SASIIndex$1.insertRow(SASIIndex.java:254) 
> ~[main/:na]
> at 
> org.apache.cassandra.index.SecondaryIndexManager$WriteTimeTransaction.onInserted(SecondaryIndexManager.java:808)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.partitions.AtomicBTreePartition$RowUpdater.apply(AtomicBTreePartition.java:335)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.partitions.AtomicBTreePartition.addAllWithSizeDelta(AtomicBTreePartition.java:155)
>  ~[main/:na]
> at org.apache.cassandra.db.Memtable.put(Memtable.java:251) ~[main/:na]
> at 
> org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:1358) 
> ~[main/:na]
> ... 12 common frames omitted
> {code}
> I would say this issue is critical, as if it occurs, the node will crash on 
> commitlog replay, too (if it was restarted for an unrelated reason). 
> However, the fix is relatively simple: check for static clustering in 
> {{ColumnIndex}}. 
> cc [~xedin]





[jira] [Updated] (CASSANDRA-12378) Creating SASI index on clustering column in presence of static column breaks writes

2016-08-05 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-12378:

Reviewer: Pavel Yaskevich
  Status: Patch Available  (was: Open)

Patch is available here: 

|[trunk 
|https://github.com/ifesdjeen/cassandra/tree/12378-trunk]|[utest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12378-trunk-testall/]|[dtest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12378-trunk-dtest/]|

> Creating SASI index on clustering column in presence of static column breaks 
> writes
> ---
>
> Key: CASSANDRA-12378
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12378
> Project: Cassandra
>  Issue Type: Bug
>  Components: sasi
>Reporter: Alex Petrov
>Priority: Critical
>
> Steps to reproduce:
> {code}
> String simpleTable = "simple_table";
> QueryProcessor.executeOnceInternal(String.format("CREATE TABLE IF NOT EXISTS 
> %s.%s (pk int, ck1 int, ck2 int, s1 int static, reg1 int, PRIMARY KEY (pk, 
> ck1));", KS_NAME, simpleTable));
> QueryProcessor.executeOnceInternal(String.format("CREATE CUSTOM INDEX ON 
> %s.%s (ck1) USING 'org.apache.cassandra.index.sasi.SASIIndex';", KS_NAME, 
> simpleTable));
> QueryProcessor.executeOnceInternal(String.format("INSERT INTO %s.%s (pk, ck1, 
> ck2, s1, reg1) VALUES (1,1,1,1,1);", KS_NAME, simpleTable));
> {code}
> {code}
> ERROR [MutationStage-2] 2016-08-04 09:59:08,054 StorageProxy.java:1351 - 
> Failed to apply mutation locally : {}
> java.lang.RuntimeException: 0 for ks: test, table: sasi
> at 
> org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:1371) 
> ~[main/:na]
> at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:555) 
> ~[main/:na]
> at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:425) 
> ~[main/:na]
> at org.apache.cassandra.db.Mutation.applyFuture(Mutation.java:215) 
> ~[main/:na]
> at org.apache.cassandra.db.Mutation.apply(Mutation.java:227) 
> ~[main/:na]
> at org.apache.cassandra.db.Mutation.apply(Mutation.java:241) 
> ~[main/:na]
> at 
> org.apache.cassandra.service.StorageProxy$8.runMayThrow(StorageProxy.java:1345)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.StorageProxy$LocalMutationRunnable.run(StorageProxy.java:2520)
>  [main/:na]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_91]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:162)
>  [main/:na]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:134)
>  [main/:na]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) 
> [main/:na]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 0
> at 
> org.apache.cassandra.db.AbstractBufferClusteringPrefix.get(AbstractBufferClusteringPrefix.java:55)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.conf.ColumnIndex.getValueOf(ColumnIndex.java:235)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.conf.ColumnIndex.index(ColumnIndex.java:104) 
> ~[main/:na]
> at 
> org.apache.cassandra.index.sasi.SASIIndex$1.insertRow(SASIIndex.java:254) 
> ~[main/:na]
> at 
> org.apache.cassandra.index.SecondaryIndexManager$WriteTimeTransaction.onInserted(SecondaryIndexManager.java:808)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.partitions.AtomicBTreePartition$RowUpdater.apply(AtomicBTreePartition.java:335)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.partitions.AtomicBTreePartition.addAllWithSizeDelta(AtomicBTreePartition.java:155)
>  ~[main/:na]
> at org.apache.cassandra.db.Memtable.put(Memtable.java:251) ~[main/:na]
> at 
> org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:1358) 
> ~[main/:na]
> ... 12 common frames omitted
> {code}
> I would say this issue is critical, as if it occurs, the node will crash on 
> commitlog replay, too (if it was restarted for an unrelated reason). 
> However, the fix is relatively simple: check for static clustering in 
> {{ColumnIndex}}. 
> cc [~xedin]





[jira] [Comment Edited] (CASSANDRA-12379) CQLSH completion test broken by #12236

2016-08-05 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15409512#comment-15409512
 ] 

Philip Thompson edited comment on CASSANDRA-12379 at 8/5/16 2:22 PM:
-

Given that the cqlshlib tests don't create or configure their own C* process for 
testing, are we sure we want to start requiring non-default installations in 
order for all the tests to pass?


was (Author: philipthompson):
Given that the cqlshlib tests don't create or configure their own cluster, are 
we sure we want to start requiring non-default installations in 
order for all the tests to pass?

> CQLSH completion test broken by #12236
> --
>
> Key: CASSANDRA-12379
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12379
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sylvain Lebresne
>Assignee: Stefania
>
> The commit of CASSANDRA-12236 appears to have broken [cqlsh completion 
> tests|http://cassci.datastax.com/job/cassandra-3.8_cqlsh_tests/6/cython=yes,label=ctool-lab/testReport/junit/cqlshlib.test.test_cqlsh_completion/TestCqlshCompletion/test_complete_in_create_columnfamily/].
>  Given the error message, I suspect this may have to do with the 
> test comparing the completion output to what DESCRIBE shows, and the latter now 
> doesn't include the {{cdc}} option by default.
> Anyway, I'm not really familiar with cqlsh completion nor its tests, so I'm 
> not sure what the best option is. I don't think we want to remove {{cdc}} from 
> completion, so I suspect we want to either special-case the test somehow (no 
> clue how to do that), or make the test run with cdc enabled so it doesn't 
> complain (which I think mostly implies a change to the CI environment, since it 
> seems the tests themselves don't spin up the cluster).
> Anyway, I'm pushing this fix to someone else as I'm not competent here and I 
> haven't even been able to run those cqlsh tests so far (getting stuck at the test 
> telling me "No appropriate python interpreter found", even though I 
> totally have an appropriate interpreter and cqlsh works perfectly if I 
> execute it directly). 





[jira] [Commented] (CASSANDRA-12379) CQLSH completion test broken by #12236

2016-08-05 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15409512#comment-15409512
 ] 

Philip Thompson commented on CASSANDRA-12379:
-

Given that the cqlshlib tests dont create or configure their own cluster, are 
we sure we want to start requiring having non-default installations needed in 
order for all the tests to pass?

> CQLSH completion test broken by #12236
> --
>
> Key: CASSANDRA-12379
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12379
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sylvain Lebresne
>Assignee: Stefania
>
> The commit of CASSANDRA-12236 appears to have broken [cqlsh completion 
> tests|http://cassci.datastax.com/job/cassandra-3.8_cqlsh_tests/6/cython=yes,label=ctool-lab/testReport/junit/cqlshlib.test.test_cqlsh_completion/TestCqlshCompletion/test_complete_in_create_columnfamily/].
>  Given the error message, I suspect this may have to do with the 
> test comparing the completion output to what DESCRIBE shows, and the latter now 
> doesn't include the {{cdc}} option by default.
> Anyway, I'm not really familiar with cqlsh completion nor its tests, so I'm 
> not sure what the best option is. I don't think we want to remove {{cdc}} from 
> completion, so I suspect we want to either special-case the test somehow (no 
> clue how to do that), or make the test run with cdc enabled so it doesn't 
> complain (which I think mostly implies a change to the CI environment, since it 
> seems the tests themselves don't spin up the cluster).
> Anyway, I'm pushing this fix to someone else as I'm not competent here and I 
> haven't even been able to run those cqlsh tests so far (getting stuck at the test 
> telling me "No appropriate python interpreter found", even though I 
> totally have an appropriate interpreter and cqlsh works perfectly if I 
> execute it directly). 





[2/6] cassandra git commit: Ninja-fix test build

2016-08-05 Thread slebresne
Ninja-fix test build


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/150307e9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/150307e9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/150307e9

Branch: refs/heads/cassandra-3.9
Commit: 150307e95c907939cd9c690f77d7b475edb86c9e
Parents: 904cb5d
Author: Sylvain Lebresne 
Authored: Fri Aug 5 16:23:19 2016 +0200
Committer: Sylvain Lebresne 
Committed: Fri Aug 5 16:23:19 2016 +0200

--
 .../org/apache/cassandra/streaming/StreamTransferTaskTest.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/150307e9/test/unit/org/apache/cassandra/streaming/StreamTransferTaskTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/streaming/StreamTransferTaskTest.java 
b/test/unit/org/apache/cassandra/streaming/StreamTransferTaskTest.java
index dce56eb..185498f 100644
--- a/test/unit/org/apache/cassandra/streaming/StreamTransferTaskTest.java
+++ b/test/unit/org/apache/cassandra/streaming/StreamTransferTaskTest.java
@@ -134,7 +134,7 @@ public class StreamTransferTaskTest
 
 // create streaming task that streams those two sstables
     StreamTransferTask task = new StreamTransferTask(session, cfs.metadata.cfId);
-    List<Ref<SSTableReader>> refs = new ArrayList<>(cfs.getSSTables().size());
+    List<Ref<SSTableReader>> refs = new ArrayList<>(cfs.getLiveSSTables().size());
     for (SSTableReader sstable : cfs.getLiveSSTables())
     {
         List<Range<Token>> ranges = new ArrayList<>();



[3/6] cassandra git commit: Ninja-fix test build

2016-08-05 Thread slebresne
Ninja-fix test build


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/150307e9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/150307e9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/150307e9

Branch: refs/heads/trunk
Commit: 150307e95c907939cd9c690f77d7b475edb86c9e
Parents: 904cb5d
Author: Sylvain Lebresne 
Authored: Fri Aug 5 16:23:19 2016 +0200
Committer: Sylvain Lebresne 
Committed: Fri Aug 5 16:23:19 2016 +0200

--
 .../org/apache/cassandra/streaming/StreamTransferTaskTest.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/150307e9/test/unit/org/apache/cassandra/streaming/StreamTransferTaskTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/streaming/StreamTransferTaskTest.java 
b/test/unit/org/apache/cassandra/streaming/StreamTransferTaskTest.java
index dce56eb..185498f 100644
--- a/test/unit/org/apache/cassandra/streaming/StreamTransferTaskTest.java
+++ b/test/unit/org/apache/cassandra/streaming/StreamTransferTaskTest.java
@@ -134,7 +134,7 @@ public class StreamTransferTaskTest
 
 // create streaming task that streams those two sstables
     StreamTransferTask task = new StreamTransferTask(session, cfs.metadata.cfId);
-    List<Ref<SSTableReader>> refs = new ArrayList<>(cfs.getSSTables().size());
+    List<Ref<SSTableReader>> refs = new ArrayList<>(cfs.getLiveSSTables().size());
     for (SSTableReader sstable : cfs.getLiveSSTables())
     {
         List<Range<Token>> ranges = new ArrayList<>();



[5/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.9

2016-08-05 Thread slebresne
Merge branch 'cassandra-3.0' into cassandra-3.9

* cassandra-3.0:
  Ninja-fix test build


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4805108d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4805108d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4805108d

Branch: refs/heads/cassandra-3.9
Commit: 4805108d849af722ea756eb5f15c02d6dee20f71
Parents: 7b10217 150307e
Author: Sylvain Lebresne 
Authored: Fri Aug 5 16:25:47 2016 +0200
Committer: Sylvain Lebresne 
Committed: Fri Aug 5 16:25:47 2016 +0200

--
 .../org/apache/cassandra/streaming/StreamTransferTaskTest.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4805108d/test/unit/org/apache/cassandra/streaming/StreamTransferTaskTest.java
--



[4/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.9

2016-08-05 Thread slebresne
Merge branch 'cassandra-3.0' into cassandra-3.9

* cassandra-3.0:
  Ninja-fix test build


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4805108d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4805108d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4805108d

Branch: refs/heads/trunk
Commit: 4805108d849af722ea756eb5f15c02d6dee20f71
Parents: 7b10217 150307e
Author: Sylvain Lebresne 
Authored: Fri Aug 5 16:25:47 2016 +0200
Committer: Sylvain Lebresne 
Committed: Fri Aug 5 16:25:47 2016 +0200

--
 .../org/apache/cassandra/streaming/StreamTransferTaskTest.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4805108d/test/unit/org/apache/cassandra/streaming/StreamTransferTaskTest.java
--



[1/6] cassandra git commit: Ninja-fix test build

2016-08-05 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 904cb5d10 -> 150307e95
  refs/heads/cassandra-3.9 7b1021733 -> 4805108d8
  refs/heads/trunk 624ed7838 -> 4b905bb58


Ninja-fix test build


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/150307e9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/150307e9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/150307e9

Branch: refs/heads/cassandra-3.0
Commit: 150307e95c907939cd9c690f77d7b475edb86c9e
Parents: 904cb5d
Author: Sylvain Lebresne 
Authored: Fri Aug 5 16:23:19 2016 +0200
Committer: Sylvain Lebresne 
Committed: Fri Aug 5 16:23:19 2016 +0200

--
 .../org/apache/cassandra/streaming/StreamTransferTaskTest.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/150307e9/test/unit/org/apache/cassandra/streaming/StreamTransferTaskTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/streaming/StreamTransferTaskTest.java 
b/test/unit/org/apache/cassandra/streaming/StreamTransferTaskTest.java
index dce56eb..185498f 100644
--- a/test/unit/org/apache/cassandra/streaming/StreamTransferTaskTest.java
+++ b/test/unit/org/apache/cassandra/streaming/StreamTransferTaskTest.java
@@ -134,7 +134,7 @@ public class StreamTransferTaskTest
 
 // create streaming task that streams those two sstables
     StreamTransferTask task = new StreamTransferTask(session, cfs.metadata.cfId);
-    List<Ref<SSTableReader>> refs = new ArrayList<>(cfs.getSSTables().size());
+    List<Ref<SSTableReader>> refs = new ArrayList<>(cfs.getLiveSSTables().size());
     for (SSTableReader sstable : cfs.getLiveSSTables())
     {
         List<Range<Token>> ranges = new ArrayList<>();



[6/6] cassandra git commit: Merge branch 'cassandra-3.9' into trunk

2016-08-05 Thread slebresne
Merge branch 'cassandra-3.9' into trunk

* cassandra-3.9:
  Ninja-fix test build


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4b905bb5
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4b905bb5
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4b905bb5

Branch: refs/heads/trunk
Commit: 4b905bb5866cd60a6c3c3c8a7609eae4e8e6cc57
Parents: 624ed78 4805108
Author: Sylvain Lebresne 
Authored: Fri Aug 5 16:26:01 2016 +0200
Committer: Sylvain Lebresne 
Committed: Fri Aug 5 16:26:01 2016 +0200

--

--




[jira] [Updated] (CASSANDRA-12335) Super columns are broken after upgrading to 3.0 on thrift

2016-08-05 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-12335:
-
Attachment: 0001-Fix-encoding-of-cell-names-for-super-columns.txt

> Super columns are broken after upgrading to 3.0 on thrift
> -
>
> Key: CASSANDRA-12335
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12335
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Jeremiah Jordan
>Assignee: Sylvain Lebresne
> Fix For: 3.0.9, 3.9
>
> Attachments: 0001-Fix-encoding-of-cell-names-for-super-columns.txt, 
> 0001-Force-super-column-families-to-be-compound.txt
>
>
> Super Columns are broken after upgrading to cassandra-3.0 HEAD.  The below 
> script shows this.
> 2.1 cli output for get:
> {code}
> [default@test] get Sites[utf8('Bob')][utf8('attr')]['name'] as utf8;
> => (name=name, value=Bob, timestamp=1469724504357000)
> {code}
> cqlsh:
> {code}
> [default@test]
>  key  | blobAsText(column1)
> --+-
>  0x53696d6f6e |attr
>  0x426f62 |attr
> {code}
> 3.0 cli:
> {code}
> [default@unknown] use test;
> unconfigured table schema_columnfamilies
> [default@test] get Sites[utf8('Bob')][utf8('attr')]['name'] as utf8;
> null
> [default@test]
> {code}
> cqlsh:
> {code}
>  key  | system.blobastext(column1)
> --+--
>  0x53696d6f6e | \x00\x04attr\x00\x00\x04name\x00
>  0x426f62 | \x00\x04attr\x00\x00\x04name\x00
> {code}
> Run this from a directory with cassandra-3.0 checked out and compiled
> {code}
> ccm create -n 2 -v 2.1.14 testsuper
> echo "### Starting 2.1 ###"
> ccm start
> MYFILE=`mktemp`
> echo "create keyspace test with placement_strategy = 
> 'org.apache.cassandra.locator.SimpleStrategy' and strategy_options = 
> {replication_factor:2};
> use test;
> create column family Sites with column_type = 'Super' and comparator = 
> 'BytesType' and subcomparator='UTF8Type';
> set Sites[utf8('Simon')][utf8('attr')]['name'] = utf8('Simon');
> set Sites[utf8('Bob')][utf8('attr')]['name'] = utf8('Bob');
> get Sites[utf8('Bob')][utf8('attr')]['name'] as utf8;" > $MYFILE
> ~/.ccm/repository/2.1.14/bin/cassandra-cli < $MYFILE
> rm $MYFILE
> ~/.ccm/repository/2.1.14/bin/nodetool -p 7100 flush
> ~/.ccm/repository/2.1.14/bin/nodetool -p 7200 flush
> ccm stop
> # run from cassandra-3.0 checked out and compiled
> ccm setdir
> echo "### Starting Current Directory 
> ###"
> ccm start
> ./bin/nodetool -p 7100 upgradesstables
> ./bin/nodetool -p 7200 upgradesstables
> ./bin/nodetool -p 7100 enablethrift
> ./bin/nodetool -p 7200 enablethrift
> MYFILE=`mktemp`
> echo "use test;
> get Sites[utf8('Bob')][utf8('attr')]['name'] as utf8;" > $MYFILE
> ~/.ccm/repository/2.1.14/bin/cassandra-cli < $MYFILE
> rm $MYFILE
> {code}





[jira] [Commented] (CASSANDRA-12379) CQLSH completion test broken by #12236

2016-08-05 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15409529#comment-15409529
 ] 

Philip Thompson commented on CASSANDRA-12379:
-

Ahh, I didn't realize we already require setting {{enable_user_defined_functions}}. We'll go ahead and make the change.

> CQLSH completion test broken by #12236
> --
>
> Key: CASSANDRA-12379
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12379
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sylvain Lebresne
>Assignee: Stefania
>
> The commit of CASSANDRA-12236 appears to have broken [cqlsh completion 
> tests|http://cassci.datastax.com/job/cassandra-3.8_cqlsh_tests/6/cython=yes,label=ctool-lab/testReport/junit/cqlshlib.test.test_cqlsh_completion/TestCqlshCompletion/test_complete_in_create_columnfamily/].
>  For the error message I suspect this may have to do with something like the 
> test comparing the completion output to what DESCRIBE shows, and the latter 
> now doesn't include the {{cdc}} option by default.
> Anyway, I'm not really familiar with cqlsh completion nor its tests, so I'm 
> not sure what the best option is. I don't think we want to remove {{cdc}} from 
> completion, so I suspect we want to either special-case the test somehow (no 
> clue how to do that), or make the test run with cdc enabled so it doesn't 
> complain (which I think mostly implies a change to the CI environment, since 
> it seems the tests themselves don't spin up the cluster).
> Anyway, I'm pushing that fix to someone else as I'm not competent here and I 
> haven't even been able to run those cqlsh tests so far (I get stuck at the 
> test telling me that "No appropriate python interpreter found", even though I 
> totally have an appropriate interpreter and cqlsh works perfectly if I 
> execute it directly). 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12379) CQLSH completion test broken by #12236

2016-08-05 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15409525#comment-15409525
 ] 

Stefania commented on CASSANDRA-12379:
--

We already require {{enable_user_defined_functions = true}}, which is not the 
default; that's why I thought adding another property wouldn't be so bad. 
However, we can just as easily change the test and remove {{AND cdc = false}}. 
I really have no preference.
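For reference, if we do go the CI route, the test cluster's cassandra.yaml would need something like the following (the {{cdc_enabled}} property name is assumed from the 3.8 yaml; {{enable_user_defined_functions}} is the setting we already override):

```yaml
# cassandra.yaml overrides for the cqlsh test environment (sketch)
enable_user_defined_functions: true   # already required by the UDF tests
cdc_enabled: true                     # so DESCRIBE output matches the completion candidates
```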


> CQLSH completion test broken by #12236
> --
>
> Key: CASSANDRA-12379
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12379
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sylvain Lebresne
>Assignee: Stefania
>
> The commit of CASSANDRA-12236 appears to have broken [cqlsh completion 
> tests|http://cassci.datastax.com/job/cassandra-3.8_cqlsh_tests/6/cython=yes,label=ctool-lab/testReport/junit/cqlshlib.test.test_cqlsh_completion/TestCqlshCompletion/test_complete_in_create_columnfamily/].
>  For the error message I suspect this may have to do with something like the 
> test comparing the completion output to what DESCRIBE shows, and the latter 
> now doesn't include the {{cdc}} option by default.
> Anyway, I'm not really familiar with cqlsh completion nor its tests, so I'm 
> not sure what the best option is. I don't think we want to remove {{cdc}} from 
> completion, so I suspect we want to either special-case the test somehow (no 
> clue how to do that), or make the test run with cdc enabled so it doesn't 
> complain (which I think mostly implies a change to the CI environment, since 
> it seems the tests themselves don't spin up the cluster).
> Anyway, I'm pushing that fix to someone else as I'm not competent here and I 
> haven't even been able to run those cqlsh tests so far (I get stuck at the 
> test telling me that "No appropriate python interpreter found", even though I 
> totally have an appropriate interpreter and cqlsh works perfectly if I 
> execute it directly). 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

