[cassandra] Git Push Summary

2017-09-04 Thread paulo
Repository: cassandra
Updated Branches:
  refs/heads/master [deleted] 6d77ace53




cassandra-dtest git commit: CASSANDRA-11500: add dtest for complex update/delete tombstones in MV

2017-09-04 Thread paulo
Repository: cassandra-dtest
Updated Branches:
  refs/heads/master 19b6613d7 -> 6d77ace53


CASSANDRA-11500: add dtest for complex update/delete tombstones in MV


Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/6d77ace5
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/6d77ace5
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/6d77ace5

Branch: refs/heads/master
Commit: 6d77ace5361f020ba182072ade9f4ab98025c213
Parents: 19b6613
Author: Zhao Yang 
Authored: Mon May 1 23:24:12 2017 +0800
Committer: Paulo Motta 
Committed: Tue Sep 5 00:39:48 2017 -0500

--
 materialized_views_test.py | 308 +++-
 1 file changed, 307 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/6d77ace5/materialized_views_test.py
--
diff --git a/materialized_views_test.py b/materialized_views_test.py
index 574d90f..637124d 100644
--- a/materialized_views_test.py
+++ b/materialized_views_test.py
@@ -66,6 +66,14 @@ class TestMaterializedViews(Tester):
 
 return session
 
+def update_view(self, session, query, flush, compact=False):
+session.execute(query)
+self._replay_batchlogs()
+if flush:
+self.cluster.flush()
+if compact:
+self.cluster.compact()
+
 def _settle_nodes(self):
 debug("Settling all nodes")
 stage_match = re.compile("(?P\S+)\s+(?P\d+)\s+(?P\d+)\s+(?P\d+)\s+(?P\d+)\s+(?P\d+)")
@@ -334,7 +342,7 @@ class TestMaterializedViews(Tester):
 assert_invalid(
 session,
 "ALTER TABLE ks.users DROP state;",
-"Cannot drop column state, depended on by materialized views"
+"Cannot drop column state on base table with materialized views."
 )
 
 def drop_table_test(self):
@@ -974,6 +982,304 @@ class TestMaterializedViews(Tester):
 cl=ConsistencyLevel.ALL
 )
 
+@since('3.0')
+def test_no_base_column_in_view_pk_complex_timestamp_with_flush(self):
+self._test_no_base_column_in_view_pk_complex_timestamp(flush=True)
+
+@since('3.0')
+def test_no_base_column_in_view_pk_complex_timestamp_without_flush(self):
+self._test_no_base_column_in_view_pk_complex_timestamp(flush=False)
+
+def _test_no_base_column_in_view_pk_complex_timestamp(self, flush):
+"""
+Able to shadow old view row if all columns in base are removed including unselected
+Able to recreate view row if at least one selected column alive
+
+@jira_ticket CASSANDRA-11500
+"""
+session = self.prepare(rf=3, nodes=3, options={'hinted_handoff_enabled': False}, consistency_level=ConsistencyLevel.QUORUM)
+node1, node2, node3 = self.cluster.nodelist()
+
+session.execute('USE ks')
+session.execute("CREATE TABLE t (k int, c int, a int, b int, e int, f 
int, primary key(k, c))")
+session.execute(("CREATE MATERIALIZED VIEW mv AS SELECT k,c,a,b FROM t 
"
+ "WHERE k IS NOT NULL AND c IS NOT NULL PRIMARY KEY 
(c, k)"))
+session.cluster.control_connection.wait_for_schema_agreement()
+
+# update unselected, view row should be alive
+self.update_view(session, "UPDATE t USING TIMESTAMP 1 SET e=1 WHERE 
k=1 AND c=1;", flush)
+assert_one(session, "SELECT * FROM t", [1, 1, None, None, 1, None])
+assert_one(session, "SELECT * FROM mv", [1, 1, None, None])
+
+# remove unselected, add selected column, view row should be alive
+self.update_view(session, "UPDATE t USING TIMESTAMP 2 SET e=null, b=1 
WHERE k=1 AND c=1;", flush)
+assert_one(session, "SELECT * FROM t", [1, 1, None, 1, None, None])
+assert_one(session, "SELECT * FROM mv", [1, 1, None, 1])
+
+# remove selected column, view row is removed
+self.update_view(session, "UPDATE t USING TIMESTAMP 2 SET e=null, 
b=null WHERE k=1 AND c=1;", flush)
+assert_none(session, "SELECT * FROM t")
+assert_none(session, "SELECT * FROM mv")
+
+# update unselected with ts=3, view row should be alive
+self.update_view(session, "UPDATE t USING TIMESTAMP 3 SET f=1 WHERE 
k=1 AND c=1;", flush)
+assert_one(session, "SELECT * FROM t", [1, 1, None, None, None, 1])
+assert_one(session, "SELECT * FROM mv", [1, 1, None, None])
+
+# insert livenesssInfo, view row should be alive
+self.update_view(session, "INSERT INTO t(k,c) VALUES(1,1) USING 
TIMESTAMP 3", flush)
+assert_one(session, "SELECT * FROM t", [1, 1, None, None, None, 1])
+assert_one(session, "SELECT * FROM mv", [1, 1, None, None])
+
+# r

[05/16] cassandra git commit: Fix outstanding MV timestamp issues and add documentation about unsupported cases (see CASSANDRA-11500 for a summary of fixes)

2017-09-04 Thread paulo
http://git-wip-us.apache.org/repos/asf/cassandra/blob/1b36740e/test/unit/org/apache/cassandra/cql3/ViewComplexTest.java
--
diff --git a/test/unit/org/apache/cassandra/cql3/ViewComplexTest.java b/test/unit/org/apache/cassandra/cql3/ViewComplexTest.java
new file mode 100644
index 000..9e32620
--- /dev/null
+++ b/test/unit/org/apache/cassandra/cql3/ViewComplexTest.java
@@ -0,0 +1,1343 @@
+package org.apache.cassandra.cql3;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+
+import java.nio.ByteBuffer;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Comparator;
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+import org.apache.cassandra.concurrent.SEPExecutor;
+import org.apache.cassandra.concurrent.Stage;
+import org.apache.cassandra.concurrent.StageManager;
+import org.apache.cassandra.db.ColumnFamilyStore;
+import org.apache.cassandra.db.Keyspace;
+import org.apache.cassandra.db.compaction.CompactionManager;
+import org.apache.cassandra.utils.FBUtilities;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Ignore;
+import org.junit.Test;
+
+import com.google.common.base.Objects;
+
+public class ViewComplexTest extends CQLTester
+{
+int protocolVersion = 4;
+private final List<String> views = new ArrayList<>();
+
+@BeforeClass
+public static void startup()
+{
+requireNetwork();
+}
+@Before
+public void begin()
+{
+views.clear();
+}
+
+@After
+public void end() throws Throwable
+{
+for (String viewName : views)
+executeNet(protocolVersion, "DROP MATERIALIZED VIEW " + viewName);
+}
+
+private void createView(String name, String query) throws Throwable
+{
+executeNet(protocolVersion, String.format(query, name));
+// If exception is thrown, the view will not be added to the list; since it shouldn't have been created, this is
+// the desired behavior
+views.add(name);
+}
+
+private void updateView(String query, Object... params) throws Throwable
+{
+updateViewWithFlush(query, false, params);
+}
+
+private void updateViewWithFlush(String query, boolean flush, Object... params) throws Throwable
+{
+executeNet(protocolVersion, query, params);
+while (!(((SEPExecutor) StageManager.getStage(Stage.VIEW_MUTATION)).getPendingTasks() == 0
+&& ((SEPExecutor) StageManager.getStage(Stage.VIEW_MUTATION)).getActiveCount() == 0))
+{
+Thread.sleep(1);
+}
+if (flush)
+Keyspace.open(keyspace()).flush();
+}
+
+// for now, unselected column cannot be fully supported, SEE CASSANDRA-13826
+@Ignore
+@Test
+public void testPartialDeleteUnselectedColumn() throws Throwable
+{
+boolean flush = true;
+execute("USE " + keyspace());
+executeNet(protocolVersion, "USE " + keyspace());
+createTable("CREATE TABLE %s (k int, c int, a int, b int, PRIMARY KEY 
(k, c))");
+createView("mv",
+   "CREATE MATERIALIZED VIEW %s AS SELECT k,c FROM %%s WHERE k 
IS NOT NULL AND c IS NOT NULL PRIMARY KEY (k,c)");
+Keyspace ks = Keyspace.open(keyspace());
+ks.getColumnFamilyStore("mv").disableAutoCompaction();
+
+updateView("UPDATE %s USING TIMESTAMP 10 SET b=1 WHERE k=1 AND c=1");
+if (flush)
+FBUtilities.waitOnFutures(ks.flush());
+assertRows(execute("SELECT * from %s"), row(1, 1, null, 1));
+assertRows(execute("SELECT * from mv"), row(1, 1));
+updateView("DELETE b FROM %s USING TIMESTAMP 11 WHERE k=1 AND c=1");
+if (flush)
+FBUtilities.waitOnFutures(ks.flush());
+assertEmpty(execute("SELECT * from %s"));
+assertEmpty(execute("SELECT * from mv"));
+updateView("UPDATE %s USING TIMESTAMP 1 SET a=1 WHERE k=1 AND c=1");
+if (flush)
+FBUtilities.waitOnFutures(ks.flush());
+assertRows(execute("SELECT * from %s"), row(1, 1, 1, null));
+assertRows(execute("SELECT * from mv"), row(1, 1));
+
+execute("truncate %s;");
+
+// removal generated by unselected column should not shadow PK update with smaller timestamp
+updateViewWithFlush("UPDATE %s USING TIMESTAMP 18 SET a=1 WHERE k=1 AND c=1", flush);
+assertRows(execute("SELECT * from %s"), row(1, 1, 1, null));
+assertRows(execute("SELECT * from mv"), row(1, 1));
+
+updateViewWithFlush("UPDATE %s USING TIMESTAMP 20 SET a=null WHERE k=1 
AND c=1", flush);
+assertRows(execute("SELECT * from %s"));
+assertRows(execute("SELECT * from mv"));
+
+  

[03/16] cassandra git commit: Fix outstanding MV timestamp issues and add documentation about unsupported cases (see CASSANDRA-11500 for a summary of fixes)

2017-09-04 Thread paulo
Fix outstanding MV timestamp issues and add documentation about unsupported cases (see CASSANDRA-11500 for a summary of fixes)

This patch introduces the following changes to fix MV timestamp issues:
 - Add strict liveness for view with non-key base column in pk
 - Deprecated shadowable tombstone and use expired livenessInfo instead
 - Include partition deletion for existing base row
 - Disallow dropping base column with MV

Patch by Zhao Yang and Paulo Motta; reviewed by Paulo Motta for CASSANDRA-11500
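
As a rough end-to-end illustration of the behaviour covered by the dtest added earlier in this thread (a view whose PK contains no non-key base column, with updates touching only unselected columns), here is a minimal standalone sketch. It assumes a single local node running this patch, the DataStax Python driver, and an illustrative keyspace name; it is not part of the patch. The dtest additionally replays batchlogs and flushes/compacts between steps, which is omitted here, and the expected results in the comments are taken from the dtest assertions.

# Hypothetical sketch, not part of the patch: requires the cassandra-driver package
# and a local Cassandra node on 127.0.0.1.
from cassandra.cluster import Cluster

cluster = Cluster(['127.0.0.1'])
session = cluster.connect()
session.execute("CREATE KEYSPACE IF NOT EXISTS mv_ts_demo "
                "WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}")
session.set_keyspace('mv_ts_demo')
session.execute("CREATE TABLE IF NOT EXISTS t (k int, c int, a int, b int, e int, f int, "
                "PRIMARY KEY (k, c))")
session.execute("CREATE MATERIALIZED VIEW IF NOT EXISTS mv AS "
                "SELECT k, c, a, b FROM t "
                "WHERE k IS NOT NULL AND c IS NOT NULL PRIMARY KEY (c, k)")

# Write only the unselected column e at timestamp 1: the view row must stay alive
# even though none of its selected columns is set.
session.execute("UPDATE t USING TIMESTAMP 1 SET e = 1 WHERE k = 1 AND c = 1")
print(session.execute("SELECT * FROM mv").one())  # expected: Row(c=1, k=1, a=None, b=None)

# Remove the unselected column and add a selected one at timestamp 2: view row stays alive.
session.execute("UPDATE t USING TIMESTAMP 2 SET e = null, b = 1 WHERE k = 1 AND c = 1")
print(session.execute("SELECT * FROM mv").one())  # expected: Row(c=1, k=1, a=None, b=1)

# Remove the last live column at the same timestamp: base row and view row disappear.
session.execute("UPDATE t USING TIMESTAMP 2 SET b = null WHERE k = 1 AND c = 1")
print(session.execute("SELECT * FROM mv").one())  # expected: None

cluster.shutdown()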


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1b36740e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1b36740e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1b36740e

Branch: refs/heads/cassandra-3.0
Commit: 1b36740ebe66b8ed4c3d6cb64eb2419a9279dfbf
Parents: b0eba5f
Author: Zhao Yang 
Authored: Wed Jul 12 17:49:38 2017 +0800
Committer: Paulo Motta 
Committed: Tue Sep 5 01:03:24 2017 -0500

--
 NEWS.txt|   18 +
 doc/cql3/CQL.textile|6 +
 .../org/apache/cassandra/config/CFMetaData.java |   13 +
 .../apache/cassandra/cql3/UpdateParameters.java |2 +-
 .../cql3/statements/AlterTableStatement.java|   18 +-
 .../org/apache/cassandra/db/LivenessInfo.java   |   17 +-
 .../org/apache/cassandra/db/ReadCommand.java|7 +-
 .../db/compaction/CompactionIterator.java   |7 +-
 .../apache/cassandra/db/filter/RowFilter.java   |4 +-
 .../cassandra/db/partitions/PurgeFunction.java  |   14 +-
 .../org/apache/cassandra/db/rows/BTreeRow.java  |6 +-
 src/java/org/apache/cassandra/db/rows/Row.java  |   15 +-
 .../cassandra/db/rows/UnfilteredSerializer.java |5 +
 .../apache/cassandra/db/transform/Filter.java   |8 +-
 .../db/transform/FilteredPartitions.java|4 +-
 .../cassandra/db/transform/FilteredRows.java|2 +-
 .../apache/cassandra/db/view/TableViews.java|   18 +-
 src/java/org/apache/cassandra/db/view/View.java |   43 +-
 .../apache/cassandra/db/view/ViewManager.java   |5 +
 .../cassandra/db/view/ViewUpdateGenerator.java  |  163 ++-
 .../apache/cassandra/service/DataResolver.java  |4 +-
 .../org/apache/cassandra/cql3/CQLTester.java|2 +-
 .../apache/cassandra/cql3/ViewComplexTest.java  | 1343 ++
 .../cassandra/cql3/ViewFilteringTest.java   |  706 -
 .../org/apache/cassandra/cql3/ViewTest.java |   31 +-
 25 files changed, 1973 insertions(+), 488 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1b36740e/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index bb5fdfe..7064c5d 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -21,6 +21,24 @@ Upgrading
- Nothing specific to this release, but please see previous upgrading sections,
  especially if you are upgrading from 2.2.
 
+Materialized Views
+---
+- Cassandra will no longer allow dropping columns on tables with Materialized Views.
+- A change was made in the way the Materialized View timestamp is computed, which
+  may cause an old deletion to a base column which is view primary key (PK) column
+  to not be reflected in the view when repairing the base table post-upgrade. This
+  condition is only possible when a column deletion to an MV primary key (PK) column
+  not present in the base table PK (via UPDATE base SET view_pk_col = null or DELETE
+  view_pk_col FROM base) is missed before the upgrade and received by repair after the upgrade.
+  If such column deletions are done on a view PK column which is not a base PK, it's advisable
+  to run repair on the base table of all nodes prior to the upgrade. Alternatively it's possible
+  to fix potential inconsistencies by running repair on the views after upgrade or drop and
+  re-create the views. See CASSANDRA-11500 for more details.
+- Removal of columns not selected in the Materialized View (via UPDATE base SET unselected_column
+  = null or DELETE unselected_column FROM base) may not be properly reflected in the view in some
+  situations so we advise against doing deletions on base columns not selected in views
+  until this is fixed on CASSANDRA-13826.
+
 3.0.14
 ==
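
For readers skimming the NEWS entry above: the deletions it is concerned with are those that remove a view primary-key column which is only a regular column in the base table. A hypothetical sketch of the statement shapes involved (DataStax Python driver, local node, illustrative names; not part of the patch):

from cassandra.cluster import Cluster

session = Cluster(['127.0.0.1']).connect()
session.execute("CREATE KEYSPACE IF NOT EXISTS mv_news_demo "
                "WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}")
session.set_keyspace('mv_news_demo')
session.execute("CREATE TABLE IF NOT EXISTS base (k int PRIMARY KEY, view_pk_col int, v int)")
session.execute("CREATE MATERIALIZED VIEW IF NOT EXISTS mv AS SELECT * FROM base "
                "WHERE k IS NOT NULL AND view_pk_col IS NOT NULL "
                "PRIMARY KEY (view_pk_col, k)")
session.execute("INSERT INTO base (k, view_pk_col, v) VALUES (1, 10, 100)")

# Either statement removes the corresponding view row, because view_pk_col is a
# view PK column that is not part of the base table PK:
session.execute("UPDATE base SET view_pk_col = null WHERE k = 1")
# session.execute("DELETE view_pk_col FROM base WHERE k = 1")

# Per the note above, if such a deletion is missed before the upgrade and only
# arrives via repair afterwards, the view may not reflect it; repairing the base
# tables before upgrading, or repairing/rebuilding the views afterwards, avoids this.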
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1b36740e/doc/cql3/CQL.textile
--
diff --git a/doc/cql3/CQL.textile b/doc/cql3/CQL.textile
index 1efa6d4..54888b8 100644
--- a/doc/cql3/CQL.textile
+++ b/doc/cql3/CQL.textile
@@ -524,6 +524,12 @@ h4(#createMVWhere). @WHERE@ Clause
 
The @@ is similar to the "where clause of a @SELECT@ statement":#selectWhere, with a few differences.  First, the whe

[11/16] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2017-09-04 Thread paulo
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e624c663
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e624c663
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e624c663

Branch: refs/heads/trunk
Commit: e624c6638254ea410691f085a10d08d412eb5ac1
Parents: 14d67d8 1b36740
Author: Paulo Motta 
Authored: Tue Sep 5 01:04:34 2017 -0500
Committer: Paulo Motta 
Committed: Tue Sep 5 01:05:06 2017 -0500

--
 NEWS.txt|   30 +-
 doc/cql3/CQL.textile|6 +
 .../org/apache/cassandra/config/CFMetaData.java |   13 +
 .../apache/cassandra/cql3/UpdateParameters.java |2 +-
 .../cql3/statements/AlterTableStatement.java|   18 +-
 .../org/apache/cassandra/db/LivenessInfo.java   |   17 +-
 .../org/apache/cassandra/db/ReadCommand.java|7 +-
 .../db/compaction/CompactionIterator.java   |7 +-
 .../apache/cassandra/db/filter/RowFilter.java   |4 +-
 .../cassandra/db/partitions/PurgeFunction.java  |   14 +-
 .../org/apache/cassandra/db/rows/BTreeRow.java  |6 +-
 src/java/org/apache/cassandra/db/rows/Row.java  |   13 +-
 .../cassandra/db/rows/UnfilteredSerializer.java |5 +
 .../apache/cassandra/db/transform/Filter.java   |8 +-
 .../db/transform/FilteredPartitions.java|4 +-
 .../cassandra/db/transform/FilteredRows.java|2 +-
 .../apache/cassandra/db/view/TableViews.java|   18 +-
 src/java/org/apache/cassandra/db/view/View.java |   41 +-
 .../apache/cassandra/db/view/ViewManager.java   |5 +
 .../cassandra/db/view/ViewUpdateGenerator.java  |  163 ++-
 .../apache/cassandra/service/DataResolver.java  |4 +-
 .../org/apache/cassandra/cql3/CQLTester.java|2 +-
 .../apache/cassandra/cql3/ViewComplexTest.java  | 1344 ++
 .../cassandra/cql3/ViewFilteringTest.java   | 1030 +-
 .../org/apache/cassandra/cql3/ViewTest.java |   25 +-
 25 files changed, 2293 insertions(+), 495 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e624c663/NEWS.txt
--
diff --cc NEWS.txt
index 8e39667,7064c5d..0682ae9
--- a/NEWS.txt
+++ b/NEWS.txt
@@@ -18,10 -18,28 +18,34 @@@ using the provided 'sstableupgrade' too
  
  Upgrading
  -
 -   - Nothing specific to this release, but please see previous upgrading sections,
 - especially if you are upgrading from 2.2.
 +- Nothing specific to this version but please see previous upgrading sections,
 +  especially if you are upgrading from 2.2.
  
+ Materialized Views
+ ---
+ - Cassandra will no longer allow dropping columns on tables with Materialized Views.
+ - A change was made in the way the Materialized View timestamp is computed, which
+   may cause an old deletion to a base column which is view primary key (PK) column
+   to not be reflected in the view when repairing the base table post-upgrade. This
+   condition is only possible when a column deletion to an MV primary key (PK) column
+   not present in the base table PK (via UPDATE base SET view_pk_col = null or DELETE
+   view_pk_col FROM base) is missed before the upgrade and received by repair after the upgrade.
+   If such column deletions are done on a view PK column which is not a base PK, it's advisable
+   to run repair on the base table of all nodes prior to the upgrade. Alternatively it's possible
+   to fix potential inconsistencies by running repair on the views after upgrade or drop and
+   re-create the views. See CASSANDRA-11500 for more details.
+ - Removal of columns not selected in the Materialized View (via UPDATE base SET unselected_column
+   = null or DELETE unselected_column FROM base) may not be properly reflected in the view in some
+   situations so we advise against doing deletions on base columns not selected in views
+   until this is fixed on CASSANDRA-13826.
 -
 -3.0.14
++- Creating Materialized View with filtering on non-primary-key base column
++  (added in CASSANDRA-10368) is disabled, because the liveness of view row
++  is depending on multiple filtered base non-key columns and base non-key
++  column used in view primary-key. This semantic cannot be supported without
++  storage format change, see CASSANDRA-13826. For append-only use case, you
++  may still use this feature with a startup flag: "-Dcassandra.mv.allow_filtering_nonkey_columns_unsafe=true"
++
 +3.11.0
  ==
  
  Upgrading

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e624c663/doc/cql3/CQL.textile
--

http://

[09/16] cassandra git commit: Fix outstanding MV timestamp issues and add documentation about unsupported cases (see CASSANDRA-11500 for a summary of fixes)

2017-09-04 Thread paulo
Fix outstanding MV timestamp issues and add documentation about unsupported cases (see CASSANDRA-11500 for a summary of fixes)

This patch introduces the following changes to fix MV timestamp issues:
 - Add strict liveness for view with non-key base column in pk
 - Deprecated shadowable tombstone and use expired livenessInfo instead
 - Include partition deletion for existing base row
 - Disallow dropping base column with MV

Patch by Zhao Yang and Paulo Motta; reviewed by Paulo Motta for CASSANDRA-11500
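
The "disallow dropping base column with MV" change is visible directly through CQL. Below is a minimal hypothetical sketch (DataStax Python driver, local node running this patch, illustrative schema loosely mirroring the dtest's ks.users example; not part of the patch). The quoted error wording follows the dtest assertion updated earlier in this thread.

from cassandra import InvalidRequest
from cassandra.cluster import Cluster

session = Cluster(['127.0.0.1']).connect()
session.execute("CREATE KEYSPACE IF NOT EXISTS mv_drop_demo "
                "WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}")
session.set_keyspace('mv_drop_demo')
session.execute("CREATE TABLE IF NOT EXISTS users (username text PRIMARY KEY, state text, age int)")
session.execute("CREATE MATERIALIZED VIEW IF NOT EXISTS users_by_state AS "
                "SELECT username, state FROM users "
                "WHERE state IS NOT NULL AND username IS NOT NULL "
                "PRIMARY KEY (state, username)")

try:
    # Dropping any base column, even one not selected by the view, is now rejected
    # while a materialized view exists on the table.
    session.execute("ALTER TABLE users DROP age")
except InvalidRequest as exc:
    print(exc)  # e.g. "Cannot drop column age on base table with materialized views."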


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1b36740e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1b36740e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1b36740e

Branch: refs/heads/trunk
Commit: 1b36740ebe66b8ed4c3d6cb64eb2419a9279dfbf
Parents: b0eba5f
Author: Zhao Yang 
Authored: Wed Jul 12 17:49:38 2017 +0800
Committer: Paulo Motta 
Committed: Tue Sep 5 01:03:24 2017 -0500

--
 NEWS.txt|   18 +
 doc/cql3/CQL.textile|6 +
 .../org/apache/cassandra/config/CFMetaData.java |   13 +
 .../apache/cassandra/cql3/UpdateParameters.java |2 +-
 .../cql3/statements/AlterTableStatement.java|   18 +-
 .../org/apache/cassandra/db/LivenessInfo.java   |   17 +-
 .../org/apache/cassandra/db/ReadCommand.java|7 +-
 .../db/compaction/CompactionIterator.java   |7 +-
 .../apache/cassandra/db/filter/RowFilter.java   |4 +-
 .../cassandra/db/partitions/PurgeFunction.java  |   14 +-
 .../org/apache/cassandra/db/rows/BTreeRow.java  |6 +-
 src/java/org/apache/cassandra/db/rows/Row.java  |   15 +-
 .../cassandra/db/rows/UnfilteredSerializer.java |5 +
 .../apache/cassandra/db/transform/Filter.java   |8 +-
 .../db/transform/FilteredPartitions.java|4 +-
 .../cassandra/db/transform/FilteredRows.java|2 +-
 .../apache/cassandra/db/view/TableViews.java|   18 +-
 src/java/org/apache/cassandra/db/view/View.java |   43 +-
 .../apache/cassandra/db/view/ViewManager.java   |5 +
 .../cassandra/db/view/ViewUpdateGenerator.java  |  163 ++-
 .../apache/cassandra/service/DataResolver.java  |4 +-
 .../org/apache/cassandra/cql3/CQLTester.java|2 +-
 .../apache/cassandra/cql3/ViewComplexTest.java  | 1343 ++
 .../cassandra/cql3/ViewFilteringTest.java   |  706 -
 .../org/apache/cassandra/cql3/ViewTest.java |   31 +-
 25 files changed, 1973 insertions(+), 488 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1b36740e/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index bb5fdfe..7064c5d 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -21,6 +21,24 @@ Upgrading
- Nothing specific to this release, but please see previous upgrading sections,
  especially if you are upgrading from 2.2.
 
+Materialized Views
+---
+- Cassandra will no longer allow dropping columns on tables with Materialized Views.
+- A change was made in the way the Materialized View timestamp is computed, which
+  may cause an old deletion to a base column which is view primary key (PK) column
+  to not be reflected in the view when repairing the base table post-upgrade. This
+  condition is only possible when a column deletion to an MV primary key (PK) column
+  not present in the base table PK (via UPDATE base SET view_pk_col = null or DELETE
+  view_pk_col FROM base) is missed before the upgrade and received by repair after the upgrade.
+  If such column deletions are done on a view PK column which is not a base PK, it's advisable
+  to run repair on the base table of all nodes prior to the upgrade. Alternatively it's possible
+  to fix potential inconsistencies by running repair on the views after upgrade or drop and
+  re-create the views. See CASSANDRA-11500 for more details.
+- Removal of columns not selected in the Materialized View (via UPDATE base SET unselected_column
+  = null or DELETE unselected_column FROM base) may not be properly reflected in the view in some
+  situations so we advise against doing deletions on base columns not selected in views
+  until this is fixed on CASSANDRA-13826.
+
 3.0.14
 ==
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1b36740e/doc/cql3/CQL.textile
--
diff --git a/doc/cql3/CQL.textile b/doc/cql3/CQL.textile
index 1efa6d4..54888b8 100644
--- a/doc/cql3/CQL.textile
+++ b/doc/cql3/CQL.textile
@@ -524,6 +524,12 @@ h4(#createMVWhere). @WHERE@ Clause
 
The @@ is similar to the "where clause of a @SELECT@ statement":#selectWhere, with a few differences.  First, the where claus

[08/16] cassandra git commit: Fix outstanding MV timestamp issues and add documentation about unsupported cases (see CASSANDRA-11500 for a summary of fixes)

2017-09-04 Thread paulo
http://git-wip-us.apache.org/repos/asf/cassandra/blob/1b36740e/test/unit/org/apache/cassandra/cql3/ViewComplexTest.java
--
diff --git a/test/unit/org/apache/cassandra/cql3/ViewComplexTest.java b/test/unit/org/apache/cassandra/cql3/ViewComplexTest.java
new file mode 100644
index 000..9e32620
--- /dev/null
+++ b/test/unit/org/apache/cassandra/cql3/ViewComplexTest.java
@@ -0,0 +1,1343 @@
+package org.apache.cassandra.cql3;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+
+import java.nio.ByteBuffer;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Comparator;
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+import org.apache.cassandra.concurrent.SEPExecutor;
+import org.apache.cassandra.concurrent.Stage;
+import org.apache.cassandra.concurrent.StageManager;
+import org.apache.cassandra.db.ColumnFamilyStore;
+import org.apache.cassandra.db.Keyspace;
+import org.apache.cassandra.db.compaction.CompactionManager;
+import org.apache.cassandra.utils.FBUtilities;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Ignore;
+import org.junit.Test;
+
+import com.google.common.base.Objects;
+
+public class ViewComplexTest extends CQLTester
+{
+int protocolVersion = 4;
+private final List<String> views = new ArrayList<>();
+
+@BeforeClass
+public static void startup()
+{
+requireNetwork();
+}
+@Before
+public void begin()
+{
+views.clear();
+}
+
+@After
+public void end() throws Throwable
+{
+for (String viewName : views)
+executeNet(protocolVersion, "DROP MATERIALIZED VIEW " + viewName);
+}
+
+private void createView(String name, String query) throws Throwable
+{
+executeNet(protocolVersion, String.format(query, name));
+// If exception is thrown, the view will not be added to the list; since it shouldn't have been created, this is
+// the desired behavior
+views.add(name);
+}
+
+private void updateView(String query, Object... params) throws Throwable
+{
+updateViewWithFlush(query, false, params);
+}
+
+private void updateViewWithFlush(String query, boolean flush, Object... params) throws Throwable
+{
+executeNet(protocolVersion, query, params);
+while (!(((SEPExecutor) StageManager.getStage(Stage.VIEW_MUTATION)).getPendingTasks() == 0
+&& ((SEPExecutor) StageManager.getStage(Stage.VIEW_MUTATION)).getActiveCount() == 0))
+{
+Thread.sleep(1);
+}
+if (flush)
+Keyspace.open(keyspace()).flush();
+}
+
+// for now, unselected column cannot be fully supported, SEE CASSANDRA-13826
+@Ignore
+@Test
+public void testPartialDeleteUnselectedColumn() throws Throwable
+{
+boolean flush = true;
+execute("USE " + keyspace());
+executeNet(protocolVersion, "USE " + keyspace());
+createTable("CREATE TABLE %s (k int, c int, a int, b int, PRIMARY KEY 
(k, c))");
+createView("mv",
+   "CREATE MATERIALIZED VIEW %s AS SELECT k,c FROM %%s WHERE k 
IS NOT NULL AND c IS NOT NULL PRIMARY KEY (k,c)");
+Keyspace ks = Keyspace.open(keyspace());
+ks.getColumnFamilyStore("mv").disableAutoCompaction();
+
+updateView("UPDATE %s USING TIMESTAMP 10 SET b=1 WHERE k=1 AND c=1");
+if (flush)
+FBUtilities.waitOnFutures(ks.flush());
+assertRows(execute("SELECT * from %s"), row(1, 1, null, 1));
+assertRows(execute("SELECT * from mv"), row(1, 1));
+updateView("DELETE b FROM %s USING TIMESTAMP 11 WHERE k=1 AND c=1");
+if (flush)
+FBUtilities.waitOnFutures(ks.flush());
+assertEmpty(execute("SELECT * from %s"));
+assertEmpty(execute("SELECT * from mv"));
+updateView("UPDATE %s USING TIMESTAMP 1 SET a=1 WHERE k=1 AND c=1");
+if (flush)
+FBUtilities.waitOnFutures(ks.flush());
+assertRows(execute("SELECT * from %s"), row(1, 1, 1, null));
+assertRows(execute("SELECT * from mv"), row(1, 1));
+
+execute("truncate %s;");
+
+// removal generated by unselected column should not shadow PK update with smaller timestamp
+updateViewWithFlush("UPDATE %s USING TIMESTAMP 18 SET a=1 WHERE k=1 AND c=1", flush);
+assertRows(execute("SELECT * from %s"), row(1, 1, 1, null));
+assertRows(execute("SELECT * from mv"), row(1, 1));
+
+updateViewWithFlush("UPDATE %s USING TIMESTAMP 20 SET a=null WHERE k=1 
AND c=1", flush);
+assertRows(execute("SELECT * from %s"));
+assertRows(execute("SELECT * from mv"));
+
+  

[15/16] cassandra git commit: Merge branch 'cassandra-3.11' into trunk

2017-09-04 Thread paulo
http://git-wip-us.apache.org/repos/asf/cassandra/blob/e6fb8302/test/unit/org/apache/cassandra/cql3/ViewComplexTest.java
--
diff --cc test/unit/org/apache/cassandra/cql3/ViewComplexTest.java
index 000,ece3e6d..1992b17
mode 00,100644..100644
--- a/test/unit/org/apache/cassandra/cql3/ViewComplexTest.java
+++ b/test/unit/org/apache/cassandra/cql3/ViewComplexTest.java
@@@ -1,0 -1,1344 +1,1344 @@@
+ package org.apache.cassandra.cql3;
+ 
+ import static org.junit.Assert.assertEquals;
+ import static org.junit.Assert.assertTrue;
+ import static org.junit.Assert.fail;
+ 
+ import java.nio.ByteBuffer;
+ import java.util.ArrayList;
+ import java.util.Arrays;
+ import java.util.Comparator;
+ import java.util.HashMap;
+ import java.util.Iterator;
+ import java.util.List;
+ import java.util.Map;
+ import java.util.concurrent.TimeUnit;
+ import java.util.stream.Collectors;
+ 
+ import org.apache.cassandra.concurrent.SEPExecutor;
+ import org.apache.cassandra.concurrent.Stage;
+ import org.apache.cassandra.concurrent.StageManager;
+ import org.apache.cassandra.db.ColumnFamilyStore;
+ import org.apache.cassandra.db.Keyspace;
+ import org.apache.cassandra.db.compaction.CompactionManager;
+ import org.apache.cassandra.transport.ProtocolVersion;
+ import org.apache.cassandra.utils.FBUtilities;
+ import org.junit.After;
+ import org.junit.Before;
+ import org.junit.BeforeClass;
+ import org.junit.Ignore;
+ import org.junit.Test;
+ 
+ import com.google.common.base.Objects;
+ 
+ public class ViewComplexTest extends CQLTester
+ {
+ ProtocolVersion protocolVersion = ProtocolVersion.V4;
+ private final List<String> views = new ArrayList<>();
+ 
+ @BeforeClass
+ public static void startup()
+ {
+ requireNetwork();
+ }
++
+ @Before
+ public void begin()
+ {
+ views.clear();
+ }
+ 
+ @After
+ public void end() throws Throwable
+ {
+ for (String viewName : views)
+ executeNet(protocolVersion, "DROP MATERIALIZED VIEW " + viewName);
+ }
+ 
+ private void createView(String name, String query) throws Throwable
+ {
+ executeNet(protocolVersion, String.format(query, name));
+ // If exception is thrown, the view will not be added to the list; since it shouldn't have been created, this is
+ // the desired behavior
+ views.add(name);
+ }
+ 
+ private void updateView(String query, Object... params) throws Throwable
+ {
+ updateViewWithFlush(query, false, params);
+ }
+ 
+ private void updateViewWithFlush(String query, boolean flush, Object... params) throws Throwable
+ {
+ executeNet(protocolVersion, query, params);
+ while (!(((SEPExecutor) StageManager.getStage(Stage.VIEW_MUTATION)).getPendingTasks() == 0
+ && ((SEPExecutor) StageManager.getStage(Stage.VIEW_MUTATION)).getActiveCount() == 0))
+ {
+ Thread.sleep(1);
+ }
+ if (flush)
+ Keyspace.open(keyspace()).flush();
+ }
+ 
 -// for now, unselected column cannot be fully supported, SEE CASSANDRA-13826
++// for now, unselected column cannot be fully supported, SEE CASSANDRA-11500
+ @Ignore
+ @Test
+ public void testPartialDeleteUnselectedColumn() throws Throwable
+ {
+ boolean flush = true;
+ execute("USE " + keyspace());
+ executeNet(protocolVersion, "USE " + keyspace());
+ createTable("CREATE TABLE %s (k int, c int, a int, b int, PRIMARY KEY 
(k, c))");
+ createView("mv",
+"CREATE MATERIALIZED VIEW %s AS SELECT k,c FROM %%s WHERE 
k IS NOT NULL AND c IS NOT NULL PRIMARY KEY (k,c)");
+ Keyspace ks = Keyspace.open(keyspace());
+ ks.getColumnFamilyStore("mv").disableAutoCompaction();
+ 
+ updateView("UPDATE %s USING TIMESTAMP 10 SET b=1 WHERE k=1 AND c=1");
+ if (flush)
+ FBUtilities.waitOnFutures(ks.flush());
+ assertRows(execute("SELECT * from %s"), row(1, 1, null, 1));
+ assertRows(execute("SELECT * from mv"), row(1, 1));
+ updateView("DELETE b FROM %s USING TIMESTAMP 11 WHERE k=1 AND c=1");
+ if (flush)
+ FBUtilities.waitOnFutures(ks.flush());
+ assertEmpty(execute("SELECT * from %s"));
+ assertEmpty(execute("SELECT * from mv"));
+ updateView("UPDATE %s USING TIMESTAMP 1 SET a=1 WHERE k=1 AND c=1");
+ if (flush)
+ FBUtilities.waitOnFutures(ks.flush());
+ assertRows(execute("SELECT * from %s"), row(1, 1, 1, null));
+ assertRows(execute("SELECT * from mv"), row(1, 1));
+ 
+ execute("truncate %s;");
+ 
+ // removal generated by unselected column should not shadow PK update with smaller timestamp
+ updateViewWithFlush("UPDATE %s USING TIMESTAMP 18 SET a=1 WHERE k=1 AND c=1", flush);
+ assertRows(execute("SELECT * from 

[01/16] cassandra git commit: Fix outstanding MV timestamp issues and add documentation about unsupported cases (see CASSANDRA-11500 for a summary of fixes)

2017-09-04 Thread paulo
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 b0eba5f9c -> 1b36740eb
  refs/heads/cassandra-3.11 14d67d81c -> e624c6638
  refs/heads/trunk 460360093 -> e6fb83028


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1b36740e/test/unit/org/apache/cassandra/cql3/ViewFilteringTest.java
--
diff --git a/test/unit/org/apache/cassandra/cql3/ViewFilteringTest.java b/test/unit/org/apache/cassandra/cql3/ViewFilteringTest.java
index 245ceb7..fe618b6 100644
--- a/test/unit/org/apache/cassandra/cql3/ViewFilteringTest.java
+++ b/test/unit/org/apache/cassandra/cql3/ViewFilteringTest.java
@@ -77,13 +77,13 @@ public class ViewFilteringTest extends CQLTester
 
 // IS NOT NULL is required on all PK statements that are not otherwise restricted
 List<String> badStatements = Arrays.asList(
-"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE b IS 
NOT NULL AND c IS NOT NULL AND d is NOT NULL PRIMARY KEY ((a, b), c, d)",
-"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a IS 
NOT NULL AND c IS NOT NULL AND d is NOT NULL PRIMARY KEY ((a, b), c, d)",
-"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a IS 
NOT NULL AND b IS NOT NULL AND d is NOT NULL PRIMARY KEY ((a, b), c, d)",
-"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a IS 
NOT NULL AND b IS NOT NULL AND c is NOT NULL PRIMARY KEY ((a, b), c, d)",
-"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a = ? 
AND b IS NOT NULL AND c is NOT NULL PRIMARY KEY ((a, b), c, d)",
-"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a = 
blobAsInt(?) AND b IS NOT NULL AND c is NOT NULL PRIMARY KEY ((a, b), c, d)",
-"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s PRIMARY KEY 
(a, b, c, d)"
+"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE b IS NOT NULL 
AND c IS NOT NULL AND d is NOT NULL PRIMARY KEY ((a, b), c, d)",
+"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a IS NOT NULL 
AND c IS NOT NULL AND d is NOT NULL PRIMARY KEY ((a, b), c, d)",
+"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a IS NOT NULL 
AND b IS NOT NULL AND d is NOT NULL PRIMARY KEY ((a, b), c, d)",
+"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a IS NOT NULL 
AND b IS NOT NULL AND c is NOT NULL PRIMARY KEY ((a, b), c, d)",
+"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a = ? AND b IS 
NOT NULL AND c is NOT NULL PRIMARY KEY ((a, b), c, d)",
+"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a = 
blobAsInt(?) AND b IS NOT NULL AND c is NOT NULL PRIMARY KEY ((a, b), c, d)",
+"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s PRIMARY KEY (a, b, 
c, d)"
 );
 
 for (String badStatement : badStatements)
@@ -96,19 +96,19 @@ public class ViewFilteringTest extends CQLTester
 catch (InvalidQueryException exc) {}
 }
 
-List<String> goodStatements = Arrays.asList(
-"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a = 1 AND b = 1 AND c IS NOT NULL AND d is NOT NULL PRIMARY KEY ((a, b), c, d)",
-"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a IS NOT NULL AND b IS NOT NULL AND c = 1 AND d IS NOT NULL PRIMARY KEY ((a, b), c, d)",
-"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a IS NOT NULL AND b IS NOT NULL AND c = 1 AND d = 1 PRIMARY KEY ((a, b), c, d)",
-"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a = 1 AND b = 1 AND c = 1 AND d = 1 PRIMARY KEY ((a, b), c, d)",
-"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a = 1 AND b = 1 AND c > 1 AND d IS NOT NULL PRIMARY KEY ((a, b), c, d)",
-"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a = 1 AND b = 1 AND c = 1 AND d IN (1, 2, 3) PRIMARY KEY ((a, b), c, d)",
-"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a = 1 AND b = 1 AND (c, d) = (1, 1) PRIMARY KEY ((a, b), c, d)",
-"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a = 1 AND b = 1 AND (c, d) > (1, 1) PRIMARY KEY ((a, b), c, d)",
-"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a = 1 AND b = 1 AND (c, d) IN ((1, 1), (2, 2)) PRIMARY KEY ((a, b), c, d)",
-"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a = (int) 1 AND b = 1 AND c = 1 AND d = 1 PRIMARY KEY ((a, b), c, d)",
-"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a = blobAsInt(intAsBlob(1)) AND b = 1 AND c = 1 AND d = 1 PRIMARY KEY ((a, b), c, d)"
-);
+List<String> goodStatements = Arrays.asList(
+"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a = 1 AND b = 1 AND c IS NOT NULL AND d is NOT NULL PRIMARY KEY ((a, b), c, d)",
+"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s 

[13/16] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2017-09-04 Thread paulo
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e624c663
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e624c663
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e624c663

Branch: refs/heads/cassandra-3.11
Commit: e624c6638254ea410691f085a10d08d412eb5ac1
Parents: 14d67d8 1b36740
Author: Paulo Motta 
Authored: Tue Sep 5 01:04:34 2017 -0500
Committer: Paulo Motta 
Committed: Tue Sep 5 01:05:06 2017 -0500

--
 NEWS.txt|   30 +-
 doc/cql3/CQL.textile|6 +
 .../org/apache/cassandra/config/CFMetaData.java |   13 +
 .../apache/cassandra/cql3/UpdateParameters.java |2 +-
 .../cql3/statements/AlterTableStatement.java|   18 +-
 .../org/apache/cassandra/db/LivenessInfo.java   |   17 +-
 .../org/apache/cassandra/db/ReadCommand.java|7 +-
 .../db/compaction/CompactionIterator.java   |7 +-
 .../apache/cassandra/db/filter/RowFilter.java   |4 +-
 .../cassandra/db/partitions/PurgeFunction.java  |   14 +-
 .../org/apache/cassandra/db/rows/BTreeRow.java  |6 +-
 src/java/org/apache/cassandra/db/rows/Row.java  |   13 +-
 .../cassandra/db/rows/UnfilteredSerializer.java |5 +
 .../apache/cassandra/db/transform/Filter.java   |8 +-
 .../db/transform/FilteredPartitions.java|4 +-
 .../cassandra/db/transform/FilteredRows.java|2 +-
 .../apache/cassandra/db/view/TableViews.java|   18 +-
 src/java/org/apache/cassandra/db/view/View.java |   41 +-
 .../apache/cassandra/db/view/ViewManager.java   |5 +
 .../cassandra/db/view/ViewUpdateGenerator.java  |  163 ++-
 .../apache/cassandra/service/DataResolver.java  |4 +-
 .../org/apache/cassandra/cql3/CQLTester.java|2 +-
 .../apache/cassandra/cql3/ViewComplexTest.java  | 1344 ++
 .../cassandra/cql3/ViewFilteringTest.java   | 1030 +-
 .../org/apache/cassandra/cql3/ViewTest.java |   25 +-
 25 files changed, 2293 insertions(+), 495 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e624c663/NEWS.txt
--
diff --cc NEWS.txt
index 8e39667,7064c5d..0682ae9
--- a/NEWS.txt
+++ b/NEWS.txt
@@@ -18,10 -18,28 +18,34 @@@ using the provided 'sstableupgrade' too
  
  Upgrading
  -
 -   - Nothing specific to this release, but please see previous upgrading sections,
 - especially if you are upgrading from 2.2.
 +- Nothing specific to this version but please see previous upgrading sections,
 +  especially if you are upgrading from 2.2.
  
+ Materialized Views
+ ---
+ - Cassandra will no longer allow dropping columns on tables with Materialized Views.
+ - A change was made in the way the Materialized View timestamp is computed, which
+   may cause an old deletion to a base column which is view primary key (PK) column
+   to not be reflected in the view when repairing the base table post-upgrade. This
+   condition is only possible when a column deletion to an MV primary key (PK) column
+   not present in the base table PK (via UPDATE base SET view_pk_col = null or DELETE
+   view_pk_col FROM base) is missed before the upgrade and received by repair after the upgrade.
+   If such column deletions are done on a view PK column which is not a base PK, it's advisable
+   to run repair on the base table of all nodes prior to the upgrade. Alternatively it's possible
+   to fix potential inconsistencies by running repair on the views after upgrade or drop and
+   re-create the views. See CASSANDRA-11500 for more details.
+ - Removal of columns not selected in the Materialized View (via UPDATE base SET unselected_column
+   = null or DELETE unselected_column FROM base) may not be properly reflected in the view in some
+   situations so we advise against doing deletions on base columns not selected in views
+   until this is fixed on CASSANDRA-13826.
 -
 -3.0.14
++- Creating Materialized View with filtering on non-primary-key base column
++  (added in CASSANDRA-10368) is disabled, because the liveness of view row
++  is depending on multiple filtered base non-key columns and base non-key
++  column used in view primary-key. This semantic cannot be supported without
++  storage format change, see CASSANDRA-13826. For append-only use case, you
++  may still use this feature with a startup flag: "-Dcassandra.mv.allow_filtering_nonkey_columns_unsafe=true"
++
 +3.11.0
  ==
  
  Upgrading

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e624c663/doc/cql3/CQL.textile
--
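
The last NEWS bullet merged above is easy to hit from CQL: on 3.11/trunk, a view that filters on a base column outside any primary key is rejected at creation time unless the node was started with the quoted -Dcassandra.mv.allow_filtering_nonkey_columns_unsafe=true flag. A hypothetical sketch (DataStax Python driver, local node, schema borrowed from ViewFilteringTest; not part of the patch):

from cassandra import InvalidRequest
from cassandra.cluster import Cluster

session = Cluster(['127.0.0.1']).connect()
session.execute("CREATE KEYSPACE IF NOT EXISTS mv_filter_demo "
                "WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}")
session.set_keyspace('mv_filter_demo')
session.execute("CREATE TABLE IF NOT EXISTS t (a int, b int, c int, d int, PRIMARY KEY (a))")

try:
    # 'c = 1' filters on a column that is in neither the base PK nor the view PK,
    # so creating this view should fail unless the startup flag is set.
    session.execute("CREATE MATERIALIZED VIEW mv_test1 AS SELECT * FROM t "
                    "WHERE a IS NOT NULL AND b IS NOT NULL AND c = 1 "
                    "PRIMARY KEY (a, b)")
except InvalidRequest as exc:
    print(exc)
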

[14/16] cassandra git commit: Merge branch 'cassandra-3.11' into trunk

2017-09-04 Thread paulo
http://git-wip-us.apache.org/repos/asf/cassandra/blob/e6fb8302/test/unit/org/apache/cassandra/cql3/ViewFilteringTest.java
--
diff --cc test/unit/org/apache/cassandra/cql3/ViewFilteringTest.java
index cb15e83,6803230..f2d8f2a
--- a/test/unit/org/apache/cassandra/cql3/ViewFilteringTest.java
+++ b/test/unit/org/apache/cassandra/cql3/ViewFilteringTest.java
@@@ -80,7 -84,7 +81,7 @@@ public class ViewFilteringTest extends 
  {
  executeNet(protocolVersion, query, params);
 while (!(((SEPExecutor) StageManager.getStage(Stage.VIEW_MUTATION)).getPendingTasks() == 0
--&& ((SEPExecutor) StageManager.getStage(Stage.VIEW_MUTATION)).getActiveCount() == 0))
++ && ((SEPExecutor) StageManager.getStage(Stage.VIEW_MUTATION)).getActiveCount() == 0))
  {
  Thread.sleep(1);
  }
@@@ -92,6 -96,324 +93,318 @@@
  views.remove(name);
  }
  
 -private static void waitForView(String keyspace, String view) throws InterruptedException
 -{
 -while (!SystemKeyspace.isViewBuilt(keyspace, view))
 -Thread.sleep(10);
 -}
 -
 -// TODO will revise the non-pk filter condition in MV, see CASSANDRA-13826
++// TODO will revise the non-pk filter condition in MV, see CASSANDRA-11500
+ @Ignore
+ @Test
+ public void testViewFilteringWithFlush() throws Throwable
+ {
+ testViewFiltering(true);
+ }
+ 
 -// TODO will revise the non-pk filter condition in MV, see CASSANDRA-13826
++// TODO will revise the non-pk filter condition in MV, see CASSANDRA-11500
+ @Ignore
+ @Test
+ public void testViewFilteringWithoutFlush() throws Throwable
+ {
+ testViewFiltering(false);
+ }
+ 
+ public void testViewFiltering(boolean flush) throws Throwable
+ {
 + // CASSANDRA-13547: able to shadow entire view row if base column used in filter condition is modified
 + createTable("CREATE TABLE %s (a int, b int, c int, d int, PRIMARY KEY (a))");
+ 
+ execute("USE " + keyspace());
+ executeNet(protocolVersion, "USE " + keyspace());
+ 
+ createView("mv_test1",
+"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a 
IS NOT NULL AND b IS NOT NULL and c = 1  PRIMARY KEY (a, b)");
+ createView("mv_test2",
+"CREATE MATERIALIZED VIEW %s AS SELECT c, d FROM %%s WHERE 
a IS NOT NULL AND b IS NOT NULL and c = 1 and d = 1 PRIMARY KEY (a, b)");
+ createView("mv_test3",
+"CREATE MATERIALIZED VIEW %s AS SELECT a, b, c, d FROM %%s 
WHERE a IS NOT NULL AND b IS NOT NULL PRIMARY KEY (a, b)");
+ createView("mv_test4",
+"CREATE MATERIALIZED VIEW %s AS SELECT c FROM %%s WHERE a 
IS NOT NULL AND b IS NOT NULL and c = 1 PRIMARY KEY (a, b)");
+ createView("mv_test5",
+"CREATE MATERIALIZED VIEW %s AS SELECT c FROM %%s WHERE a 
IS NOT NULL and d = 1 PRIMARY KEY (a, d)");
+ createView("mv_test6",
+"CREATE MATERIALIZED VIEW %s AS SELECT c FROM %%s WHERE a 
= 1 and d IS NOT NULL PRIMARY KEY (a, d)");
+ 
+ waitForView(keyspace(), "mv_test1");
+ waitForView(keyspace(), "mv_test2");
+ waitForView(keyspace(), "mv_test3");
+ waitForView(keyspace(), "mv_test4");
+ waitForView(keyspace(), "mv_test5");
+ waitForView(keyspace(), "mv_test6");
+ 
+ Keyspace ks = Keyspace.open(keyspace());
+ ks.getColumnFamilyStore("mv_test1").disableAutoCompaction();
+ ks.getColumnFamilyStore("mv_test2").disableAutoCompaction();
+ ks.getColumnFamilyStore("mv_test3").disableAutoCompaction();
+ ks.getColumnFamilyStore("mv_test4").disableAutoCompaction();
+ ks.getColumnFamilyStore("mv_test5").disableAutoCompaction();
+ ks.getColumnFamilyStore("mv_test6").disableAutoCompaction();
+ 
+ 
+ execute("INSERT INTO %s (a, b, c, d) VALUES (?, ?, ?, ?) using 
timestamp 0", 1, 1, 1, 1);
+ if (flush)
+ FBUtilities.waitOnFutures(ks.flush());
+ 
+ // views should be updated.
+ assertRowsIgnoringOrder(execute("SELECT * FROM mv_test1"), row(1, 1, 
1, 1));
+ assertRowsIgnoringOrder(execute("SELECT * FROM mv_test2"), row(1, 1, 
1, 1));
+ assertRowsIgnoringOrder(execute("SELECT * FROM mv_test3"), row(1, 1, 
1, 1));
+ assertRowsIgnoringOrder(execute("SELECT * FROM mv_test4"), row(1, 1, 
1));
+ assertRowsIgnoringOrder(execute("SELECT * FROM mv_test5"), row(1, 1, 
1));
+ assertRowsIgnoringOrder(execute("SELECT * FROM mv_test6"), row(1, 1, 
1));
+ 
+ updateView("UPDATE %s using timestamp 1 set c = ? WHERE a=?", 0, 1);
+ if (flush)
+ FBUtilities.waitOnFutures(ks.flush());
+ 
+ assertRowCount(execute("SELECT * FROM mv_test1"), 0);
+ assertRowCount(execute("SELECT * FRO

[16/16] cassandra git commit: Merge branch 'cassandra-3.11' into trunk

2017-09-04 Thread paulo
Merge branch 'cassandra-3.11' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e6fb8302
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e6fb8302
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e6fb8302

Branch: refs/heads/trunk
Commit: e6fb8302848bc43888b0a742a9b0abce09872c45
Parents: 4603600 e624c66
Author: Paulo Motta 
Authored: Tue Sep 5 01:05:32 2017 -0500
Committer: Paulo Motta 
Committed: Tue Sep 5 01:06:24 2017 -0500

--
 NEWS.txt|   19 +
 doc/source/cql/mvs.rst  |8 +
 .../apache/cassandra/cql3/UpdateParameters.java |2 +-
 .../cql3/statements/AlterTableStatement.java|   19 +-
 .../org/apache/cassandra/db/LivenessInfo.java   |   17 +-
 .../org/apache/cassandra/db/ReadCommand.java|4 +-
 .../db/compaction/CompactionIterator.java   |4 +-
 .../apache/cassandra/db/filter/RowFilter.java   |4 +-
 .../cassandra/db/partitions/PurgeFunction.java  |   10 +-
 .../org/apache/cassandra/db/rows/BTreeRow.java  |6 +-
 src/java/org/apache/cassandra/db/rows/Row.java  |   15 +-
 .../cassandra/db/rows/UnfilteredSerializer.java |5 +
 .../apache/cassandra/db/transform/Filter.java   |8 +-
 .../db/transform/FilteredPartitions.java|2 +-
 .../cassandra/db/transform/FilteredRows.java|2 +-
 .../apache/cassandra/db/view/TableViews.java|   14 +-
 src/java/org/apache/cassandra/db/view/View.java |   45 +-
 .../apache/cassandra/db/view/ViewManager.java   |5 +
 .../cassandra/db/view/ViewUpdateGenerator.java  |  161 ++-
 .../apache/cassandra/schema/TableMetadata.java  |   13 +
 .../apache/cassandra/service/DataResolver.java  |2 +-
 .../org/apache/cassandra/cql3/CQLTester.java|2 +-
 .../apache/cassandra/cql3/ViewComplexTest.java  | 1344 ++
 .../cassandra/cql3/ViewFilteringTest.java   | 1106 --
 .../org/apache/cassandra/cql3/ViewTest.java |   35 +-
 25 files changed, 2319 insertions(+), 533 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e6fb8302/NEWS.txt
--
diff --cc NEWS.txt
index c9963c3,0682ae9..7d0fe9a
--- a/NEWS.txt
+++ b/NEWS.txt
@@@ -33,41 -18,33 +33,60 @@@ New feature
  
  Upgrading
  -
 -- Nothing specific to this version but please see previous upgrading sections,
 -  especially if you are upgrading from 2.2.
 +- Support for legacy auth tables in the system_auth keyspace (users,
 +  permissions, credentials) and the migration code has been removed. Migration
 +  of these legacy auth tables must have been completed before the upgrade to
 +  4.0 and the legacy tables must have been removed. See the 'Upgrading' section
 +  for version 2.2 for migration instructions.
 +- Cassandra 4.0 removed support for the deprecated Thrift interface. Amongst
 +  Tother things, this imply the removal of all yaml option related to thrift
 +  ('start_rpc', rpc_port, ...).
 +- Cassandra 4.0 removed support for any pre-3.0 format. This means you
 +  cannot upgrade from a 2.x version to 4.0 directly, you have to upgrade to
 +  a 3.0.x/3.x version first (and run upgradesstable). In particular, this
 +  mean Cassandra 4.0 cannot load or read pre-3.0 sstables in any way: you
 +  will need to upgrade those sstable in 3.0.x/3.x first.
 +- Upgrades from 3.0.x or 3.x are supported since 3.0.13 or 3.11.0, previous
 +  versions will causes issues during rolling upgrades (CASSANDRA-13274).
 +- Cassandra will no longer allow invalid keyspace replication options, such
 +  as invalid datacenter names for NetworkTopologyStrategy. Operators MUST
 +  add new nodes to a datacenter before they can set set ALTER or CREATE
 +  keyspace replication policies using that datacenter. Existing keyspaces
 +  will continue to operate, but CREATE and ALTER will validate that all
 +  datacenters specified exist in the cluster.
 +- Cassandra 4.0 fixes a problem with incremental repair which caused repaired
 +  data to be inconsistent between nodes. The fix changes the behavior of both
 +  full and incremental repairs. For full repairs, data is no longer marked
 +  repaired. For incremental repairs, anticompaction is run at the beginning
 +  of the repair, instead of at the end. If incremental repair was being used
 +  prior to upgrading, a full repair should be run after upgrading to resolve
 +  any inconsistencies.
 +- Config option index_interval has been removed (it was deprecated since 2.0)
 +- Deprecated repair JMX APIs are removed.
 +- The version of snappy-java has been upgraded to 1.1.2.6
 +  - the m

[04/16] cassandra git commit: Fix outstanding MV timestamp issues and add documentation about unsupported cases (see CASSANDRA-11500 for a summary of fixes)

2017-09-04 Thread paulo
http://git-wip-us.apache.org/repos/asf/cassandra/blob/1b36740e/test/unit/org/apache/cassandra/cql3/ViewFilteringTest.java
--
diff --git a/test/unit/org/apache/cassandra/cql3/ViewFilteringTest.java b/test/unit/org/apache/cassandra/cql3/ViewFilteringTest.java
index 245ceb7..fe618b6 100644
--- a/test/unit/org/apache/cassandra/cql3/ViewFilteringTest.java
+++ b/test/unit/org/apache/cassandra/cql3/ViewFilteringTest.java
@@ -77,13 +77,13 @@ public class ViewFilteringTest extends CQLTester
 
 // IS NOT NULL is required on all PK statements that are not otherwise restricted
 List<String> badStatements = Arrays.asList(
-"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE b IS 
NOT NULL AND c IS NOT NULL AND d is NOT NULL PRIMARY KEY ((a, b), c, d)",
-"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a IS 
NOT NULL AND c IS NOT NULL AND d is NOT NULL PRIMARY KEY ((a, b), c, d)",
-"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a IS 
NOT NULL AND b IS NOT NULL AND d is NOT NULL PRIMARY KEY ((a, b), c, d)",
-"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a IS 
NOT NULL AND b IS NOT NULL AND c is NOT NULL PRIMARY KEY ((a, b), c, d)",
-"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a = ? 
AND b IS NOT NULL AND c is NOT NULL PRIMARY KEY ((a, b), c, d)",
-"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a = 
blobAsInt(?) AND b IS NOT NULL AND c is NOT NULL PRIMARY KEY ((a, b), c, d)",
-"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s PRIMARY KEY 
(a, b, c, d)"
+"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE b IS NOT NULL 
AND c IS NOT NULL AND d is NOT NULL PRIMARY KEY ((a, b), c, d)",
+"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a IS NOT NULL 
AND c IS NOT NULL AND d is NOT NULL PRIMARY KEY ((a, b), c, d)",
+"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a IS NOT NULL 
AND b IS NOT NULL AND d is NOT NULL PRIMARY KEY ((a, b), c, d)",
+"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a IS NOT NULL 
AND b IS NOT NULL AND c is NOT NULL PRIMARY KEY ((a, b), c, d)",
+"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a = ? AND b IS 
NOT NULL AND c is NOT NULL PRIMARY KEY ((a, b), c, d)",
+"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a = 
blobAsInt(?) AND b IS NOT NULL AND c is NOT NULL PRIMARY KEY ((a, b), c, d)",
+"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s PRIMARY KEY (a, b, 
c, d)"
 );
 
 for (String badStatement : badStatements)
@@ -96,19 +96,19 @@ public class ViewFilteringTest extends CQLTester
 catch (InvalidQueryException exc) {}
 }
 
-List<String> goodStatements = Arrays.asList(
-"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a = 1 AND b = 1 AND c IS NOT NULL AND d is NOT NULL PRIMARY KEY ((a, b), c, d)",
-"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a IS NOT NULL AND b IS NOT NULL AND c = 1 AND d IS NOT NULL PRIMARY KEY ((a, b), c, d)",
-"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a IS NOT NULL AND b IS NOT NULL AND c = 1 AND d = 1 PRIMARY KEY ((a, b), c, d)",
-"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a = 1 AND b = 1 AND c = 1 AND d = 1 PRIMARY KEY ((a, b), c, d)",
-"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a = 1 AND b = 1 AND c > 1 AND d IS NOT NULL PRIMARY KEY ((a, b), c, d)",
-"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a = 1 AND b = 1 AND c = 1 AND d IN (1, 2, 3) PRIMARY KEY ((a, b), c, d)",
-"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a = 1 AND b = 1 AND (c, d) = (1, 1) PRIMARY KEY ((a, b), c, d)",
-"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a = 1 AND b = 1 AND (c, d) > (1, 1) PRIMARY KEY ((a, b), c, d)",
-"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a = 1 AND b = 1 AND (c, d) IN ((1, 1), (2, 2)) PRIMARY KEY ((a, b), c, d)",
-"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a = (int) 1 AND b = 1 AND c = 1 AND d = 1 PRIMARY KEY ((a, b), c, d)",
-"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a = blobAsInt(intAsBlob(1)) AND b = 1 AND c = 1 AND d = 1 PRIMARY KEY ((a, b), c, d)"
-);
+List<String> goodStatements = Arrays.asList(
+"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a = 1 AND b = 1 AND c IS NOT NULL AND d is NOT NULL PRIMARY KEY ((a, b), c, d)",
+"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a IS NOT NULL AND b IS NOT NULL AND c = 1 AND d IS NOT NULL PRIMARY KEY ((a, b), c, d)",
+"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a IS NOT NULL AND b

[12/16] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2017-09-04 Thread paulo
http://git-wip-us.apache.org/repos/asf/cassandra/blob/e624c663/test/unit/org/apache/cassandra/cql3/ViewFilteringTest.java
--
diff --cc test/unit/org/apache/cassandra/cql3/ViewFilteringTest.java
index 87b19ad,fe618b6..6803230
--- a/test/unit/org/apache/cassandra/cql3/ViewFilteringTest.java
+++ b/test/unit/org/apache/cassandra/cql3/ViewFilteringTest.java
@@@ -21,9 -21,8 +21,10 @@@ package org.apache.cassandra.cql3
  import java.util.*;
  
  import org.junit.After;
 +import org.junit.AfterClass;
  import org.junit.Before;
  import org.junit.BeforeClass;
++import org.junit.Ignore;
  import org.junit.Test;
  
  import com.datastax.driver.core.exceptions.InvalidQueryException;
@@@ -95,6 -67,6 +96,324 @@@ public class ViewFilteringTest extends 
  views.remove(name);
  }
  
++private static void waitForView(String keyspace, String view) throws InterruptedException
++{
++while (!SystemKeyspace.isViewBuilt(keyspace, view))
++Thread.sleep(10);
++}
++
++// TODO will revise the non-pk filter condition in MV, see CASSANDRA-13826
++@Ignore
++@Test
++public void testViewFilteringWithFlush() throws Throwable
++{
++testViewFiltering(true);
++}
++
++// TODO will revise the non-pk filter condition in MV, see CASSANDRA-13826
++@Ignore
++@Test
++public void testViewFilteringWithoutFlush() throws Throwable
++{
++testViewFiltering(false);
++}
++
++public void testViewFiltering(boolean flush) throws Throwable
++{
++// CASSANDRA-13547: able to shadow entire view row if base column used in filter condition is modified
++createTable("CREATE TABLE %s (a int, b int, c int, d int, PRIMARY KEY (a))");
++
++execute("USE " + keyspace());
++executeNet(protocolVersion, "USE " + keyspace());
++
++createView("mv_test1",
++   "CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a 
IS NOT NULL AND b IS NOT NULL and c = 1  PRIMARY KEY (a, b)");
++createView("mv_test2",
++   "CREATE MATERIALIZED VIEW %s AS SELECT c, d FROM %%s WHERE 
a IS NOT NULL AND b IS NOT NULL and c = 1 and d = 1 PRIMARY KEY (a, b)");
++createView("mv_test3",
++   "CREATE MATERIALIZED VIEW %s AS SELECT a, b, c, d FROM %%s 
WHERE a IS NOT NULL AND b IS NOT NULL PRIMARY KEY (a, b)");
++createView("mv_test4",
++   "CREATE MATERIALIZED VIEW %s AS SELECT c FROM %%s WHERE a 
IS NOT NULL AND b IS NOT NULL and c = 1 PRIMARY KEY (a, b)");
++createView("mv_test5",
++   "CREATE MATERIALIZED VIEW %s AS SELECT c FROM %%s WHERE a 
IS NOT NULL and d = 1 PRIMARY KEY (a, d)");
++createView("mv_test6",
++   "CREATE MATERIALIZED VIEW %s AS SELECT c FROM %%s WHERE a 
= 1 and d IS NOT NULL PRIMARY KEY (a, d)");
++
++waitForView(keyspace(), "mv_test1");
++waitForView(keyspace(), "mv_test2");
++waitForView(keyspace(), "mv_test3");
++waitForView(keyspace(), "mv_test4");
++waitForView(keyspace(), "mv_test5");
++waitForView(keyspace(), "mv_test6");
++
++Keyspace ks = Keyspace.open(keyspace());
++ks.getColumnFamilyStore("mv_test1").disableAutoCompaction();
++ks.getColumnFamilyStore("mv_test2").disableAutoCompaction();
++ks.getColumnFamilyStore("mv_test3").disableAutoCompaction();
++ks.getColumnFamilyStore("mv_test4").disableAutoCompaction();
++ks.getColumnFamilyStore("mv_test5").disableAutoCompaction();
++ks.getColumnFamilyStore("mv_test6").disableAutoCompaction();
++
++
++execute("INSERT INTO %s (a, b, c, d) VALUES (?, ?, ?, ?) using 
timestamp 0", 1, 1, 1, 1);
++if (flush)
++FBUtilities.waitOnFutures(ks.flush());
++
++// views should be updated.
++assertRowsIgnoringOrder(execute("SELECT * FROM mv_test1"), row(1, 1, 
1, 1));
++assertRowsIgnoringOrder(execute("SELECT * FROM mv_test2"), row(1, 1, 
1, 1));
++assertRowsIgnoringOrder(execute("SELECT * FROM mv_test3"), row(1, 1, 
1, 1));
++assertRowsIgnoringOrder(execute("SELECT * FROM mv_test4"), row(1, 1, 
1));
++assertRowsIgnoringOrder(execute("SELECT * FROM mv_test5"), row(1, 1, 
1));
++assertRowsIgnoringOrder(execute("SELECT * FROM mv_test6"), row(1, 1, 
1));
++
++updateView("UPDATE %s using timestamp 1 set c = ? WHERE a=?", 0, 1);
++if (flush)
++FBUtilities.waitOnFutures(ks.flush());
++
++assertRowCount(execute("SELECT * FROM mv_test1"), 0);
++assertRowCount(execute("SELECT * FROM mv_test2"), 0);
++assertRowsIgnoringOrder(execute("SELECT * FROM mv_test3"), row(1, 1, 
0, 1));
++assertRowCount(execute("SELECT * FROM mv_test4"), 0);
++assertRowsIgnoringOrder(execute("SELECT * FROM mv_test5"), row(1, 1, 
0));
++assertRowsIgnoringOrder(ex

[02/16] cassandra git commit: Fix outstanding MV timestamp issues and add documentation about unsupported cases (see CASSANDRA-11500 for a summary of fixes)

2017-09-04 Thread paulo
http://git-wip-us.apache.org/repos/asf/cassandra/blob/1b36740e/test/unit/org/apache/cassandra/cql3/ViewComplexTest.java
--
diff --git a/test/unit/org/apache/cassandra/cql3/ViewComplexTest.java 
b/test/unit/org/apache/cassandra/cql3/ViewComplexTest.java
new file mode 100644
index 000..9e32620
--- /dev/null
+++ b/test/unit/org/apache/cassandra/cql3/ViewComplexTest.java
@@ -0,0 +1,1343 @@
+package org.apache.cassandra.cql3;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+
+import java.nio.ByteBuffer;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Comparator;
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.TimeUnit;
+import java.util.stream.Collectors;
+
+import org.apache.cassandra.concurrent.SEPExecutor;
+import org.apache.cassandra.concurrent.Stage;
+import org.apache.cassandra.concurrent.StageManager;
+import org.apache.cassandra.db.ColumnFamilyStore;
+import org.apache.cassandra.db.Keyspace;
+import org.apache.cassandra.db.compaction.CompactionManager;
+import org.apache.cassandra.utils.FBUtilities;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Ignore;
+import org.junit.Test;
+
+import com.google.common.base.Objects;
+
+public class ViewComplexTest extends CQLTester
+{
+int protocolVersion = 4;
+private final List views = new ArrayList<>();
+
+@BeforeClass
+public static void startup()
+{
+requireNetwork();
+}
+@Before
+public void begin()
+{
+views.clear();
+}
+
+@After
+public void end() throws Throwable
+{
+for (String viewName : views)
+executeNet(protocolVersion, "DROP MATERIALIZED VIEW " + viewName);
+}
+
+private void createView(String name, String query) throws Throwable
+{
+executeNet(protocolVersion, String.format(query, name));
+// If exception is thrown, the view will not be added to the list; 
since it shouldn't have been created, this is
+// the desired behavior
+views.add(name);
+}
+
+private void updateView(String query, Object... params) throws Throwable
+{
+updateViewWithFlush(query, false, params);
+}
+
+private void updateViewWithFlush(String query, boolean flush, Object... 
params) throws Throwable
+{
+executeNet(protocolVersion, query, params);
+while (!(((SEPExecutor) 
StageManager.getStage(Stage.VIEW_MUTATION)).getPendingTasks() == 0
+&& ((SEPExecutor) 
StageManager.getStage(Stage.VIEW_MUTATION)).getActiveCount() == 0))
+{
+Thread.sleep(1);
+}
+if (flush)
+Keyspace.open(keyspace()).flush();
+}
+
+// for now, unselected column cannot be fully supported, SEE 
CASSANDRA-13826
+@Ignore
+@Test
+public void testPartialDeleteUnselectedColumn() throws Throwable
+{
+boolean flush = true;
+execute("USE " + keyspace());
+executeNet(protocolVersion, "USE " + keyspace());
+createTable("CREATE TABLE %s (k int, c int, a int, b int, PRIMARY KEY 
(k, c))");
+createView("mv",
+   "CREATE MATERIALIZED VIEW %s AS SELECT k,c FROM %%s WHERE k 
IS NOT NULL AND c IS NOT NULL PRIMARY KEY (k,c)");
+Keyspace ks = Keyspace.open(keyspace());
+ks.getColumnFamilyStore("mv").disableAutoCompaction();
+
+updateView("UPDATE %s USING TIMESTAMP 10 SET b=1 WHERE k=1 AND c=1");
+if (flush)
+FBUtilities.waitOnFutures(ks.flush());
+assertRows(execute("SELECT * from %s"), row(1, 1, null, 1));
+assertRows(execute("SELECT * from mv"), row(1, 1));
+updateView("DELETE b FROM %s USING TIMESTAMP 11 WHERE k=1 AND c=1");
+if (flush)
+FBUtilities.waitOnFutures(ks.flush());
+assertEmpty(execute("SELECT * from %s"));
+assertEmpty(execute("SELECT * from mv"));
+updateView("UPDATE %s USING TIMESTAMP 1 SET a=1 WHERE k=1 AND c=1");
+if (flush)
+FBUtilities.waitOnFutures(ks.flush());
+assertRows(execute("SELECT * from %s"), row(1, 1, 1, null));
+assertRows(execute("SELECT * from mv"), row(1, 1));
+
+execute("truncate %s;");
+
+// removal generated by unselected column should not shadow PK update 
with smaller timestamp
+updateViewWithFlush("UPDATE %s USING TIMESTAMP 18 SET a=1 WHERE k=1 
AND c=1", flush);
+assertRows(execute("SELECT * from %s"), row(1, 1, 1, null));
+assertRows(execute("SELECT * from mv"), row(1, 1));
+
+updateViewWithFlush("UPDATE %s USING TIMESTAMP 20 SET a=null WHERE k=1 
AND c=1", flush);
+assertRows(execute("SELECT * from %s"));
+assertRows(execute("SELECT * from mv"));
+
+  

[06/16] cassandra git commit: Fix outstanding MV timestamp issues and add documentation about unsupported cases (see CASSANDRA-11500 for a summary of fixes)

2017-09-04 Thread paulo
Fix outstanding MV timestamp issues and add documentation about unsupported 
cases (see CASSANDRA-11500 for a summary of fixes)

This patch introduces the following changes to fix MV timestamp issues:
 - Add strict liveness for views with a non-PK base column in the view PK
 - Deprecate shadowable tombstones and use expired livenessInfo instead
 - Include partition deletion for existing base row
 - Disallow dropping base column with MV

Patch by Zhao Yang and Paulo Motta; reviewed by Paulo Motta for CASSANDRA-11500
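
As a quick illustration of the last change listed above ("Disallow dropping base column with MV"), a minimal standalone sketch using the Python driver might look like the following. It is not part of the patch; the contact point, keyspace and table/view names are illustrative assumptions.

from cassandra import InvalidRequest
from cassandra.cluster import Cluster

# Illustrative setup: a single local node and a throwaway keyspace.
session = Cluster(['127.0.0.1']).connect()
session.execute("CREATE KEYSPACE IF NOT EXISTS ks WITH replication = "
                "{'class': 'SimpleStrategy', 'replication_factor': 1}")
session.execute("CREATE TABLE IF NOT EXISTS ks.base (k int PRIMARY KEY, a int, b int)")
session.execute("CREATE MATERIALIZED VIEW IF NOT EXISTS ks.mv AS "
                "SELECT k, a FROM ks.base "
                "WHERE k IS NOT NULL AND a IS NOT NULL "
                "PRIMARY KEY (a, k)")

try:
    # With this patch the server rejects dropping any base column while a view exists.
    session.execute("ALTER TABLE ks.base DROP b")
except InvalidRequest as exc:
    print("drop rejected as expected:", exc)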


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1b36740e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1b36740e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1b36740e

Branch: refs/heads/cassandra-3.11
Commit: 1b36740ebe66b8ed4c3d6cb64eb2419a9279dfbf
Parents: b0eba5f
Author: Zhao Yang 
Authored: Wed Jul 12 17:49:38 2017 +0800
Committer: Paulo Motta 
Committed: Tue Sep 5 01:03:24 2017 -0500

--
 NEWS.txt|   18 +
 doc/cql3/CQL.textile|6 +
 .../org/apache/cassandra/config/CFMetaData.java |   13 +
 .../apache/cassandra/cql3/UpdateParameters.java |2 +-
 .../cql3/statements/AlterTableStatement.java|   18 +-
 .../org/apache/cassandra/db/LivenessInfo.java   |   17 +-
 .../org/apache/cassandra/db/ReadCommand.java|7 +-
 .../db/compaction/CompactionIterator.java   |7 +-
 .../apache/cassandra/db/filter/RowFilter.java   |4 +-
 .../cassandra/db/partitions/PurgeFunction.java  |   14 +-
 .../org/apache/cassandra/db/rows/BTreeRow.java  |6 +-
 src/java/org/apache/cassandra/db/rows/Row.java  |   15 +-
 .../cassandra/db/rows/UnfilteredSerializer.java |5 +
 .../apache/cassandra/db/transform/Filter.java   |8 +-
 .../db/transform/FilteredPartitions.java|4 +-
 .../cassandra/db/transform/FilteredRows.java|2 +-
 .../apache/cassandra/db/view/TableViews.java|   18 +-
 src/java/org/apache/cassandra/db/view/View.java |   43 +-
 .../apache/cassandra/db/view/ViewManager.java   |5 +
 .../cassandra/db/view/ViewUpdateGenerator.java  |  163 ++-
 .../apache/cassandra/service/DataResolver.java  |4 +-
 .../org/apache/cassandra/cql3/CQLTester.java|2 +-
 .../apache/cassandra/cql3/ViewComplexTest.java  | 1343 ++
 .../cassandra/cql3/ViewFilteringTest.java   |  706 -
 .../org/apache/cassandra/cql3/ViewTest.java |   31 +-
 25 files changed, 1973 insertions(+), 488 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1b36740e/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index bb5fdfe..7064c5d 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -21,6 +21,24 @@ Upgrading
- Nothing specific to this release, but please see previous upgrading 
sections,
  especially if you are upgrading from 2.2.
 
+Materialized Views
+---
+- Cassandra will no longer allow dropping columns on tables with 
Materialized Views.
+- A change was made in the way the Materialized View timestamp is 
computed, which
+  may cause an old deletion to a base column which is view primary key 
(PK) column
+  to not be reflected in the view when repairing the base table 
post-upgrade. This
+  condition is only possible when a column deletion to an MV primary key 
(PK) column
+  not present in the base table PK (via UPDATE base SET view_pk_col = null 
or DELETE
+  view_pk_col FROM base) is missed before the upgrade and received by 
repair after the upgrade.
+  If such column deletions are done on a view PK column which is not a 
base PK, it's advisable
+  to run repair on the base table of all nodes prior to the upgrade. 
Alternatively it's possible
+  to fix potential inconsistencies by running repair on the views after 
upgrade or drop and
+  re-create the views. See CASSANDRA-11500 for more details.
+- Removal of columns not selected in the Materialized View (via UPDATE 
base SET unselected_column
+  = null or DELETE unselected_column FROM base) may not be properly 
reflected in the view in some
+  situations so we advise against doing deletions on base columns not 
selected in views
+  until this is fixed on CASSANDRA-13826.
+
 3.0.14
 ==
 

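To make the two discouraged write patterns in the NEWS entry above concrete, here is a hedged, standalone Python-driver sketch; it is not taken from the patch, and the contact point, keyspace and column names are assumptions chosen to match the wording of the note.

from cassandra.cluster import Cluster

session = Cluster(['127.0.0.1']).connect()   # contact point is an assumption
session.execute("CREATE KEYSPACE IF NOT EXISTS ks WITH replication = "
                "{'class': 'SimpleStrategy', 'replication_factor': 1}")
session.execute("CREATE TABLE IF NOT EXISTS ks.base "
                "(k int PRIMARY KEY, view_pk_col int, unselected_column int)")
session.execute("CREATE MATERIALIZED VIEW IF NOT EXISTS ks.by_view_pk AS "
                "SELECT k, view_pk_col FROM ks.base "
                "WHERE k IS NOT NULL AND view_pk_col IS NOT NULL "
                "PRIMARY KEY (view_pk_col, k)")

# Deleting a view PK column that is not part of the base PK: if this write is
# missed pre-upgrade and only arrives via repair post-upgrade, the view may not
# reflect it; repair the base table before upgrading, as the note advises.
session.execute("UPDATE ks.base SET view_pk_col = null WHERE k = 1")

# Deleting a column the view does not select: discouraged until CASSANDRA-13826.
session.execute("DELETE unselected_column FROM ks.base WHERE k = 1")
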
http://git-wip-us.apache.org/repos/asf/cassandra/blob/1b36740e/doc/cql3/CQL.textile
--
diff --git a/doc/cql3/CQL.textile b/doc/cql3/CQL.textile
index 1efa6d4..54888b8 100644
--- a/doc/cql3/CQL.textile
+++ b/doc/cql3/CQL.textile
@@ -524,6 +524,12 @@ h4(#createMVWhere). @WHERE@ Clause
 
 The @@ is similar to the "where clause of a @SELECT@ 
statement":#selectWhere, with a few differences.  First, the wh

[07/16] cassandra git commit: Fix outstanding MV timestamp issues and add documentation about unsupported cases (see CASSANDRA-11500 for a summary of fixes)

2017-09-04 Thread paulo
http://git-wip-us.apache.org/repos/asf/cassandra/blob/1b36740e/test/unit/org/apache/cassandra/cql3/ViewFilteringTest.java
--
diff --git a/test/unit/org/apache/cassandra/cql3/ViewFilteringTest.java 
b/test/unit/org/apache/cassandra/cql3/ViewFilteringTest.java
index 245ceb7..fe618b6 100644
--- a/test/unit/org/apache/cassandra/cql3/ViewFilteringTest.java
+++ b/test/unit/org/apache/cassandra/cql3/ViewFilteringTest.java
@@ -77,13 +77,13 @@ public class ViewFilteringTest extends CQLTester
 
 // IS NOT NULL is required on all PK statements that are not otherwise 
restricted
 List badStatements = Arrays.asList(
-"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE b IS 
NOT NULL AND c IS NOT NULL AND d is NOT NULL PRIMARY KEY ((a, b), c, d)",
-"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a IS 
NOT NULL AND c IS NOT NULL AND d is NOT NULL PRIMARY KEY ((a, b), c, d)",
-"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a IS 
NOT NULL AND b IS NOT NULL AND d is NOT NULL PRIMARY KEY ((a, b), c, d)",
-"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a IS 
NOT NULL AND b IS NOT NULL AND c is NOT NULL PRIMARY KEY ((a, b), c, d)",
-"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a = ? 
AND b IS NOT NULL AND c is NOT NULL PRIMARY KEY ((a, b), c, d)",
-"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a = 
blobAsInt(?) AND b IS NOT NULL AND c is NOT NULL PRIMARY KEY ((a, b), c, d)",
-"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s PRIMARY KEY 
(a, b, c, d)"
+"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE b IS NOT NULL 
AND c IS NOT NULL AND d is NOT NULL PRIMARY KEY ((a, b), c, d)",
+"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a IS NOT NULL 
AND c IS NOT NULL AND d is NOT NULL PRIMARY KEY ((a, b), c, d)",
+"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a IS NOT NULL 
AND b IS NOT NULL AND d is NOT NULL PRIMARY KEY ((a, b), c, d)",
+"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a IS NOT NULL 
AND b IS NOT NULL AND c is NOT NULL PRIMARY KEY ((a, b), c, d)",
+"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a = ? AND b IS 
NOT NULL AND c is NOT NULL PRIMARY KEY ((a, b), c, d)",
+"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a = 
blobAsInt(?) AND b IS NOT NULL AND c is NOT NULL PRIMARY KEY ((a, b), c, d)",
+"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s PRIMARY KEY (a, b, 
c, d)"
 );
 
 for (String badStatement : badStatements)
@@ -96,19 +96,19 @@ public class ViewFilteringTest extends CQLTester
 catch (InvalidQueryException exc) {}
 }
 
-List goodStatements = Arrays.asList(
-"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a = 1 
AND b = 1 AND c IS NOT NULL AND d is NOT NULL PRIMARY KEY ((a, b), c, d)",
-"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a IS 
NOT NULL AND b IS NOT NULL AND c = 1 AND d IS NOT NULL PRIMARY KEY ((a, b), c, 
d)",
-"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a IS 
NOT NULL AND b IS NOT NULL AND c = 1 AND d = 1 PRIMARY KEY ((a, b), c, d)",
-"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a = 1 
AND b = 1 AND c = 1 AND d = 1 PRIMARY KEY ((a, b), c, d)",
-"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a = 1 
AND b = 1 AND c > 1 AND d IS NOT NULL PRIMARY KEY ((a, b), c, d)",
-"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a = 1 
AND b = 1 AND c = 1 AND d IN (1, 2, 3) PRIMARY KEY ((a, b), c, d)",
-"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a = 1 
AND b = 1 AND (c, d) = (1, 1) PRIMARY KEY ((a, b), c, d)",
-"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a = 1 
AND b = 1 AND (c, d) > (1, 1) PRIMARY KEY ((a, b), c, d)",
-"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a = 1 
AND b = 1 AND (c, d) IN ((1, 1), (2, 2)) PRIMARY KEY ((a, b), c, d)",
-"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a = 
(int) 1 AND b = 1 AND c = 1 AND d = 1 PRIMARY KEY ((a, b), c, d)",
-"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a = 
blobAsInt(intAsBlob(1)) AND b = 1 AND c = 1 AND d = 1 PRIMARY KEY ((a, b), c, 
d)"
-);
+List goodStatements = Arrays.asList(
+"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a = 1 AND 
b = 1 AND c IS NOT NULL AND d is NOT NULL PRIMARY KEY ((a, b), c, d)",
+"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a IS NOT 
NULL AND b IS NOT NULL AND c = 1 AND d IS NOT NULL PRIMARY KEY ((a, b), c, d)",
+"CREATE MATERIALIZED VIEW %s AS SELECT * FROM %%s WHERE a IS NOT 
NULL AND b

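The hunk above only re-indents the statement lists, so as a reminder of the rule in the accompanying comment (every view PK column must either be restricted or carry IS NOT NULL), a hedged standalone sketch of one rejected and one accepted definition follows; the keyspace, table and contact point are illustrative assumptions, and the statements mirror entries from the bad/good lists above.

from cassandra import InvalidRequest
from cassandra.cluster import Cluster

session = Cluster(['127.0.0.1']).connect()
session.execute("CREATE KEYSPACE IF NOT EXISTS ks WITH replication = "
                "{'class': 'SimpleStrategy', 'replication_factor': 1}")
session.execute("CREATE TABLE IF NOT EXISTS ks.t "
                "(a int, b int, c int, d int, PRIMARY KEY ((a, b), c, d))")

try:
    # Rejected: view PK column b is neither restricted nor marked IS NOT NULL.
    session.execute("CREATE MATERIALIZED VIEW ks.bad_mv AS SELECT * FROM ks.t "
                    "WHERE a IS NOT NULL AND c IS NOT NULL AND d IS NOT NULL "
                    "PRIMARY KEY ((a, b), c, d)")
except InvalidRequest as exc:
    print("rejected:", exc)

# Accepted: every view PK column is either fixed (a = 1, b = 1) or IS NOT NULL.
session.execute("CREATE MATERIALIZED VIEW IF NOT EXISTS ks.good_mv AS SELECT * FROM ks.t "
                "WHERE a = 1 AND b = 1 AND c IS NOT NULL AND d IS NOT NULL "
                "PRIMARY KEY ((a, b), c, d)")
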
[30/50] cassandra git commit: Add a test for CASSANDRA-13346 (#1467)

2017-09-04 Thread paulo
Add a test for CASSANDRA-13346 (#1467)

* Add a test for CASSANDRA-13346; Optionally make reading JMX attributes 
verbose or not

* Compliance with Pep8


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6f4e41e0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6f4e41e0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6f4e41e0

Branch: refs/heads/master
Commit: 6f4e41e04c3d48f1dbbcd0fc636e39e8d114a6be
Parents: 058b952
Author: juiceblender 
Authored: Fri Jul 7 18:39:00 2017 +1000
Committer: Philip Thompson 
Committed: Fri Jul 7 10:39:00 2017 +0200

--
 jmx_test.py   | 93 +-
 tools/jmxutils.py | 12 +++
 2 files changed, 90 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6f4e41e0/jmx_test.py
--
diff --git a/jmx_test.py b/jmx_test.py
index 7df84ac..16c1ece 100644
--- a/jmx_test.py
+++ b/jmx_test.py
@@ -13,7 +13,6 @@ from tools.misc import generate_ssl_stores
 
 
 class TestJMX(Tester):
-
 def netstats_test(self):
 """
 Check functioning of nodetool netstats, especially with restarts.
@@ -48,7 +47,8 @@ class TestJMX(Tester):
 if not isinstance(e, ToolError):
 raise
 else:
-self.assertRegexpMatches(str(e), "ConnectException: 
'Connection refused( \(Connection refused\))?'.")
+self.assertRegexpMatches(str(e),
+ "ConnectException: 'Connection 
refused( \(Connection refused\))?'.")
 
 self.assertTrue(running, msg='node1 never started')
 
@@ -69,9 +69,12 @@ class TestJMX(Tester):
 debug('Version {} typeName {}'.format(version, typeName))
 
 # TODO the keyspace and table name are capitalized in 2.0
-memtable_size = make_mbean('metrics', type=typeName, 
keyspace='keyspace1', scope='standard1', name='AllMemtablesHeapSize')
-disk_size = make_mbean('metrics', type=typeName, keyspace='keyspace1', 
scope='standard1', name='LiveDiskSpaceUsed')
-sstable_count = make_mbean('metrics', type=typeName, 
keyspace='keyspace1', scope='standard1', name='LiveSSTableCount')
+memtable_size = make_mbean('metrics', type=typeName, 
keyspace='keyspace1', scope='standard1',
+   name='AllMemtablesHeapSize')
+disk_size = make_mbean('metrics', type=typeName, keyspace='keyspace1', 
scope='standard1',
+   name='LiveDiskSpaceUsed')
+sstable_count = make_mbean('metrics', type=typeName, 
keyspace='keyspace1', scope='standard1',
+   name='LiveSSTableCount')
 
 with JolokiaAgent(node1) as jmx:
 mem_size = jmx.read_attribute(memtable_size, "Value")
@@ -88,6 +91,76 @@ class TestJMX(Tester):
 sstables = jmx.read_attribute(sstable_count, "Value")
 self.assertGreaterEqual(int(sstables), 1)
 
+@since('3.0')
+def mv_metric_mbeans_release_test(self):
+"""
+Test that the right mbeans are created and released when creating mvs
+"""
+cluster = self.cluster
+cluster.populate(1)
+node = cluster.nodelist()[0]
+remove_perf_disable_shared_mem(node)
+cluster.start(wait_for_binary_proto=True)
+
+node.run_cqlsh(cmds="""
+CREATE KEYSPACE mvtest WITH REPLICATION = { 'class' : 
'SimpleStrategy', 'replication_factor': 1 };
+CREATE TABLE mvtest.testtable (
+foo int,
+bar text,
+baz text,
+PRIMARY KEY (foo, bar)
+);
+
+CREATE MATERIALIZED VIEW mvtest.testmv AS
+SELECT foo, bar, baz FROM mvtest.testtable WHERE
+foo IS NOT NULL AND bar IS NOT NULL AND baz IS NOT NULL
+PRIMARY KEY (foo, bar, baz);""")
+
+table_memtable_size = make_mbean('metrics', type='Table', 
keyspace='mvtest', scope='testtable',
+ name='AllMemtablesHeapSize')
+table_view_read_time = make_mbean('metrics', type='Table', 
keyspace='mvtest', scope='testtable',
+  name='ViewReadTime')
+table_view_lock_time = make_mbean('metrics', type='Table', 
keyspace='mvtest', scope='testtable',
+  name='ViewLockAcquireTime')
+mv_memtable_size = make_mbean('metrics', type='Table', 
keyspace='mvtest', scope='testmv',
+  name='AllMemtablesHeapSize')
+mv_view_read_time = make_mbean('metrics', type='Table', 
keyspace='mvtest', scope='testmv',
+

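For readers who want to poke at these MBeans outside the test harness, a small hedged sketch built on the helpers used above (make_mbean and JolokiaAgent from the dtest tools/jmxutils module) might look like this; the import path and the default keyspace/table are assumptions, and the node argument is expected to be a started ccm node prepared with remove_perf_disable_shared_mem as in the test.

from tools.jmxutils import JolokiaAgent, make_mbean


def read_table_memtable_size(node, keyspace='mvtest', table='testtable'):
    """Read AllMemtablesHeapSize for one table, mirroring the calls in the test above."""
    mbean = make_mbean('metrics', type='Table', keyspace=keyspace, scope=table,
                       name='AllMemtablesHeapSize')
    with JolokiaAgent(node) as jmx:
        return jmx.read_attribute(mbean, "Value")
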
[37/50] cassandra git commit: Fix incorrect [2.1 <- 3.0] serialization of counter cells created in 2.0

2017-09-04 Thread paulo
Fix incorrect [2.1 <- 3.0] serialization of counter cells created in 2.0

Also fixes calculation of legacy counter update cells' serialized size.

patch by Aleksey Yeschenko; reviewed by Sylvain Lebresne for CASSANDRA-13691


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/55c4ca8b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/55c4ca8b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/55c4ca8b

Branch: refs/heads/master
Commit: 55c4ca8bd450b81da6eed5055981b629b55dea15
Parents: d9c8ceb
Author: Aleksey Yeschenko 
Authored: Sat Jul 15 01:21:04 2017 -0700
Committer: Aleksey Yeschenko 
Committed: Tue Aug 1 15:43:34 2017 +0100

--
 counter_tests.py | 61 +++
 1 file changed, 61 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/55c4ca8b/counter_tests.py
--
diff --git a/counter_tests.py b/counter_tests.py
index 80e6eca..c377060 100644
--- a/counter_tests.py
+++ b/counter_tests.py
@@ -13,6 +13,67 @@ from tools.decorators import since
 
 class TestCounters(Tester):
 
+@since('3.0', max_version='3.12')
+def test_13691(self):
+"""
+2.0 -> 2.1 -> 3.0 counters upgrade test
+@jira_ticket CASSANDRA-13691
+"""
+cluster = self.cluster
+default_install_dir = cluster.get_install_dir()
+
+#
+# set up a 2.0 cluster with 3 nodes and set up schema
+#
+
+cluster.set_install_dir(version='2.0.17')
+cluster.populate(3)
+cluster.start()
+
+node1, node2, node3 = cluster.nodelist()
+
+session = self.patient_cql_connection(node1)
+session.execute("""
+CREATE KEYSPACE test
+WITH replication = {'class': 'SimpleStrategy', 
'replication_factor': 3};
+""")
+session.execute("CREATE TABLE test.test (id int PRIMARY KEY, c 
counter);")
+
+#
+# generate some 2.0 counter columns with local shards
+#
+
+query = "UPDATE test.test SET c = c + 1 WHERE id = ?"
+prepared = session.prepare(query)
+for i in range(0, 1000):
+session.execute(prepared, [i])
+
+cluster.flush()
+cluster.stop()
+
+#
+# upgrade cluster to 2.1
+#
+
+cluster.set_install_dir(version='2.1.17')
+cluster.start();
+cluster.nodetool("upgradesstables")
+
+#
+# upgrade node3 to current (3.0.x or 3.11.x)
+#
+
+node3.stop(wait_other_notice=True)
+node3.set_install_dir(install_dir=default_install_dir)
+node3.start(wait_other_notice=True)
+
+#
+# with a 2.1 coordinator, try to read the table with CL.ALL
+#
+
+session = self.patient_cql_connection(node1, 
consistency_level=ConsistencyLevel.ALL)
+assert_one(session, "SELECT COUNT(*) FROM test.test", [1000])
+
 def simple_increment_test(self):
 """ Simple incrementation test (Created for #3465, that wasn't a bug) 
"""
 cluster = self.cluster





[35/50] cassandra git commit: Added test to verify indexes are not rebuilt at startup if not actually needed.

2017-09-04 Thread paulo
Added test to verify indexes are not rebuilt at startup if not actually needed.


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b724df80
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b724df80
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b724df80

Branch: refs/heads/master
Commit: b724df80d3bbb55b6b41845633e3a9034116f3be
Parents: 894bc92
Author: Sergio Bossa 
Authored: Mon Jul 24 14:08:32 2017 +0100
Committer: Sergio Bossa 
Committed: Tue Jul 25 18:47:50 2017 +0100

--
 secondary_indexes_test.py | 69 --
 tools/data.py | 18 ++-
 2 files changed, 37 insertions(+), 50 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b724df80/secondary_indexes_test.py
--
diff --git a/secondary_indexes_test.py b/secondary_indexes_test.py
index 1edd30e..11cd3af 100644
--- a/secondary_indexes_test.py
+++ b/secondary_indexes_test.py
@@ -14,11 +14,10 @@ from cassandra.query import BatchStatement, SimpleStatement
 from dtest import (DISABLE_VNODES, OFFHEAP_MEMTABLES, DtestTimeoutError,
Tester, debug, CASSANDRA_VERSION_FROM_BUILD, create_ks, 
create_cf)
 from tools.assertions import assert_bootstrap_state, assert_invalid, 
assert_none, assert_one, assert_row_count
-from tools.data import index_is_built, rows_to_list
+from tools.data import block_until_index_is_built, index_is_built, rows_to_list
 from tools.decorators import since
 from tools.misc import new_node
 
-
 class TestSecondaryIndexes(Tester):
 
 @staticmethod
@@ -306,28 +305,14 @@ class TestSecondaryIndexes(Tester):
 lookup_value = session.execute('select "C0" from standard1 limit 
1')[0].C0
 session.execute('CREATE INDEX ix_c0 ON standard1("C0");')
 
-start = time.time()
-while time.time() < start + 30:
-debug("waiting for index to build")
-time.sleep(1)
-if index_is_built(node1, session, 'keyspace1', 'standard1', 
'ix_c0'):
-break
-else:
-raise DtestTimeoutError()
+block_until_index_is_built(node1, session, 'keyspace1', 'standard1', 
'ix_c0')
 
 stmt = session.prepare('select * from standard1 where "C0" = ?')
 self.assertEqual(1, len(list(session.execute(stmt, [lookup_value]
 before_files = self._index_sstables_files(node1, 'keyspace1', 
'standard1', 'ix_c0')
 
 node1.nodetool("rebuild_index keyspace1 standard1 ix_c0")
-start = time.time()
-while time.time() < start + 30:
-debug("waiting for index to rebuild")
-time.sleep(1)
-if index_is_built(node1, session, 'keyspace1', 'standard1', 
'ix_c0'):
-break
-else:
-raise DtestTimeoutError()
+block_until_index_is_built(node1, session, 'keyspace1', 'standard1', 
'ix_c0')
 
 after_files = self._index_sstables_files(node1, 'keyspace1', 
'standard1', 'ix_c0')
 self.assertNotEqual(before_files, after_files)
@@ -447,39 +432,39 @@ class TestSecondaryIndexes(Tester):
'Cannot execute this query as it might involve data 
filtering')
 
 @since('4.0')
-def test_index_is_not_always_rebuilt_at_start(self):
+def test_index_is_not_rebuilt_at_restart(self):
 """
-@jira_ticket CASSANDRA-10130
+@jira_ticket CASSANDRA-13725
 
-Tests the management of index status during manual index rebuilding 
failures.
+Tests the index is not rebuilt at restart if already built.
 """
 
 cluster = self.cluster
-cluster.populate(1, 
install_byteman=True).start(wait_for_binary_proto=True)
+cluster.populate(1).start(wait_for_binary_proto=True)
 node = cluster.nodelist()[0]
 
 session = self.patient_cql_connection(node)
 create_ks(session, 'k', 1)
 session.execute("CREATE TABLE k.t (k int PRIMARY KEY, v int)")
-session.execute("CREATE INDEX idx ON k.t(v)")
 session.execute("INSERT INTO k.t(k, v) VALUES (0, 1)")
-session.execute("INSERT INTO k.t(k, v) VALUES (2, 3)")
 
-# Verify that the index is marked as built and it can answer queries
+debug("Create the index")
+session.execute("CREATE INDEX idx ON k.t(v)")
+block_until_index_is_built(node, session, 'k', 't', 'idx')
+before_files = self._index_sstables_files(node, 'k', 't', 'idx')
+
+debug("Verify the index is marked as built and it can be queried")
 assert_one(session, """SELECT * FROM system."IndexInfo" WHERE 
table_name='k'""", ['k', 'idx'])
 assert_one(session, "SELECT * FROM k.t WHERE v = 1", [0, 1])
 
-# Resta

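The new block_until_index_is_built helper referenced above replaces the inline polling loops; a plausible shape for it, reconstructed from the loop it removes (a sketch under that assumption, not the actual tools/data.py body), is:

import time

from dtest import DtestTimeoutError, debug
from tools.data import index_is_built


def block_until_index_is_built(node, session, keyspace, table, index, timeout=30):
    """Poll index_is_built() once a second until the index is built or we time out."""
    start = time.time()
    while time.time() < start + timeout:
        debug("waiting for index to build")
        time.sleep(1)
        if index_is_built(node, session, keyspace, table, index):
            break
    else:
        raise DtestTimeoutError()
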
[44/50] cassandra git commit: update dtests to support netty-based internode messaging/streaming

2017-09-04 Thread paulo
update dtests to support netty-based internode messaging/streaming

patch by jasobrown, reviewed by Marcus Eriksson for CASSANDRA-13635


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1a0e2660
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1a0e2660
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1a0e2660

Branch: refs/heads/master
Commit: 1a0e266038e75930c69842e338c6a6ee196f721c
Parents: b8842b9
Author: Jason Brown 
Authored: Fri Jun 16 05:03:36 2017 -0700
Committer: Jason Brown 
Committed: Tue Aug 22 13:56:41 2017 -0700

--
 bootstrap_test.py| 11 ---
 byteman/4.0/decommission_failure_inject.btm  | 17 +
 .../4.0/inject_failure_streaming_to_node2.btm| 17 +
 byteman/4.0/stream_failure.btm   | 17 +
 byteman/decommission_failure_inject.btm  | 17 -
 byteman/inject_failure_streaming_to_node2.btm| 17 -
 byteman/pre4.0/decommission_failure_inject.btm   | 17 +
 .../pre4.0/inject_failure_streaming_to_node2.btm | 17 +
 byteman/pre4.0/stream_failure.btm| 17 +
 byteman/stream_failure.btm   | 17 -
 native_transport_ssl_test.py |  2 +-
 nodetool_test.py |  8 +---
 rebuild_test.py  |  5 -
 replace_address_test.py  | 10 +++---
 secondary_indexes_test.py| 13 +++--
 sslnodetonode_test.py| 19 +--
 topology_test.py |  5 -
 17 files changed, 151 insertions(+), 75 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1a0e2660/bootstrap_test.py
--
diff --git a/bootstrap_test.py b/bootstrap_test.py
index 1d149e6..54c49c1 100644
--- a/bootstrap_test.py
+++ b/bootstrap_test.py
@@ -148,8 +148,10 @@ class TestBootstrap(BaseBootstrapTest):
 2*streaming_keep_alive_period_in_secs to receive a single sstable
 """
 cluster = self.cluster
-
cluster.set_configuration_options(values={'streaming_socket_timeout_in_ms': 
1000,
-  
'streaming_keep_alive_period_in_secs': 2})
+yaml_opts = {'streaming_keep_alive_period_in_secs': 2}
+if cluster.version() < '4.0':
+yamp_opts['streaming_socket_timeout_in_ms'] = 1000
+cluster.set_configuration_options(values=yaml_opts)
 
 # Create a single node cluster
 cluster.populate(1)
@@ -306,7 +308,10 @@ class TestBootstrap(BaseBootstrapTest):
 
 cluster.start(wait_other_notice=True)
 # kill stream to node3 in the middle of streaming to let it fail
-node1.byteman_submit(['./byteman/stream_failure.btm'])
+if cluster.version() < '4.0':
+node1.byteman_submit(['./byteman/pre4.0/stream_failure.btm'])
+else:
+node1.byteman_submit(['./byteman/4.0/stream_failure.btm'])
 node1.stress(['write', 'n=1K', 'no-warmup', 'cl=TWO', '-schema', 
'replication(factor=2)', '-rate', 'threads=50'])
 cluster.flush()
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1a0e2660/byteman/4.0/decommission_failure_inject.btm
--
diff --git a/byteman/4.0/decommission_failure_inject.btm 
b/byteman/4.0/decommission_failure_inject.btm
new file mode 100644
index 000..a6418fc
--- /dev/null
+++ b/byteman/4.0/decommission_failure_inject.btm
@@ -0,0 +1,17 @@
+#
+# Inject decommission failure to fail streaming from 127.0.0.1
+#
+# Before start streaming files in `StreamSession#onInitializationComplete()` 
method,
+# interrupt streaming by throwing RuntimeException.
+#
+RULE inject decommission failure
+CLASS org.apache.cassandra.streaming.StreamSession
+METHOD prepareSynAck
+AT INVOKE startStreamingFiles
+BIND peer = $0.peer
+# set flag to only run this rule once.
+IF peer.equals(InetAddress.getByName("127.0.0.1")) AND NOT flagged("done")
+DO
+   flag("done");
+   throw new java.lang.RuntimeException("Triggering network failure")
+ENDRULE
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1a0e2660/byteman/4.0/inject_failure_streaming_to_node2.btm
--
diff --git a/byteman/4.0/inject_failure_streaming_to_node2.btm 
b/byteman/4.0/inject_failure_streaming_to_node2.btm
new file mode 100644
index 000..761950f
--- /dev/null
+++ b/byteman/4.0/inject_failure_streaming_to_node2.bt

[49/50] cassandra git commit: ninja-fix misspelled variable name

2017-09-04 Thread paulo
ninja-fix misspelled variable name


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/19b6613d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/19b6613d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/19b6613d

Branch: refs/heads/master
Commit: 19b6613d7c1cd220432af1157b07dbba8fd4a0bb
Parents: 2ad557d
Author: Jason Brown 
Authored: Tue Aug 29 08:52:25 2017 -0700
Committer: Jason Brown 
Committed: Tue Aug 29 08:52:25 2017 -0700

--
 bootstrap_test.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/19b6613d/bootstrap_test.py
--
diff --git a/bootstrap_test.py b/bootstrap_test.py
index 54c49c1..d29390c 100644
--- a/bootstrap_test.py
+++ b/bootstrap_test.py
@@ -150,7 +150,7 @@ class TestBootstrap(BaseBootstrapTest):
 cluster = self.cluster
 yaml_opts = {'streaming_keep_alive_period_in_secs': 2}
 if cluster.version() < '4.0':
-yamp_opts['streaming_socket_timeout_in_ms'] = 1000
+yaml_opts['streaming_socket_timeout_in_ms'] = 1000
 cluster.set_configuration_options(values=yaml_opts)
 
 # Create a single node cluster





[36/50] cassandra git commit: Add tests for MVs when a column in the base table is renamed with ALTER TABLE

2017-09-04 Thread paulo
Add tests for MVs when a column in the base table is renamed with ALTER TABLE

patch by Andres de la Peña; reviewed by Zhao Yang for CASSANDRA-12952
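
A condensed, hedged sketch of the behaviour these tests exercise, written as standalone Python-driver code rather than dtest code; the contact point and the simplified schema are assumptions, and the rename/query pattern follows the test added below.

from cassandra.cluster import Cluster

session = Cluster(['127.0.0.1']).connect()
session.execute("CREATE KEYSPACE IF NOT EXISTS ks WITH replication = "
                "{'class': 'SimpleStrategy', 'replication_factor': 1}")
session.execute("CREATE TABLE IF NOT EXISTS ks.users "
                "(username text PRIMARY KEY, state text, birth_year int, gender text)")
session.execute("CREATE MATERIALIZED VIEW IF NOT EXISTS ks.users_by_state AS "
                "SELECT * FROM ks.users "
                "WHERE state IS NOT NULL AND username IS NOT NULL "
                "PRIMARY KEY (state, username)")

# Renaming the base PK column should be applied atomically to the view as well,
# so the view is queried with the new column name afterwards.
session.execute("ALTER TABLE ks.users RENAME username TO user")
rows = session.execute("SELECT state, user FROM ks.users_by_state WHERE state = 'TX'")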


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d9c8cebc
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d9c8cebc
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d9c8cebc

Branch: refs/heads/master
Commit: d9c8cebc2d7907d04efb1ce81bda9e2fa2780530
Parents: b724df8
Author: Andrés de la Peña 
Authored: Fri Jul 28 11:56:38 2017 +0100
Committer: Andrés de la Peña 
Committed: Fri Jul 28 11:56:38 2017 +0100

--
 byteman/merge_schema_failure_3x.btm | 12 ++
 byteman/merge_schema_failure_4x.btm | 12 ++
 materialized_views_test.py  | 73 +++-
 3 files changed, 95 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d9c8cebc/byteman/merge_schema_failure_3x.btm
--
diff --git a/byteman/merge_schema_failure_3x.btm 
b/byteman/merge_schema_failure_3x.btm
new file mode 100644
index 000..d4c9b36
--- /dev/null
+++ b/byteman/merge_schema_failure_3x.btm
@@ -0,0 +1,12 @@
+#
+# Inject node failure on merge schema exit.
+#
+RULE inject node failure on merge schema exit
+CLASS org.apache.cassandra.schema.SchemaKeyspace
+METHOD mergeSchema
+AT EXIT
+# set flag to only run this rule once.
+IF TRUE
+DO
+   System.exit(0)
+ENDRULE

http://git-wip-us.apache.org/repos/asf/cassandra/blob/d9c8cebc/byteman/merge_schema_failure_4x.btm
--
diff --git a/byteman/merge_schema_failure_4x.btm 
b/byteman/merge_schema_failure_4x.btm
new file mode 100644
index 000..bee5c3c
--- /dev/null
+++ b/byteman/merge_schema_failure_4x.btm
@@ -0,0 +1,12 @@
+#
+# Inject node failure on merge schema exit.
+#
+RULE inject node failure on merge schema exit
+CLASS org.apache.cassandra.schema.Schema
+METHOD merge
+AT EXIT
+# set flag to only run this rule once.
+IF TRUE
+DO
+   System.exit(0)
+ENDRULE

http://git-wip-us.apache.org/repos/asf/cassandra/blob/d9c8cebc/materialized_views_test.py
--
diff --git a/materialized_views_test.py b/materialized_views_test.py
index 0c9cdcb..306d719 100644
--- a/materialized_views_test.py
+++ b/materialized_views_test.py
@@ -8,6 +8,7 @@ from multiprocessing import Process, Queue
 from unittest import skip, skipIf
 
 from cassandra import ConsistencyLevel
+from cassandra.cluster import NoHostAvailable
 from cassandra.concurrent import execute_concurrent_with_args
 from cassandra.cluster import Cluster
 from cassandra.query import SimpleStatement
@@ -40,9 +41,9 @@ class TestMaterializedViews(Tester):
 @since 3.0
 """
 
-def prepare(self, user_table=False, rf=1, options=None, nodes=3, **kwargs):
+def prepare(self, user_table=False, rf=1, options=None, nodes=3, 
install_byteman=False, **kwargs):
 cluster = self.cluster
-cluster.populate([nodes, 0])
+cluster.populate([nodes, 0], install_byteman=install_byteman)
 if options:
 cluster.set_configuration_options(values=options)
 cluster.start()
@@ -773,6 +774,74 @@ class TestMaterializedViews(Tester):
 ['TX', 'user1']
 )
 
+def rename_column_test(self):
+"""
+Test that a materialized view created with a 'SELECT *' works as 
expected when renaming a column
+@expected_result The column is also renamed in the view.
+"""
+
+session = self.prepare(user_table=True)
+
+self._insert_data(session)
+
+assert_one(
+session,
+"SELECT * FROM users_by_state WHERE state = 'TX' AND username = 
'user1'",
+['TX', 'user1', 1968, 'f', 'ch@ngem3a', None]
+)
+
+session.execute("ALTER TABLE users RENAME username TO user")
+
+results = list(session.execute("SELECT * FROM users_by_state WHERE 
state = 'TX' AND user = 'user1'"))
+self.assertEqual(len(results), 1)
+self.assertTrue(hasattr(results[0], 'user'), 'Column "user" not found')
+assert_one(
+session,
+"SELECT state, user, birth_year, gender FROM users_by_state WHERE 
state = 'TX' AND user = 'user1'",
+['TX', 'user1', 1968, 'f']
+)
+
+def rename_column_atomicity_test(self):
+"""
+Test that column renaming is atomically done between a table and its 
materialized views
+@jira_ticket CASSANDRA-12952
+"""
+
+session = self.prepare(nodes=1, user_table=True, install_byteman=True)
+node = self.cluster.nodelist()[0]
+
+self._insert_data(session)
+
+assert_o

[33/50] cassandra git commit: Handle index order in describe output on 2.1

2017-09-04 Thread paulo
Handle index order in describe output on 2.1

Patch by Joel Knighton; reviewed by Philip Thompson


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d040629b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d040629b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d040629b

Branch: refs/heads/master
Commit: d040629b2a71286105346c3cc637f8d1e16cf0a1
Parents: cc355ff
Author: Joel Knighton 
Authored: Wed Jul 12 16:56:08 2017 -0500
Committer: Joel Knighton 
Committed: Thu Jul 13 15:17:52 2017 -0500

--
 cqlsh_tests/cqlsh_tests.py | 13 +++--
 1 file changed, 7 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d040629b/cqlsh_tests/cqlsh_tests.py
--
diff --git a/cqlsh_tests/cqlsh_tests.py b/cqlsh_tests/cqlsh_tests.py
index e7bc11c..418b9f7 100644
--- a/cqlsh_tests/cqlsh_tests.py
+++ b/cqlsh_tests/cqlsh_tests.py
@@ -913,6 +913,9 @@ VALUES (4, blobAsInt(0x), '', blobAsBigint(0x), 0x, 
blobAsBoolean(0x), blobAsDec
 return ret + "\n" + col_idx_def
 
 def get_users_table_output(self):
+quoted_index_output = self.get_index_output('"QuotedNameIndex"', 
'test', 'users', 'firstname')
+myindex_output = self.get_index_output('myindex', 'test', 'users', 
'age')
+
 if self.cluster.version() >= LooseVersion('3.9'):
 return """
 CREATE TABLE test.users (
@@ -934,8 +937,7 @@ VALUES (4, blobAsInt(0x), '', blobAsBigint(0x), 0x, 
blobAsBoolean(0x), blobAsDec
 AND min_index_interval = 128
 AND read_repair_chance = 0.0
 AND speculative_retry = '99PERCENTILE';
-""" + self.get_index_output('"QuotedNameIndex"', 'test', 'users', 
'firstname') \
-   + "\n" + self.get_index_output('myindex', 'test', 'users', 
'age')
+""" + quoted_index_output + "\n" + myindex_output
 elif self.cluster.version() >= LooseVersion('3.0'):
 return """
 CREATE TABLE test.users (
@@ -957,8 +959,7 @@ VALUES (4, blobAsInt(0x), '', blobAsBigint(0x), 0x, 
blobAsBoolean(0x), blobAsDec
 AND min_index_interval = 128
 AND read_repair_chance = 0.0
 AND speculative_retry = '99PERCENTILE';
-""" + self.get_index_output('"QuotedNameIndex"', 'test', 'users', 
'firstname') \
-   + "\n" + self.get_index_output('myindex', 'test', 'users', 
'age')
+""" + quoted_index_output + "\n" + myindex_output
 else:
 return """
 CREATE TABLE test.users (
@@ -979,8 +980,8 @@ VALUES (4, blobAsInt(0x), '', blobAsBigint(0x), 0x, 
blobAsBoolean(0x), blobAsDec
 AND min_index_interval = 128
 AND read_repair_chance = 0.0
 AND speculative_retry = '99.0PERCENTILE';
-""" + self.get_index_output('QuotedNameIndex', 'test', 'users', 
'firstname') \
-   + "\n" + self.get_index_output('myindex', 'test', 'users', 
'age')
+""" + (quoted_index_output + "\n" + myindex_output if 
self.cluster.version() >= LooseVersion('2.2') else
+   myindex_output + "\n" + quoted_index_output)
 
 def get_index_output(self, index, ks, table, col):
 # a quoted index name (e.g. "FooIndex") is only correctly echoed by 
DESCRIBE





[50/50] cassandra git commit: CASSANDRA-11500: add dtest for complex update/delete tombstones in MV

2017-09-04 Thread paulo
CASSANDRA-11500: add dtest for complex update/delete tombstones in MV


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6d77ace5
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6d77ace5
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6d77ace5

Branch: refs/heads/master
Commit: 6d77ace5361f020ba182072ade9f4ab98025c213
Parents: 19b6613
Author: Zhao Yang 
Authored: Mon May 1 23:24:12 2017 +0800
Committer: Paulo Motta 
Committed: Tue Sep 5 00:39:48 2017 -0500

--
 materialized_views_test.py | 308 +++-
 1 file changed, 307 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6d77ace5/materialized_views_test.py
--
diff --git a/materialized_views_test.py b/materialized_views_test.py
index 574d90f..637124d 100644
--- a/materialized_views_test.py
+++ b/materialized_views_test.py
@@ -66,6 +66,14 @@ class TestMaterializedViews(Tester):
 
 return session
 
+def update_view(self, session, query, flush, compact=False):
+session.execute(query)
+self._replay_batchlogs()
+if flush:
+self.cluster.flush()
+if compact:
+self.cluster.compact()
+
 def _settle_nodes(self):
 debug("Settling all nodes")
 stage_match = 
re.compile("(?P\S+)\s+(?P\d+)\s+(?P\d+)\s+(?P\d+)\s+(?P\d+)\s+(?P\d+)")
@@ -334,7 +342,7 @@ class TestMaterializedViews(Tester):
 assert_invalid(
 session,
 "ALTER TABLE ks.users DROP state;",
-"Cannot drop column state, depended on by materialized views"
+"Cannot drop column state on base table with materialized views."
 )
 
 def drop_table_test(self):
@@ -974,6 +982,304 @@ class TestMaterializedViews(Tester):
 cl=ConsistencyLevel.ALL
 )
 
+@since('3.0')
+def test_no_base_column_in_view_pk_complex_timestamp_with_flush(self):
+self._test_no_base_column_in_view_pk_complex_timestamp(flush=True)
+
+@since('3.0')
+def test_no_base_column_in_view_pk_complex_timestamp_without_flush(self):
+self._test_no_base_column_in_view_pk_complex_timestamp(flush=False)
+
+def _test_no_base_column_in_view_pk_complex_timestamp(self, flush):
+"""
+Able to shadow old view row if all columns in base are removed 
including unselected
+Able to recreate view row if at least one selected column alive
+
+@jira_ticket CASSANDRA-11500
+"""
+session = self.prepare(rf=3, nodes=3, 
options={'hinted_handoff_enabled': False}, 
consistency_level=ConsistencyLevel.QUORUM)
+node1, node2, node3 = self.cluster.nodelist()
+
+session.execute('USE ks')
+session.execute("CREATE TABLE t (k int, c int, a int, b int, e int, f 
int, primary key(k, c))")
+session.execute(("CREATE MATERIALIZED VIEW mv AS SELECT k,c,a,b FROM t 
"
+ "WHERE k IS NOT NULL AND c IS NOT NULL PRIMARY KEY 
(c, k)"))
+session.cluster.control_connection.wait_for_schema_agreement()
+
+# update unselected, view row should be alive
+self.update_view(session, "UPDATE t USING TIMESTAMP 1 SET e=1 WHERE 
k=1 AND c=1;", flush)
+assert_one(session, "SELECT * FROM t", [1, 1, None, None, 1, None])
+assert_one(session, "SELECT * FROM mv", [1, 1, None, None])
+
+# remove unselected, add selected column, view row should be alive
+self.update_view(session, "UPDATE t USING TIMESTAMP 2 SET e=null, b=1 
WHERE k=1 AND c=1;", flush)
+assert_one(session, "SELECT * FROM t", [1, 1, None, 1, None, None])
+assert_one(session, "SELECT * FROM mv", [1, 1, None, 1])
+
+# remove selected column, view row is removed
+self.update_view(session, "UPDATE t USING TIMESTAMP 2 SET e=null, 
b=null WHERE k=1 AND c=1;", flush)
+assert_none(session, "SELECT * FROM t")
+assert_none(session, "SELECT * FROM mv")
+
+# update unselected with ts=3, view row should be alive
+self.update_view(session, "UPDATE t USING TIMESTAMP 3 SET f=1 WHERE 
k=1 AND c=1;", flush)
+assert_one(session, "SELECT * FROM t", [1, 1, None, None, None, 1])
+assert_one(session, "SELECT * FROM mv", [1, 1, None, None])
+
+# insert livenesssInfo, view row should be alive
+self.update_view(session, "INSERT INTO t(k,c) VALUES(1,1) USING 
TIMESTAMP 3", flush)
+assert_one(session, "SELECT * FROM t", [1, 1, None, None, None, 1])
+assert_one(session, "SELECT * FROM mv", [1, 1, None, None])
+
+# remove unselected, view row should be alive because of base 
livenessInfo alive
+self.update_view(session, "UPDATE

[48/50] cassandra git commit: Fix short read protection

2017-09-04 Thread paulo
Fix short read protection

patch by Aleksey Yeschenko; reviewed by Benedict Elliott Smith for
CASSANDRA-13747


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2ad557df
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2ad557df
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2ad557df

Branch: refs/heads/master
Commit: 2ad557dff9f9d4a3c09f0781b3eeeb5fe75b57d0
Parents: ac9c956
Author: Aleksey Yeschenko 
Authored: Mon Aug 7 14:06:05 2017 +0100
Committer: Aleksey Yeschenko 
Committed: Tue Aug 29 12:47:10 2017 +0100

--
 consistency_test.py | 53 
 1 file changed, 53 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2ad557df/consistency_test.py
--
diff --git a/consistency_test.py b/consistency_test.py
index 9424b4c..b50d81b 100644
--- a/consistency_test.py
+++ b/consistency_test.py
@@ -772,6 +772,59 @@ class TestAccuracy(TestHelper):
 
 class TestConsistency(Tester):
 
+@since('3.0')
+def test_13747(self):
+"""
+@jira_ticket CASSANDRA-13747
+"""
+cluster = self.cluster
+
+# disable hinted handoff and set batch commit log so this doesn't 
interfere with the test
+cluster.set_configuration_options(values={'hinted_handoff_enabled': 
False})
+cluster.set_batch_commitlog(enabled=True)
+
+cluster.populate(2).start(wait_other_notice=True)
+node1, node2 = cluster.nodelist()
+
+session = self.patient_cql_connection(node1)
+
+query = "CREATE KEYSPACE IF NOT EXISTS test WITH replication = 
{'class': 'NetworkTopologyStrategy', 'datacenter1': 2};"
+session.execute(query)
+
+query = "CREATE TABLE IF NOT EXISTS test.test (id int PRIMARY KEY);"
+session.execute(query)
+
+#
+# populate the table with 10 rows:
+#
+
+# -7509452495886106294 |  5
+# -4069959284402364209 |  1 x
+# -3799847372828181882 |  8
+# -3485513579396041028 |  0 x
+# -3248873570005575792 |  2
+# -2729420104000364805 |  4 x
+#  1634052884888577606 |  7
+#  2705480034054113608 |  6 x
+#  3728482343045213994 |  9
+#  9010454139840013625 |  3 x
+
+stmt = session.prepare("INSERT INTO test.test (id) VALUES (?);")
+for id in range(0, 10):
+session.execute(stmt, [id], ConsistencyLevel.ALL)
+
+# with node2 down and hints disabled, delete every other row on node1
+node2.stop(wait_other_notice=True)
+session.execute("DELETE FROM test.test WHERE id IN (1, 0, 4, 6, 3);")
+
+# with both nodes up, do a DISTINCT range query with CL.ALL;
+# prior to CASSANDRA-13747 this would cause an assertion in short read 
protection code
+node2.start(wait_other_notice=True)
+stmt = SimpleStatement("SELECT DISTINCT token(id), id FROM test.test;",
+   consistency_level = ConsistencyLevel.ALL)
+result = list(session.execute(stmt))
+assert_length_equal(result, 5)
+
 def short_read_test(self):
 """
 @jira_ticket CASSANDRA-9460


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[45/50] cassandra git commit: Add test to verify that it is possible to list roles after a successful login (CASSANDRA-13640)

2017-09-04 Thread paulo
Add test to verify that it is possible to list roles after a successful login 
(CASSANDRA-13640)


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/da6ad831
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/da6ad831
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/da6ad831

Branch: refs/heads/master
Commit: da6ad8317e18ebaa5e8b428df79d1da086a19dd9
Parents: 1a0e266
Author: Andrés de la Peña 
Authored: Thu Aug 24 16:47:58 2017 +0100
Committer: Andrés de la Peña 
Committed: Thu Aug 24 16:47:58 2017 +0100

--
 cqlsh_tests/cqlsh_tests.py | 16 
 1 file changed, 16 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/da6ad831/cqlsh_tests/cqlsh_tests.py
--
diff --git a/cqlsh_tests/cqlsh_tests.py b/cqlsh_tests/cqlsh_tests.py
index 418b9f7..8b66a53 100644
--- a/cqlsh_tests/cqlsh_tests.py
+++ b/cqlsh_tests/cqlsh_tests.py
@@ -2032,3 +2032,19 @@ class CqlLoginTest(Tester):
 cqlsh_options=['-u', 'cassandra', '-p', 'cassandra'])
 self.assertEqual([x for x in cqlsh_stdout.split() if x], ['ks1table'])
 self.assert_login_not_allowed('user1', cqlsh_stderr)
+
+def test_list_roles_after_login(self):
+"""
+@jira_ticket CASSANDRA-13640
+
+Verifies that it is possible to list roles after a successful login.
+"""
+out, err, _ = self.node1.run_cqlsh(
+'''
+CREATE ROLE super WITH superuser = true AND password = 'p' AND 
login = true;
+LOGIN super 'p';
+LIST ROLES;
+''',
+cqlsh_options=['-u', 'cassandra', '-p', 'cassandra'])
+self.assertTrue('super' in out)
+self.assertEqual('', err)





[31/50] cassandra git commit: Revert "Adds the ability to use uncompressed chunks in compressed files"

2017-09-04 Thread paulo
Revert "Adds the ability to use uncompressed chunks in compressed files"

This reverts commit 058b95289bf815495fced0ac55a78bcceceea9fa.


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f1b0ba8a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f1b0ba8a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f1b0ba8a

Branch: refs/heads/master
Commit: f1b0ba8a1d60937b79ccac43b23c887da8ced32a
Parents: 6f4e41e
Author: Joel Knighton 
Authored: Wed Jul 12 12:11:02 2017 -0500
Committer: Joel Knighton 
Committed: Wed Jul 12 12:11:02 2017 -0500

--
 cqlsh_tests/cqlsh_tests.py | 44 ++---
 1 file changed, 2 insertions(+), 42 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f1b0ba8a/cqlsh_tests/cqlsh_tests.py
--
diff --git a/cqlsh_tests/cqlsh_tests.py b/cqlsh_tests/cqlsh_tests.py
index dee1891..e7bc11c 100644
--- a/cqlsh_tests/cqlsh_tests.py
+++ b/cqlsh_tests/cqlsh_tests.py
@@ -847,25 +847,7 @@ VALUES (4, blobAsInt(0x), '', blobAsBigint(0x), 0x, 
blobAsBoolean(0x), blobAsDec
 PRIMARY KEY (id, col)
 """
 
-if self.cluster.version() >= LooseVersion('4.0'):
-ret += """
-) WITH CLUSTERING ORDER BY (col ASC)
-AND bloom_filter_fp_chance = 0.01
-AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
-AND comment = ''
-AND compaction = {'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
'max_threshold': '32', 'min_threshold': '4'}
-AND compression = {'chunk_length_in_kb': '64', 'class': 
'org.apache.cassandra.io.compress.LZ4Compressor', 'min_compress_ratio': '1.1'}
-AND crc_check_chance = 1.0
-AND dclocal_read_repair_chance = 0.1
-AND default_time_to_live = 0
-AND gc_grace_seconds = 864000
-AND max_index_interval = 2048
-AND memtable_flush_period_in_ms = 0
-AND min_index_interval = 128
-AND read_repair_chance = 0.0
-AND speculative_retry = '99PERCENTILE';
-"""
-elif self.cluster.version() >= LooseVersion('3.9'):
+if self.cluster.version() >= LooseVersion('3.9'):
 ret += """
 ) WITH CLUSTERING ORDER BY (col ASC)
 AND bloom_filter_fp_chance = 0.01
@@ -931,29 +913,7 @@ VALUES (4, blobAsInt(0x), '', blobAsBigint(0x), 0x, 
blobAsBoolean(0x), blobAsDec
 return ret + "\n" + col_idx_def
 
 def get_users_table_output(self):
-if self.cluster.version() >= LooseVersion('4.0'):
-return """
-CREATE TABLE test.users (
-userid text PRIMARY KEY,
-age int,
-firstname text,
-lastname text
-) WITH bloom_filter_fp_chance = 0.01
-AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
-AND comment = ''
-AND compaction = {'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
'max_threshold': '32', 'min_threshold': '4'}
-AND compression = {'chunk_length_in_kb': '64', 'class': 
'org.apache.cassandra.io.compress.LZ4Compressor', 'min_compress_ratio': '1.1'}
-AND crc_check_chance = 1.0
-AND dclocal_read_repair_chance = 0.1
-AND default_time_to_live = 0
-AND gc_grace_seconds = 864000
-AND max_index_interval = 2048
-AND memtable_flush_period_in_ms = 0
-AND min_index_interval = 128
-AND read_repair_chance = 0.0
-AND speculative_retry = '99PERCENTILE';
-""" + self.get_index_output('myindex', 'test', 'users', 'age')
-elif self.cluster.version() >= LooseVersion('3.9'):
+if self.cluster.version() >= LooseVersion('3.9'):
 return """
 CREATE TABLE test.users (
 userid text PRIMARY KEY,


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[26/50] cassandra git commit: Expand 9673 tests to also run on 3.x

2017-09-04 Thread paulo
Expand 9673 tests to also run on 3.x


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d2d9e6d4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d2d9e6d4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d2d9e6d4

Branch: refs/heads/master
Commit: d2d9e6d4ef638233b8dc403c25c2265cc40df9be
Parents: 50e1e7b
Author: Philip Thompson 
Authored: Tue Jul 4 15:27:28 2017 +0200
Committer: Philip Thompson 
Committed: Wed Jul 5 11:51:36 2017 +0200

--
 batch_test.py | 20 ++--
 1 file changed, 10 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d2d9e6d4/batch_test.py
--
diff --git a/batch_test.py b/batch_test.py
index 6dcf786..e67d185 100644
--- a/batch_test.py
+++ b/batch_test.py
@@ -285,51 +285,51 @@ class TestBatch(Tester):
 assert_one(session, "SELECT * FROM users", [0, 'Jack', 'Sparrow'])
 assert_one(session, "SELECT * FROM dogs", [0, 'Pluto'])
 
-@since('3.0', max_version='3.0.x')
+@since('3.0', max_version='3.x')
 def logged_batch_compatibility_1_test(self):
 """
 @jira_ticket CASSANDRA-9673, test that logged batches still work with 
a mixed version cluster.
 
-Here we have one 3.0 node and two 2.2 nodes and we send the batch 
request to the 3.0 node.
+Here we have one 3.0/3.x node and two 2.2 nodes and we send the batch 
request to the 3.0 node.
 """
 self._logged_batch_compatibility_test(0, 1, 
'github:apache/cassandra-2.2', 2, 4)
 
-@since('3.0', max_version='3.0.x')
+@since('3.0', max_version='3.x')
 @skipIf(sys.platform == 'win32', 'Windows production support only on 2.2+')
 def logged_batch_compatibility_2_test(self):
 """
 @jira_ticket CASSANDRA-9673, test that logged batches still work with 
a mixed version cluster.
 
-Here we have one 3.0 node and two 2.1 nodes and we send the batch 
request to the 3.0 node.
+Here we have one 3.0/3.x node and two 2.1 nodes and we send the batch 
request to the 3.0 node.
 """
 self._logged_batch_compatibility_test(0, 1, 
'github:apache/cassandra-2.1', 2, 3)
 
-@since('3.0', max_version='3.0.x')
+@since('3.0', max_version='3.x')
 @skipIf(sys.platform == 'win32', 'Windows production support only on 2.2+')
 def logged_batch_compatibility_3_test(self):
 """
 @jira_ticket CASSANDRA-9673, test that logged batches still work with 
a mixed version cluster.
 
-Here we have two 3.0 nodes and one 2.1 node and we send the batch 
request to the 3.0 node.
+Here we have two 3.0/3.x nodes and one 2.1 node and we send the batch 
request to the 3.0 node.
 """
 self._logged_batch_compatibility_test(0, 2, 
'github:apache/cassandra-2.1', 1, 3)
 
-@since('3.0', max_version='3.0.x')
+@since('3.0', max_version='3.x')
 def logged_batch_compatibility_4_test(self):
 """
 @jira_ticket CASSANDRA-9673, test that logged batches still work with 
a mixed version cluster.
 
-Here we have two 3.0 nodes and one 2.2 node and we send the batch 
request to the 2.2 node.
+Here we have two 3.0/3.x nodes and one 2.2 node and we send the batch 
request to the 2.2 node.
 """
 self._logged_batch_compatibility_test(2, 2, 
'github:apache/cassandra-2.2', 1, 4)
 
-@since('3.0', max_version='3.0.x')
+@since('3.0', max_version='3.x')
 @skipIf(sys.platform == 'win32', 'Windows production support only on 2.2+')
 def logged_batch_compatibility_5_test(self):
 """
 @jira_ticket CASSANDRA-9673, test that logged batches still work with 
a mixed version cluster.
 
-Here we have two 3.0 nodes and one 2.1 node and we send the batch 
request to the 2.1 node.
+Here we have two 3.0/3.x nodes and one 2.1 node and we send the batch 
request to the 2.1 node.
 """
 self._logged_batch_compatibility_test(2, 2, 
'github:apache/cassandra-2.1', 1, 3)
 


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[41/50] cassandra git commit: Restore <4.0 compatibility for digest mismatch log message matching

2017-09-04 Thread paulo
Restore <4.0 compatibility for digest mismatch log message matching


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/61cbd5cd
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/61cbd5cd
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/61cbd5cd

Branch: refs/heads/master
Commit: 61cbd5cdcb435503bcb828249cce60ca779995e0
Parents: 459943a
Author: Stefan Podkowinski 
Authored: Thu Aug 10 09:02:24 2017 +0200
Committer: Stefan Podkowinski 
Committed: Thu Aug 10 09:02:24 2017 +0200

--
 materialized_views_test.py | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/61cbd5cd/materialized_views_test.py
--
diff --git a/materialized_views_test.py b/materialized_views_test.py
index 79679ca..574d90f 100644
--- a/materialized_views_test.py
+++ b/materialized_views_test.py
@@ -1063,8 +1063,9 @@ class TestMaterializedViews(Tester):
 # execution happening
 
 # Look for messages like:
-# Digest mismatch: Mismatch for key DecoratedKey
-regex = r"Digest mismatch: Mismatch for key DecoratedKey"
+#  4.0+Digest mismatch: Mismatch for key DecoratedKey
+# <4.0 Digest mismatch: 
org.apache.cassandra.service.DigestMismatchException: Mismatch for key 
DecoratedKey
+regex = r"Digest mismatch: ([a-zA-Z.]+:\s)?Mismatch for key 
DecoratedKey"
 for event in trace.events:
 desc = event.description
 match = re.match(regex, desc)
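
For reference, a minimal standalone check of the relaxed pattern above; the two
sample strings are hand-written to mimic the 4.0+ and pre-4.0 message shapes,
not captured from a real trace:

import re

# Relaxed pattern from the diff above: the optional group absorbs the
# exception class prefix that pre-4.0 versions include in the message.
regex = r"Digest mismatch: ([a-zA-Z.]+:\s)?Mismatch for key DecoratedKey"

samples = [
    # 4.0+ style message (illustrative)
    "Digest mismatch: Mismatch for key DecoratedKey(...)",
    # pre-4.0 style message (illustrative)
    "Digest mismatch: org.apache.cassandra.service.DigestMismatchException: Mismatch for key DecoratedKey(...)",
]

for s in samples:
    print(bool(re.match(regex, s)))  # both print True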


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[42/50] cassandra git commit: Handle difference in sstablemetadata output for pending repairs following CASSANDRA-11483

2017-09-04 Thread paulo
Handle difference in sstablemetadata output for pending repairs following 
CASSANDRA-11483

Patch by Joel Knighton; reviewed by Blake Eggleston for CASSANDRA-13755


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/013efa11
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/013efa11
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/013efa11

Branch: refs/heads/master
Commit: 013efa11f3d7bd2e3f64a4a5a865ff5dad565552
Parents: 61cbd5c
Author: Joel Knighton 
Authored: Wed Aug 9 13:03:21 2017 -0500
Committer: Blake Eggleston 
Committed: Thu Aug 10 15:34:00 2017 -0700

--
 repair_tests/incremental_repair_test.py | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/013efa11/repair_tests/incremental_repair_test.py
--
diff --git a/repair_tests/incremental_repair_test.py 
b/repair_tests/incremental_repair_test.py
index a447d56..b081d44 100644
--- a/repair_tests/incremental_repair_test.py
+++ b/repair_tests/incremental_repair_test.py
@@ -34,7 +34,7 @@ class TestIncRepair(Tester):
 def _get_repaired_data(cls, node, keyspace):
 _sstable_name = compile('SSTable: (.+)')
 _repaired_at = compile('Repaired at: (\d+)')
-_pending_repair = compile('Pending repair: (null|[a-f0-9\-]+)')
+_pending_repair = compile('Pending repair: (\-\-|null|[a-f0-9\-]+)')
 _sstable_data = namedtuple('_sstabledata', ('name', 'repaired', 
'pending_id'))
 
 out = node.run_sstablemetadata(keyspace=keyspace).stdout
@@ -45,7 +45,7 @@ class TestIncRepair(Tester):
 repaired_times = [int(m.group(1)) for m in matches(_repaired_at)]
 
 def uuid_or_none(s):
-return None if s == 'null' else UUID(s)
+return None if s == 'null' or s == '--' else UUID(s)
 pending_repairs = [uuid_or_none(m.group(1)) for m in 
matches(_pending_repair)]
 assert names
 assert repaired_times
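
A tiny standalone sketch of the widened parsing above; the sample lines
(including the UUID) are made up, but they show 'null' and the newer '--'
marker both mapping to None:

import re
from uuid import UUID

_pending_repair = re.compile(r'Pending repair: (\-\-|null|[a-f0-9\-]+)')

def uuid_or_none(s):
    # '--' is what newer sstablemetadata prints when there is no pending repair
    return None if s == 'null' or s == '--' else UUID(s)

lines = [
    'Pending repair: null',
    'Pending repair: --',
    'Pending repair: 123e4567-e89b-12d3-a456-426655440000',  # illustrative id
]
for line in lines:
    print(uuid_or_none(_pending_repair.match(line).group(1)))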


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[09/50] cassandra git commit: Include quoted index names in describe test

2017-09-04 Thread paulo
Include quoted index names in describe test


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f2925484
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f2925484
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f2925484

Branch: refs/heads/master
Commit: f2925484f8e3375a3373b689b425a80f7ec54f36
Parents: 6540ba4
Author: Sam Tunnicliffe 
Authored: Thu Oct 27 09:25:26 2016 +0100
Committer: Philip Thompson 
Committed: Thu May 11 14:24:05 2017 -0400

--
 cqlsh_tests/cqlsh_tests.py | 17 +
 1 file changed, 13 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f2925484/cqlsh_tests/cqlsh_tests.py
--
diff --git a/cqlsh_tests/cqlsh_tests.py b/cqlsh_tests/cqlsh_tests.py
index caacaa5..7734848 100644
--- a/cqlsh_tests/cqlsh_tests.py
+++ b/cqlsh_tests/cqlsh_tests.py
@@ -688,6 +688,7 @@ VALUES (4, blobAsInt(0x), '', blobAsBigint(0x), 0x, 
blobAsBoolean(0x), blobAsDec
 CREATE KEYSPACE test WITH REPLICATION = {'class' : 
'SimpleStrategy', 'replication_factor' : 1};
 CREATE TABLE test.users ( userid text PRIMARY KEY, firstname 
text, lastname text, age int);
 CREATE INDEX myindex ON test.users (age);
+CREATE INDEX "QuotedNameIndex" on test.users (firstName);
 CREATE TABLE test.test (id int, col int, val text, PRIMARY 
KEY(id, col));
 CREATE INDEX ON test.test (col);
 CREATE INDEX ON test.test (val)
@@ -738,7 +739,8 @@ VALUES (4, blobAsInt(0x), '', blobAsBigint(0x), 0x, 
blobAsBoolean(0x), blobAsDec
 self.execute(cql='DESCRIBE test.myindex', expected_err="'myindex' not 
found in keyspace 'test'")
 self.execute(cql="""
 CREATE TABLE test.users ( userid text PRIMARY KEY, firstname 
text, lastname text, age int);
-CREATE INDEX myindex ON test.users (age)
+CREATE INDEX myindex ON test.users (age);
+CREATE INDEX "QuotedNameIndex" on test.users (firstname)
 """)
 self.execute(cql="DESCRIBE test.users", 
expected_output=self.get_users_table_output())
 self.execute(cql='DESCRIBE test.myindex', 
expected_output=self.get_index_output('myindex', 'test', 'users', 'age'))
@@ -748,6 +750,10 @@ VALUES (4, blobAsInt(0x), '', blobAsBigint(0x), 0x, 
blobAsBoolean(0x), blobAsDec
 self.execute(cql='DESCRIBE test.myindex', expected_err="'myindex' not 
found in keyspace 'test'")
 self.execute(cql='CREATE INDEX myindex ON test.users (age)')
 self.execute(cql='DESCRIBE INDEX test.myindex', 
expected_output=self.get_index_output('myindex', 'test', 'users', 'age'))
+self.execute(cql='DROP INDEX test."QuotedNameIndex"')
+self.execute(cql='DESCRIBE test."QuotedNameIndex"', 
expected_err="'QuotedNameIndex' not found in keyspace 'test'")
+self.execute(cql='CREATE INDEX "QuotedNameIndex" ON test.users 
(firstname)')
+self.execute(cql='DESCRIBE INDEX test."QuotedNameIndex"', 
expected_output=self.get_index_output('"QuotedNameIndex"', 'test', 'users', 
'firstname'))
 
 # Alter table. Renaming indexed columns is not allowed, and since 3.0 
neither is dropping them
 # Prior to 3.0 the index would have been automatically dropped, but 
now we need to explicitly do that.
@@ -929,7 +935,8 @@ VALUES (4, blobAsInt(0x), '', blobAsBigint(0x), 0x, 
blobAsBoolean(0x), blobAsDec
 AND min_index_interval = 128
 AND read_repair_chance = 0.0
 AND speculative_retry = '99PERCENTILE';
-""" + self.get_index_output('myindex', 'test', 'users', 'age')
+""" + self.get_index_output('"QuotedNameIndex"', 'test', 'users', 
'firstname') \
+   + "\n" + self.get_index_output('myindex', 'test', 'users', 
'age')
 elif self.cluster.version() >= LooseVersion('3.0'):
 return """
 CREATE TABLE test.users (
@@ -951,7 +958,8 @@ VALUES (4, blobAsInt(0x), '', blobAsBigint(0x), 0x, 
blobAsBoolean(0x), blobAsDec
 AND min_index_interval = 128
 AND read_repair_chance = 0.0
 AND speculative_retry = '99PERCENTILE';
-""" + self.get_index_output('myindex', 'test', 'users', 'age')
+""" + self.get_index_output('"QuotedNameIndex"', 'test', 'users', 
'firstname') \
+   + "\n" + self.get_index_output('myindex', 'test', 'users', 
'age')
 else:
 return """
 CREATE TABLE test.users (
@@ -972,7 +980,8 @@ VALUES (4, blobAsInt(0x), '', blobAsBigint(0x), 0x, 
blobAsBoolean(0x), blobAsDec
 AND min_index_interval = 128
 AND read_repair_chance = 0.0
 AND s

[39/50] cassandra git commit: Add test verifying that a schema propagation adding a view over a non-existing table doesn't prevent a node from starting

2017-09-04 Thread paulo
Add test verifying that a schema propagation adding a view over a non-existing
table doesn't prevent a node from starting

patch by Andres de la Peña; reviewed by Jake Luciani for CASSANDRA-13737


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/95920874
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/95920874
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/95920874

Branch: refs/heads/master
Commit: 959208749d70e5808aec144e87b73e90d56a7f91
Parents: 7e3bcfd
Author: Andrés de la Peña 
Authored: Tue Aug 8 10:01:15 2017 +0100
Committer: Andrés de la Peña 
Committed: Tue Aug 8 10:01:15 2017 +0100

--
 materialized_views_test.py | 35 +++
 1 file changed, 35 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/95920874/materialized_views_test.py
--
diff --git a/materialized_views_test.py b/materialized_views_test.py
index 306d719..77b20e6 100644
--- a/materialized_views_test.py
+++ b/materialized_views_test.py
@@ -1532,6 +1532,41 @@ class TestMaterializedViews(Tester):
 session.execute("DROP MATERIALIZED VIEW mv")
 session.execute("DROP TABLE test")
 
+def propagate_view_creation_over_non_existing_table(self):
+"""
+The internal addition of a view over a non existing table should be 
ignored
+@jira_ticket CASSANDRA-13737
+"""
+
+cluster = self.cluster
+cluster.populate(3)
+cluster.start()
+node1, node2, node3 = self.cluster.nodelist()
+session = self.patient_cql_connection(node1, 
consistency_level=ConsistencyLevel.QUORUM)
+create_ks(session, 'ks', 3)
+
+session.execute('CREATE TABLE users (username varchar PRIMARY KEY, 
state varchar)')
+
+# create a materialized view only in nodes 1 and 2
+node3.stop(wait_other_notice=True)
+session.execute(('CREATE MATERIALIZED VIEW users_by_state AS '
+ 'SELECT * FROM users WHERE state IS NOT NULL AND 
username IS NOT NULL '
+ 'PRIMARY KEY (state, username)'))
+
+# drop the base table only in node 3
+node1.stop(wait_other_notice=True)
+node2.stop(wait_other_notice=True)
+node3.start(wait_for_binary_proto=True)
+session = self.patient_cql_connection(node3, 
consistency_level=ConsistencyLevel.QUORUM)
+session.execute('DROP TABLE ks.users')
+
+# restart the cluster
+cluster.stop()
+cluster.start()
+
+# node3 should have received and ignored the creation of the MV over 
the dropped table
+self.assertTrue(node3.grep_log('Not adding view users_by_state because 
the base table'))
+
 
 # For read verification
 class MutationPresence(Enum):


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[43/50] cassandra git commit: Fix jolokia for mixed version clusters

2017-09-04 Thread paulo
Fix jolokia for mixed version clusters

Patch by Jeff Jirsa; Reviewed by Aleksey Yeschenko


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b8842b97
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b8842b97
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b8842b97

Branch: refs/heads/master
Commit: b8842b979244547dd43d48bbaeadf1cea34a9fef
Parents: 013efa1
Author: Jeff Jirsa 
Authored: Mon Aug 14 12:55:17 2017 -0700
Committer: Jeff Jirsa 
Committed: Mon Aug 14 12:57:47 2017 -0700

--
 tools/jmxutils.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b8842b97/tools/jmxutils.py
--
diff --git a/tools/jmxutils.py b/tools/jmxutils.py
index 1f41626..8c20eb8 100644
--- a/tools/jmxutils.py
+++ b/tools/jmxutils.py
@@ -158,7 +158,7 @@ def remove_perf_disable_shared_mem(node):
 option (see https://github.com/rhuss/jolokia/issues/198 for details).  This
 edits cassandra-env.sh (or the Windows equivalent), or jvm.options file on 
3.2+ to remove that option.
 """
-if node.cluster.version() >= LooseVersion('3.2'):
+if node.get_cassandra_version() >= LooseVersion('3.2'):
 conf_file = os.path.join(node.get_conf_dir(), JVM_OPTIONS)
 pattern = '\-XX:\+PerfDisableSharedMem'
 replacement = '#-XX:+PerfDisableSharedMem'
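
The distinction matters because cluster.version() is cluster-wide while
get_cassandra_version() is per node. A small illustration with made-up
per-node versions (only the 3.2 threshold and the file names come from the
change above):

from distutils.version import LooseVersion

JVM_OPTIONS = 'jvm.options'          # 3.2+ keeps the flag here
CASSANDRA_ENV = 'cassandra-env.sh'   # older versions keep it here

# Hypothetical mixed-version cluster: decide per node, not per cluster.
node_versions = {'node1': '3.11.0', 'node2': '2.2.10'}

for name, version in sorted(node_versions.items()):
    conf = JVM_OPTIONS if LooseVersion(version) >= LooseVersion('3.2') else CASSANDRA_ENV
    print(name, '->', conf)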


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[28/50] cassandra git commit: Add tests for 'nodetool getbatchlogreplaythrottle' and 'nodetool setbatchlogreplaythrottle' (#1491)

2017-09-04 Thread paulo
Add tests for 'nodetool getbatchlogreplaythrottle' and 'nodetool 
setbatchlogreplaythrottle' (#1491)

* Add test for 'nodetool setbatchlogreplaythrottlekb'

* Check log messages about updates in batchlog replay throttle

* Add test for 'nodetool getbatchlogreplaythrottlekb'

* Adapt tests for the renaming of the nodetool accessors for batchlog replay 
throttle

* Remove unused imports

* Removed extra blank line at the end of the file


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8cd52d67
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8cd52d67
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8cd52d67

Branch: refs/heads/master
Commit: 8cd52d67587ddb5efc80366ff6c6a044c30b41d3
Parents: 557ab7b
Author: Andrés de la Peña 
Authored: Thu Jul 6 12:26:10 2017 +0100
Committer: GitHub 
Committed: Thu Jul 6 12:26:10 2017 +0100

--
 jmx_test.py  | 21 +
 nodetool_test.py | 22 ++
 2 files changed, 43 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8cd52d67/jmx_test.py
--
diff --git a/jmx_test.py b/jmx_test.py
index 7251b12..7df84ac 100644
--- a/jmx_test.py
+++ b/jmx_test.py
@@ -181,6 +181,27 @@ class TestJMX(Tester):
 self.assertGreater(endpoint2Phi, 0.0)
 self.assertLess(endpoint2Phi, max_phi)
 
+@since('4.0')
+def test_set_get_batchlog_replay_throttle(self):
+"""
+@jira_ticket CASSANDRA-13614
+
+Test that batchlog replay throttle can be set and get through JMX
+"""
+cluster = self.cluster
+cluster.populate(2)
+node = cluster.nodelist()[0]
+remove_perf_disable_shared_mem(node)
+cluster.start()
+
+# Set and get throttle with JMX, ensuring that the rate change is 
logged
+with JolokiaAgent(node) as jmx:
+mbean = make_mbean('db', 'StorageService')
+jmx.write_attribute(mbean, 'BatchlogReplayThrottleInKB', 4096)
+self.assertTrue(len(node.grep_log('Updating batchlog replay 
throttle to 4096 KB/s, 2048 KB/s per endpoint',
+  filename='debug.log')) > 0)
+self.assertEqual(4096, jmx.read_attribute(mbean, 
'BatchlogReplayThrottleInKB'))
+
 
 @since('3.9')
 class TestJMXSSL(Tester):

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8cd52d67/nodetool_test.py
--
diff --git a/nodetool_test.py b/nodetool_test.py
index ff4622b..d7ce89a 100644
--- a/nodetool_test.py
+++ b/nodetool_test.py
@@ -136,3 +136,25 @@ class TestNodetool(Tester):
 out, err, _ = node.nodetool('status')
 self.assertEqual(0, len(err), err)
 self.assertRegexpMatches(out, notice_message)
+
+@since('4.0')
+def test_set_get_batchlog_replay_throttle(self):
+"""
+@jira_ticket CASSANDRA-13614
+
+Test that batchlog replay throttle can be set and get through nodetool
+"""
+cluster = self.cluster
+cluster.populate(2)
+node = cluster.nodelist()[0]
+cluster.start()
+
+# Test that nodetool help messages are displayed
+self.assertTrue('Set batchlog replay throttle' in node.nodetool('help 
setbatchlogreplaythrottle').stdout)
+self.assertTrue('Print batchlog replay throttle' in 
node.nodetool('help getbatchlogreplaythrottle').stdout)
+
+# Set and get throttle with nodetool, ensuring that the rate change is 
logged
+node.nodetool('setbatchlogreplaythrottle 2048')
+self.assertTrue(len(node.grep_log('Updating batchlog replay throttle 
to 2048 KB/s, 1024 KB/s per endpoint',
+  filename='debug.log')) > 0)
+self.assertTrue('Batchlog replay throttle: 2048 KB/s' in 
node.nodetool('getbatchlogreplaythrottle').stdout)
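
The per-endpoint figures asserted above are consistent with spreading the
configured total across the two nodes of this cluster; a quick sanity check of
that arithmetic:

# Both asserted log lines come from a two-node cluster.
# Illustrative arithmetic only: total throttle spread across the nodes.
for total_kb, nodes in [(2048, 2), (4096, 2)]:
    print('{} KB/s total -> {} KB/s per endpoint'.format(total_kb, total_kb // nodes))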


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[40/50] cassandra git commit: Update regex for expected digest mismatch log message

2017-09-04 Thread paulo
Update regex for expected digest mismatch log message

patch by Zhao Yang; reviewed by Stefan Podkowinski for CASSANDRA-13723


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/459943a3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/459943a3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/459943a3

Branch: refs/heads/master
Commit: 459943a35e7ea9ef49791b47bebaacc0b5af6e04
Parents: 9592087
Author: Zhao Yang 
Authored: Mon Aug 7 15:49:04 2017 +0800
Committer: Stefan Podkowinski 
Committed: Thu Aug 10 08:30:39 2017 +0200

--
 materialized_views_test.py | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/459943a3/materialized_views_test.py
--
diff --git a/materialized_views_test.py b/materialized_views_test.py
index 77b20e6..79679ca 100644
--- a/materialized_views_test.py
+++ b/materialized_views_test.py
@@ -228,7 +228,6 @@ class TestMaterializedViews(Tester):
 
 debug("wait that all batchlogs are replayed")
 self._replay_batchlogs()
-
 for i in xrange(5):
 for j in xrange(1):
 assert_one(session, "SELECT * FROM t_by_v WHERE id = {} AND v 
= {}".format(i, j), [j, i])
@@ -1064,8 +1063,8 @@ class TestMaterializedViews(Tester):
 # execution happening
 
 # Look for messages like:
-# Digest mismatch: 
org.apache.cassandra.service.DigestMismatchException: Mismatch for key 
DecoratedKey
-regex = r"Digest mismatch: 
org.apache.cassandra.service.DigestMismatchException: Mismatch for key 
DecoratedKey"
+# Digest mismatch: Mismatch for key DecoratedKey
+regex = r"Digest mismatch: Mismatch for key DecoratedKey"
 for event in trace.events:
 desc = event.description
 match = re.match(regex, desc)


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[13/50] cassandra git commit: Hinted handoff setmaxwindow test should only run on versions >= 4.0

2017-09-04 Thread paulo
Hinted handoff setmaxwindow test should only run on versions >= 4.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7f3566ad
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7f3566ad
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7f3566ad

Branch: refs/heads/master
Commit: 7f3566ad7b27b9caa8ceccb361b09e42113aa41b
Parents: bea71d8
Author: Joel Knighton 
Authored: Wed May 24 14:20:28 2017 -0500
Committer: Philip Thompson 
Committed: Tue May 30 14:18:24 2017 +0200

--
 hintedhandoff_test.py | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7f3566ad/hintedhandoff_test.py
--
diff --git a/hintedhandoff_test.py b/hintedhandoff_test.py
index 1ed3305..6345e3c 100644
--- a/hintedhandoff_test.py
+++ b/hintedhandoff_test.py
@@ -121,6 +121,7 @@ class TestHintedHandoffConfig(Tester):
 
 self._do_hinted_handoff(node1, node2, True)
 
+@since('4.0')
 def hintedhandoff_setmaxwindow_test(self):
 """
 Test global hinted handoff against max_hint_window_in_ms update via 
nodetool


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[38/50] cassandra git commit: Drop table should remove corresponding entries in dropped_columns table

2017-09-04 Thread paulo
Drop table should remove corresponding entries in dropped_columns table

patch by Zhao Yang; reviewed by Aleksey Yeschenko for CASSANDRA-13730


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7e3bcfd5
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7e3bcfd5
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7e3bcfd5

Branch: refs/heads/master
Commit: 7e3bcfd52fbc926b4c43e258a7e0efa19e1ca13d
Parents: 55c4ca8
Author: Zhao Yang 
Authored: Sun Jul 30 11:54:29 2017 +0800
Committer: Aleksey Yeschenko 
Committed: Thu Aug 3 14:31:47 2017 +0100

--
 snapshot_test.py | 34 ++
 1 file changed, 34 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7e3bcfd5/snapshot_test.py
--
diff --git a/snapshot_test.py b/snapshot_test.py
index 563af81..1aa5a70 100644
--- a/snapshot_test.py
+++ b/snapshot_test.py
@@ -115,6 +115,40 @@ class TestSnapshot(SnapshotTester):
 
 self.assertEqual(rows[0][0], 100)
 
+@since('3.0')
+def test_snapshot_and_restore_drop_table_remove_dropped_column(self):
+"""
+@jira_ticket CASSANDRA-13730
+
+Dropping table should clear entries in dropped_column table
+"""
+cluster = self.cluster
+cluster.populate(1).start()
+node1, = cluster.nodelist()
+session = self.patient_cql_connection(node1)
+
+# Create schema and insert some data
+create_ks(session, 'ks', 1)
+session.execute("CREATE TABLE ks.cf (k int PRIMARY KEY, a text, b 
text)")
+session.execute("INSERT INTO ks.cf (k, a, b) VALUES (1, 'a', 'b')")
+assert_one(session, "SELECT * FROM ks.cf", [1, "a", "b"])
+
+# Take a snapshot and drop the column and then drop table
+snapshot_dir = self.make_snapshot(node1, 'ks', 'cf', 'basic')
+session.execute("ALTER TABLE ks.cf DROP b")
+assert_one(session, "SELECT * FROM ks.cf", [1, "a"])
+session.execute("DROP TABLE ks.cf")
+
+# Restore schema and data from snapshot, data should be the same as 
input
+self.restore_snapshot_schema(snapshot_dir, node1, 'ks', 'cf')
+self.restore_snapshot(snapshot_dir, node1, 'ks', 'cf')
+node1.nodetool('refresh ks cf')
+assert_one(session, "SELECT * FROM ks.cf", [1, "a", "b"])
+
+# Clean up
+debug("removing snapshot_dir: " + snapshot_dir)
+shutil.rmtree(snapshot_dir)
+
 @since('3.11')
 def test_snapshot_and_restore_dropping_a_column(self):
 """


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[23/50] cassandra git commit: Force decommission on topology test where required on 4.0+, run with vnodes

2017-09-04 Thread paulo
Force decommission on topology test where required on 4.0+, run with vnodes


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c368a909
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c368a909
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c368a909

Branch: refs/heads/master
Commit: c368a9098a4f5c8bd476257019154bf700963294
Parents: 1cc4941
Author: Joel Knighton 
Authored: Mon Jun 19 14:53:06 2017 -0500
Committer: Philip Thompson 
Committed: Tue Jun 20 12:10:58 2017 +0200

--
 topology_test.py | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c368a909/topology_test.py
--
diff --git a/topology_test.py b/topology_test.py
index 15827f3..45c1c73 100644
--- a/topology_test.py
+++ b/topology_test.py
@@ -351,7 +351,6 @@ class TestTopology(Tester):
 query_c1c2(session, n, ConsistencyLevel.ONE)
 
 @since('3.0')
-@no_vnodes()
 def decommissioned_node_cant_rejoin_test(self):
 '''
 @jira_ticket CASSANDRA-8801
@@ -375,7 +374,7 @@ class TestTopology(Tester):
 node1, node2, node3 = self.cluster.nodelist()
 
 debug('decommissioning...')
-node3.decommission()
+node3.decommission(force=self.cluster.version() >= '4.0')
 debug('stopping...')
 node3.stop()
 debug('attempting restart...')


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[46/50] cassandra git commit: Bump CCM version to 2.8.1

2017-09-04 Thread paulo
Bump CCM version to 2.8.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6d5ee379
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6d5ee379
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6d5ee379

Branch: refs/heads/master
Commit: 6d5ee3792dd226b0eea5afaadf8489b150ea4b18
Parents: da6ad83
Author: Michael Shuler 
Authored: Fri Aug 25 07:56:20 2017 -0500
Committer: Michael Shuler 
Committed: Fri Aug 25 07:56:20 2017 -0500

--
 requirements.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6d5ee379/requirements.txt
--
diff --git a/requirements.txt b/requirements.txt
index 9be7094..058ea38 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -4,7 +4,7 @@
 futures
 six
 -e 
git+https://github.com/datastax/python-driver.git@cassandra-test#egg=cassandra-driver
-ccm==2.6.3
+ccm==2.8.1
 cql
 decorator
 docopt


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[19/50] cassandra git commit: CASSANDRA-9143 change needs dtest change

2017-09-04 Thread paulo
CASSANDRA-9143 change needs dtest change

CASSANDRA-9143 introduced a strict check that disallows incremental
subrange repair, so the dtest needs to pass `-full` to the repair
command to invoke subrange repair.


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f69ced02
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f69ced02
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f69ced02

Branch: refs/heads/master
Commit: f69ced02b99ecc641b4b8eb149d12afe97e6f100
Parents: dfeb5df
Author: Yuki Morishita 
Authored: Tue Mar 28 14:40:57 2017 +0900
Committer: Joel Knighton 
Committed: Mon Jun 19 15:34:24 2017 -0500

--
 repair_tests/repair_test.py | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f69ced02/repair_tests/repair_test.py
--
diff --git a/repair_tests/repair_test.py b/repair_tests/repair_test.py
index 5276556..ad46d18 100644
--- a/repair_tests/repair_test.py
+++ b/repair_tests/repair_test.py
@@ -1047,9 +1047,9 @@ class TestRepair(BaseRepairTest):
 node2.stop(wait_other_notice=True)
 node1.stress(['write', 'n=1M', 'no-warmup', '-schema', 
'replication(factor=3)', '-rate', 'threads=30'])
 node2.start(wait_for_binary_proto=True)
-t1 = threading.Thread(target=node1.nodetool, args=('repair keyspace1 
standard1 -st {} -et {}'.format(str(node3.initial_token), 
str(node1.initial_token)),))
-t2 = threading.Thread(target=node2.nodetool, args=('repair keyspace1 
standard1 -st {} -et {}'.format(str(node1.initial_token), 
str(node2.initial_token)),))
-t3 = threading.Thread(target=node3.nodetool, args=('repair keyspace1 
standard1 -st {} -et {}'.format(str(node2.initial_token), 
str(node3.initial_token)),))
+t1 = threading.Thread(target=node1.nodetool, args=('repair keyspace1 
standard1 -full -st {} -et {}'.format(str(node3.initial_token), 
str(node1.initial_token)),))
+t2 = threading.Thread(target=node2.nodetool, args=('repair keyspace1 
standard1 -full -st {} -et {}'.format(str(node1.initial_token), 
str(node2.initial_token)),))
+t3 = threading.Thread(target=node3.nodetool, args=('repair keyspace1 
standard1 -full -st {} -et {}'.format(str(node2.initial_token), 
str(node3.initial_token)),))
 t1.start()
 t2.start()
 t3.start()
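
For readability, this is the shape of the command each thread now builds, with
placeholder tokens for a three-node ring (the token values below are
illustrative only):

# Placeholder tokens for a three-node, single-token-per-node ring.
initial_tokens = {'node1': -3074457345618258603,
                  'node2': 3074457345618258602,
                  'node3': -9223372036854775808}

# Same -full -st/-et shape as the dtest threads above.
cmd = 'repair keyspace1 standard1 -full -st {} -et {}'.format(
    initial_tokens['node3'], initial_tokens['node1'])
print(cmd)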


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[32/50] cassandra git commit: dtest failure in offline_tools_test.TestOfflineTools.sstableofflinerelevel_test

2017-09-04 Thread paulo
dtest failure in offline_tools_test.TestOfflineTools.sstableofflinerelevel_test

Patch by Ariel Weisberg; Reviewed by Philip Thompson for CASSANDRA-12617


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/cc355ff2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/cc355ff2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/cc355ff2

Branch: refs/heads/master
Commit: cc355ff255f6b44f6b9b77dfd18a586885e2200a
Parents: f1b0ba8
Author: Ariel Weisberg 
Authored: Wed Jul 12 18:38:05 2017 -0400
Committer: Ariel Weisberg 
Committed: Wed Jul 12 18:38:05 2017 -0400

--
 offline_tools_test.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/cc355ff2/offline_tools_test.py
--
diff --git a/offline_tools_test.py b/offline_tools_test.py
index c0a0010..028027d 100644
--- a/offline_tools_test.py
+++ b/offline_tools_test.py
@@ -158,7 +158,7 @@ class TestOfflineTools(Tester):
 keys = 8 * cluster.data_dir_count
 node1.stress(['write', 'n={0}K'.format(keys), 'no-warmup',
   '-schema', 'replication(factor=1)',
-  '-col', 'n=FIXED(10)', 'SIZE=FIXED(1024)',
+  '-col', 'n=FIXED(10)', 'SIZE=FIXED(1200)',
   '-rate', 'threads=8'])
 
 node1.flush()


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[11/50] cassandra git commit: Add a sleep after compaction to give it time before checking SSTable directory for files (CASSANDRA-13182)

2017-09-04 Thread paulo
Add a sleep after compaction to give it time before checking SSTable directory 
for files (CASSANDRA-13182)


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/538d658e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/538d658e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/538d658e

Branch: refs/heads/master
Commit: 538d658e0fb6b067ffeedd250c5997e2e77ad735
Parents: f148942
Author: Lerh Chuan Low 
Authored: Tue May 16 16:09:32 2017 +1000
Committer: Philip Thompson 
Committed: Tue May 16 09:55:20 2017 -0400

--
 sstableutil_test.py | 2 ++
 1 file changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/538d658e/sstableutil_test.py
--
diff --git a/sstableutil_test.py b/sstableutil_test.py
index a8f4487..0886a26 100644
--- a/sstableutil_test.py
+++ b/sstableutil_test.py
@@ -1,6 +1,7 @@
 import glob
 import os
 import subprocess
+import time
 
 from ccmlib import common
 from ccmlib.node import ToolError
@@ -40,6 +41,7 @@ class SSTableUtilTest(Tester):
 self.assertEqual(0, len(tmpfiles))
 
 node.compact()
+time.sleep(5)
 finalfiles, tmpfiles = self._check_files(node, KeyspaceName, TableName)
 self.assertEqual(0, len(tmpfiles))
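
The fix itself is the fixed five-second sleep above. A generic alternative
pattern, not what this patch does, would be to poll until the asynchronous
condition holds, e.g.:

import time

def wait_until(condition, timeout=30, interval=0.5):
    """Poll `condition` until it returns True or `timeout` seconds pass."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

# Illustrative usage with a made-up check; the dtest's own helper is
# self._check_files(node, KeyspaceName, TableName).
print(wait_until(lambda: True))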
 


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[15/50] cassandra git commit: Test for CASSANDRA-13559

2017-09-04 Thread paulo
Test for CASSANDRA-13559


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bbe136cd
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bbe136cd
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bbe136cd

Branch: refs/heads/master
Commit: bbe136cde81a1752d8922dee24391a24400a5b68
Parents: 7f3566a
Author: Stefania Alborghetti 
Authored: Thu Jun 1 11:03:54 2017 +0800
Committer: Stefania Alborghetti 
Committed: Mon Jun 5 09:25:19 2017 +0800

--
 upgrade_tests/regression_test.py | 42 ++-
 1 file changed, 41 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/bbe136cd/upgrade_tests/regression_test.py
--
diff --git a/upgrade_tests/regression_test.py b/upgrade_tests/regression_test.py
index cff3c50..613d195 100644
--- a/upgrade_tests/regression_test.py
+++ b/upgrade_tests/regression_test.py
@@ -6,7 +6,8 @@ from unittest import skipUnless
 from cassandra import ConsistencyLevel as CL
 from nose.tools import assert_not_in
 
-from dtest import RUN_STATIC_UPGRADE_MATRIX
+from dtest import RUN_STATIC_UPGRADE_MATRIX, debug
+from tools.decorators import since
 from tools.jmxutils import (JolokiaAgent, make_mbean)
 from upgrade_base import UpgradeTester
 from upgrade_manifest import build_upgrade_pairs
@@ -116,6 +117,45 @@ class TestForRegressions(UpgradeTester):
 checked = True
 self.assertTrue(checked)
 
+@since('3.0.14', max_version='3.0.99')
+def test_schema_agreement(self):
+"""
+Test that nodes agree on the schema during an upgrade in the 3.0.x 
series.
+
+Create a table before upgrading the cluster and wait for schema 
agreement.
+Upgrade one node and create one more table, wait for schema agreement 
and check
+the schema versions with nodetool describecluster.
+
+We know that schemas will not necessarily agree from 2.1/2.2 to 3.0.x 
or from 3.0.x to 3.x
+and upwards, so we only test the 3.0.x series for now. We start with 
3.0.13 because
+there is a problem in 3.0.13, see CASSANDRA-12213 and 13559.
+
+@jira_ticket CASSANDRA-13559
+"""
+session = self.prepare(nodes=5)
+session.execute("CREATE TABLE schema_agreement_test_1 ( id int PRIMARY 
KEY, value text )")
+
session.cluster.control_connection.wait_for_schema_agreement(wait_time=30)
+
+def validate_schema_agreement(n, is_upgr):
+debug("querying node {} for schema information, upgraded: 
{}".format(n.name, is_upgr))
+
+response = n.nodetool('describecluster').stdout
+debug(response)
+schemas = response.split('Schema versions:')[1].strip()
+num_schemas = len(re.findall('\[.*?\]', schemas))
+self.assertEqual(num_schemas, 1, "There were multiple schema 
versions during an upgrade: {}"
+ .format(schemas))
+
+for node in self.cluster.nodelist():
+validate_schema_agreement(node, False)
+
+for is_upgraded, session, node in self.do_upgrade(session, 
return_nodes=True):
+validate_schema_agreement(node, is_upgraded)
+if is_upgraded:
+session.execute("CREATE TABLE schema_agreement_test_2 ( id int 
PRIMARY KEY, value text )")
+
session.cluster.control_connection.wait_for_schema_agreement(wait_time=30)
+validate_schema_agreement(node, is_upgraded)
+
 def compact_sstable(self, node, sstable):
 mbean = make_mbean('db', type='CompactionManager')
 with JolokiaAgent(node) as jmx:
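
A standalone illustration of the schema-version counting used by
validate_schema_agreement; the describecluster excerpt below is hand-written
in the expected shape, not real nodetool output:

import re

# Hand-written excerpt in the shape the test expects after 'Schema versions:'.
response = """Cluster Information:
    Schema versions:
        ea63e099-37c5-3d7b-9ace-32f4c833653d: [127.0.0.1, 127.0.0.2, 127.0.0.3]
"""

schemas = response.split('Schema versions:')[1].strip()
num_schemas = len(re.findall(r'\[.*?\]', schemas))
print(num_schemas)  # 1 -> all nodes agree on the schema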


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[04/50] cassandra git commit: adding test for repair preview introduced in CASSANDRA-13257 (#1465)

2017-09-04 Thread paulo
adding test for repair preview introduced in CASSANDRA-13257 (#1465)

* adding test for repair preview

* review fixes


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d5c413c4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d5c413c4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d5c413c4

Branch: refs/heads/master
Commit: d5c413c41ba174276196a8b7c5f590632c5e20be
Parents: 0667de0
Author: Blake Eggleston 
Authored: Tue May 9 12:45:10 2017 -0700
Committer: Philip Thompson 
Committed: Tue May 9 15:45:10 2017 -0400

--
 repair_tests/preview_repair_test.py | 85 
 1 file changed, 85 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d5c413c4/repair_tests/preview_repair_test.py
--
diff --git a/repair_tests/preview_repair_test.py 
b/repair_tests/preview_repair_test.py
new file mode 100644
index 000..e888a9b
--- /dev/null
+++ b/repair_tests/preview_repair_test.py
@@ -0,0 +1,85 @@
+import time
+
+from cassandra import ConsistencyLevel
+from cassandra.query import SimpleStatement
+
+from dtest import Tester
+from tools.decorators import no_vnodes
+
+
+class PreviewRepairTest(Tester):
+
+def assert_no_repair_history(self, session):
+rows = session.execute("select * from 
system_distributed.repair_history")
+self.assertEqual(rows.current_rows, [])
+rows = session.execute("select * from 
system_distributed.parent_repair_history")
+self.assertEqual(rows.current_rows, [])
+
+@no_vnodes()
+def preview_test(self):
+""" Test that preview correctly detects out of sync data """
+cluster = self.cluster
+cluster.set_configuration_options(values={'hinted_handoff_enabled': 
False, 'commitlog_sync_period_in_ms': 500})
+cluster.populate(3).start()
+node1, node2, node3 = cluster.nodelist()
+
+session = self.patient_exclusive_cql_connection(node3)
+session.execute("CREATE KEYSPACE ks WITH 
REPLICATION={'class':'SimpleStrategy', 'replication_factor': 3}")
+session.execute("CREATE TABLE ks.tbl (k INT PRIMARY KEY, v INT)")
+
+# everything should be in sync
+result = node1.repair(options=['ks', '--preview'])
+self.assertIn("Previewed data was in sync", result.stdout)
+self.assert_no_repair_history(session)
+
+# make data inconsistent between nodes
+stmt = SimpleStatement("INSERT INTO ks.tbl (k,v) VALUES (%s, %s)")
+stmt.consistency_level = ConsistencyLevel.ALL
+for i in range(10):
+session.execute(stmt, (i, i))
+node3.flush()
+time.sleep(1)
+node3.stop(gently=False)
+stmt.consistency_level = ConsistencyLevel.QUORUM
+
+session = self.exclusive_cql_connection(node1)
+for i in range(10):
+session.execute(stmt, (i + 10, i + 10))
+node1.flush()
+time.sleep(1)
+node1.stop(gently=False)
+node3.start(wait_other_notice=True, wait_for_binary_proto=True)
+session = self.exclusive_cql_connection(node2)
+for i in range(10):
+session.execute(stmt, (i + 20, i + 20))
+node1.start(wait_other_notice=True, wait_for_binary_proto=True)
+
+# data should not be in sync for full and unrepaired previews
+result = node1.repair(options=['ks', '--preview'])
+self.assertIn("Total estimated streaming", result.stdout)
+self.assertNotIn("Previewed data was in sync", result.stdout)
+
+result = node1.repair(options=['ks', '--preview', '--full'])
+self.assertIn("Total estimated streaming", result.stdout)
+self.assertNotIn("Previewed data was in sync", result.stdout)
+
+# repaired data should be in sync anyway
+result = node1.repair(options=['ks', '--validate'])
+self.assertIn("Repaired data is in sync", result.stdout)
+
+self.assert_no_repair_history(session)
+
+# repair the data...
+node1.repair(options=['ks'])
+for node in cluster.nodelist():
+node.nodetool('compact ks tbl')
+
+# ...and everything should be in sync
+result = node1.repair(options=['ks', '--preview'])
+self.assertIn("Previewed data was in sync", result.stdout)
+
+result = node1.repair(options=['ks', '--preview', '--full'])
+self.assertIn("Previewed data was in sync", result.stdout)
+
+result = node1.repair(options=['ks', '--validate'])
+self.assertIn("Repaired data is in sync", result.stdout)


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: comm

[47/50] cassandra git commit: Add tests for mixed version batchlog replay

2017-09-04 Thread paulo
Add tests for mixed version batchlog replay

Patch by Jeff Jirsa; reviewed by Aleksey Yeschenko


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ac9c9560
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ac9c9560
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ac9c9560

Branch: refs/heads/master
Commit: ac9c95607ce439de596da41c368d79c67d6dcdda
Parents: 6d5ee37
Author: Jeff Jirsa 
Authored: Mon Aug 14 12:55:17 2017 -0700
Committer: Aleksey Yeschenko 
Committed: Sat Aug 26 01:21:00 2017 +0100

--
 batch_test.py | 96 +++---
 byteman/fail_after_batchlog_write.btm | 19 ++
 2 files changed, 94 insertions(+), 21 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ac9c9560/batch_test.py
--
diff --git a/batch_test.py b/batch_test.py
index 4194f10..5c25c46 100644
--- a/batch_test.py
+++ b/batch_test.py
@@ -1,6 +1,7 @@
 import sys
 import time
 from unittest import skipIf
+from nose.tools import assert_greater_equal
 
 from cassandra import ConsistencyLevel, Timeout, Unavailable
 from cassandra.query import SimpleStatement
@@ -9,6 +10,8 @@ from dtest import Tester, create_ks, debug
 from tools.assertions import (assert_all, assert_invalid, assert_one,
   assert_unavailable)
 from tools.decorators import since
+from tools.jmxutils import (JolokiaAgent, make_mbean,
+remove_perf_disable_shared_mem)
 
 
 class TestBatch(Tester):
@@ -295,6 +298,15 @@ class TestBatch(Tester):
 self._logged_batch_compatibility_test(0, 1, 
'github:apache/cassandra-2.2', 2, 4)
 
 @since('3.0', max_version='3.x')
+def batchlog_replay_compatibility_1_test(self):
+"""
+@jira_ticket CASSANDRA-9673, test that logged batches still work with 
a mixed version cluster.
+
+Here we have one 3.0/3.x node and two 2.2 nodes and we send the batch 
request to the 3.0 node.
+"""
+self._batchlog_replay_compatibility_test(0, 1, 
'github:apache/cassandra-2.2', 2, 4)
+
+@since('3.0', max_version='3.x')
 @skipIf(sys.platform == 'win32', 'Windows production support only on 2.2+')
 def logged_batch_compatibility_2_test(self):
 """
@@ -324,6 +336,15 @@ class TestBatch(Tester):
 self._logged_batch_compatibility_test(2, 2, 
'github:apache/cassandra-2.2', 1, 4)
 
 @since('3.0', max_version='3.x')
+def batchlog_replay_compatibility_4_test(self):
+"""
+@jira_ticket CASSANDRA-9673, test that logged batches still work with 
a mixed version cluster.
+
+Here we have two 3.0/3.x nodes and one 2.2 node and we send the batch 
request to the 2.2 node.
+"""
+self._batchlog_replay_compatibility_test(2, 2, 
'github:apache/cassandra-2.2', 1, 4)
+
+@since('3.0', max_version='3.x')
 @skipIf(sys.platform == 'win32', 'Windows production support only on 2.2+')
 def logged_batch_compatibility_5_test(self):
 """
@@ -346,6 +367,43 @@ class TestBatch(Tester):
 res = sorted(rows)
 self.assertEquals([[0, 'Jack', 'Sparrow'], [1, 'Will', 'Turner']], 
[list(res[0]), list(res[1])])
 
+def _batchlog_replay_compatibility_test(self, coordinator_idx, 
current_nodes, previous_version, previous_nodes, protocol_version):
+session = self.prepare_mixed(coordinator_idx, current_nodes, 
previous_version, previous_nodes,
+ protocol_version=protocol_version, 
install_byteman=True)
+
+coordinator = self.cluster.nodelist()[coordinator_idx]
+coordinator.byteman_submit(['./byteman/fail_after_batchlog_write.btm'])
+debug("Injected byteman scripts to enable batchlog replay 
{}".format(coordinator.name))
+
+query = """
+BEGIN BATCH
+INSERT INTO users (id, firstname, lastname) VALUES (0, 'Jack', 
'Sparrow')
+INSERT INTO users (id, firstname, lastname) VALUES (1, 'Will', 
'Turner')
+APPLY BATCH
+"""
+session.execute(query)
+
+total_batches_replayed = 0
+blm = make_mbean('db', type='BatchlogManager')
+
+for n in self.cluster.nodelist():
+if n == coordinator:
+continue
+
+with JolokiaAgent(n) as jmx:
+debug('Forcing batchlog replay for {}'.format(n.name))
+jmx.execute_method(blm, 'forceBatchlogReplay')
+batches_replayed = jmx.read_attribute(blm, 
'TotalBatchesReplayed')
+debug('{} batches replayed on node 
{}'.format(batches_replayed, n.name))
+total_batches_replayed += batches_replayed
+
+assert_greater_equal(total_batches_replayed, 2)
+
+ 

[01/50] cassandra git commit: Merge pull request #1456 from stef1927/13364

2017-09-04 Thread paulo
Repository: cassandra
Updated Branches:
  refs/heads/master [created] 6d77ace53


Merge pull request #1456 from stef1927/13364

Added test case for CASSANDRA-13364

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0692e2b6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0692e2b6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0692e2b6

Branch: refs/heads/master
Commit: 0692e2b63b3efe507b4c87be3dd3afb90042b8f7
Parents: 8513c47 ec6b958
Author: Stefania Alborghetti 
Authored: Fri Apr 7 09:11:07 2017 +0800
Committer: GitHub 
Committed: Fri Apr 7 09:11:07 2017 +0800

--
 cqlsh_tests/cqlsh_copy_tests.py | 12 +++-
 1 file changed, 7 insertions(+), 5 deletions(-)
--



-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[16/50] cassandra git commit: Merge pull request #1477 from stef1927/13559

2017-09-04 Thread paulo
Merge pull request #1477 from stef1927/13559

Test for CASSANDRA-13559

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ef84f767
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ef84f767
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ef84f767

Branch: refs/heads/master
Commit: ef84f7679ad64b708cb19c5294e2e670fb69df25
Parents: 6f7caba bbe136c
Author: Stefania Alborghetti 
Authored: Fri Jun 9 07:36:35 2017 +0800
Committer: GitHub 
Committed: Fri Jun 9 07:36:35 2017 +0800

--
 upgrade_tests/regression_test.py | 42 ++-
 1 file changed, 41 insertions(+), 1 deletion(-)
--



-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[34/50] cassandra git commit: Allow TRACE logging on upgrade tests

2017-09-04 Thread paulo
Allow TRACE logging on upgrade tests

patch by jasobrown, reviewed by mkjellman for CASSANDRA-13715


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/894bc92c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/894bc92c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/894bc92c

Branch: refs/heads/master
Commit: 894bc92c9608103b3c5656d6dd0233e514a57848
Parents: d040629
Author: Jason Brown 
Authored: Thu Jul 20 17:56:02 2017 -0700
Committer: Jason Brown 
Committed: Fri Jul 21 11:41:06 2017 -0700

--
 upgrade_tests/upgrade_base.py | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/894bc92c/upgrade_tests/upgrade_base.py
--
diff --git a/upgrade_tests/upgrade_base.py b/upgrade_tests/upgrade_base.py
index 6d37468..65957bd 100644
--- a/upgrade_tests/upgrade_base.py
+++ b/upgrade_tests/upgrade_base.py
@@ -7,7 +7,7 @@ from unittest import skipIf
 from ccmlib.common import get_version_from_build, is_win
 from tools.jmxutils import remove_perf_disable_shared_mem
 
-from dtest import CASSANDRA_VERSION_FROM_BUILD, DEBUG, Tester, debug, create_ks
+from dtest import CASSANDRA_VERSION_FROM_BUILD, TRACE, DEBUG, Tester, debug, 
create_ks
 
 
 def switch_jdks(major_version_int):
@@ -161,7 +161,7 @@ class UpgradeTester(Tester):
 if (new_version_from_build >= '3' and self.protocol_version is not 
None and self.protocol_version < 3):
 self.skip('Protocol version {} incompatible '
   'with Cassandra version 
{}'.format(self.protocol_version, new_version_from_build))
-node1.set_log_level("DEBUG" if DEBUG else "INFO")
+node1.set_log_level("DEBUG" if DEBUG else "TRACE" if TRACE else "INFO")
 node1.set_configuration_options(values={'internode_compression': 
'none'})
 
 if self.enable_for_jolokia:
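
The one-line change relies on Python's right-associative conditional
expression, so DEBUG takes precedence if both flags are set; a quick check
with made-up flag combinations:

# Made-up flag combinations to show how the chained conditional resolves.
for DEBUG, TRACE in [(True, True), (False, True), (False, False)]:
    level = "DEBUG" if DEBUG else "TRACE" if TRACE else "INFO"
    print(DEBUG, TRACE, '->', level)
# (True, True) -> DEBUG, (False, True) -> TRACE, (False, False) -> INFO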


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[24/50] cassandra git commit: Add dtests for compatibility flag introduced in CASSANDRA-13004 (#1485)

2017-09-04 Thread paulo
Add dtests for compatibility flag introduced in CASSANDRA-13004 (#1485)

Add dtests for compatibility flag introduced in CASSANDRA-13004

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6847bc10
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6847bc10
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6847bc10

Branch: refs/heads/master
Commit: 6847bc10c2a3fa3ee911b0cf3826920bc4dbad18
Parents: c368a90
Author: Alex Petrov 
Authored: Tue Jun 20 20:25:51 2017 +0200
Committer: GitHub 
Committed: Tue Jun 20 20:25:51 2017 +0200

--
 upgrade_tests/compatibility_flag_test.py | 132 ++
 1 file changed, 132 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6847bc10/upgrade_tests/compatibility_flag_test.py
--
diff --git a/upgrade_tests/compatibility_flag_test.py 
b/upgrade_tests/compatibility_flag_test.py
new file mode 100644
index 000..1abeaef
--- /dev/null
+++ b/upgrade_tests/compatibility_flag_test.py
@@ -0,0 +1,132 @@
+from dtest import Tester, debug
+from tools.assertions import assert_all
+from tools.decorators import since
+
+
+class CompatibilityFlagTest(Tester):
+"""
+Test 30 protocol compatibility flag
+
+@jira CASSANDRA-13004
+"""
+
+def _compatibility_flag_off_with_30_node_test(self, from_version):
+"""
+Test compatibility with 30 protocol version: if the flag is unset, 
schema agreement can not be reached
+"""
+
+cluster = self.cluster
+cluster.populate(2)
+node1, node2 = cluster.nodelist()
+cluster.set_install_dir(version=from_version)
+cluster.start(wait_for_binary_proto=True)
+
+node1.drain()
+node1.watch_log_for("DRAINED")
+node1.stop(wait_other_notice=False)
+debug("Upgrading to current version")
+self.set_node_to_current_version(node1)
+node1.start(wait_for_binary_proto=True)
+
+node1.watch_log_for("Not pulling schema because versions match or 
shouldPullSchemaFrom returned false", filename='debug.log')
+node2.watch_log_for("Not pulling schema because versions match or 
shouldPullSchemaFrom returned false", filename='debug.log')
+
+def _compatibility_flag_on_with_30_test(self, from_version):
+"""
+Test compatibility with 30 protocol version: if the flag is set, 
schema agreement can be reached
+"""
+
+cluster = self.cluster
+cluster.populate(2)
+node1, node2 = cluster.nodelist()
+cluster.set_install_dir(version=from_version)
+cluster.start(wait_for_binary_proto=True)
+
+node1.drain()
+node1.watch_log_for("DRAINED")
+node1.stop(wait_other_notice=False)
+debug("Upgrading to current version")
+self.set_node_to_current_version(node1)
+node1.start(jvm_args=["-Dcassandra.force_3_0_protocol_version=true"], 
wait_for_binary_proto=True)
+
+session = self.patient_cql_connection(node1)
+self._run_test(session)
+
+def _compatibility_flag_on_3014_test(self):
+"""
+Test compatibility between post-13004 nodes, one of which is in 
compatibility mode
+"""
+
+cluster = self.cluster
+cluster.populate(2)
+node1, node2 = cluster.nodelist()
+
+node1.start(wait_for_binary_proto=True)
+node2.start(jvm_args=["-Dcassandra.force_3_0_protocol_version=true"], 
wait_for_binary_proto=True)
+
+session = self.patient_cql_connection(node1)
+self._run_test(session)
+
+def _compatibility_flag_off_3014_test(self):
+"""
+Test compatibility between post-13004 nodes
+"""
+
+cluster = self.cluster
+cluster.populate(2)
+node1, node2 = cluster.nodelist()
+
+node1.start(wait_for_binary_proto=True)
+node2.start(wait_for_binary_proto=True)
+
+session = self.patient_cql_connection(node1)
+self._run_test(session)
+
+def _run_test(self, session):
+# Make sure the system_auth table will get replicated to the node that 
we're going to replace
+
+session.execute("CREATE KEYSPACE test WITH replication = {'class': 
'SimpleStrategy', 'replication_factor': '2'} ;")
+session.execute("CREATE TABLE test.test (a text PRIMARY KEY, b text, c 
text);")
+
+for i in range(1, 6):
+session.execute("INSERT INTO test.test (a, b, c) VALUES ('{}', 
'{}', '{}');".format(i, i + 1, i + 2))
+
+assert_all(session,
+   "SELECT * FROM test.test",
+   [[str(i), str(i + 1), str(i + 2)] for i in range(1, 6)], 
ignore_order=True)
+
+assert_all(session,
+   "SELECT a,c FROM test.test

[29/50] cassandra git commit: Adds the ability to use uncompressed chunks in compressed files

2017-09-04 Thread paulo
Adds the ability to use uncompressed chunks in compressed files

Triggered when size of compressed data surpasses a configurable
threshold.

Patch by Branimir Lambov; reviewed by Robert Stupp for CASSANDRA-10520
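
The dtest change below only updates the expected DESCRIBE output (note the new 'min_compress_ratio' sub-option in the compression map). As a rough, hypothetical sketch of how a test might exercise the threshold — the keyspace, table and values here are invented, and the exact semantics of 'min_compress_ratio' should be checked against CASSANDRA-10520:

from dtest import Tester, create_ks


class UncompressedChunksSketchTest(Tester):

    def min_compress_ratio_sketch_test(self):
        # Hypothetical sketch, not part of the committed patch.
        cluster = self.cluster
        cluster.populate(1).start(wait_for_binary_proto=True)
        node1, = cluster.nodelist()
        session = self.patient_cql_connection(node1)

        create_ks(session, 'ks', 1)
        # min_compress_ratio > 1.0 asks the sstable writer to keep a chunk
        # uncompressed when compression does not shrink it by at least that factor.
        session.execute("""
            CREATE TABLE ks.t (k int PRIMARY KEY, v blob)
            WITH compression = {'class': 'LZ4Compressor',
                                'chunk_length_in_kb': '64',
                                'min_compress_ratio': '1.1'}
        """)
        session.execute("INSERT INTO ks.t (k, v) VALUES (0, 0x0102030405)")
        node1.flush()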


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/058b9528
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/058b9528
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/058b9528

Branch: refs/heads/master
Commit: 058b95289bf815495fced0ac55a78bcceceea9fa
Parents: 8cd52d6
Author: Branimir Lambov 
Authored: Tue Jan 17 16:25:07 2017 +0200
Committer: Alex Petrov 
Committed: Thu Jul 6 15:18:19 2017 +0200

--
 cqlsh_tests/cqlsh_tests.py | 44 +++--
 1 file changed, 42 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/058b9528/cqlsh_tests/cqlsh_tests.py
--
diff --git a/cqlsh_tests/cqlsh_tests.py b/cqlsh_tests/cqlsh_tests.py
index e7bc11c..dee1891 100644
--- a/cqlsh_tests/cqlsh_tests.py
+++ b/cqlsh_tests/cqlsh_tests.py
@@ -847,7 +847,25 @@ VALUES (4, blobAsInt(0x), '', blobAsBigint(0x), 0x, 
blobAsBoolean(0x), blobAsDec
 PRIMARY KEY (id, col)
 """
 
-if self.cluster.version() >= LooseVersion('3.9'):
+if self.cluster.version() >= LooseVersion('4.0'):
+ret += """
+) WITH CLUSTERING ORDER BY (col ASC)
+AND bloom_filter_fp_chance = 0.01
+AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
+AND comment = ''
+AND compaction = {'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
'max_threshold': '32', 'min_threshold': '4'}
+AND compression = {'chunk_length_in_kb': '64', 'class': 
'org.apache.cassandra.io.compress.LZ4Compressor', 'min_compress_ratio': '1.1'}
+AND crc_check_chance = 1.0
+AND dclocal_read_repair_chance = 0.1
+AND default_time_to_live = 0
+AND gc_grace_seconds = 864000
+AND max_index_interval = 2048
+AND memtable_flush_period_in_ms = 0
+AND min_index_interval = 128
+AND read_repair_chance = 0.0
+AND speculative_retry = '99PERCENTILE';
+"""
+elif self.cluster.version() >= LooseVersion('3.9'):
 ret += """
 ) WITH CLUSTERING ORDER BY (col ASC)
 AND bloom_filter_fp_chance = 0.01
@@ -913,7 +931,29 @@ VALUES (4, blobAsInt(0x), '', blobAsBigint(0x), 0x, 
blobAsBoolean(0x), blobAsDec
 return ret + "\n" + col_idx_def
 
 def get_users_table_output(self):
-if self.cluster.version() >= LooseVersion('3.9'):
+if self.cluster.version() >= LooseVersion('4.0'):
+return """
+CREATE TABLE test.users (
+userid text PRIMARY KEY,
+age int,
+firstname text,
+lastname text
+) WITH bloom_filter_fp_chance = 0.01
+AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
+AND comment = ''
+AND compaction = {'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
'max_threshold': '32', 'min_threshold': '4'}
+AND compression = {'chunk_length_in_kb': '64', 'class': 
'org.apache.cassandra.io.compress.LZ4Compressor', 'min_compress_ratio': '1.1'}
+AND crc_check_chance = 1.0
+AND dclocal_read_repair_chance = 0.1
+AND default_time_to_live = 0
+AND gc_grace_seconds = 864000
+AND max_index_interval = 2048
+AND memtable_flush_period_in_ms = 0
+AND min_index_interval = 128
+AND read_repair_chance = 0.0
+AND speculative_retry = '99PERCENTILE';
+""" + self.get_index_output('myindex', 'test', 'users', 'age')
+elif self.cluster.version() >= LooseVersion('3.9'):
 return """
 CREATE TABLE test.users (
 userid text PRIMARY KEY,





[02/50] cassandra git commit: Create test for restoring a snapshot with dropped columns (CASSANDRA-13276)

2017-09-04 Thread paulo
Create test for restoring a snapshot with dropped columns (CASSANDRA-13276)




Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e6b47064
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e6b47064
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e6b47064

Branch: refs/heads/master
Commit: e6b47064237ce4d9dc10313995fba34cb9cdefb7
Parents: 0692e2b
Author: Andrés de la Peña 
Authored: Tue Apr 25 20:38:53 2017 +0100
Committer: GitHub 
Committed: Tue Apr 25 20:38:53 2017 +0100

--
 snapshot_test.py | 43 +++
 1 file changed, 43 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e6b47064/snapshot_test.py
--
diff --git a/snapshot_test.py b/snapshot_test.py
index ba184ee..7169a7c 100644
--- a/snapshot_test.py
+++ b/snapshot_test.py
@@ -9,6 +9,7 @@ from cassandra.concurrent import execute_concurrent_with_args
 
 from dtest import (Tester, cleanup_cluster, create_ccm_cluster, create_ks,
debug, get_test_path)
+from tools.assertions import assert_one
 from tools.files import replace_in_file, safe_mkdtemp
 from tools.hacks import advance_to_next_cl_segment
 from tools.misc import ImmutableMapping
@@ -70,6 +71,13 @@ class SnapshotTester(Tester):
 raise Exception("sstableloader command '%s' failed; exit 
status: %d'; stdout: %s; stderr: %s" %
 (" ".join(args), exit_status, stdout, 
stderr))
 
+def restore_snapshot_schema(self, snapshot_dir, node, ks, cf):
+debug("Restoring snapshot schema")
+for x in xrange(0, self.cluster.data_dir_count):
+schema_path = os.path.join(snapshot_dir, str(x), ks, cf, 
'schema.cql')
+if os.path.exists(schema_path):
+node.run_cqlsh(cmds="SOURCE '%s'" % schema_path)
+
 
 class TestSnapshot(SnapshotTester):
 
@@ -106,6 +114,41 @@ class TestSnapshot(SnapshotTester):
 
 self.assertEqual(rows[0][0], 100)
 
+def test_snapshot_and_restore_dropping_a_column(self):
+"""
+@jira_ticket CASSANDRA-13276
+
+Can't load snapshots of tables with dropped columns.
+"""
+cluster = self.cluster
+cluster.populate(1).start()
+node1, = cluster.nodelist()
+session = self.patient_cql_connection(node1)
+
+# Create schema and insert some data
+create_ks(session, 'ks', 1)
+session.execute("CREATE TABLE ks.cf (k int PRIMARY KEY, a text, b 
text)")
+session.execute("INSERT INTO ks.cf (k, a, b) VALUES (1, 'a', 'b')")
+assert_one(session, "SELECT * FROM ks.cf", [1, "a", "b"])
+
+# Drop a column
+session.execute("ALTER TABLE ks.cf DROP b")
+assert_one(session, "SELECT * FROM ks.cf", [1, "a"])
+
+# Take a snapshot and drop the table
+snapshot_dir = self.make_snapshot(node1, 'ks', 'cf', 'basic')
+session.execute("DROP TABLE ks.cf")
+
+# Restore schema and data from snapshot
+self.restore_snapshot_schema(snapshot_dir, node1, 'ks', 'cf')
+self.restore_snapshot(snapshot_dir, node1, 'ks', 'cf')
+node1.nodetool('refresh ks cf')
+assert_one(session, "SELECT * FROM ks.cf", [1, "a"])
+
+# Clean up
+debug("removing snapshot_dir: " + snapshot_dir)
+shutil.rmtree(snapshot_dir)
+
 
 class TestArchiveCommitlog(SnapshotTester):
 cluster_options = ImmutableMapping({'commitlog_segment_size_in_mb': 1})





[25/50] cassandra git commit: CASSANDRA-10130 (#1486)

2017-09-04 Thread paulo
CASSANDRA-10130 (#1486)

* Add test case for CASSANDRA-10130

* Address comments by @sbtourist

* Add more tests for index status management

* Add missing `@staticmethod` annotation

* Add @since annotations for 4.0

* Update index build failure tests

* Fix code style by removing trailing whitespace and whitespace-only blank lines


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/50e1e7b1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/50e1e7b1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/50e1e7b1

Branch: refs/heads/master
Commit: 50e1e7b13a1eef3e9347aee7806dc40569ab17ad
Parents: 6847bc1
Author: Andrés de la Peña 
Authored: Mon Jun 26 13:18:55 2017 +0100
Committer: Philip Thompson 
Committed: Mon Jun 26 14:18:55 2017 +0200

--
 byteman/index_build_failure.btm|  13 +++
 secondary_indexes_test.py  | 174 +---
 sstable_generation_loading_test.py | 122 +-
 3 files changed, 271 insertions(+), 38 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/50e1e7b1/byteman/index_build_failure.btm
--
diff --git a/byteman/index_build_failure.btm b/byteman/index_build_failure.btm
new file mode 100644
index 000..8f5183d
--- /dev/null
+++ b/byteman/index_build_failure.btm
@@ -0,0 +1,13 @@
+#
+# Fail the first index build by throwing a RuntimeException
+#
+RULE fail during index building
+CLASS org.apache.cassandra.db.compaction.CompactionManager
+METHOD submitIndexBuild
+AT ENTRY
+# set flag to only run this rule once.
+IF NOT flagged("done")
+DO
+   flag("done");
+   throw new java.lang.RuntimeException("Index building failure")
+ENDRULE

http://git-wip-us.apache.org/repos/asf/cassandra/blob/50e1e7b1/secondary_indexes_test.py
--
diff --git a/secondary_indexes_test.py b/secondary_indexes_test.py
index b73e94d..1edd30e 100644
--- a/secondary_indexes_test.py
+++ b/secondary_indexes_test.py
@@ -13,7 +13,7 @@ from cassandra.query import BatchStatement, SimpleStatement
 
 from dtest import (DISABLE_VNODES, OFFHEAP_MEMTABLES, DtestTimeoutError,
Tester, debug, CASSANDRA_VERSION_FROM_BUILD, create_ks, 
create_cf)
-from tools.assertions import assert_bootstrap_state, assert_invalid, 
assert_one, assert_row_count
+from tools.assertions import assert_bootstrap_state, assert_invalid, 
assert_none, assert_one, assert_row_count
 from tools.data import index_is_built, rows_to_list
 from tools.decorators import since
 from tools.misc import new_node
@@ -21,6 +21,16 @@ from tools.misc import new_node
 
 class TestSecondaryIndexes(Tester):
 
+@staticmethod
+def _index_sstables_files(node, keyspace, table, index):
+files = []
+for data_dir in node.data_directories():
+data_dir = os.path.join(data_dir, keyspace)
+base_tbl_dir = os.path.join(data_dir, [s for s in 
os.listdir(data_dir) if s.startswith(table)][0])
+index_sstables_dir = os.path.join(base_tbl_dir, '.' + index)
+files.extend(os.listdir(index_sstables_dir))
+return set(files)
+
 def data_created_before_index_not_returned_in_where_query_test(self):
 """
 @jira_ticket CASSANDRA-3367
@@ -307,14 +317,7 @@ class TestSecondaryIndexes(Tester):
 
 stmt = session.prepare('select * from standard1 where "C0" = ?')
 self.assertEqual(1, len(list(session.execute(stmt, [lookup_value]
-before_files = []
-index_sstables_dirs = []
-for data_dir in node1.data_directories():
-data_dir = os.path.join(data_dir, 'keyspace1')
-base_tbl_dir = os.path.join(data_dir, [s for s in 
os.listdir(data_dir) if s.startswith("standard1")][0])
-index_sstables_dir = os.path.join(base_tbl_dir, '.ix_c0')
-before_files.extend(os.listdir(index_sstables_dir))
-index_sstables_dirs.append(index_sstables_dir)
+before_files = self._index_sstables_files(node1, 'keyspace1', 
'standard1', 'ix_c0')
 
 node1.nodetool("rebuild_index keyspace1 standard1 ix_c0")
 start = time.time()
@@ -326,15 +329,160 @@ class TestSecondaryIndexes(Tester):
 else:
 raise DtestTimeoutError()
 
-after_files = []
-for index_sstables_dir in index_sstables_dirs:
-after_files.extend(os.listdir(index_sstables_dir))
-self.assertNotEqual(set(before_files), set(after_files))
+after_files = self._index_sstables_files(node1, 'keyspace1', 
'standard1', 'ix_c0')
+self.assertNotEqual(before_files, after_files)
 self.assertEqual(1, len(list(session.execute(stmt, [lookup_value]
 
 # verify that only the expecte

[08/50] cassandra git commit: Preserve DESCRIBE behaviour with quoted index names for older versions

2017-09-04 Thread paulo
Preserve DESCRIBE behaviour with quoted index names for older versions


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/afda2d45
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/afda2d45
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/afda2d45

Branch: refs/heads/master
Commit: afda2d45fe578359b2db51233c1f12833d8a196b
Parents: f292548
Author: Sam Tunnicliffe 
Authored: Fri Jan 20 12:40:59 2017 -0800
Committer: Philip Thompson 
Committed: Thu May 11 14:24:05 2017 -0400

--
 cqlsh_tests/cqlsh_tests.py | 13 +++--
 1 file changed, 11 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/afda2d45/cqlsh_tests/cqlsh_tests.py
--
diff --git a/cqlsh_tests/cqlsh_tests.py b/cqlsh_tests/cqlsh_tests.py
index 7734848..4feadc1 100644
--- a/cqlsh_tests/cqlsh_tests.py
+++ b/cqlsh_tests/cqlsh_tests.py
@@ -682,7 +682,6 @@ VALUES (4, blobAsInt(0x), '', blobAsBigint(0x), 0x, 
blobAsBoolean(0x), blobAsDec
 self.cluster.populate(1)
 self.cluster.start(wait_for_binary_proto=True)
 node1, = self.cluster.nodelist()
-
 self.execute(
 cql="""
 CREATE KEYSPACE test WITH REPLICATION = {'class' : 
'SimpleStrategy', 'replication_factor' : 1};
@@ -980,10 +979,20 @@ VALUES (4, blobAsInt(0x), '', blobAsBigint(0x), 0x, 
blobAsBoolean(0x), blobAsDec
 AND min_index_interval = 128
 AND read_repair_chance = 0.0
 AND speculative_retry = '99.0PERCENTILE';
-""" + self.get_index_output('"QuotedNameIndex"', 'test', 'users', 
'firstname') \
+""" + self.get_index_output('QuotedNameIndex', 'test', 'users', 
'firstname') \
+ "\n" + self.get_index_output('myindex', 'test', 'users', 
'age')
 
 def get_index_output(self, index, ks, table, col):
+# a quoted index name (e.g. "FooIndex") is only correctly echoed by 
DESCRIBE
+# from 3.0.11 & 3.10
+if index[0] == '"' and index[-1] == '"':
+version = self.cluster.version()
+if version >= LooseVersion('3.10'):
+pass
+elif LooseVersion('3.1') > version >= LooseVersion('3.0.11'):
+pass
+else:
+index = index[1:-1]
 return "CREATE INDEX {} ON {}.{} ({});".format(index, ks, table, col)
 
 def get_users_by_state_mv_output(self):





[20/50] cassandra git commit: Restrict size estimates multi-dc test to run on 3.0.11+

2017-09-04 Thread paulo
Restrict size estimates multi-dc test to run on 3.0.11+


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3cf276e9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3cf276e9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3cf276e9

Branch: refs/heads/master
Commit: 3cf276e966f253a49df91293a1a0b46620192c59
Parents: f69ced0
Author: Joel Knighton 
Authored: Mon Jun 19 16:40:14 2017 -0500
Committer: Joel Knighton 
Committed: Mon Jun 19 16:40:14 2017 -0500

--
 topology_test.py | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3cf276e9/topology_test.py
--
diff --git a/topology_test.py b/topology_test.py
index 7604ebe..15827f3 100644
--- a/topology_test.py
+++ b/topology_test.py
@@ -31,6 +31,7 @@ class TestTopology(Tester):
 
 node1.stop(gently=False)
 
+@since('3.0.11')
 def size_estimates_multidc_test(self):
 """
 Test that primary ranges are correctly generated on





[27/50] cassandra git commit: Fix do_upgrade in batch_test.py to upgrade to the current version

2017-09-04 Thread paulo
Fix do_upgrade in batch_test.py to upgrade to the current version


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/557ab7b6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/557ab7b6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/557ab7b6

Branch: refs/heads/master
Commit: 557ab7b6b7c62e341b3ec9c8e7041f7731a1c0bd
Parents: d2d9e6d
Author: Philip Thompson 
Authored: Wed Jul 5 11:40:30 2017 +0200
Committer: Philip Thompson 
Committed: Wed Jul 5 11:51:36 2017 +0200

--
 batch_test.py | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/557ab7b6/batch_test.py
--
diff --git a/batch_test.py b/batch_test.py
index e67d185..4194f10 100644
--- a/batch_test.py
+++ b/batch_test.py
@@ -5,7 +5,7 @@ from unittest import skipIf
 from cassandra import ConsistencyLevel, Timeout, Unavailable
 from cassandra.query import SimpleStatement
 
-from dtest import CASSANDRA_DIR, Tester, debug, create_ks
+from dtest import Tester, create_ks, debug
 from tools.assertions import (assert_all, assert_invalid, assert_one,
   assert_unavailable)
 from tools.decorators import since
@@ -433,7 +433,7 @@ class TestBatch(Tester):
 node.watch_log_for("DRAINED")
 node.stop(wait_other_notice=False)
 
-node.set_install_dir(install_dir=CASSANDRA_DIR)
+self.set_node_to_current_version(node)
 debug("Set new cassandra dir for {}: {}".format(node.name, 
node.get_install_dir()))
 
 # Restart nodes on new version





[10/50] cassandra git commit: Add upgrade test for old format indexed sstables (CASSANDRA-13236)

2017-09-04 Thread paulo
Add upgrade test for old format indexed sstables (CASSANDRA-13236)


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f1489423
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f1489423
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f1489423

Branch: refs/heads/master
Commit: f1489423113713d04a1ef1a2bd4e9160abaea4b1
Parents: 5c99d20
Author: Sam Tunnicliffe 
Authored: Thu May 4 18:04:56 2017 -0700
Committer: Philip Thompson 
Committed: Thu May 11 14:31:57 2017 -0400

--
 upgrade_tests/storage_engine_upgrade_test.py | 10 --
 1 file changed, 8 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f1489423/upgrade_tests/storage_engine_upgrade_test.py
--
diff --git a/upgrade_tests/storage_engine_upgrade_test.py 
b/upgrade_tests/storage_engine_upgrade_test.py
index ac578dc..aa1cc27 100644
--- a/upgrade_tests/storage_engine_upgrade_test.py
+++ b/upgrade_tests/storage_engine_upgrade_test.py
@@ -215,12 +215,18 @@ class TestStorageEngineUpgrade(Tester):
 assert_one(session, "SELECT * FROM t WHERE k = {}".format(n), [n, 
n + 1, n + 2, n + 3, n + 4])
 
 def upgrade_with_statics_test(self):
+self.upgrade_with_statics(rows=10)
+
+def upgrade_with_wide_partition_and_statics_test(self):
+""" Checks we read old indexed sstables with statics by creating 
partitions larger than a single index block"""
+self.upgrade_with_statics(rows=1000)
+
+def upgrade_with_statics(self, rows):
 """
 Validates we can read legacy sstables with static columns.
 """
 PARTITIONS = 1
-ROWS = 10
-
+ROWS = rows
 session = self._setup_cluster()
 
 session.execute('CREATE TABLE t (k int, s1 int static, s2 int static, 
t int, v1 int, v2 int, PRIMARY KEY (k, t))')





[14/50] cassandra git commit: Bump CCM version to 2.6.3

2017-09-04 Thread paulo
Bump CCM version to 2.6.3


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6f7caba9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6f7caba9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6f7caba9

Branch: refs/heads/master
Commit: 6f7caba9c59daa949e67efc28f75e7de4c5b9fa7
Parents: 7f3566a
Author: Joel Knighton 
Authored: Thu Jun 1 14:00:50 2017 -0500
Committer: Philip Thompson 
Committed: Fri Jun 2 11:49:37 2017 +0200

--
 requirements.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6f7caba9/requirements.txt
--
diff --git a/requirements.txt b/requirements.txt
index 40fb0e1..9be7094 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -4,7 +4,7 @@
 futures
 six
 -e 
git+https://github.com/datastax/python-driver.git@cassandra-test#egg=cassandra-driver
-ccm==2.6.0
+ccm==2.6.3
 cql
 decorator
 docopt





[03/50] cassandra git commit: CASSANDRA-13483: fixed test failure in snapshot_test.TestSnapshot.test_snapshot_and_restore_dropping_a_column

2017-09-04 Thread paulo
CASSANDRA-13483: fixed test failure in 
snapshot_test.TestSnapshot.test_snapshot_and_restore_dropping_a_column


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0667de02
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0667de02
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0667de02

Branch: refs/heads/master
Commit: 0667de025dd4e85dbae1b30db4a2e189c46ff47f
Parents: e6b4706
Author: Zhao Yang 
Authored: Mon May 1 00:34:58 2017 +0800
Committer: Philip Thompson 
Committed: Mon May 1 20:02:40 2017 -0400

--
 snapshot_test.py | 2 ++
 1 file changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/0667de02/snapshot_test.py
--
diff --git a/snapshot_test.py b/snapshot_test.py
index 7169a7c..563af81 100644
--- a/snapshot_test.py
+++ b/snapshot_test.py
@@ -13,6 +13,7 @@ from tools.assertions import assert_one
 from tools.files import replace_in_file, safe_mkdtemp
 from tools.hacks import advance_to_next_cl_segment
 from tools.misc import ImmutableMapping
+from tools.decorators import since
 
 
 class SnapshotTester(Tester):
@@ -114,6 +115,7 @@ class TestSnapshot(SnapshotTester):
 
 self.assertEqual(rows[0][0], 100)
 
+@since('3.11')
 def test_snapshot_and_restore_dropping_a_column(self):
 """
 @jira_ticket CASSANDRA-13276





[22/50] cassandra git commit: Removed cluster reuse from codebase

2017-09-04 Thread paulo
Removed cluster reuse from codebase


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1cc49419
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1cc49419
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1cc49419

Branch: refs/heads/master
Commit: 1cc4941916a3df199821f974e47acd667f65c2b8
Parents: 93aa314
Author: MichaelHamm 
Authored: Mon Jun 19 11:06:13 2017 -0700
Committer: Philip Thompson 
Committed: Tue Jun 20 12:09:35 2017 +0200

--
 INSTALL.md  |  4 
 README.md   |  3 +--
 cqlsh_tests/cqlsh_copy_tests.py | 24 +---
 dtest.py| 25 -
 upgrade_tests/cql_tests.py  | 14 +-
 5 files changed, 3 insertions(+), 67 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1cc49419/INSTALL.md
--
diff --git a/INSTALL.md b/INSTALL.md
index 0e9e9e1..69985c3 100644
--- a/INSTALL.md
+++ b/INSTALL.md
@@ -129,10 +129,6 @@ will often need to modify them in some fashion at some 
later point:
  cd ~/git/cstar/cassandra-dtest
  PRINT_DEBUG=true nosetests -x -s -v putget_test.py
 
-* To reuse cassandra clusters when possible, set the environment variable 
REUSE_CLUSTER
-
-REUSE_CLUSTER=true nosetests -s -v cql_tests.py
-
 * Some tests will not run with vnodes enabled (you'll see a "SKIP: Test 
disabled for vnodes" message in that case). Use the provided runner script 
instead:
 
 ./run_dtests.py --vnodes false --nose-options "-x -s -v" 
topology_test.py:TestTopology.movement_test

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1cc49419/README.md
--
diff --git a/README.md b/README.md
index 79a65e0..ba32c3c 100644
--- a/README.md
+++ b/README.md
@@ -43,8 +43,7 @@ environment variable (that still will have precedence if 
given though).
 Existing tests are probably the best place to start to look at how to write
 tests.
 
-Each test spawns a new fresh cluster and tears it down after the test, unless
-`REUSE_CLUSTER` is set to true. Then some tests will share cassandra 
instances. If a
+Each test spawns a new fresh cluster and tears it down after the test. If a
 test fails, the logs for the node are saved in a `logs/` directory
 for analysis (it's not perfect but has been good enough so far, I'm open to
 better suggestions).

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1cc49419/cqlsh_tests/cqlsh_copy_tests.py
--
diff --git a/cqlsh_tests/cqlsh_copy_tests.py b/cqlsh_tests/cqlsh_copy_tests.py
index 43d33db..8501497 100644
--- a/cqlsh_tests/cqlsh_copy_tests.py
+++ b/cqlsh_tests/cqlsh_copy_tests.py
@@ -25,8 +25,7 @@ from ccmlib.common import is_win
 from cqlsh_tools import (DummyColorMap, assert_csvs_items_equal, csv_rows,
  monkeypatch_driver, random_list, unmonkeypatch_driver,
  write_rows_to_csv)
-from dtest import (DISABLE_VNODES, Tester, canReuseCluster, debug,
-   freshCluster, warning, create_ks)
+from dtest import (DISABLE_VNODES, Tester, debug, warning, create_ks)
 from tools.data import rows_to_list
 from tools.decorators import since
 from tools.metadata_wrapper import (UpdatingClusterMetadataWrapper,
@@ -55,7 +54,6 @@ class UTC(datetime.tzinfo):
 return datetime.timedelta(0)
 
 
-@canReuseCluster
 class CqlshCopyTest(Tester):
 """
 Tests the COPY TO and COPY FROM features in cqlsh.
@@ -2359,23 +2357,18 @@ class CqlshCopyTest(Tester):
 new_results = list(self.session.execute("SELECT * FROM testcopyto"))
 self.assertEqual(results, new_results)
 
-@freshCluster()
 def test_round_trip_murmur3(self):
 self._test_round_trip(nodes=3, partitioner="murmur3")
 
-@freshCluster()
 def test_round_trip_random(self):
 self._test_round_trip(nodes=3, partitioner="random")
 
-@freshCluster()
 def test_round_trip_order_preserving(self):
 self._test_round_trip(nodes=3, partitioner="order")
 
-@freshCluster()
 def test_round_trip_byte_ordered(self):
 self._test_round_trip(nodes=3, partitioner="byte")
 
-@freshCluster()
 def test_source_copy_round_trip(self):
 """
 Like test_round_trip, but uses the SOURCE command to execute the
@@ -2523,7 +2516,6 @@ class CqlshCopyTest(Tester):
 
 return ret
 
-@freshCluster()
 def test_bulk_round_trip_default(self):
 """
 Test bulk import with default stress import (one row per operation)
@@ -2542,7 +2534,6 @@ class CqlshCopyTest(Tester):
 self._test_bulk_round_trip(nodes

[17/50] cassandra git commit: Reformatted git install references from 'git:cassandra-2.2' to 'github:apache/cassandra-2.2'.

2017-09-04 Thread paulo
Reformatted git install references from 'git:cassandra-2.2' to 'github:apache/cassandra-2.2'.


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c93bd487
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c93bd487
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c93bd487

Branch: refs/heads/master
Commit: c93bd48712f32aaff475bc3265968b36c6665229
Parents: ef84f76
Author: MichaelHamm 
Authored: Fri Jun 9 14:21:42 2017 -0700
Committer: Philip Thompson 
Committed: Mon Jun 12 13:02:54 2017 +0200

--
 mixed_version_test.py|  4 ++--
 offline_tools_test.py| 14 +++---
 upgrade_crc_check_chance_test.py |  2 +-
 upgrade_internal_auth_test.py|  6 +++---
 4 files changed, 13 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c93bd487/mixed_version_test.py
--
diff --git a/mixed_version_test.py b/mixed_version_test.py
index f60584a..9da28b9 100644
--- a/mixed_version_test.py
+++ b/mixed_version_test.py
@@ -21,9 +21,9 @@ class TestSchemaChanges(Tester):
 node1, node2 = cluster.nodelist()
 original_version = node1.get_cassandra_version()
 if original_version.vstring.startswith('2.0'):
-upgraded_version = 'git:cassandra-2.1'
+upgraded_version = 'github:apache/cassandra-2.1'
 elif original_version.vstring.startswith('2.1'):
-upgraded_version = 'git:cassandra-2.2'
+upgraded_version = 'github:apache/cassandra-2.2'
 else:
 self.skip("This test is only designed to work with 2.0 and 2.1 
right now")
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c93bd487/offline_tools_test.py
--
diff --git a/offline_tools_test.py b/offline_tools_test.py
index 3f9c2b7..c0a0010 100644
--- a/offline_tools_test.py
+++ b/offline_tools_test.py
@@ -329,19 +329,19 @@ class TestOfflineTools(Tester):
 # CCM doesn't handle this upgrade correctly and results in an 
error when flushing 2.1:
 #   Error opening zip file or JAR manifest missing : 
/home/mshuler/git/cassandra/lib/jamm-0.2.5.jar
 # The 2.1 installed jamm version is 0.3.0, but bin/cassandra.in.sh 
used by nodetool still has 0.2.5
-# (when this is fixed in CCM issue #463, install 
version='git:cassandra-2.0' as below)
+# (when this is fixed in CCM issue #463, install 
version='github:apache/cassandra-2.0' as below)
 self.skipTest('Skipping 2.1 test due to jamm.jar version upgrade 
problem in CCM node configuration.')
 elif testversion < '3.0':
-debug('Test version: {} - installing 
git:cassandra-2.1'.format(testversion))
-cluster.set_install_dir(version='git:cassandra-2.1')
+debug('Test version: {} - installing 
github:apache/cassandra-2.1'.format(testversion))
+cluster.set_install_dir(version='github:apache/cassandra-2.1')
 # As of 3.5, sstable format 'ma' from 3.0 is still the latest - 
install 2.2 to upgrade from
 elif testversion < '4.0':
-debug('Test version: {} - installing 
git:cassandra-2.2'.format(testversion))
-cluster.set_install_dir(version='git:cassandra-2.2')
+debug('Test version: {} - installing 
github:apache/cassandra-2.2'.format(testversion))
+cluster.set_install_dir(version='github:apache/cassandra-2.2')
 # From 4.0, one can only upgrade from 3.0
 else:
-debug('Test version: {} - installing 
git:cassandra-3.0'.format(testversion))
-cluster.set_install_dir(version='git:cassandra-3.0')
+debug('Test version: {} - installing 
github:apache/cassandra-3.0'.format(testversion))
+cluster.set_install_dir(version='github:apache/cassandra-3.0')
 
 # Start up last major version, write out an sstable to upgrade, and 
stop node
 cluster.populate(1).start(wait_for_binary_proto=True)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c93bd487/upgrade_crc_check_chance_test.py
--
diff --git a/upgrade_crc_check_chance_test.py b/upgrade_crc_check_chance_test.py
index 0367104..ec758c2 100644
--- a/upgrade_crc_check_chance_test.py
+++ b/upgrade_crc_check_chance_test.py
@@ -25,7 +25,7 @@ class TestCrcCheckChanceUpgrade(Tester):
 cluster = self.cluster
 
 # Forcing cluster version on purpose
-cluster.set_install_dir(version="git:cassandra-2.2")
+cluster.set_install_dir(version="github:apache/cassandra-2.2")
 cluster.populate(2).start()
 
 node1, node2 = cluster.nodelist()

[18/50] cassandra git commit: Repair preview tests should only run on 4.0+

2017-09-04 Thread paulo
Repair preview tests should only run on 4.0+


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/dfeb5dfb
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/dfeb5dfb
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/dfeb5dfb

Branch: refs/heads/master
Commit: dfeb5dfb2930b1b9d236d1fa4ac159db53c1f60a
Parents: c93bd48
Author: Joel Knighton 
Authored: Fri Jun 16 14:53:32 2017 -0500
Committer: Philip Thompson 
Committed: Sat Jun 17 17:09:34 2017 +0200

--
 repair_tests/preview_repair_test.py | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/dfeb5dfb/repair_tests/preview_repair_test.py
--
diff --git a/repair_tests/preview_repair_test.py 
b/repair_tests/preview_repair_test.py
index e888a9b..86627ab 100644
--- a/repair_tests/preview_repair_test.py
+++ b/repair_tests/preview_repair_test.py
@@ -4,9 +4,10 @@ from cassandra import ConsistencyLevel
 from cassandra.query import SimpleStatement
 
 from dtest import Tester
-from tools.decorators import no_vnodes
+from tools.decorators import no_vnodes, since
 
 
+@since('4.0')
 class PreviewRepairTest(Tester):
 
 def assert_no_repair_history(self, session):





[21/50] cassandra git commit: Merge pull request #1484 from jkni/since-size-estimates

2017-09-04 Thread paulo
Merge pull request #1484 from jkni/since-size-estimates

Restrict size estimates multi-dc test to run on 3.0.11+

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/93aa3147
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/93aa3147
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/93aa3147

Branch: refs/heads/master
Commit: 93aa3147a5feced7eb0cc4cfb852e8a67f9251e9
Parents: f69ced0 3cf276e
Author: Paulo Ricardo Motta Gomes 
Authored: Mon Jun 19 21:47:28 2017 -0500
Committer: GitHub 
Committed: Mon Jun 19 21:47:28 2017 -0500

--
 topology_test.py | 1 +
 1 file changed, 1 insertion(+)
--






[05/50] cassandra git commit: adding cluster reconfiguration tests for 9143 (#1468)

2017-09-04 Thread paulo
adding cluster reconfiguration tests for 9143 (#1468)

* adding cluster reconfiguration tests for 9143

* fixing blank line

* fixing whitespace issue


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/dc8cb3fb
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/dc8cb3fb
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/dc8cb3fb

Branch: refs/heads/master
Commit: dc8cb3fb12cad229131b57eb789e41246a108924
Parents: d5c413c
Author: Blake Eggleston 
Authored: Tue May 9 16:33:08 2017 -0700
Committer: Philip Thompson 
Committed: Tue May 9 19:33:08 2017 -0400

--
 repair_tests/incremental_repair_test.py | 113 ++-
 1 file changed, 112 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/dc8cb3fb/repair_tests/incremental_repair_test.py
--
diff --git a/repair_tests/incremental_repair_test.py 
b/repair_tests/incremental_repair_test.py
index 270e1fa..a447d56 100644
--- a/repair_tests/incremental_repair_test.py
+++ b/repair_tests/incremental_repair_test.py
@@ -14,7 +14,8 @@ from nose.plugins.attrib import attr
 from dtest import Tester, debug, create_ks, create_cf
 from tools.assertions import assert_almost_equal, assert_one
 from tools.data import insert_c1c2
-from tools.decorators import since
+from tools.decorators import since, no_vnodes
+from tools.misc import new_node
 
 
 class ConsistentState(object):
@@ -647,3 +648,113 @@ class TestIncRepair(Tester):
 
 for out in (node.run_sstablemetadata(keyspace='keyspace1').stdout for 
node in cluster.nodelist() if len(node.get_sstables('keyspace1', 'standard1')) 
> 0):
 self.assertNotIn('Repaired at: 0', out)
+
+@no_vnodes()
+@since('4.0')
+def move_test(self):
+""" Test repaired data remains in sync after a move """
+cluster = self.cluster
+cluster.set_configuration_options(values={'hinted_handoff_enabled': 
False, 'commitlog_sync_period_in_ms': 500})
+cluster.populate(4, tokens=[0, 2**32, 2**48, -(2**32)]).start()
+node1, node2, node3, node4 = cluster.nodelist()
+
+session = self.patient_exclusive_cql_connection(node3)
+session.execute("CREATE KEYSPACE ks WITH 
REPLICATION={'class':'SimpleStrategy', 'replication_factor': 2}")
+session.execute("CREATE TABLE ks.tbl (k INT PRIMARY KEY, v INT)")
+
+# insert some data
+stmt = SimpleStatement("INSERT INTO ks.tbl (k,v) VALUES (%s, %s)")
+for i in range(1000):
+session.execute(stmt, (i, i))
+
+node1.repair(options=['ks'])
+
+for i in range(1000):
+v = i + 1000
+session.execute(stmt, (v, v))
+
+# everything should be in sync
+for node in cluster.nodelist():
+result = node.repair(options=['ks', '--validate'])
+self.assertIn("Repaired data is in sync", result.stdout)
+
+node2.nodetool('move {}'.format(2**16))
+
+# everything should still be in sync
+for node in cluster.nodelist():
+result = node.repair(options=['ks', '--validate'])
+self.assertIn("Repaired data is in sync", result.stdout)
+
+@no_vnodes()
+@since('4.0')
+def decommission_test(self):
+""" Test repaired data remains in sync after a decommission """
+cluster = self.cluster
+cluster.set_configuration_options(values={'hinted_handoff_enabled': 
False, 'commitlog_sync_period_in_ms': 500})
+cluster.populate(4).start()
+node1, node2, node3, node4 = cluster.nodelist()
+
+session = self.patient_exclusive_cql_connection(node3)
+session.execute("CREATE KEYSPACE ks WITH 
REPLICATION={'class':'SimpleStrategy', 'replication_factor': 2}")
+session.execute("CREATE TABLE ks.tbl (k INT PRIMARY KEY, v INT)")
+
+# insert some data
+stmt = SimpleStatement("INSERT INTO ks.tbl (k,v) VALUES (%s, %s)")
+for i in range(1000):
+session.execute(stmt, (i, i))
+
+node1.repair(options=['ks'])
+
+for i in range(1000):
+v = i + 1000
+session.execute(stmt, (v, v))
+
+# everything should be in sync
+for node in cluster.nodelist():
+result = node.repair(options=['ks', '--validate'])
+self.assertIn("Repaired data is in sync", result.stdout)
+
+node2.nodetool('decommission')
+
+# everything should still be in sync
+for node in [node1, node3, node4]:
+result = node.repair(options=['ks', '--validate'])
+self.assertIn("Repaired data is in sync", result.stdout)
+
+@no_vnodes()
+@since('4.0')
+def bootstrap_test(self):
+""" Test repaired data remai

[06/50] cassandra git commit: New test for CASSANDRA-11720; Changing `max_hint_window_in_ms` at runtime

2017-09-04 Thread paulo
New test for CASSANDRA-11720; Changing `max_hint_window_in_ms` at runtime


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6540ba4b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6540ba4b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6540ba4b

Branch: refs/heads/master
Commit: 6540ba4be1623e330376895e263030f4811e2048
Parents: dc8cb3f
Author: mck 
Authored: Wed May 3 12:02:08 2017 +1000
Committer: Philip Thompson 
Committed: Wed May 10 19:58:51 2017 -0400

--
 hintedhandoff_test.py | 25 ++---
 1 file changed, 22 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6540ba4b/hintedhandoff_test.py
--
diff --git a/hintedhandoff_test.py b/hintedhandoff_test.py
index 7fb8e20..1ed3305 100644
--- a/hintedhandoff_test.py
+++ b/hintedhandoff_test.py
@@ -42,13 +42,13 @@ class TestHintedHandoffConfig(Tester):
 self.assertEqual('', err)
 return out
 
-def _do_hinted_handoff(self, node1, node2, enabled):
+def _do_hinted_handoff(self, node1, node2, enabled, keyspace='ks'):
 """
 Test that if we stop one node the other one
 will store hints only when hinted handoff is enabled
 """
 session = self.patient_exclusive_cql_connection(node1)
-create_ks(session, 'ks', 2)
+create_ks(session, keyspace, 2)
 create_c1c2_table(self, session)
 
 node2.stop(wait_other_notice=True)
@@ -64,7 +64,7 @@ class TestHintedHandoffConfig(Tester):
 node1.stop(wait_other_notice=True)
 
 # Check node2 for all the keys that should have been delivered via HH 
if enabled or not if not enabled
-session = self.patient_exclusive_cql_connection(node2, keyspace='ks')
+session = self.patient_exclusive_cql_connection(node2, 
keyspace=keyspace)
 for n in xrange(0, 100):
 if enabled:
 query_c1c2(session, n, ConsistencyLevel.ONE)
@@ -121,6 +121,25 @@ class TestHintedHandoffConfig(Tester):
 
 self._do_hinted_handoff(node1, node2, True)
 
+def hintedhandoff_setmaxwindow_test(self):
+"""
+Test global hinted handoff against max_hint_window_in_ms update via 
nodetool
+"""
+node1, node2 = self._start_two_node_cluster({'hinted_handoff_enabled': 
True, "max_hint_window_in_ms": 30})
+
+for node in node1, node2:
+res = self._launch_nodetool_cmd(node, 'statushandoff')
+self.assertEqual('Hinted handoff is running', res.rstrip())
+
+res = self._launch_nodetool_cmd(node, 'getmaxhintwindow')
+self.assertEqual('Current max hint window: 30 ms', res.rstrip())
+self._do_hinted_handoff(node1, node2, True)
+node1.start(wait_other_notice=True)
+self._launch_nodetool_cmd(node, 'setmaxhintwindow 1')
+res = self._launch_nodetool_cmd(node, 'getmaxhintwindow')
+self.assertEqual('Current max hint window: 1 ms', res.rstrip())
+self._do_hinted_handoff(node1, node2, False, keyspace='ks2')
+
 def hintedhandoff_dc_disabled_test(self):
 """
 Test global hinted handoff enabled with the dc disabled





[07/50] cassandra git commit: Fix version check after C* ticket was committed

2017-09-04 Thread paulo
Fix version check after C* ticket was committed


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5c99d202
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5c99d202
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5c99d202

Branch: refs/heads/master
Commit: 5c99d2028d1b03c2543dd81b90700922aa9ec93b
Parents: afda2d4
Author: Sam Tunnicliffe 
Authored: Thu May 11 18:33:47 2017 +0100
Committer: Philip Thompson 
Committed: Thu May 11 14:24:05 2017 -0400

--
 cqlsh_tests/cqlsh_tests.py | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5c99d202/cqlsh_tests/cqlsh_tests.py
--
diff --git a/cqlsh_tests/cqlsh_tests.py b/cqlsh_tests/cqlsh_tests.py
index 4feadc1..e7bc11c 100644
--- a/cqlsh_tests/cqlsh_tests.py
+++ b/cqlsh_tests/cqlsh_tests.py
@@ -984,12 +984,12 @@ VALUES (4, blobAsInt(0x), '', blobAsBigint(0x), 0x, 
blobAsBoolean(0x), blobAsDec
 
 def get_index_output(self, index, ks, table, col):
 # a quoted index name (e.g. "FooIndex") is only correctly echoed by 
DESCRIBE
-# from 3.0.11 & 3.10
+# from 3.0.14 & 3.11
 if index[0] == '"' and index[-1] == '"':
 version = self.cluster.version()
-if version >= LooseVersion('3.10'):
+if version >= LooseVersion('3.11'):
 pass
-elif LooseVersion('3.1') > version >= LooseVersion('3.0.11'):
+elif LooseVersion('3.1') > version >= LooseVersion('3.0.14'):
 pass
 else:
 index = index[1:-1]





[12/50] cassandra git commit: add test to confirm that hostname validation is working

2017-09-04 Thread paulo
add test to confirm that hostname validation is working


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bea71d8f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bea71d8f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bea71d8f

Branch: refs/heads/master
Commit: bea71d8fd2e02777bd5c03234489ae9e0efe177e
Parents: 538d658
Author: Jason Brown 
Authored: Thu May 25 14:59:37 2017 -0700
Committer: Philip Thompson 
Committed: Tue May 30 14:18:13 2017 +0200

--
 sslnodetonode_test.py | 12 
 1 file changed, 12 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/bea71d8f/sslnodetonode_test.py
--
diff --git a/sslnodetonode_test.py b/sslnodetonode_test.py
index c4a9184..a11a3f4 100644
--- a/sslnodetonode_test.py
+++ b/sslnodetonode_test.py
@@ -26,6 +26,18 @@ class TestNodeToNodeSSLEncryption(Tester):
 self.cluster.start()
 self.cql_connection(self.node1)
 
+def ssl_correct_hostname_with_validation_test(self):
+"""Should be able to start with valid ssl options"""
+
+credNode1 = sslkeygen.generate_credentials("127.0.0.1")
+credNode2 = sslkeygen.generate_credentials("127.0.0.2", 
credNode1.cakeystore, credNode1.cacert)
+
+self.setup_nodes(credNode1, credNode2, endpointVerification=True)
+self.allow_log_errors = False
+self.cluster.start()
+time.sleep(2)
+self.cql_connection(self.node1)
+
 def ssl_wrong_hostname_no_validation_test(self):
 """Should be able to start with valid ssl options"""
 





[jira] [Comment Edited] (CASSANDRA-10496) Make DTCS/TWCS split partitions based on time during compaction

2017-09-04 Thread mck (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16153049#comment-16153049
 ] 

mck edited comment on CASSANDRA-10496 at 9/5/17 2:28 AM:
-

[~iksaif],
a few comments:
 - i suspect [~krummas] is keen to see a patch that splits partitions. 
 - changing locations isn't supported. see how i paired it with the writer in 
my experiment above.
 - Marcus' original idea was to create only two sstables per TWCS window. is 
that still possible?
 - shouldn't the bucket be based on the maxTimestamp? see `getBuckets(..)` and 
`newestBucket(..)`
 - is it correct that the idea is as "old" sstables are split out they would 
later then get re-compacted with their original bucket, and the domino effect 
that this could cause re-compacting older buckets could be avoided by 
increasing minThreshold to 3?



was (Author: michaelsembwever):
[~iksaif],
a few comments:
 - i suspect [~krummas] is keen to see a patch that splits partitions. 
 - changing locations isn't supported. see how i paired it with the writer in 
my experiment above.
 - i don't think you want to create the SSTableWriters multiple times.
 - Marcus' original idea was to create only two sstables per TWCS window. is 
that still possible?
 - shouldn't the bucket be based on the maxTimestamp? see `getBuckets(..)` and 
`newestBucket(..)`
 - is it correct that the idea is as "old" sstables are split out they would 
later then get re-compacted with their original bucket, and the domino effect 
that this could cause re-compacting older buckets could be avoided by 
increasing minThreshold to 3?


> Make DTCS/TWCS split partitions based on time during compaction
> ---
>
> Key: CASSANDRA-10496
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10496
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>  Labels: dtcs
> Fix For: 4.x
>
>
> To avoid getting old data in new time windows with DTCS (or related, like 
> [TWCS|CASSANDRA-9666]), we need to split out old data into its own sstable 
> during compaction.
> My initial idea is to just create two sstables, when we create the compaction 
> task we state the start and end times for the window, and any data older than 
> the window will be put in its own sstable.
> By creating a single sstable with old data, we will incrementally get the 
> windows correct - say we have an sstable with these timestamps:
> {{[100, 99, 98, 97, 75, 50, 10]}}
> and we are compacting in window {{[100, 80]}} - we would create two sstables:
> {{[100, 99, 98, 97]}}, {{[75, 50, 10]}}, and the first window is now 
> 'correct'. The next compaction would compact in window {{[80, 60]}} and 
> create sstables {{[75]}}, {{[50, 10]}} etc.
> We will probably also want to base the windows on the newest data in the 
> sstables so that we actually have older data than the window.
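
A minimal sketch of the splitting rule quoted above, in Python (illustration only, not the actual compaction code; treating the window boundary as inclusive at the lower bound is an assumption):

def split_by_window(timestamps, window):
    # window is a (newest, oldest) pair, e.g. (100, 80); only the lower bound
    # matters for the split: data at or above it stays in the current sstable,
    # older data goes into its own sstable.
    _newest, oldest = window
    in_window = [t for t in timestamps if t >= oldest]
    older = [t for t in timestamps if t < oldest]
    return in_window, older

# Reproduces the worked example from the description:
# [100, 99, 98, 97, 75, 50, 10] compacted in window [100, 80]
assert split_by_window([100, 99, 98, 97, 75, 50, 10], (100, 80)) == ([100, 99, 98, 97], [75, 50, 10])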







[jira] [Commented] (CASSANDRA-10496) Make DTCS/TWCS split partitions based on time during compaction

2017-09-04 Thread mck (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16153049#comment-16153049
 ] 

mck commented on CASSANDRA-10496:
-

[~iksaif],
a few comments:
 - i suspect [~krummas] is keen to see a patch that splits partitions. Even 
though a solution that doesn't still has a lot to offer.
 - changing locations isn't supported. see how i paired it with the writer in 
my experiment above.
 - i don't think you want to create the SSTableWriters multiple times.
 - Marcus' original idea was to create only two sstables per TWCS window. is 
that still possible?
 - shouldn't the bucket be based on the maxTimestamp? see `getBuckets(..)` and 
`newestBucket(..)`
 - is it correct that the idea is as "old" sstables are split out they would 
later then get re-compacted with their original bucket, and the domino effect 
that this could cause re-compacting older buckets could be avoided by 
increasing minThreshold to 3?


> Make DTCS/TWCS split partitions based on time during compaction
> ---
>
> Key: CASSANDRA-10496
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10496
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>  Labels: dtcs
> Fix For: 4.x
>
>
> To avoid getting old data in new time windows with DTCS (or related, like 
> [TWCS|CASSANDRA-9666]), we need to split out old data into its own sstable 
> during compaction.
> My initial idea is to just create two sstables, when we create the compaction 
> task we state the start and end times for the window, and any data older than 
> the window will be put in its own sstable.
> By creating a single sstable with old data, we will incrementally get the 
> windows correct - say we have an sstable with these timestamps:
> {{[100, 99, 98, 97, 75, 50, 10]}}
> and we are compacting in window {{[100, 80]}} - we would create two sstables:
> {{[100, 99, 98, 97]}}, {{[75, 50, 10]}}, and the first window is now 
> 'correct'. The next compaction would compact in window {{[80, 60]}} and 
> create sstables {{[75]}}, {{[50, 10]}} etc.
> We will probably also want to base the windows on the newest data in the 
> sstables so that we actually have older data than the window.







cassandra git commit: fix logging context

2017-09-04 Thread dbrosius
Repository: cassandra
Updated Branches:
  refs/heads/trunk d32c474f6 -> 460360093


fix logging context


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/46036009
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/46036009
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/46036009

Branch: refs/heads/trunk
Commit: 46036009376eaba548bbb4ac4ddf2531c720ed92
Parents: d32c474
Author: Dave Brosius 
Authored: Mon Sep 4 22:28:23 2017 -0400
Committer: Dave Brosius 
Committed: Mon Sep 4 22:28:23 2017 -0400

--
 .../org/apache/cassandra/net/async/InboundHandshakeHandler.java| 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/46036009/src/java/org/apache/cassandra/net/async/InboundHandshakeHandler.java
--
diff --git 
a/src/java/org/apache/cassandra/net/async/InboundHandshakeHandler.java 
b/src/java/org/apache/cassandra/net/async/InboundHandshakeHandler.java
index 7a8303c..aa4f4ff 100644
--- a/src/java/org/apache/cassandra/net/async/InboundHandshakeHandler.java
+++ b/src/java/org/apache/cassandra/net/async/InboundHandshakeHandler.java
@@ -37,7 +37,7 @@ import org.apache.cassandra.streaming.messages.StreamMessage;
  */
 class InboundHandshakeHandler extends ByteToMessageDecoder
 {
-private static final Logger logger = 
LoggerFactory.getLogger(NettyFactory.class);
+private static final Logger logger = 
LoggerFactory.getLogger(InboundHandshakeHandler.class);
 
 enum State { START, AWAITING_HANDSHAKE_BEGIN, 
AWAIT_MESSAGING_START_RESPONSE, HANDSHAKE_COMPLETE, HANDSHAKE_FAIL }
 





[jira] [Comment Edited] (CASSANDRA-10496) Make DTCS/TWCS split partitions based on time during compaction

2017-09-04 Thread mck (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16153049#comment-16153049
 ] 

mck edited comment on CASSANDRA-10496 at 9/5/17 2:27 AM:
-

[~iksaif],
a few comments:
 - i suspect [~krummas] is keen to see a patch that splits partitions. 
 - changing locations isn't supported. see how i paired it with the writer in 
my experiment above.
 - i don't think you want to create the SSTableWriters multiple times.
 - Marcus' original idea was to create only two sstables per TWCS window. is 
that still possible?
 - shouldn't the bucket be based on the maxTimestamp? see `getBuckets(..)` and 
`newestBucket(..)`
 - is it correct that the idea is as "old" sstables are split out they would 
later then get re-compacted with their original bucket, and the domino effect 
that this could cause re-compacting older buckets could be avoided by 
increasing minThreshold to 3?



was (Author: michaelsembwever):
[~iksaif],
a few comments:
 - i suspect [~krummas] is keen to see a patch that splits partitions. Even 
though a solution that doesn't still has a lot to offer.
 - changing locations isn't supported. see how i paired it with the writer in 
my experiment above.
 - i don't think you want to create the SSTableWriters multiple times.
 - Marcus' original idea was to create only two sstables per TWCS window. is 
that still possible?
 - shouldn't the bucket be based on the maxTimestamp? see `getBuckets(..)` and 
`newestBucket(..)`
 - is it correct that the idea is as "old" sstables are split out they would 
later then get re-compacted with their original bucket, and the domino effect 
that this could cause re-compacting older buckets could be avoided by 
increasing minThreshold to 3?


> Make DTCS/TWCS split partitions based on time during compaction
> ---
>
> Key: CASSANDRA-10496
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10496
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>  Labels: dtcs
> Fix For: 4.x
>
>
> To avoid getting old data in new time windows with DTCS (or related, like 
> [TWCS|CASSANDRA-9666]), we need to split out old data into its own sstable 
> during compaction.
> My initial idea is to just create two sstables, when we create the compaction 
> task we state the start and end times for the window, and any data older than 
> the window will be put in its own sstable.
> By creating a single sstable with old data, we will incrementally get the 
> windows correct - say we have an sstable with these timestamps:
> {{[100, 99, 98, 97, 75, 50, 10]}}
> and we are compacting in window {{[100, 80]}} - we would create two sstables:
> {{[100, 99, 98, 97]}}, {{[75, 50, 10]}}, and the first window is now 
> 'correct'. The next compaction would compact in window {{[80, 60]}} and 
> create sstables {{[75]}}, {{[50, 10]}} etc.
> We will probably also want to base the windows on the newest data in the 
> sstables so that we actually have older data than the window.







cassandra git commit: add missing logging placeholder to match parameter

2017-09-04 Thread dbrosius
Repository: cassandra
Updated Branches:
  refs/heads/trunk 10d5b7b2f -> d32c474f6


add missing logging placeholder to match parameter


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d32c474f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d32c474f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d32c474f

Branch: refs/heads/trunk
Commit: d32c474f6d29efc6324886b08ac52b27d92c1434
Parents: 10d5b7b
Author: Dave Brosius 
Authored: Mon Sep 4 22:25:53 2017 -0400
Committer: Dave Brosius 
Committed: Mon Sep 4 22:25:53 2017 -0400

--
 src/java/org/apache/cassandra/net/async/MessageOutHandler.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d32c474f/src/java/org/apache/cassandra/net/async/MessageOutHandler.java
--
diff --git a/src/java/org/apache/cassandra/net/async/MessageOutHandler.java 
b/src/java/org/apache/cassandra/net/async/MessageOutHandler.java
index b4ceb92..e88b56a 100644
--- a/src/java/org/apache/cassandra/net/async/MessageOutHandler.java
+++ b/src/java/org/apache/cassandra/net/async/MessageOutHandler.java
@@ -115,7 +115,7 @@ class MessageOutHandler extends ChannelDuplexHandler
 // the channel handlers are removed from the channel potentially saync 
from the close operation.
 if (!ctx.channel().isOpen())
 {
-logger.debug("attempting to process a message in the pipeline, but 
the channel is closed", ctx.channel().id());
+logger.debug("attempting to process a message in the pipeline, but 
channel {} is closed", ctx.channel().id());
 return;
 }
 





[jira] [Updated] (CASSANDRA-13418) Allow TWCS to ignore overlaps when dropping fully expired sstables

2017-09-04 Thread mck (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-13418:

   Resolution: Fixed
Fix Version/s: (was: 3.11.x)
   (was: 4.x)
   4.0
   3.11.1
   Status: Resolved  (was: Patch Available)

committed.

> Allow TWCS to ignore overlaps when dropping fully expired sstables
> --
>
> Key: CASSANDRA-13418
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13418
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Corentin Chary
>Assignee: Romain GERARD
>  Labels: twcs
> Fix For: 3.11.1, 4.0
>
> Attachments: twcs-cleanup.png
>
>
> http://thelastpickle.com/blog/2016/12/08/TWCS-part1.html explains it well. If 
> you really want read-repairs you're going to have sstables blocking the 
> expiration of other fully expired SSTables because they overlap.
> You can set unchecked_tombstone_compaction = true or tombstone_threshold to a 
> very low value and that will purge the blockers of old data that should 
> already have expired, thus removing the overlaps and allowing the other 
> SSTables to expire.
> The thing is that this is rather CPU intensive and not optimal. If you have 
> time series, you might not care if all your data doesn't exactly expire at 
> the right time, or if data re-appears for some time, as long as it gets 
> deleted as soon as it can. And in this situation I believe it would be really 
> beneficial to allow users to simply ignore overlapping SSTables when looking 
> for fully expired ones.
> To the question: why would you need read-repairs?
> - Full repairs basically take longer than the TTL of the data on my dataset, 
> so this isn't really effective.
> - Even with a 10% chance of doing a repair, we found that this would be 
> enough to greatly reduce entropy of the most used data (and if you have 
> timeseries, you're likely to have a dashboard doing the same important 
> queries over and over again).
> - LOCAL_QUORUM is too expensive (needs >3 replicas), QUORUM is too slow.
> I'll try to come up with a patch demonstrating how this would work, try it on 
> our system and report the effects.
> cc: [~adejanovski], [~rgerard] as I know you worked on similar issues already.
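
As a hedged aside on the mechanism being discussed, the sketch below paraphrases 
the fully-expired decision in plain Java (the names and signature are 
illustrative, not Cassandra's actual CompactionController/TWCS API). With the 
classic behaviour, an expired-looking sstable stays on disk while any overlapping 
sstable still holds older data; the option proposed here simply skips that check:

{code}
import java.util.List;

// Paraphrase only; not Cassandra's real code.
public final class ExpirationSketch
{
    static boolean fullyExpired(long maxLocalDeletionTime,    // latest TTL expiry in the candidate
                                long gcBefore,                // data expiring before this is purgeable
                                long candidateMaxTimestamp,   // newest write in the candidate
                                List<Long> overlappingMinTimestamps,
                                boolean ignoreOverlaps)
    {
        if (maxLocalDeletionTime >= gcBefore)
            return false;              // candidate still holds data that is not yet purgeable
        if (ignoreOverlaps)
            return true;               // proposed option: drop it without consulting neighbours
        // classic behaviour: keep the candidate while any overlapping sstable holds
        // writes older than the candidate's newest write, since the candidate's
        // tombstones may still be shadowing that data
        return overlappingMinTimestamps.stream().allMatch(min -> min > candidateMaxTimestamp);
    }
}
{code}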



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[2/3] cassandra git commit: CASSANDRA-13418 Allow to skip overlapings checks

2017-09-04 Thread mck
CASSANDRA-13418 Allow to skip overlapings checks

 patch by Romain GÉRARD; reviewed by Mick Semb Wever for CASSANDRA-13418


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/14d67d81
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/14d67d81
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/14d67d81

Branch: refs/heads/trunk
Commit: 14d67d81c57d6387c77bd85c57b342d285880835
Parents: 37d6730
Author: Romain GÉRARD 
Authored: Wed Aug 16 16:21:46 2017 +0200
Committer: Mick Semb Wever 
Committed: Tue Sep 5 08:33:25 2017 +1000

--
 CHANGES.txt |  1 +
 .../db/compaction/CompactionController.java | 67 --
 .../TimeWindowCompactionController.java | 49 +
 .../TimeWindowCompactionStrategy.java   | 10 +--
 .../TimeWindowCompactionStrategyOptions.java| 22 ++
 .../db/compaction/TimeWindowCompactionTask.java | 42 +++
 .../db/compaction/CompactionControllerTest.java |  5 ++
 .../TimeWindowCompactionStrategyTest.java   | 74 +++-
 8 files changed, 257 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/14d67d81/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 1f63ced..9218d90 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.11.1
+ * Add a compaction option to TWCS to ignore sstables overlapping checks 
(CASSANDRA-13418)
  * BTree.Builder memory leak (CASSANDRA-13754)
  * Revert CASSANDRA-10368 of supporting non-pk column filtering due to 
correctness (CASSANDRA-13798)
  * Fix cassandra-stress hang issues when an error during cluster connection 
happens (CASSANDRA-12938)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/14d67d81/src/java/org/apache/cassandra/db/compaction/CompactionController.java
--
diff --git 
a/src/java/org/apache/cassandra/db/compaction/CompactionController.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionController.java
index bf3647a..84aac09 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionController.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionController.java
@@ -20,6 +20,7 @@ package org.apache.cassandra.db.compaction;
 import java.util.*;
 import java.util.function.Predicate;
 
+import org.apache.cassandra.config.Config;
 import org.apache.cassandra.db.Memtable;
 import org.apache.cassandra.db.rows.UnfilteredRowIterator;
 
@@ -49,7 +50,8 @@ import static 
org.apache.cassandra.db.lifecycle.SSTableIntervalTree.buildInterva
 public class CompactionController implements AutoCloseable
 {
 private static final Logger logger = 
LoggerFactory.getLogger(CompactionController.class);
-static final boolean NEVER_PURGE_TOMBSTONES = 
Boolean.getBoolean("cassandra.never_purge_tombstones");
+private static final String NEVER_PURGE_TOMBSTONES_PROPERTY = 
Config.PROPERTY_PREFIX + "never_purge_tombstones";
+static final boolean NEVER_PURGE_TOMBSTONES = 
Boolean.getBoolean(NEVER_PURGE_TOMBSTONES_PROPERTY);
 
 public final ColumnFamilyStore cfs;
 private final boolean compactingRepaired;
@@ -98,7 +100,14 @@ public class CompactionController implements AutoCloseable
 {
 if (NEVER_PURGE_TOMBSTONES)
 {
-logger.debug("not refreshing overlaps - running with 
-Dcassandra.never_purge_tombstones=true");
+logger.debug("not refreshing overlaps - running with -D{}=true",
+NEVER_PURGE_TOMBSTONES_PROPERTY);
+return;
+}
+
+if (ignoreOverlaps())
+{
+logger.debug("not refreshing overlaps - running with 
ignoreOverlaps activated");
 return;
 }
 
@@ -120,7 +129,7 @@ public class CompactionController implements AutoCloseable
 if (this.overlappingSSTables != null)
 close();
 
-if (compacting == null)
+if (compacting == null || ignoreOverlaps())
 overlappingSSTables = 
Refs.tryRef(Collections.emptyList());
 else
 overlappingSSTables = 
cfs.getAndReferenceOverlappingLiveSSTables(compacting);
@@ -129,7 +138,7 @@ public class CompactionController implements AutoCloseable
 
 public Set getFullyExpiredSSTables()
 {
-return getFullyExpiredSSTables(cfs, compacting, overlappingSSTables, 
gcBefore);
+return getFullyExpiredSSTables(cfs, compacting, overlappingSSTables, 
gcBefore, ignoreOverlaps());
 }
 
 /**
@@ -146,20 +155,39 @@ public class CompactionController implements AutoCloseable
  * @param compacting we take the drop-candidates from this set, it is 
usually the sstables included in the comp

[1/3] cassandra git commit: CASSANDRA-13418 Allow to skip overlapings checks

2017-09-04 Thread mck
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.11 37d67306a -> 14d67d81c
  refs/heads/trunk c8d15f04f -> 10d5b7b2f


CASSANDRA-13418 Allow to skip overlapings checks

 patch by Romain GÉRARD; reviewed by Mick Semb Wever for CASSANDRA-13418


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/14d67d81
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/14d67d81
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/14d67d81

Branch: refs/heads/cassandra-3.11
Commit: 14d67d81c57d6387c77bd85c57b342d285880835
Parents: 37d6730
Author: Romain GÉRARD 
Authored: Wed Aug 16 16:21:46 2017 +0200
Committer: Mick Semb Wever 
Committed: Tue Sep 5 08:33:25 2017 +1000

--
 CHANGES.txt |  1 +
 .../db/compaction/CompactionController.java | 67 --
 .../TimeWindowCompactionController.java | 49 +
 .../TimeWindowCompactionStrategy.java   | 10 +--
 .../TimeWindowCompactionStrategyOptions.java| 22 ++
 .../db/compaction/TimeWindowCompactionTask.java | 42 +++
 .../db/compaction/CompactionControllerTest.java |  5 ++
 .../TimeWindowCompactionStrategyTest.java   | 74 +++-
 8 files changed, 257 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/14d67d81/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 1f63ced..9218d90 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.11.1
+ * Add a compaction option to TWCS to ignore sstables overlapping checks 
(CASSANDRA-13418)
  * BTree.Builder memory leak (CASSANDRA-13754)
  * Revert CASSANDRA-10368 of supporting non-pk column filtering due to 
correctness (CASSANDRA-13798)
  * Fix cassandra-stress hang issues when an error during cluster connection 
happens (CASSANDRA-12938)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/14d67d81/src/java/org/apache/cassandra/db/compaction/CompactionController.java
--
diff --git 
a/src/java/org/apache/cassandra/db/compaction/CompactionController.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionController.java
index bf3647a..84aac09 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionController.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionController.java
@@ -20,6 +20,7 @@ package org.apache.cassandra.db.compaction;
 import java.util.*;
 import java.util.function.Predicate;
 
+import org.apache.cassandra.config.Config;
 import org.apache.cassandra.db.Memtable;
 import org.apache.cassandra.db.rows.UnfilteredRowIterator;
 
@@ -49,7 +50,8 @@ import static 
org.apache.cassandra.db.lifecycle.SSTableIntervalTree.buildInterva
 public class CompactionController implements AutoCloseable
 {
 private static final Logger logger = 
LoggerFactory.getLogger(CompactionController.class);
-static final boolean NEVER_PURGE_TOMBSTONES = 
Boolean.getBoolean("cassandra.never_purge_tombstones");
+private static final String NEVER_PURGE_TOMBSTONES_PROPERTY = 
Config.PROPERTY_PREFIX + "never_purge_tombstones";
+static final boolean NEVER_PURGE_TOMBSTONES = 
Boolean.getBoolean(NEVER_PURGE_TOMBSTONES_PROPERTY);
 
 public final ColumnFamilyStore cfs;
 private final boolean compactingRepaired;
@@ -98,7 +100,14 @@ public class CompactionController implements AutoCloseable
 {
 if (NEVER_PURGE_TOMBSTONES)
 {
-logger.debug("not refreshing overlaps - running with 
-Dcassandra.never_purge_tombstones=true");
+logger.debug("not refreshing overlaps - running with -D{}=true",
+NEVER_PURGE_TOMBSTONES_PROPERTY);
+return;
+}
+
+if (ignoreOverlaps())
+{
+logger.debug("not refreshing overlaps - running with 
ignoreOverlaps activated");
 return;
 }
 
@@ -120,7 +129,7 @@ public class CompactionController implements AutoCloseable
 if (this.overlappingSSTables != null)
 close();
 
-if (compacting == null)
+if (compacting == null || ignoreOverlaps())
 overlappingSSTables = 
Refs.tryRef(Collections.emptyList());
 else
 overlappingSSTables = 
cfs.getAndReferenceOverlappingLiveSSTables(compacting);
@@ -129,7 +138,7 @@ public class CompactionController implements AutoCloseable
 
 public Set getFullyExpiredSSTables()
 {
-return getFullyExpiredSSTables(cfs, compacting, overlappingSSTables, 
gcBefore);
+return getFullyExpiredSSTables(cfs, compacting, overlappingSSTables, 
gcBefore, ignoreOverlaps());
 }
 
 /**
@@ -146,20 +155,39 @@ public class CompactionControlle

[3/3] cassandra git commit: Merge branch 'cassandra-3.11' into trunk

2017-09-04 Thread mck
Merge branch 'cassandra-3.11' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/10d5b7b2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/10d5b7b2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/10d5b7b2

Branch: refs/heads/trunk
Commit: 10d5b7b2f77fb7c25e288f42f7fb64b3131fad35
Parents: c8d15f0 14d67d8
Author: Mick Semb Wever 
Authored: Tue Sep 5 08:36:12 2017 +1000
Committer: Mick Semb Wever 
Committed: Tue Sep 5 08:38:48 2017 +1000

--
 doc/cql3/CQL.textile| 36 +-
 doc/source/operating/compaction.rst |  8 ++-
 .../db/compaction/CompactionController.java | 67 --
 .../TimeWindowCompactionController.java | 49 +
 .../TimeWindowCompactionStrategy.java   | 10 +--
 .../TimeWindowCompactionStrategyOptions.java| 22 ++
 .../db/compaction/TimeWindowCompactionTask.java | 42 +++
 .../db/compaction/CompactionControllerTest.java |  5 ++
 .../TimeWindowCompactionStrategyTest.java   | 74 +++-
 9 files changed, 281 insertions(+), 32 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/10d5b7b2/doc/cql3/CQL.textile
--
diff --cc doc/cql3/CQL.textile
index 88d6694,f2f9bd8..db1ec22
--- a/doc/cql3/CQL.textile
+++ b/doc/cql3/CQL.textile
@@@ -347,24 -347,24 +347,24 @@@ h4(#compactionOptions). Compaction opti
  
  The @compaction@ property must at least define the @'class'@ sub-option, that 
defines the compaction strategy class to use. The default supported class are 
@'SizeTieredCompactionStrategy'@, @'LeveledCompactionStrategy'@, 
@'DateTieredCompactionStrategy'@ and @'TimeWindowCompactionStrategy'@. Custom 
strategy can be provided by specifying the full class name as a "string 
constant":#constants. The rest of the sub-options depends on the chosen class. 
The sub-options supported by the default classes are:
  
--|_. option |_. supported compaction strategy |_. 
default|_. description |
--| @enabled@| _all_   | true   
  | A boolean denoting whether compaction should be enabled or not.|
--| @tombstone_threshold@| _all_   | 0.2
  | A ratio such that if a sstable has more than this ratio of gcable 
tombstones over all contained columns, the sstable will be compacted (with no 
other sstables) for the purpose of purging those tombstones. |
--| @tombstone_compaction_interval@  | _all_   | 1 day  
  | The minimum time to wait after an sstable creation time before 
considering it for "tombstone compaction", where "tombstone compaction" is the 
compaction triggered if the sstable has more gcable tombstones than 
@tombstone_threshold@. |
--| @unchecked_tombstone_compaction@ | _all_   | false  
  | Setting this to true enables more aggressive tombstone compactions - 
single sstable tombstone compactions will run without checking how likely it is 
that they will be successful. |
--| @min_sstable_size@   | SizeTieredCompactionStrategy| 50MB   
  | The size tiered strategy groups SSTables to compact in buckets. A 
bucket groups SSTables that differs from less than 50% in size.  However, for 
small sizes, this would result in a bucketing that is too fine grained. 
@min_sstable_size@ defines a size threshold (in bytes) below which all SSTables 
belong to one unique bucket|
--| @min_threshold@  | SizeTieredCompactionStrategy| 4  
  | Minimum number of SSTables needed to start a minor compaction.|
--| @max_threshold@  | SizeTieredCompactionStrategy| 32 
  | Maximum number of SSTables processed by one minor compaction.|
--| @bucket_low@ | SizeTieredCompactionStrategy| 0.5
  | Size tiered consider sstables to be within the same bucket if their 
size is within [average_size * @bucket_low@, average_size * @bucket_high@ ] 
(i.e the default groups sstable whose sizes diverges by at most 50%)|
--| @bucket_high@| SizeTieredCompactionStrategy| 1.5
  | Size tiered consider sstables to be within the same bucket if their 
size is within [average_size * @bucket_low@, average_size * @bucket_high@ ] 
(i.e the default groups sstable whose sizes diverges by at most 50%).|
--| @sstable_size_in_mb@ | LeveledCompactionStrategy   | 5MB
  | The target size (in MB) for sstables in the leveled strategy. Note that 
while sstable sizes should stay less or equal to @sstable_size_in_mb@, it is 
possible to exceptionally have a larger sstable as during compaction, data for

[jira] [Updated] (CASSANDRA-13794) Fix short read protection logic for querying more rows

2017-09-04 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-13794:
--
Status: Patch Available  (was: In Progress)

Marking the ticket as {{Patch Available}}, despite its lack of (new) tests, so 
that it can be reviewed first. Tests will be committed with the rest of the 
code.

> Fix short read protection logic for querying more rows
> --
>
> Key: CASSANDRA-13794
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13794
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Benedict
>Assignee: Aleksey Yeschenko
> Fix For: 3.0.x, 3.11.x
>
>
> Discovered by [~benedict] while reviewing CASSANDRA-13747:
> {quote}
> While reviewing I got a little suspicious of the modified line 
> {{DataResolver}} :479, as it seemed that n and x were the wrong way around... 
> and, reading the comment of intent directly above, and reproducing the 
> calculation, they are indeed.
> This is probably a significant enough bug that it warrants its own ticket for 
> record keeping, though I'm fairly agnostic on that decision.
> I'm a little concerned about our current short read behaviour, as right now 
> it seems we should be requesting exactly one row, for any size of under-read, 
> which could mean extremely poor performance in case of large under-reads.
> I would suggest that the outer unconditional {{Math.max}} is a bad idea, has 
> been (poorly) insulating us from this error, and that we should first be 
> asserting that the calculation yields a value >= 0 before setting to 1.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13794) Fix short read protection logic for querying more rows

2017-09-04 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16152884#comment-16152884
 ] 

Aleksey Yeschenko commented on CASSANDRA-13794:
---

Work in progress branch 
[here|https://github.com/iamaleksey/cassandra/tree/13794-3.0]. Currently 
missing (new) tests, but I want to get the underlying logic reviewed and 
approved, first. Would add coverage before committing it.

A short summary of the issue: the code right now has two variables swapped, 
which ultimately results in us always fetching 1 extra row per short read 
protection request, in a blocking manner, making it very inefficient. But upon 
closer look, there are some other inefficiencies here that can and should be 
addressed:

1. One of our stop conditions is {{lastCount == counter.counted()}}. It's 
supposed to abort a short read if our previous attempt to fetch more rows 
yielded 0 extra rows. It's not incorrect, but is only a special case of the 
more general scenario: our previous attempt to fetch more extra rows yielded 
fewer results than we requested. That would mean there are no more rows to 
fetch at that replica, which allows us to abort earlier and more frequently.

2. Another of our stop conditions is {{!counter.isDoneForPartition()}}. Once 
again, it isn't incorrect, but it can be extended further. Due to the way 
{{isDoneForPartition()}} is defined ({{isDone() || rowInCurrentPartition >= 
perPartitionLimit}}), and because that counter is counting-only, it is 
possible for us to have previously fetched enough rows in total, via short 
read retries for other partitions, to hit the global row limit in the counter. 
That would make {{isDone()}} always return {{true}}, and have 
{{isDoneForPartition()}} return false positives even if the partition currently 
being processed only has a partition-level deletion and/or tombstones. That can 
affect queries that set a per-partition limit explicitly, or {{SELECT 
DISTINCT}} queries. Spotted while fixing CASSANDRA-13747.

3. Once we've swapped {{x}} and {{n}} in {{moreContents()}} to fix the logic 
error, we'd still have some issues. In degenerate cases, for example where some 
nodes are missing a fresh partition deletion, the formula would fetch 
*a lot* of rows ({{n * (n - 1)}}), with {{n}} growing exponentially with every 
attempt.

Upon closer inspection, the formula doesn't make 100% sense. It claims that we 
miss {{n - x}} rows - where {{n = counter.countedInCurrentPartition()}} and {{x 
= postReconciliationCounter.countedInCurrentPartition()}} - but the number we 
really miss is {{limit - postReconciliationCounter.counted()}} or 
{{perPartitionLimit - postReconciliationCounter.countedInCurrentPartition()}}. 
They might be the same on our first short read protection iteration, but they 
will diverge further and further with each request. In addition to that, it 
seems to assume a uniform distribution, across the source partition, of rows 
that end up as tombstones in the merged result, which can't be true for most 
workloads.

I couldn't come up with an ideal heuristic that covers all workloads, so I 
stuck to something safe that respects client paging limits but still attempts 
to minimise the # of requests we make by fetching (in most cases) more rows 
than is minimally necessary. I'm not completely sure about it, but I welcome 
any ideas on how to make it better. Either way, anything we do should be 
significantly more efficient than what we have now.

I've also made some renames and refactorings, and moved a few things around to 
better understand the code myself, and make it clearer for future contributors 
- including future me. The most significant noticeable change is applying the 
per-response counter shift in the {{mergeWithShortReadProtection()}} method, 
instead of overloading {{ShortReadRowProtection}} with responsibilities - I 
also like it being next to the global counter creation, so you can see the 
contrast in arguments.
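
A minimal sketch (illustrative only, not the committed heuristic, which 
deliberately over-fetches) of the "what are we actually still owed" calculation 
described above, using the counter names the comment mentions:

{code}
// Illustrative only: the shortfall described above, not the committed heuristic.
public final class ShortReadShortfallSketch
{
    static int rowsStillOwed(int limit, int perPartitionLimit,
                             int counted, int countedInCurrentPartition)
    {
        int globalShortfall = limit - counted;
        int partitionShortfall = perPartitionLimit - countedInCurrentPartition;
        // Taking the min of the two is just one safe reading: never exceed either
        // paging limit, but always ask for at least one row so the retry progresses.
        return Math.max(1, Math.min(globalShortfall, partitionShortfall));
    }

    public static void main(String[] args)
    {
        // e.g. limit 100, per-partition limit 50, 90 rows merged so far, 45 in this partition
        System.out.println(rowsStillOwed(100, 50, 90, 45)); // 5
    }
}
{code}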

> Fix short read protection logic for querying more rows
> --
>
> Key: CASSANDRA-13794
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13794
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Benedict
>Assignee: Aleksey Yeschenko
> Fix For: 3.0.x, 3.11.x
>
>
> Discovered by [~benedict] while reviewing CASSANDRA-13747:
> {quote}
> While reviewing I got a little suspicious of the modified line 
> {{DataResolver}} :479, as it seemed that n and x were the wrong way around... 
> and, reading the comment of intent directly above, and reproducing the 
> calculation, they are indeed.
> This is probably a significant enough bug that it warrants its own ticket for 
> record keeping, though I'm fairly agnostic on that decision.
> I'm a little concerned about ou

COMPACTING and REPAIRING: dependency between these two

2017-09-04 Thread helloga...@gmail.com
Hi,

We have come across high disk space usage, for which we wanted to do COMPACTING.
We have also seen data consistency problems due to connectivity issues, and for 
that we wanted to do REPAIRING.

We thought of having some automatic jobs for COMPACTING and REPAIRING. So, can 
someone please suggest at what frequency they should be running?

Is there any dependency between COMPACTING and REPAIRING? If so, which one needs 
to be executed first?

Thanks
G

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



compaction and repairing dependency

2017-09-04 Thread helloga...@gmail.com
Hi,

We have observed cases where disk space usage is growing high, for which we 
wanted to do COMPACTING.
We have also seen data consistency problems after connectivity issues, and for 
that we wanted to do REPAIRING.

So, can someone please suggest a frequency for both COMPACTING and REPAIRING? 
And is there any dependency between the two? Which one needs to be performed 
first?

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13831) NettyFactoryTest is failing in trunk on MacOS

2017-09-04 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16152731#comment-16152731
 ] 

Aleksey Yeschenko commented on CASSANDRA-13831:
---

This makes the tests pass but still spews errors all over the logs.

{code}
[junit] WARN  [main] 2017-09-04 16:33:51,193 NettyFactory.java:98 - epoll 
not availble {}
[junit] java.lang.ExceptionInInitializerError: null
[junit] at io.netty.channel.epoll.Epoll.(Epoll.java:33) 
~[netty-all-4.1.14.Final.jar:4.1.14.Final]
[junit] at 
org.apache.cassandra.service.NativeTransportService.useEpoll(NativeTransportService.java:162)
 ~[main/:na]
[junit] at 
org.apache.cassandra.net.async.NettyFactoryTest.(NettyFactoryTest.java:65)
 ~[classes/:na]
[junit] at java.lang.Class.forName0(Native Method) ~[na:1.8.0_144]
[junit] at java.lang.Class.forName(Class.java:264) ~[na:1.8.0_144]
[junit] at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:380)
 ~[ant-junit.jar:na]
[junit] at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1182)
 ~[ant-junit.jar:na]
[junit] at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:1033)
 ~[ant-junit.jar:na]
[junit] Caused by: java.lang.IllegalStateException: Only supported on Linux
[junit] at 
io.netty.channel.epoll.Native.loadNativeLibrary(Native.java:189) 
~[netty-all-4.1.14.Final.jar:4.1.14.Final]
[junit] at io.netty.channel.epoll.Native.(Native.java:61) 
~[netty-all-4.1.14.Final.jar:4.1.14.Final]
[junit] ... 8 common frames omitted
{code}

Could you maybe modify {{NativeTransportService.useEpoll()}} to look at the OS 
first before trying to call {{Epoll.isAvailable()}}, or fix it some other way?
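
A minimal sketch of that suggestion (illustrative only; the system property name 
and the final shape of the fix are assumptions, not the committed patch):

{code}
import io.netty.channel.epoll.Epoll;

// Sketch only: the property name below is assumed for illustration.
public final class EpollGuardSketch
{
    public static boolean useEpoll()
    {
        boolean enabled = Boolean.parseBoolean(
            System.getProperty("cassandra.native.epoll.enabled", "true"));
        boolean onLinux = System.getProperty("os.name", "").toLowerCase().contains("linux");
        // Short-circuit before Epoll.isAvailable() so the epoll native-library probe
        // (and the stack trace shown above) is skipped entirely on non-Linux platforms.
        return enabled && onLinux && Epoll.isAvailable();
    }
}
{code}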

> NettyFactoryTest is failing in trunk on MacOS
> -
>
> Key: CASSANDRA-13831
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13831
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Aleksey Yeschenko
>Assignee: Jason Brown
>Priority: Minor
> Fix For: 4.x
>
>
> Example failure:
> {code}
> [junit] Testcase: 
> getEventLoopGroup_EpollWithoutIoRatioBoost(org.apache.cassandra.net.async.NettyFactoryTest):
> Caused an ERROR
> [junit] failed to load the required native library
> [junit] java.lang.UnsatisfiedLinkError: failed to load the required 
> native library
> [junit]   at 
> io.netty.channel.epoll.Epoll.ensureAvailability(Epoll.java:78)
> [junit]   at 
> io.netty.channel.epoll.EpollEventLoop.(EpollEventLoop.java:53)
> [junit]   at 
> io.netty.channel.epoll.EpollEventLoopGroup.newChild(EpollEventLoopGroup.java:134)
> [junit]   at 
> io.netty.channel.epoll.EpollEventLoopGroup.newChild(EpollEventLoopGroup.java:35)
> [junit]   at 
> io.netty.util.concurrent.MultithreadEventExecutorGroup.(MultithreadEventExecutorGroup.java:84)
> [junit]   at 
> io.netty.util.concurrent.MultithreadEventExecutorGroup.(MultithreadEventExecutorGroup.java:58)
> [junit]   at 
> io.netty.util.concurrent.MultithreadEventExecutorGroup.(MultithreadEventExecutorGroup.java:47)
> [junit]   at 
> io.netty.channel.MultithreadEventLoopGroup.(MultithreadEventLoopGroup.java:59)
> [junit]   at 
> io.netty.channel.epoll.EpollEventLoopGroup.(EpollEventLoopGroup.java:104)
> [junit]   at 
> io.netty.channel.epoll.EpollEventLoopGroup.(EpollEventLoopGroup.java:91)
> [junit]   at 
> io.netty.channel.epoll.EpollEventLoopGroup.(EpollEventLoopGroup.java:68)
> [junit]   at 
> org.apache.cassandra.net.async.NettyFactory.getEventLoopGroup(NettyFactory.java:175)
> [junit]   at 
> org.apache.cassandra.net.async.NettyFactoryTest.getEventLoopGroup_Epoll(NettyFactoryTest.java:187)
> [junit]   at 
> org.apache.cassandra.net.async.NettyFactoryTest.getEventLoopGroup_EpollWithoutIoRatioBoost(NettyFactoryTest.java:205)
> [junit] Caused by: java.lang.ExceptionInInitializerError
> [junit]   at io.netty.channel.epoll.Epoll.(Epoll.java:33)
> [junit]   at 
> org.apache.cassandra.service.NativeTransportService.useEpoll(NativeTransportService.java:162)
> [junit]   at 
> org.apache.cassandra.net.async.NettyFactory.(NettyFactory.java:94)
> [junit]   at 
> org.apache.cassandra.net.async.NettyFactoryTest.getEventLoopGroup_Nio(NettyFactoryTest.java:216)
> [junit]   at 
> org.apache.cassandra.net.async.NettyFactoryTest.getEventLoopGroup_NioWithoutIoRatioBoost(NettyFactoryTest.java:211)
> [junit] Caused by: java.lang.IllegalStateException: Only supported on 
> Linux
> [junit]   at 
> io.netty.channel.epoll.Native.loadNativeLibrary(Native.java:189)
> [junit]   at io.netty.channel.epoll.Native.(Native.java:61)
> {

[jira] [Updated] (CASSANDRA-13662) Remove unsupported CREDENTIALS message

2017-09-04 Thread Jeremiah Jordan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremiah Jordan updated CASSANDRA-13662:

Status: Ready to Commit  (was: Patch Available)

> Remove unsupported CREDENTIALS message
> --
>
> Key: CASSANDRA-13662
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13662
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Auth, CQL
>Reporter: Stefan Podkowinski
>Assignee: Stefan Podkowinski
>Priority: Minor
>  Labels: security
> Fix For: 4.x
>
>
> Remove the CREDENTIALS message, as protocol v1 isn't supported anyway. Let's try 
> not to keep unused legacy classes around for any security-relevant features. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13662) Remove unsupported CREDENTIALS message

2017-09-04 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16152651#comment-16152651
 ] 

Jeremiah Jordan commented on CASSANDRA-13662:
-

Yeah. Might be silly. Just asking the question as we have been wanting to focus 
on making sure new code has tests lately...

> Remove unsupported CREDENTIALS message
> --
>
> Key: CASSANDRA-13662
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13662
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Auth, CQL
>Reporter: Stefan Podkowinski
>Assignee: Stefan Podkowinski
>Priority: Minor
>  Labels: security
> Fix For: 4.x
>
>
> Remove the CREDENTIALS message, as protocol v1 isn't supported anyway. Let's try 
> not to keep unused legacy classes around for any security-relevant features. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13818) Add support for --hosts, --force, and subrange repair to incremental repair

2017-09-04 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16152636#comment-16152636
 ] 

Marcus Eriksson commented on CASSANDRA-13818:
-

Can't we run a normal repair over the unrepaired sstables to avoid the 
pointless anticompaction? I guess we would not be as consistent and might 
overstream a bit, but it can't hurt anything more than that right?

> Add support for --hosts, --force, and subrange repair to incremental repair
> ---
>
> Key: CASSANDRA-13818
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13818
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
> Fix For: 4.0
>
>
> It should be possible to run incremental repair with nodes down; we just 
> shouldn't promote the data to repaired afterwards



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13833) Failed compaction is not captured

2017-09-04 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-13833:

   Resolution: Fixed
Fix Version/s: 4.0
   3.11.1
   3.0.15
   2.2.11
   Status: Resolved  (was: Patch Available)

committed as {{e80ede6d393460f22ee}}, thanks!

> Failed compaction is not captured
> -
>
> Key: CASSANDRA-13833
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13833
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Jay Zhuang
>Assignee: Jay Zhuang
> Fix For: 2.2.11, 3.0.15, 3.11.1, 4.0
>
>
> Follow-up for CASSANDRA-13785: when a compaction fails, it fails silently. 
> No error message is logged and the exceptions metric is not updated. Basically, 
> we're unable to get the exception here: 
> [CompactionManager.java:1491|https://github.com/apache/cassandra/blob/cassandra-2.2/src/java/org/apache/cassandra/db/compaction/CompactionManager.java#L1491]
> Here is the call stack:
> {noformat}
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:195)
> at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:89)
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:61)
> at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:264)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> There are 2 {{FutureTask}}s in the call stack, for example 
> {{FutureTask1(FutureTask2)}}. If the call throws an exception, 
> {{FutureTask2}} sets its status, saves the exception and returns. But 
> {{FutureTask1}} doesn't see any exception, so it sets its status to normal. So 
> we're unable to get the exception in:
> [CompactionManager.java:1491|https://github.com/apache/cassandra/blob/cassandra-2.2/src/java/org/apache/cassandra/db/compaction/CompactionManager.java#L1491]
> 2.1.x is working fine, here is the call stack:
> {noformat}
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:177)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:73)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:264)
>  ~[main/:na]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_141]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_141]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  ~[na:1.8.0_141]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  [na:1.8.0_141]
> at java.lang.Thread.run(Thread.java:748) [na:1.8.0_141]
> {noformat}
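
To illustrate the nested-future behaviour described above, here is a minimal 
standalone sketch (plain JDK, no Cassandra classes): {{submit()}} wraps an 
already-built {{FutureTask}} in a second one, so {{afterExecute}} never sees the 
failure, while {{execute()}} hands the original task to the worker and the 
failure can be recovered - which is what the patch's change from {{submit()}} to 
{{execute()}} relies on:

{code}
import java.util.concurrent.*;

// Standalone sketch, not Cassandra code.
public class NestedFutureTaskSketch
{
    static class LoggingExecutor extends ThreadPoolExecutor
    {
        LoggingExecutor()
        {
            super(1, 1, 0L, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
        }

        @Override
        protected void afterExecute(Runnable r, Throwable t)
        {
            // FutureTask captures its own exception, so t is null; try to unwrap r instead.
            if (t == null && r instanceof Future<?> && ((Future<?>) r).isDone())
            {
                try { ((Future<?>) r).get(); }
                catch (ExecutionException e) { t = e.getCause(); }
                catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            }
            System.out.println("afterExecute saw: " + t);
        }
    }

    public static void main(String[] args) throws Exception
    {
        LoggingExecutor pool = new LoggingExecutor();

        // submit() wraps this task in a second FutureTask; the outer one completes
        // normally, so the unwrapping above finds nothing and prints "saw: null".
        pool.submit(new FutureTask<Void>(() -> { throw new RuntimeException("boom"); }));
        Thread.sleep(200);

        // execute() runs our task directly, so afterExecute can unwrap it and
        // prints the RuntimeException.
        pool.execute(new FutureTask<Void>(() -> { throw new RuntimeException("boom"); }));
        Thread.sleep(200);

        pool.shutdown();
    }
}
{code}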



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[02/10] cassandra git commit: Fix compaction and flush exception not captured issue

2017-09-04 Thread marcuse
Fix compaction and flush exception not captured issue

patch by Jay Zhuang; reviewed by marcuse for CASSANDRA-13833


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e80ede6d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e80ede6d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e80ede6d

Branch: refs/heads/cassandra-3.0
Commit: e80ede6d393460f22ee2b313d4bac7e3fbbfe893
Parents: 4d90573
Author: Jay Zhuang 
Authored: Thu Aug 31 11:07:07 2017 -0700
Committer: Marcus Eriksson 
Committed: Mon Sep 4 15:01:02 2017 +0200

--
 CHANGES.txt |   1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  |   4 +-
 .../db/compaction/CompactionManager.java|   4 +-
 .../db/compaction/CompactionExecutorTest.java   | 131 +++
 4 files changed, 136 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e80ede6d/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 4e68ddc..03a78fd 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.11
+ * Fix compaction and flush exception not captured (CASSANDRA-13833)
  * Make BatchlogManagerMBean.forceBatchlogReplay() blocking (CASSANDRA-13809)
  * Uncaught exceptions in Netty pipeline (CASSANDRA-13649)
  * Prevent integer overflow on exabyte filesystems (CASSANDRA-13067) 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e80ede6d/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 2e52eb2..7e36e11 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -906,9 +906,9 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 logFlush();
 Flush flush = new Flush(false);
 ListenableFutureTask flushTask = 
ListenableFutureTask.create(flush, null);
-flushExecutor.submit(flushTask);
+flushExecutor.execute(flushTask);
 ListenableFutureTask task = 
ListenableFutureTask.create(flush.postFlush);
-postFlushExecutor.submit(task);
+postFlushExecutor.execute(task);
 
 @SuppressWarnings("unchecked")
 ListenableFuture future = 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e80ede6d/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
index d21f1e8..cd50646 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
@@ -1457,7 +1457,7 @@ public class CompactionManager implements 
CompactionManagerMBean
 return CompactionMetrics.getCompactions().size();
 }
 
-private static class CompactionExecutor extends 
JMXEnabledThreadPoolExecutor
+static class CompactionExecutor extends JMXEnabledThreadPoolExecutor
 {
 protected CompactionExecutor(int minThreads, int maxThreads, String 
name, BlockingQueue queue)
 {
@@ -1537,7 +1537,7 @@ public class CompactionManager implements 
CompactionManagerMBean
 try
 {
 ListenableFutureTask ret = ListenableFutureTask.create(task);
-submit(ret);
+execute(ret);
 return ret;
 }
 catch (RejectedExecutionException ex)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e80ede6d/test/unit/org/apache/cassandra/db/compaction/CompactionExecutorTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/db/compaction/CompactionExecutorTest.java 
b/test/unit/org/apache/cassandra/db/compaction/CompactionExecutorTest.java
new file mode 100644
index 000..c6feb3f
--- /dev/null
+++ b/test/unit/org/apache/cassandra/db/compaction/CompactionExecutorTest.java
@@ -0,0 +1,131 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.

[05/10] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2017-09-04 Thread marcuse
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b0eba5f9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b0eba5f9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b0eba5f9

Branch: refs/heads/cassandra-3.11
Commit: b0eba5f9c64db18840a4b0e4d56a589c5f2e08cd
Parents: f791c26 e80ede6
Author: Marcus Eriksson 
Authored: Mon Sep 4 15:02:53 2017 +0200
Committer: Marcus Eriksson 
Committed: Mon Sep 4 15:02:53 2017 +0200

--
 CHANGES.txt |   1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  |   2 +-
 .../db/compaction/CompactionManager.java|   4 +-
 .../db/compaction/CompactionExecutorTest.java   | 107 +++
 4 files changed, 111 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b0eba5f9/CHANGES.txt
--
diff --cc CHANGES.txt
index b405fdf,03a78fd..3baa63b
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,37 -1,8 +1,38 @@@
 -2.2.11
 +3.0.15
 + * Better tolerate improperly formatted bcrypt hashes (CASSANDRA-13626) 
 + * Fix race condition in read command serialization (CASSANDRA-13363)
 + * Enable segement creation before recovering commitlogs (CASSANDRA-13587)
 + * Fix AssertionError in short read protection (CASSANDRA-13747)
 + * Don't skip corrupted sstables on startup (CASSANDRA-13620)
 + * Fix the merging of cells with different user type versions 
(CASSANDRA-13776)
 + * Copy session properties on cqlsh.py do_login (CASSANDRA-13640)
 + * Potential AssertionError during ReadRepair of range tombstone and 
partition deletions (CASSANDRA-13719)
 + * Don't let stress write warmup data if n=0 (CASSANDRA-13773)
 + * Gossip thread slows down when using batch commit log (CASSANDRA-12966)
 + * Randomize batchlog endpoint selection with only 1 or 2 racks 
(CASSANDRA-12884)
 + * Fix digest calculation for counter cells (CASSANDRA-13750)
 + * Fix ColumnDefinition.cellValueType() for non-frozen collection and change 
SSTabledump to use type.toJSONString() (CASSANDRA-13573)
 + * Skip materialized view addition if the base table doesn't exist 
(CASSANDRA-13737)
 + * Drop table should remove corresponding entries in dropped_columns table 
(CASSANDRA-13730)
 + * Log warn message until legacy auth tables have been migrated 
(CASSANDRA-13371)
 + * Fix incorrect [2.1 <- 3.0] serialization of counter cells created in 2.0 
(CASSANDRA-13691)
 + * Fix invalid writetime for null cells (CASSANDRA-13711)
 + * Fix ALTER TABLE statement to atomically propagate changes to the table and 
its MVs (CASSANDRA-12952)
 + * Fixed ambiguous output of nodetool tablestats command (CASSANDRA-13722)
 + * JMXEnabledThreadPoolExecutor with corePoolSize equal to maxPoolSize 
(Backport CASSANDRA-13329)
 + * Fix Digest mismatch Exception if hints file has UnknownColumnFamily 
(CASSANDRA-13696)
 + * Purge tombstones created by expired cells (CASSANDRA-13643)
 + * Make concat work with iterators that have different subsets of columns 
(CASSANDRA-13482)
 + * Set test.runners based on cores and memory size (CASSANDRA-13078)
 + * Allow different NUMACTL_ARGS to be passed in (CASSANDRA-13557)
 + * Allow native function calls in CQLSSTableWriter (CASSANDRA-12606)
 + * Fix secondary index queries on COMPACT tables (CASSANDRA-13627)
 + * Nodetool listsnapshots output is missing a newline, if there are no 
snapshots (CASSANDRA-13568)
 + * sstabledump reports incorrect usage for argument order (CASSANDRA-13532)
 +Merged from 2.2:
+  * Fix compaction and flush exception not captured (CASSANDRA-13833)
 - * Make BatchlogManagerMBean.forceBatchlogReplay() blocking (CASSANDRA-13809)
   * Uncaught exceptions in Netty pipeline (CASSANDRA-13649)
 - * Prevent integer overflow on exabyte filesystems (CASSANDRA-13067) 
 + * Prevent integer overflow on exabyte filesystems (CASSANDRA-13067)
   * Fix queries with LIMIT and filtering on clustering columns 
(CASSANDRA-11223)
   * Fix potential NPE when resume bootstrap fails (CASSANDRA-13272)
   * Fix toJSONString for the UDT, tuple and collection types (CASSANDRA-13592)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b0eba5f9/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --cc src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 7251244,7e36e11..183176c
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@@ -837,12 -899,33 +837,12 @@@ public class ColumnFamilyStore implemen
  {
  synchronized (data)
  {
 -if (previousFlushFailure != null)
 -throw new IllegalStateException("A flu

[09/10] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2017-09-04 Thread marcuse
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/37d67306
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/37d67306
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/37d67306

Branch: refs/heads/trunk
Commit: 37d67306accb2fefed9cfc57856fcca4df93a407
Parents: bed7fa5 b0eba5f
Author: Marcus Eriksson 
Authored: Mon Sep 4 15:04:04 2017 +0200
Committer: Marcus Eriksson 
Committed: Mon Sep 4 15:04:04 2017 +0200

--
 CHANGES.txt |   1 +
 .../db/compaction/CompactionManager.java|   4 +-
 .../db/compaction/CompactionExecutorTest.java   | 109 +++
 3 files changed, 112 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/37d67306/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/37d67306/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/37d67306/test/unit/org/apache/cassandra/db/compaction/CompactionExecutorTest.java
--
diff --cc 
test/unit/org/apache/cassandra/db/compaction/CompactionExecutorTest.java
index 000,9b07da9..2f8b5b2
mode 00,100644..100644
--- a/test/unit/org/apache/cassandra/db/compaction/CompactionExecutorTest.java
+++ b/test/unit/org/apache/cassandra/db/compaction/CompactionExecutorTest.java
@@@ -1,0 -1,107 +1,109 @@@
+ /*
+  * Licensed to the Apache Software Foundation (ASF) under one
+  * or more contributor license agreements.  See the NOTICE file
+  * distributed with this work for additional information
+  * regarding copyright ownership.  The ASF licenses this file
+  * to you under the Apache License, Version 2.0 (the
+  * "License"); you may not use this file except in compliance
+  * with the License.  You may obtain a copy of the License at
+  *
+  * http://www.apache.org/licenses/LICENSE-2.0
+  *
+  * Unless required by applicable law or agreed to in writing, software
+  * distributed under the License is distributed on an "AS IS" BASIS,
+  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  * See the License for the specific language governing permissions and
+  * limitations under the License.
+  */
+ 
+ package org.apache.cassandra.db.compaction;
+ 
+ import java.util.concurrent.Future;
+ import java.util.concurrent.TimeUnit;
+ 
+ import org.junit.After;
+ import org.junit.Before;
+ import org.junit.Test;
+ import org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor;
++import org.apache.cassandra.config.DatabaseDescriptor;
+ 
+ import static org.junit.Assert.assertEquals;
+ import static org.junit.Assert.assertNotNull;
+ 
+ public class CompactionExecutorTest
+ {
+ static Throwable testTaskThrowable = null;
+ private static class TestTaskExecutor extends 
CompactionManager.CompactionExecutor
+ {
+ @Override
+ public void afterExecute(Runnable r, Throwable t)
+ {
+ if (t == null)
+ {
+ t = DebuggableThreadPoolExecutor.extractThrowable(r);
+ }
+ testTaskThrowable = t;
+ }
+ @Override
+ protected void beforeExecute(Thread t, Runnable r)
+ {
+ }
+ }
+ private CompactionManager.CompactionExecutor executor;
+ 
+ @Before
+ public void setup()
+ {
++DatabaseDescriptor.daemonInitialization();
+ executor = new TestTaskExecutor();
+ }
+ 
+ @After
+ public void destroy() throws Exception
+ {
+ executor.shutdown();
+ executor.awaitTermination(1, TimeUnit.MINUTES);
+ }
+ 
+ @Test
+ public void testFailedRunnable() throws Exception
+ {
+ testTaskThrowable = null;
+ Future tt = executor.submitIfRunning(
+ () -> { assert false : "testFailedRunnable"; }
+ , "compactionExecutorTest");
+ 
+ while (!tt.isDone())
+ Thread.sleep(10);
+ assertNotNull(testTaskThrowable);
+ assertEquals(testTaskThrowable.getMessage(), "testFailedRunnable");
+ }
+ 
+ @Test
+ public void testFailedCallable() throws Exception
+ {
+ testTaskThrowable = null;
+ Future tt = executor.submitIfRunning(
+ () -> { assert false : "testFailedCallable"; return 1; }
+ , "compactionExecutorTest");
+ 
+ while (!tt.isDone())
+ Thread.sleep(10);
+ assertNotNull(testTaskThrowable);
+ assertEquals(testTaskThrowable.getMessage(), "testFailedCallable");
+ 

[03/10] cassandra git commit: Fix compaction and flush exception not captured issue

2017-09-04 Thread marcuse
Fix compaction and flush exception not captured issue

patch by Jay Zhuang; reviewed by marcuse for CASSANDRA-13833


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e80ede6d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e80ede6d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e80ede6d

Branch: refs/heads/cassandra-3.11
Commit: e80ede6d393460f22ee2b313d4bac7e3fbbfe893
Parents: 4d90573
Author: Jay Zhuang 
Authored: Thu Aug 31 11:07:07 2017 -0700
Committer: Marcus Eriksson 
Committed: Mon Sep 4 15:01:02 2017 +0200

--
 CHANGES.txt |   1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  |   4 +-
 .../db/compaction/CompactionManager.java|   4 +-
 .../db/compaction/CompactionExecutorTest.java   | 131 +++
 4 files changed, 136 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e80ede6d/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 4e68ddc..03a78fd 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.11
+ * Fix compaction and flush exception not captured (CASSANDRA-13833)
  * Make BatchlogManagerMBean.forceBatchlogReplay() blocking (CASSANDRA-13809)
  * Uncaught exceptions in Netty pipeline (CASSANDRA-13649)
  * Prevent integer overflow on exabyte filesystems (CASSANDRA-13067) 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e80ede6d/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 2e52eb2..7e36e11 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -906,9 +906,9 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 logFlush();
 Flush flush = new Flush(false);
 ListenableFutureTask flushTask = 
ListenableFutureTask.create(flush, null);
-flushExecutor.submit(flushTask);
+flushExecutor.execute(flushTask);
 ListenableFutureTask task = 
ListenableFutureTask.create(flush.postFlush);
-postFlushExecutor.submit(task);
+postFlushExecutor.execute(task);
 
 @SuppressWarnings("unchecked")
 ListenableFuture future = 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e80ede6d/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
index d21f1e8..cd50646 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
@@ -1457,7 +1457,7 @@ public class CompactionManager implements 
CompactionManagerMBean
 return CompactionMetrics.getCompactions().size();
 }
 
-private static class CompactionExecutor extends 
JMXEnabledThreadPoolExecutor
+static class CompactionExecutor extends JMXEnabledThreadPoolExecutor
 {
 protected CompactionExecutor(int minThreads, int maxThreads, String 
name, BlockingQueue queue)
 {
@@ -1537,7 +1537,7 @@ public class CompactionManager implements 
CompactionManagerMBean
 try
 {
 ListenableFutureTask ret = ListenableFutureTask.create(task);
-submit(ret);
+execute(ret);
 return ret;
 }
 catch (RejectedExecutionException ex)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e80ede6d/test/unit/org/apache/cassandra/db/compaction/CompactionExecutorTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/db/compaction/CompactionExecutorTest.java 
b/test/unit/org/apache/cassandra/db/compaction/CompactionExecutorTest.java
new file mode 100644
index 000..c6feb3f
--- /dev/null
+++ b/test/unit/org/apache/cassandra/db/compaction/CompactionExecutorTest.java
@@ -0,0 +1,131 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www

[08/10] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2017-09-04 Thread marcuse
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/37d67306
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/37d67306
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/37d67306

Branch: refs/heads/cassandra-3.11
Commit: 37d67306accb2fefed9cfc57856fcca4df93a407
Parents: bed7fa5 b0eba5f
Author: Marcus Eriksson 
Authored: Mon Sep 4 15:04:04 2017 +0200
Committer: Marcus Eriksson 
Committed: Mon Sep 4 15:04:04 2017 +0200

--
 CHANGES.txt |   1 +
 .../db/compaction/CompactionManager.java|   4 +-
 .../db/compaction/CompactionExecutorTest.java   | 109 +++
 3 files changed, 112 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/37d67306/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/37d67306/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/37d67306/test/unit/org/apache/cassandra/db/compaction/CompactionExecutorTest.java
--
diff --cc 
test/unit/org/apache/cassandra/db/compaction/CompactionExecutorTest.java
index 000,9b07da9..2f8b5b2
mode 00,100644..100644
--- a/test/unit/org/apache/cassandra/db/compaction/CompactionExecutorTest.java
+++ b/test/unit/org/apache/cassandra/db/compaction/CompactionExecutorTest.java
@@@ -1,0 -1,107 +1,109 @@@
+ /*
+  * Licensed to the Apache Software Foundation (ASF) under one
+  * or more contributor license agreements.  See the NOTICE file
+  * distributed with this work for additional information
+  * regarding copyright ownership.  The ASF licenses this file
+  * to you under the Apache License, Version 2.0 (the
+  * "License"); you may not use this file except in compliance
+  * with the License.  You may obtain a copy of the License at
+  *
+  * http://www.apache.org/licenses/LICENSE-2.0
+  *
+  * Unless required by applicable law or agreed to in writing, software
+  * distributed under the License is distributed on an "AS IS" BASIS,
+  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  * See the License for the specific language governing permissions and
+  * limitations under the License.
+  */
+ 
+ package org.apache.cassandra.db.compaction;
+ 
+ import java.util.concurrent.Future;
+ import java.util.concurrent.TimeUnit;
+ 
+ import org.junit.After;
+ import org.junit.Before;
+ import org.junit.Test;
+ import org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor;
++import org.apache.cassandra.config.DatabaseDescriptor;
+ 
+ import static org.junit.Assert.assertEquals;
+ import static org.junit.Assert.assertNotNull;
+ 
+ public class CompactionExecutorTest
+ {
+ static Throwable testTaskThrowable = null;
+ private static class TestTaskExecutor extends CompactionManager.CompactionExecutor
+ {
+ @Override
+ public void afterExecute(Runnable r, Throwable t)
+ {
+ if (t == null)
+ {
+ t = DebuggableThreadPoolExecutor.extractThrowable(r);
+ }
+ testTaskThrowable = t;
+ }
+ @Override
+ protected void beforeExecute(Thread t, Runnable r)
+ {
+ }
+ }
+ private CompactionManager.CompactionExecutor executor;
+ 
+ @Before
+ public void setup()
+ {
++DatabaseDescriptor.daemonInitialization();
+ executor = new TestTaskExecutor();
+ }
+ 
+ @After
+ public void destroy() throws Exception
+ {
+ executor.shutdown();
+ executor.awaitTermination(1, TimeUnit.MINUTES);
+ }
+ 
+ @Test
+ public void testFailedRunnable() throws Exception
+ {
+ testTaskThrowable = null;
+ Future tt = executor.submitIfRunning(
+ () -> { assert false : "testFailedRunnable"; }
+ , "compactionExecutorTest");
+ 
+ while (!tt.isDone())
+ Thread.sleep(10);
+ assertNotNull(testTaskThrowable);
+ assertEquals(testTaskThrowable.getMessage(), "testFailedRunnable");
+ }
+ 
+ @Test
+ public void testFailedCallable() throws Exception
+ {
+ testTaskThrowable = null;
+ Future tt = executor.submitIfRunning(
+ () -> { assert false : "testFailedCallable"; return 1; }
+ , "compactionExecutorTest");
+ 
+ while (!tt.isDone())
+ Thread.sleep(10);
+ assertNotNull(testTaskThrowable);
+ assertEquals(testTaskThrowable.getMessage(), "testFailedCalla

[06/10] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2017-09-04 Thread marcuse
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b0eba5f9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b0eba5f9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b0eba5f9

Branch: refs/heads/cassandra-3.0
Commit: b0eba5f9c64db18840a4b0e4d56a589c5f2e08cd
Parents: f791c26 e80ede6
Author: Marcus Eriksson 
Authored: Mon Sep 4 15:02:53 2017 +0200
Committer: Marcus Eriksson 
Committed: Mon Sep 4 15:02:53 2017 +0200

--
 CHANGES.txt |   1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  |   2 +-
 .../db/compaction/CompactionManager.java|   4 +-
 .../db/compaction/CompactionExecutorTest.java   | 107 +++
 4 files changed, 111 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b0eba5f9/CHANGES.txt
--
diff --cc CHANGES.txt
index b405fdf,03a78fd..3baa63b
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,37 -1,8 +1,38 @@@
 -2.2.11
 +3.0.15
 + * Better tolerate improperly formatted bcrypt hashes (CASSANDRA-13626) 
 + * Fix race condition in read command serialization (CASSANDRA-13363)
 + * Enable segement creation before recovering commitlogs (CASSANDRA-13587)
 + * Fix AssertionError in short read protection (CASSANDRA-13747)
 + * Don't skip corrupted sstables on startup (CASSANDRA-13620)
 + * Fix the merging of cells with different user type versions (CASSANDRA-13776)
 + * Copy session properties on cqlsh.py do_login (CASSANDRA-13640)
 + * Potential AssertionError during ReadRepair of range tombstone and partition deletions (CASSANDRA-13719)
 + * Don't let stress write warmup data if n=0 (CASSANDRA-13773)
 + * Gossip thread slows down when using batch commit log (CASSANDRA-12966)
 + * Randomize batchlog endpoint selection with only 1 or 2 racks (CASSANDRA-12884)
 + * Fix digest calculation for counter cells (CASSANDRA-13750)
 + * Fix ColumnDefinition.cellValueType() for non-frozen collection and change SSTabledump to use type.toJSONString() (CASSANDRA-13573)
 + * Skip materialized view addition if the base table doesn't exist (CASSANDRA-13737)
 + * Drop table should remove corresponding entries in dropped_columns table (CASSANDRA-13730)
 + * Log warn message until legacy auth tables have been migrated (CASSANDRA-13371)
 + * Fix incorrect [2.1 <- 3.0] serialization of counter cells created in 2.0 (CASSANDRA-13691)
 + * Fix invalid writetime for null cells (CASSANDRA-13711)
 + * Fix ALTER TABLE statement to atomically propagate changes to the table and its MVs (CASSANDRA-12952)
 + * Fixed ambiguous output of nodetool tablestats command (CASSANDRA-13722)
 + * JMXEnabledThreadPoolExecutor with corePoolSize equal to maxPoolSize (Backport CASSANDRA-13329)
 + * Fix Digest mismatch Exception if hints file has UnknownColumnFamily (CASSANDRA-13696)
 + * Purge tombstones created by expired cells (CASSANDRA-13643)
 + * Make concat work with iterators that have different subsets of columns (CASSANDRA-13482)
 + * Set test.runners based on cores and memory size (CASSANDRA-13078)
 + * Allow different NUMACTL_ARGS to be passed in (CASSANDRA-13557)
 + * Allow native function calls in CQLSSTableWriter (CASSANDRA-12606)
 + * Fix secondary index queries on COMPACT tables (CASSANDRA-13627)
 + * Nodetool listsnapshots output is missing a newline, if there are no snapshots (CASSANDRA-13568)
 + * sstabledump reports incorrect usage for argument order (CASSANDRA-13532)
 +Merged from 2.2:
+  * Fix compaction and flush exception not captured (CASSANDRA-13833)
 - * Make BatchlogManagerMBean.forceBatchlogReplay() blocking (CASSANDRA-13809)
   * Uncaught exceptions in Netty pipeline (CASSANDRA-13649)
 - * Prevent integer overflow on exabyte filesystems (CASSANDRA-13067)
 + * Prevent integer overflow on exabyte filesystems (CASSANDRA-13067)
   * Fix queries with LIMIT and filtering on clustering columns (CASSANDRA-11223)
   * Fix potential NPE when resume bootstrap fails (CASSANDRA-13272)
   * Fix toJSONString for the UDT, tuple and collection types (CASSANDRA-13592)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b0eba5f9/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --cc src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 7251244,7e36e11..183176c
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@@ -837,12 -899,33 +837,12 @@@ public class ColumnFamilyStore implemen
  {
  synchronized (data)
  {
 -if (previousFlushFailure != null)
 -throw new IllegalStateException("A flus

[04/10] cassandra git commit: Fix compaction and flush exception not captured issue

2017-09-04 Thread marcuse
Fix compaction and flush exception not captured issue

patch by Jay Zhuang; reviewed by marcuse for CASSANDRA-13833


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e80ede6d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e80ede6d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e80ede6d

Branch: refs/heads/trunk
Commit: e80ede6d393460f22ee2b313d4bac7e3fbbfe893
Parents: 4d90573
Author: Jay Zhuang 
Authored: Thu Aug 31 11:07:07 2017 -0700
Committer: Marcus Eriksson 
Committed: Mon Sep 4 15:01:02 2017 +0200

--
 CHANGES.txt |   1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  |   4 +-
 .../db/compaction/CompactionManager.java|   4 +-
 .../db/compaction/CompactionExecutorTest.java   | 131 +++
 4 files changed, 136 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e80ede6d/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 4e68ddc..03a78fd 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.11
+ * Fix compaction and flush exception not captured (CASSANDRA-13833)
  * Make BatchlogManagerMBean.forceBatchlogReplay() blocking (CASSANDRA-13809)
  * Uncaught exceptions in Netty pipeline (CASSANDRA-13649)
  * Prevent integer overflow on exabyte filesystems (CASSANDRA-13067) 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e80ede6d/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 2e52eb2..7e36e11 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -906,9 +906,9 @@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean
 logFlush();
 Flush flush = new Flush(false);
 ListenableFutureTask flushTask = ListenableFutureTask.create(flush, null);
-flushExecutor.submit(flushTask);
+flushExecutor.execute(flushTask);
 ListenableFutureTask task = ListenableFutureTask.create(flush.postFlush);
-postFlushExecutor.submit(task);
+postFlushExecutor.execute(task);
 
 @SuppressWarnings("unchecked")
 ListenableFuture future = 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e80ede6d/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
index d21f1e8..cd50646 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
@@ -1457,7 +1457,7 @@ public class CompactionManager implements CompactionManagerMBean
 return CompactionMetrics.getCompactions().size();
 }
 
-private static class CompactionExecutor extends JMXEnabledThreadPoolExecutor
+static class CompactionExecutor extends JMXEnabledThreadPoolExecutor
 {
 protected CompactionExecutor(int minThreads, int maxThreads, String name, BlockingQueue queue)
 {
@@ -1537,7 +1537,7 @@ public class CompactionManager implements CompactionManagerMBean
 try
 {
 ListenableFutureTask ret = ListenableFutureTask.create(task);
-submit(ret);
+execute(ret);
 return ret;
 }
 catch (RejectedExecutionException ex)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e80ede6d/test/unit/org/apache/cassandra/db/compaction/CompactionExecutorTest.java
--
diff --git a/test/unit/org/apache/cassandra/db/compaction/CompactionExecutorTest.java b/test/unit/org/apache/cassandra/db/compaction/CompactionExecutorTest.java
new file mode 100644
index 000..c6feb3f
--- /dev/null
+++ b/test/unit/org/apache/cassandra/db/compaction/CompactionExecutorTest.java
@@ -0,0 +1,131 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.o

[07/10] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2017-09-04 Thread marcuse
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b0eba5f9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b0eba5f9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b0eba5f9

Branch: refs/heads/trunk
Commit: b0eba5f9c64db18840a4b0e4d56a589c5f2e08cd
Parents: f791c26 e80ede6
Author: Marcus Eriksson 
Authored: Mon Sep 4 15:02:53 2017 +0200
Committer: Marcus Eriksson 
Committed: Mon Sep 4 15:02:53 2017 +0200

--
 CHANGES.txt |   1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  |   2 +-
 .../db/compaction/CompactionManager.java|   4 +-
 .../db/compaction/CompactionExecutorTest.java   | 107 +++
 4 files changed, 111 insertions(+), 3 deletions(-)
--



[10/10] cassandra git commit: Merge branch 'cassandra-3.11' into trunk

2017-09-04 Thread marcuse
Merge branch 'cassandra-3.11' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c8d15f04
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c8d15f04
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c8d15f04

Branch: refs/heads/trunk
Commit: c8d15f04f1efd37668e2ccbc681730ae6b2199da
Parents: e5f3bb6 37d6730
Author: Marcus Eriksson 
Authored: Mon Sep 4 15:04:18 2017 +0200
Committer: Marcus Eriksson 
Committed: Mon Sep 4 15:04:18 2017 +0200

--
 CHANGES.txt |   1 +
 .../db/compaction/CompactionManager.java|   4 +-
 .../db/compaction/CompactionExecutorTest.java   | 109 +++
 3 files changed, 112 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c8d15f04/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c8d15f04/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
--





[01/10] cassandra git commit: Fix compaction and flush exception not captured issue

2017-09-04 Thread marcuse
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 4d90573c5 -> e80ede6d3
  refs/heads/cassandra-3.0 f791c2690 -> b0eba5f9c
  refs/heads/cassandra-3.11 bed7fa5ef -> 37d67306a
  refs/heads/trunk e5f3bb6e5 -> c8d15f04f


Fix compaction and flush exception not captured issue

patch by Jay Zhuang; reviewed by marcuse for CASSANDRA-13833


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e80ede6d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e80ede6d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e80ede6d

Branch: refs/heads/cassandra-2.2
Commit: e80ede6d393460f22ee2b313d4bac7e3fbbfe893
Parents: 4d90573
Author: Jay Zhuang 
Authored: Thu Aug 31 11:07:07 2017 -0700
Committer: Marcus Eriksson 
Committed: Mon Sep 4 15:01:02 2017 +0200

--
 CHANGES.txt |   1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  |   4 +-
 .../db/compaction/CompactionManager.java|   4 +-
 .../db/compaction/CompactionExecutorTest.java   | 131 +++
 4 files changed, 136 insertions(+), 4 deletions(-)
--



[jira] [Commented] (CASSANDRA-8457) nio MessagingService

2017-09-04 Thread Stefan Podkowinski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16152574#comment-16152574
 ] 

Stefan Podkowinski commented on CASSANDRA-8457:
---

[~jasobrown], did you forget to bump netty to 4.1.14 in build.xml?

> nio MessagingService
> 
>
> Key: CASSANDRA-8457
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8457
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Jonathan Ellis
>Assignee: Jason Brown
>Priority: Minor
>  Labels: netty, performance
> Fix For: 4.0
>
> Attachments: 8457-load.tgz
>
>
> Thread-per-peer (actually two each incoming and outbound) is a big 
> contributor to context switching, especially for larger clusters.  Let's look 
> at switching to nio, possibly via Netty.






[jira] [Commented] (CASSANDRA-13396) Cassandra 3.10: ClassCastException in ThreadAwareSecurityManager

2017-09-04 Thread Eric Hubert (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16152569#comment-16152569
 ] 

Eric Hubert commented on CASSANDRA-13396:
-

We faced the same underlying issue after upgrading from Cassandra 3.9 to 3.11.0 
when using embedded Cassandra for JUnit integration testing.
Because our application uses a different logging backend and we did not want to 
switch it or maintain a redundant logback configuration, we excluded the logback 
dependencies and provided only our own implementation, which also avoids warnings 
about duplicate bindings. This setup worked fine with Cassandra 3.9 but fails with 
Cassandra >= 3.10: the server does not start up because of the missing classes. So 
any patch based on instanceof checks that still loads those classes without a 
specific try/catch would obviously also fail.

In addition to SMAwareReconfigureOnChangeFilter in 
org.apache.cassandra.cql3.functions.ThreadAwareSecurityManager.install() using 
multiple logback internals (added with CASSANDRA-12535), I also found the change 
from CASSANDRA-12509, which adds ch.qos.logback.core.hook.DelayingShutdownHook to 
StorageService#initServer, problematic.
Would handling all access to the underlying logging implementation via reflection 
be an alternative? E.g. attempt to load the logback classes and, only if that does 
not fail, perform the implementation-specific actions via reflection; otherwise log 
a warning about logback being absent, which can be ignored in integration test 
setups. Since this is mostly one-time initialization, the performance impact 
should be negligible.
This approach would require users to properly exclude the logback libraries if 
they want to use another slf4j binding, but providing multiple logging 
implementations with slf4j bindings already triggers a warning that should be 
handled anyway.
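
For illustration, the reflection idea could look roughly like the sketch below. 
This is only a sketch under the assumptions above, not the actual Cassandra code: 
the class name LogbackShutdownSupport and the method name flushLogbackIfPresent 
are made up for the example, and only the logback class names already mentioned 
in this ticket are touched.

{noformat}
import java.lang.reflect.Method;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public final class LogbackShutdownSupport
{
    private static final Logger logger = LoggerFactory.getLogger(LogbackShutdownSupport.class);

    private LogbackShutdownSupport() {}

    /**
     * Runs logback's DelayingShutdownHook only when logback is both on the
     * classpath and the bound slf4j backend, so other bindings keep working.
     */
    public static void flushLogbackIfPresent()
    {
        try
        {
            // Probe for logback without a compile-time dependency.
            Class<?> contextClass = Class.forName("ch.qos.logback.core.Context");
            Class<?> hookClass = Class.forName("ch.qos.logback.core.hook.DelayingShutdownHook");

            Object loggerFactory = LoggerFactory.getILoggerFactory();
            if (!contextClass.isInstance(loggerFactory))
            {
                logger.warn("logback is on the classpath but not the bound slf4j backend; skipping logback shutdown hook");
                return;
            }

            Object hook = hookClass.getDeclaredConstructor().newInstance();
            Method setContext = hookClass.getMethod("setContext", contextClass);
            setContext.invoke(hook, loggerFactory);
            // run() stops the logback context so async appenders get flushed.
            hookClass.getMethod("run").invoke(hook);
        }
        catch (ClassNotFoundException e)
        {
            logger.warn("logback not found on the classpath; skipping logback-specific shutdown handling");
        }
        catch (ReflectiveOperationException e)
        {
            logger.warn("Failed to invoke logback shutdown hook reflectively", e);
        }
    }
}
{noformat}

A similar guard could in principle wrap the logback-specific filter setup in 
ThreadAwareSecurityManager.install().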

> Cassandra 3.10: ClassCastException in ThreadAwareSecurityManager
> 
>
> Key: CASSANDRA-13396
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13396
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Edward Capriolo
>Assignee: Eugene Fedotov
>Priority: Minor
>
> https://www.mail-archive.com/user@cassandra.apache.org/msg51603.html






[jira] [Commented] (CASSANDRA-13833) Failed compaction is not captured

2017-09-04 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16152553#comment-16152553
 ] 

Marcus Eriksson commented on CASSANDRA-13833:
-

I reran the failures locally and the ones that looked suspicious all pass 
(except for trunk, which looks completely broken at the moment).

I'll get this committed.


> Failed compaction is not captured
> -
>
> Key: CASSANDRA-13833
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13833
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Jay Zhuang
>Assignee: Jay Zhuang
>
> Follow-up to CASSANDRA-13785: when a compaction fails, it fails silently. 
> No error message is logged and the exceptions metric is not updated. Basically, 
> the exception cannot be retrieved at 
> [CompactionManager.java:1491|https://github.com/apache/cassandra/blob/cassandra-2.2/src/java/org/apache/cassandra/db/compaction/CompactionManager.java#L1491]
> Here is the call stack:
> {noformat}
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:195)
> at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:89)
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:61)
> at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:264)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> There are two {{FutureTask}}s in the call stack, for example 
> {{FutureTask1(FutureTask2)}}. If the call throws an exception, {{FutureTask2}} 
> sets its status, saves the exception and returns, but {{FutureTask1}} never 
> sees the exception and marks its own status as normal. So we are unable to 
> get the exception in:
> [CompactionManager.java:1491|https://github.com/apache/cassandra/blob/cassandra-2.2/src/java/org/apache/cassandra/db/compaction/CompactionManager.java#L1491]
> 2.1.x works fine; here is its call stack:
> {noformat}
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:177)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:73)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:264)
>  ~[main/:na]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_141]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_141]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  ~[na:1.8.0_141]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  [na:1.8.0_141]
> at java.lang.Thread.run(Thread.java:748) [na:1.8.0_141]
> {noformat}
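
For illustration, here is a minimal, self-contained sketch of the double-wrapping 
effect described above, using only plain java.util.concurrent classes. It is not 
Cassandra code, and the class and task names are made up for the example: submit() 
wraps a task that is already a FutureTask in a second FutureTask, so the inner 
future keeps the throwable and afterExecute sees a successfully completed wrapper, 
whereas execute() runs the original future directly and the throwable stays 
recoverable.

{noformat}
import java.util.concurrent.*;

public class SubmitVsExecuteDemo
{
    // Executor that reports whatever afterExecute can recover, mirroring the
    // extractThrowable pattern used by DebuggableThreadPoolExecutor.
    static class LoggingExecutor extends ThreadPoolExecutor
    {
        LoggingExecutor()
        {
            super(1, 1, 0, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
        }

        @Override
        protected void afterExecute(Runnable r, Throwable t)
        {
            if (t == null && r instanceof Future<?> && ((Future<?>) r).isDone())
            {
                try
                {
                    ((Future<?>) r).get();
                }
                catch (ExecutionException e)
                {
                    t = e.getCause();
                }
                catch (InterruptedException | CancellationException ignored)
                {
                }
            }
            System.out.println("afterExecute saw: " + t);
        }
    }

    public static void main(String[] args) throws Exception
    {
        LoggingExecutor pool = new LoggingExecutor();

        // execute() runs the FutureTask itself, so afterExecute receives the
        // failed future and can dig out the RuntimeException ("boom").
        pool.execute(new FutureTask<Void>(() -> { throw new RuntimeException("boom"); }));

        // submit() wraps the already-wrapped task in another FutureTask; the
        // inner future keeps the exception, the outer one completes normally,
        // and afterExecute sees null.
        pool.submit(new FutureTask<Void>(() -> { throw new RuntimeException("boom2"); }));

        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}
{noformat}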






[jira] [Commented] (CASSANDRA-13662) Remove unsupported CREDENTIALS message

2017-09-04 Thread Stefan Podkowinski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16152525#comment-16152525
 ] 

Stefan Podkowinski commented on CASSANDRA-13662:


Thanks for taking a look at this. When it comes to unit testing, we could 
create a test to verify the correct opcode -> UnsupportedMessageCodec mapping, 
which seems a bit silly. The same goes for a unit test of 
UnsupportedMessageCodec itself, which always just throws an exception from both 
methods. Writing an integration test would require making the driver send 
CREDENTIALS, which I think would be non-trivial for this kind of low-level 
operation, but I'm not a driver expert.

> Remove unsupported CREDENTIALS message
> --
>
> Key: CASSANDRA-13662
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13662
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Auth, CQL
>Reporter: Stefan Podkowinski
>Assignee: Stefan Podkowinski
>Priority: Minor
>  Labels: security
> Fix For: 4.x
>
>
> Remove the CREDENTIALS message, as protocol v1 isn't supported anyway. Let's try 
> not to keep unused legacy classes around for any security-relevant features. 





