[jira] [Reopened] (CASSANDRA-11137) JSON datetime formatting needs timezone
[ https://issues.apache.org/jira/browse/CASSANDRA-11137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alex Petrov reopened CASSANDRA-11137:
-------------------------------------

The provided fix causes test failures for 2.2 and 3.0.

> JSON datetime formatting needs timezone
> ---------------------------------------
>
>                 Key: CASSANDRA-11137
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-11137
>             Project: Cassandra
>          Issue Type: Bug
>          Components: CQL
>            Reporter: Stefania
>            Assignee: Alex Petrov
>             Fix For: 3.6
>
> The JSON datetime string representation lacks the timezone information:
> {code}
> cqlsh:events> select toJson(created_at) AS created_at from event_by_user_timestamp ;
>  created_at
> ---------------------------
>  "2016-01-04 16:05:47.123"
> (1 rows)
> {code}
> vs.
> {code}
> cqlsh:events> select created_at FROM event_by_user_timestamp ;
>  created_at
> --------------------------
>  2016-01-04 15:05:47+0000
> (1 rows)
> cqlsh:events>
> {code}
> To make things even more complicated, the JSON timestamp is not returned in UTC.
> At the moment {{DateType}} picks this formatting string: {{"yyyy-MM-dd HH:mm:ss.SSS"}}. Shouldn't we somehow make this configurable by users, or at a minimum add the timezone?

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
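For context, the ambiguity is easy to reproduce directly with {{SimpleDateFormat}}: a pattern without a zone field renders in the JVM's default timezone, while a pattern ending in {{Z}} makes the string self-describing. The sketch below is illustrative only (the {{Z}}-suffixed pattern is one possible fix, not necessarily the patch that was committed):

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class TimestampFormatDemo
{
    // Formats like the DateType pattern quoted above: no zone field, so the
    // output silently depends on the JVM's default timezone.
    public static String formatWithoutZone(Date ts)
    {
        return new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSS").format(ts);
    }

    // A possible fix: append a zone-offset field and pin the formatter to
    // UTC, so the string is unambiguous regardless of the server's zone.
    public static String formatUtcWithZone(Date ts)
    {
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSSZ");
        fmt.setTimeZone(TimeZone.getTimeZone("UTC"));
        return fmt.format(ts);
    }

    public static void main(String[] args)
    {
        Date epoch = new Date(0); // 1970-01-01T00:00:00Z
        System.out.println(formatWithoutZone(epoch)); // varies with the local zone
        System.out.println(formatUtcWithZone(epoch)); // 1970-01-01 00:00:00.000+0000
    }
}
```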
[jira] [Commented] (CASSANDRA-11659) Set a variable for cassandra.logdir instead hardcoding in bin/cassandra.
[ https://issues.apache.org/jira/browse/CASSANDRA-11659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15259686#comment-15259686 ]

Herbert Fischer commented on CASSANDRA-11659:
---------------------------------------------

This setting could be moved to /etc/default/cassandra or /etc/cassandra/cassandra-env.sh, in JVM_OPTS.

> Set a variable for cassandra.logdir instead of hardcoding it in bin/cassandra.
> ------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-11659
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-11659
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Packaging
>            Reporter: Ahmed AbouZaid
>            Priority: Minor
>              Labels: patch
>         Attachments: cassandra_logdir_path_v.01.patch
>
>   Original Estimate: 5m
>  Remaining Estimate: 5m
>
> Hello,
> I think we need a way to override the value of "cassandra.logdir", in case the user needs to change the logdir path.
> It is currently hard-coded inside "/usr/sbin/cassandra" (as "/var/log/cassandra" or "$CASSANDRA_HOME/logs"), and it is also defined after the includes, so it should be a variable, just like "cassandra.storagedir".
> Something like this:
> {code}
> cassandra_parms="$cassandra_parms -Dcassandra.logdir=$cassandra_logdir"
> cassandra_parms="$cassandra_parms -Dcassandra.storagedir=$cassandra_storagedir"
> {code}
> And as on Debian, we can set "/var/log/cassandra" as the default value somewhere.
> Thanks
[jira] [Commented] (CASSANDRA-11452) Cache implementation using LIRS eviction for in-process page cache
[ https://issues.apache.org/jira/browse/CASSANDRA-11452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15259698#comment-15259698 ]

Benedict commented on CASSANDRA-11452:
--------------------------------------

If we had direct access to the hash table, the hash collision check could be made almost entirely free. As it stands, I don't think it can be done at all without a parallel structure, which would be costly to maintain.

> Cache implementation using LIRS eviction for in-process page cache
> ------------------------------------------------------------------
>
>                 Key: CASSANDRA-11452
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-11452
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Local Write-Read Paths
>            Reporter: Branimir Lambov
>            Assignee: Branimir Lambov
>
> Following up from CASSANDRA-5863: to make the best use of caching, and to avoid having to explicitly mark compaction accesses as non-cacheable, we need a cache implementation whose eviction algorithm can better handle non-recurring accesses.
[jira] [Created] (CASSANDRA-11669) RangeName queries might not return all the results
Benjamin Lerer created CASSANDRA-11669:
---------------------------------------

             Summary: RangeName queries might not return all the results
                 Key: CASSANDRA-11669
                 URL: https://issues.apache.org/jira/browse/CASSANDRA-11669
             Project: Cassandra
          Issue Type: Bug
            Reporter: Benjamin Lerer
            Assignee: Benjamin Lerer

It seems that if a page ends in the middle of a partition, the remaining rows of the partition will never be returned.

The problem can be reproduced with the Java driver and the following code:

{code}
session = cluster.connect();
session.execute("CREATE KEYSPACE IF NOT EXISTS test WITH REPLICATION = {'class' : 'SimpleStrategy', 'replication_factor' : '1'}");
session.execute("USE test");
session.execute("DROP TABLE IF EXISTS test");
session.execute("CREATE TABLE IF NOT EXISTS test (a int, b int, c int, d int, PRIMARY KEY(a, b, c))");
PreparedStatement prepare = session.prepare("INSERT INTO test (a, b, c, d) VALUES (?, ?, ?, ?);");
for (int i = 1; i < 4; i++)
    for (int j = 1; j < 5; j++)
        for (int k = 1; k < 5; k++)
            session.execute(prepare.bind(i, j, k, i + j));

ResultSet rs = session.execute(session.newSimpleStatement("SELECT * FROM test WHERE b = 1 and c IN (1, 2, 3) ALLOW FILTERING")
                                      .setFetchSize(4));
for (Row row : rs)
{
    System.out.println(row); // Only one row will be returned for partition 2 instead of 3
}
{code}
[jira] [Updated] (CASSANDRA-10707) Add support for Group By to Select statement
[ https://issues.apache.org/jira/browse/CASSANDRA-10707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Benjamin Lerer updated CASSANDRA-10707:
---------------------------------------
    Status: Open  (was: Patch Available)

> Add support for Group By to Select statement
> --------------------------------------------
>
>                 Key: CASSANDRA-10707
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-10707
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: CQL
>            Reporter: Benjamin Lerer
>            Assignee: Benjamin Lerer
>
> Now that Cassandra supports aggregate functions, it makes sense to support {{GROUP BY}} on {{SELECT}} statements.
> It should be possible to group either at the partition level or at the clustering column level.
> {code}
> SELECT partitionKey, max(value) FROM myTable GROUP BY partitionKey;
> SELECT partitionKey, clustering0, clustering1, max(value) FROM myTable GROUP BY partitionKey, clustering0, clustering1;
> {code}
[jira] [Updated] (CASSANDRA-11669) RangeName queries might not return all the results
[ https://issues.apache.org/jira/browse/CASSANDRA-11669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Benjamin Lerer updated CASSANDRA-11669:
---------------------------------------
    Component/s: CQL
[jira] [Updated] (CASSANDRA-11669) RangeName queries might not return all the results
[ https://issues.apache.org/jira/browse/CASSANDRA-11669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Benjamin Lerer updated CASSANDRA-11669:
---------------------------------------
    Since Version: 3.0.0
[jira] [Commented] (CASSANDRA-11669) RangeName queries might not return all the results
[ https://issues.apache.org/jira/browse/CASSANDRA-11669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15259749#comment-15259749 ]

Benjamin Lerer commented on CASSANDRA-11669:
--------------------------------------------

2.2 is not impacted by the problem.
[2/6] cassandra git commit: Remove unnecessary file existence check during anticompaction.
Remove unnecessary file existence check during anticompaction.

Patch by marcuse; reviewed by Paulo Motta for CASSANDRA-11660

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3db30aab
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3db30aab
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3db30aab

Branch: refs/heads/cassandra-3.0
Commit: 3db30aab98e8ca568b006273b533ae68f448f3ac
Parents: b6b2517
Author: Marcus Eriksson
Authored: Tue Apr 26 13:33:21 2016 +0200
Committer: Marcus Eriksson
Committed: Wed Apr 27 10:03:42 2016 +0200

----------------------------------------------------------------------
 CHANGES.txt                                               | 1 +
 .../apache/cassandra/db/compaction/CompactionManager.java | 8 --------
 2 files changed, 1 insertion(+), 8 deletions(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3db30aab/CHANGES.txt

diff --git a/CHANGES.txt b/CHANGES.txt
index bdabf29..e8a301a 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.7
+ * Remove unnescessary file existence check during anticompaction (CASSANDRA-11660)
  * Add missing files to debian packages (CASSANDRA-11642)
  * Avoid calling Iterables::concat in loops during ModificationStatement::getFunctions (CASSANDRA-11621)
  * cqlsh: COPY FROM should use regular inserts for single statement batches and

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3db30aab/src/java/org/apache/cassandra/db/compaction/CompactionManager.java

diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
index 675d3cc..3f41672 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
@@ -1235,17 +1235,9 @@ public class CompactionManager implements CompactionManagerMBean
 {
         long groupMaxDataAge = -1;

-        // check that compaction hasn't stolen any sstables used in previous repair sessions
-        // if we need to skip the anticompaction, it will be carried out by the next repair
         for (Iterator<SSTableReader> i = anticompactionGroup.originals().iterator(); i.hasNext();)
         {
             SSTableReader sstable = i.next();
-            if (!new File(sstable.getFilename()).exists())
-            {
-                logger.info("Skipping anticompaction for {}, required sstable was compacted and is no longer available.", sstable);
-                i.remove();
-                continue;
-            }
             if (groupMaxDataAge < sstable.maxDataAge)
                 groupMaxDataAge = sstable.maxDataAge;
         }
[4/6] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0
Merge branch 'cassandra-2.2' into cassandra-3.0

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8bfe09f4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8bfe09f4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8bfe09f4

Branch: refs/heads/trunk
Commit: 8bfe09f4660aee6401b36f43322adfca6273d786
Parents: f2afd04 3db30aa
Author: Marcus Eriksson
Authored: Wed Apr 27 10:10:42 2016 +0200
Committer: Marcus Eriksson
Committed: Wed Apr 27 10:10:42 2016 +0200

----------------------------------------------------------------------
 CHANGES.txt                                               | 1 +
 .../apache/cassandra/db/compaction/CompactionManager.java | 8 --------
 2 files changed, 1 insertion(+), 8 deletions(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8bfe09f4/CHANGES.txt

diff --cc CHANGES.txt
index f02d4f2,e8a301a..bc15d32
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,17 -1,5 +1,18 @@@
-2.2.7
+3.0.6
+ * Fix sstabledump not showing cells after tombstone marker (CASSANDRA-11654)
+ * Ignore all LocalStrategy keyspaces for streaming and other related
+   operations (CASSANDRA-11627)
+ * Ensure columnfilter covers indexed columns for thrift 2i queries (CASSANDRA-11523)
+ * Only open one sstable scanner per sstable (CASSANDRA-11412)
+ * Option to specify ProtocolVersion in cassandra-stress (CASSANDRA-11410)
+ * ArithmeticException in avgFunctionForDecimal (CASSANDRA-11485)
+ * LogAwareFileLister should only use OLD sstable files in current folder to determine disk consistency (CASSANDRA-11470)
+ * Notify indexers of expired rows during compaction (CASSANDRA-11329)
+ * Properly respond with ProtocolError when a v1/v2 native protocol
+   header is received (CASSANDRA-11464)
+ * Validate that num_tokens and initial_token are consistent with one another (CASSANDRA-10120)
+Merged from 2.2:
+ * Remove unnescessary file existence check during anticompaction (CASSANDRA-11660)
  * Add missing files to debian packages (CASSANDRA-11642)
  * Avoid calling Iterables::concat in loops during ModificationStatement::getFunctions (CASSANDRA-11621)
  * cqlsh: COPY FROM should use regular inserts for single statement batches and

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8bfe09f4/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
[6/6] cassandra git commit: Merge branch 'cassandra-3.0' into trunk
Merge branch 'cassandra-3.0' into trunk

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0e52
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0e52
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0e52

Branch: refs/heads/trunk
Commit: 0e524eaf71cad06407685dd8b947aac2179e
Parents: 29d4a82 8bfe09f
Author: Marcus Eriksson
Authored: Wed Apr 27 10:11:08 2016 +0200
Committer: Marcus Eriksson
Committed: Wed Apr 27 10:11:08 2016 +0200

----------------------------------------------------------------------
 CHANGES.txt                                               | 1 +
 .../apache/cassandra/db/compaction/CompactionManager.java | 8 --------
 2 files changed, 1 insertion(+), 8 deletions(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/cassandra/blob/0e52/CHANGES.txt
http://git-wip-us.apache.org/repos/asf/cassandra/blob/0e52/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
[3/6] cassandra git commit: Remove unnecessary file existence check during anticompaction.
Remove unnecessary file existence check during anticompaction.

Patch by marcuse; reviewed by Paulo Motta for CASSANDRA-11660

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3db30aab
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3db30aab
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3db30aab

Branch: refs/heads/trunk
Commit: 3db30aab98e8ca568b006273b533ae68f448f3ac
Parents: b6b2517
Author: Marcus Eriksson
Authored: Tue Apr 26 13:33:21 2016 +0200
Committer: Marcus Eriksson
Committed: Wed Apr 27 10:03:42 2016 +0200

----------------------------------------------------------------------
 CHANGES.txt                                               | 1 +
 .../apache/cassandra/db/compaction/CompactionManager.java | 8 --------
 2 files changed, 1 insertion(+), 8 deletions(-)
----------------------------------------------------------------------
[1/6] cassandra git commit: Remove unnecessary file existence check during anticompaction.
Repository: cassandra

Updated Branches:
  refs/heads/cassandra-2.2 b6b251770 -> 3db30aab9
  refs/heads/cassandra-3.0 f2afd04e7 -> 8bfe09f46
  refs/heads/trunk 29d4a8297 -> 0e524

Remove unnecessary file existence check during anticompaction.

Patch by marcuse; reviewed by Paulo Motta for CASSANDRA-11660

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3db30aab
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3db30aab
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3db30aab

Branch: refs/heads/cassandra-2.2
Commit: 3db30aab98e8ca568b006273b533ae68f448f3ac
Parents: b6b2517
Author: Marcus Eriksson
Authored: Tue Apr 26 13:33:21 2016 +0200
Committer: Marcus Eriksson
Committed: Wed Apr 27 10:03:42 2016 +0200

----------------------------------------------------------------------
 CHANGES.txt                                               | 1 +
 .../apache/cassandra/db/compaction/CompactionManager.java | 8 --------
 2 files changed, 1 insertion(+), 8 deletions(-)
----------------------------------------------------------------------
[5/6] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0
Merge branch 'cassandra-2.2' into cassandra-3.0

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8bfe09f4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8bfe09f4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8bfe09f4

Branch: refs/heads/cassandra-3.0
Commit: 8bfe09f4660aee6401b36f43322adfca6273d786
Parents: f2afd04 3db30aa
Author: Marcus Eriksson
Authored: Wed Apr 27 10:10:42 2016 +0200
Committer: Marcus Eriksson
Committed: Wed Apr 27 10:10:42 2016 +0200

----------------------------------------------------------------------
 CHANGES.txt                                               | 1 +
 .../apache/cassandra/db/compaction/CompactionManager.java | 8 --------
 2 files changed, 1 insertion(+), 8 deletions(-)
----------------------------------------------------------------------
[jira] [Updated] (CASSANDRA-11660) Dubious call to remove in CompactionManager.java
[ https://issues.apache.org/jira/browse/CASSANDRA-11660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Marcus Eriksson updated CASSANDRA-11660:
----------------------------------------
       Resolution: Fixed
    Fix Version/s: 2.2.7
                   3.0.6
                   3.6
           Status: Resolved  (was: Ready to Commit)

Committed, thanks all!

> Dubious call to remove in CompactionManager.java
> ------------------------------------------------
>
>                 Key: CASSANDRA-11660
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-11660
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Compaction
>            Reporter: Max Schaefer
>            Assignee: Marcus Eriksson
>            Priority: Minor
>             Fix For: 3.6, 3.0.6, 2.2.7
>
> I'm surprised by [this|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/CompactionManager.java#L1382] call to {{Iterator.remove()}} in {{org.apache.cassandra.db.compaction.antiCompactGroup}}: the iterator in question seems to come from [org.apache.cassandra.db.lifecycle.LifecycleTransaction.originals()|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/lifecycle/LifecycleTransaction.java#L419], which returns an unmodifiable set, so I would expect this call to always fail with an {{UnsupportedOperationException}}.
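The reporter's expectation is easy to confirm in isolation: the iterator of a set wrapped with {{Collections.unmodifiableSet}} rejects removal. A minimal standalone sketch (not Cassandra code; the element value is arbitrary):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.Iterator;
import java.util.Set;

public class UnmodifiableIteratorDemo
{
    // Returns the simple name of the exception raised by Iterator.remove(),
    // or "removed" if the removal unexpectedly succeeds.
    public static String tryRemove()
    {
        Set<String> originals =
            Collections.unmodifiableSet(new HashSet<>(Arrays.asList("sstable-1")));
        Iterator<String> it = originals.iterator();
        it.next();
        try
        {
            it.remove(); // the unmodifiable wrapper's iterator throws here
            return "removed";
        }
        catch (UnsupportedOperationException e)
        {
            return e.getClass().getSimpleName();
        }
    }

    public static void main(String[] args)
    {
        System.out.println(tryRemove()); // UnsupportedOperationException
    }
}
```

This is exactly why the eventual fix (CASSANDRA-11660 above) deletes the {{i.remove()}} call rather than keeping the existence check.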
[jira] [Created] (CASSANDRA-11670) Error while waiting on bootstrap to complete. Bootstrap will have to be restarted. Stream failed
Anastasia Osintseva created CASSANDRA-11670:
--------------------------------------------

             Summary: Error while waiting on bootstrap to complete. Bootstrap will have to be restarted. Stream failed
                 Key: CASSANDRA-11670
                 URL: https://issues.apache.org/jira/browse/CASSANDRA-11670
             Project: Cassandra
          Issue Type: Bug
          Components: Configuration, Streaming and Messaging
            Reporter: Anastasia Osintseva
             Fix For: 3.0.5

The cluster has 2 DCs with 2 nodes in each, and I wanted to add 1 node to each DC. One node was added successfully after I had run a scrub. Now we are trying to add a node to the other DC, but get the error:

org.apache.cassandra.streaming.StreamException: Stream failed

ERROR [StreamReceiveTask:5] 2016-04-27 00:33:21,082 Keyspace.java:492 - Unknown exception caught while attempting to update MaterializedView! messages_dump.messages
java.lang.IllegalArgumentException: Mutation of 34974901 bytes is too large for the maxiumum size of 33554432
	at org.apache.cassandra.db.commitlog.CommitLog.add(CommitLog.java:264) ~[apache-cassandra-3.0.5.jar:3.0.5]
	at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:469) [apache-cassandra-3.0.5.jar:3.0.5]
	at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:384) [apache-cassandra-3.0.5.jar:3.0.5]
	at org.apache.cassandra.db.Mutation.applyFuture(Mutation.java:205) [apache-cassandra-3.0.5.jar:3.0.5]
	at org.apache.cassandra.db.Mutation.apply(Mutation.java:217) [apache-cassandra-3.0.5.jar:3.0.5]
	at org.apache.cassandra.batchlog.BatchlogManager.store(BatchlogManager.java:146) ~[apache-cassandra-3.0.5.jar:3.0.5]
	at org.apache.cassandra.service.StorageProxy.mutateMV(StorageProxy.java:724) ~[apache-cassandra-3.0.5.jar:3.0.5]
	at org.apache.cassandra.db.view.ViewManager.pushViewReplicaUpdates(ViewManager.java:149) ~[apache-cassandra-3.0.5.jar:3.0.5]
	at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:487) [apache-cassandra-3.0.5.jar:3.0.5]
	at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:384) [apache-cassandra-3.0.5.jar:3.0.5]
	at org.apache.cassandra.db.Mutation.applyFuture(Mutation.java:205) [apache-cassandra-3.0.5.jar:3.0.5]
	at org.apache.cassandra.db.Mutation.apply(Mutation.java:217) [apache-cassandra-3.0.5.jar:3.0.5]
	at org.apache.cassandra.db.Mutation.applyUnsafe(Mutation.java:236) [apache-cassandra-3.0.5.jar:3.0.5]
	at org.apache.cassandra.streaming.StreamReceiveTask$OnCompletionRunnable.run(StreamReceiveTask.java:169) [apache-cassandra-3.0.5.jar:3.0.5]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_11]
	at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_11]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_11]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_11]
	at java.lang.Thread.run(Thread.java:745) [na:1.8.0_11]

ERROR [StreamReceiveTask:5] 2016-04-27 00:33:21,082 StreamReceiveTask.java:214 - Error applying streamed data:
java.lang.IllegalArgumentException: Mutation of 34974901 bytes is too large for the maxiumum size of 33554432
	at org.apache.cassandra.db.commitlog.CommitLog.add(CommitLog.java:264) ~[apache-cassandra-3.0.5.jar:3.0.5]
	at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:469) ~[apache-cassandra-3.0.5.jar:3.0.5]
	at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:384) ~[apache-cassandra-3.0.5.jar:3.0.5]
	at org.apache.cassandra.db.Mutation.applyFuture(Mutation.java:205) ~[apache-cassandra-3.0.5.jar:3.0.5]
	at org.apache.cassandra.db.Mutation.apply(Mutation.java:217) ~[apache-cassandra-3.0.5.jar:3.0.5]
	at org.apache.cassandra.batchlog.BatchlogManager.store(BatchlogManager.java:146) ~[apache-cassandra-3.0.5.jar:3.0.5]
	at org.apache.cassandra.service.StorageProxy.mutateMV(StorageProxy.java:724) ~[apache-cassandra-3.0.5.jar:3.0.5]
	at org.apache.cassandra.db.view.ViewManager.pushViewReplicaUpdates(ViewManager.java:149) ~[apache-cassandra-3.0.5.jar:3.0.5]
	at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:487) ~[apache-cassandra-3.0.5.jar:3.0.5]
	at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:384) ~[apache-cassandra-3.0.5.jar:3.0.5]
	at org.apache.cassandra.db.Mutation.applyFuture(Mutation.java:205) ~[apache-cassandra-3.0.5.jar:3.0.5]
	at org.apache.cassandra.db.Mutation.apply(Mutation.java:217) ~[apache-cassandra-3.0.5.jar:3.0.5]
	at org.apache.cassandra.db.Mutation.applyUnsafe(Mutation.java:236) ~[apache-cassandra-3.0.5.jar:3.0.5]
	at org.apache.cassandra.streaming.StreamReceiveTask$OnCompletionRunnable.run(StreamReceiveTask.java:169) ~[apache-cassandra-3.0.5.jar:3.0.5]
	at
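The 33554432-byte limit in the error corresponds to half of a 64 MB commitlog segment (Cassandra caps a single mutation at half the configured {{commitlog_segment_size_in_mb}}; 64 MB is the shipped default, and it is an assumption here that this cluster uses it, since its yaml is not shown). The arithmetic can be checked directly:

```java
public class MutationSizeCheck
{
    // Cassandra rejects any single mutation larger than half a commitlog
    // segment. segmentSizeMb models commitlog_segment_size_in_mb; the 64 MB
    // value below is the default, assumed for this cluster.
    public static long maxMutationBytes(long segmentSizeMb)
    {
        return segmentSizeMb * 1024 * 1024 / 2;
    }

    public static void main(String[] args)
    {
        long limit = maxMutationBytes(64);
        System.out.println(limit);             // 33554432, matching the error message
        System.out.println(34974901L > limit); // true: the streamed view mutation is rejected
    }
}
```

Raising {{commitlog_segment_size_in_mb}} would raise the limit, at the cost of larger commitlog segments.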
cassandra git commit: Always check for collisions before joining ring
Repository: cassandra Updated Branches: refs/heads/trunk 0e524 -> 2bc5f0c61 Always check for collisions before joining ring Patch by Sam Tunnicliffe; reviewed by Joel Knighton for CASSANDRA-10134 The collision check and shadow round can be skipped completely (for testing etc) by setting cassandra.allow_unsafe_join=true. This commit also enables explicit unsafe replace without bootstrap by using both auto_bootstrap=false and cassandra.replace_address. Doing so requires cassandra.allow_unsafe_replace=true. Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2bc5f0c6 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2bc5f0c6 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2bc5f0c6 Branch: refs/heads/trunk Commit: 2bc5f0c61ddb428b4826d83d42dad473eaeac002 Parents: 0e52666 Author: Sam Tunnicliffe Authored: Wed Mar 16 09:53:04 2016 + Committer: Sam Tunnicliffe Committed: Wed Apr 27 09:25:33 2016 +0100 -- CHANGES.txt | 1 + NEWS.txt| 4 + .../apache/cassandra/db/view/ViewBuilder.java | 1 + .../apache/cassandra/db/view/ViewManager.java | 16 ++ .../gms/GossipDigestAckVerbHandler.java | 5 +- .../gms/GossipDigestSynVerbHandler.java | 28 ++- src/java/org/apache/cassandra/gms/Gossiper.java | 92 -- .../org/apache/cassandra/io/util/FileUtils.java | 4 +- .../locator/DynamicEndpointSnitch.java | 2 +- .../cassandra/service/StorageService.java | 172 --- .../cassandra/service/StorageServiceMBean.java | 2 +- .../apache/cassandra/tools/nodetool/Info.java | 2 +- .../cassandra/utils/JVMStabilityInspector.java | 4 +- .../unit/org/apache/cassandra/SchemaLoader.java | 2 + 14 files changed, 244 insertions(+), 91 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/2bc5f0c6/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 2bbd39d..f37a8ab 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 3.6 + * Always perform collision check before joining ring (CASSANDRA-10134) * 
SSTableWriter output discrepancy (CASSANDRA-11646) * Fix potential timeout in NativeTransportService.testConcurrentDestroys (CASSANDRA-10756) * Support large partitions on the 3.0 sstable format (CASSANDRA-11206) http://git-wip-us.apache.org/repos/asf/cassandra/blob/2bc5f0c6/NEWS.txt -- diff --git a/NEWS.txt b/NEWS.txt index a177d37..7f24d2c 100644 --- a/NEWS.txt +++ b/NEWS.txt @@ -18,6 +18,10 @@ using the provided 'sstableupgrade' tool. New features + - Collision checks are performed when joining the token ring, regardless of whether + the node should bootstrap. Additionally, replace_address can legitimately be used + without bootstrapping to help with recovery of nodes with partially failed disks. + See CASSANDRA-10134 for more details. - Key cache will only hold indexed entries up to the size configured by column_index_cache_size_in_kb in cassandra.yaml in memory. Larger indexed entries will never go into memory. See CASSANDRA-11206 for more details. http://git-wip-us.apache.org/repos/asf/cassandra/blob/2bc5f0c6/src/java/org/apache/cassandra/db/view/ViewBuilder.java -- diff --git a/src/java/org/apache/cassandra/db/view/ViewBuilder.java b/src/java/org/apache/cassandra/db/view/ViewBuilder.java index 23eeba4..8944122 100644 --- a/src/java/org/apache/cassandra/db/view/ViewBuilder.java +++ b/src/java/org/apache/cassandra/db/view/ViewBuilder.java @@ -103,6 +103,7 @@ public class ViewBuilder extends CompactionInfo.Holder public void run() { +logger.trace("Running view builder for {}.{}", baseCfs.metadata.ksName, view.name); UUID localHostId = SystemKeyspace.getLocalHostId(); String ksname = baseCfs.metadata.ksName, viewName = view.name; http://git-wip-us.apache.org/repos/asf/cassandra/blob/2bc5f0c6/src/java/org/apache/cassandra/db/view/ViewManager.java -- diff --git a/src/java/org/apache/cassandra/db/view/ViewManager.java b/src/java/org/apache/cassandra/db/view/ViewManager.java index faa5551..37428ad 100644 --- a/src/java/org/apache/cassandra/db/view/ViewManager.java 
+++ b/src/java/org/apache/cassandra/db/view/ViewManager.java @@ -35,6 +35,8 @@ import org.apache.cassandra.db.partitions.PartitionUpdate; import org.apache.cassandra.dht.Token; import org.apache.cassandra.repair.SystemDistributedKeyspace; import org.apache.cassandra.service.StoragePro
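The startup options described in the commit message above can be sketched as JVM flags, e.g. appended to JVM_OPTS in cassandra-env.sh. This is an illustrative fragment based only on the property names in the commit message; the placeholder address is hypothetical:

```shell
# Sketch of the CASSANDRA-10134 startup properties (testing/recovery use only).

# Skip the collision check and gossip shadow round entirely (testing etc.):
JVM_OPTS="$JVM_OPTS -Dcassandra.allow_unsafe_join=true"

# Explicit replace without bootstrap: set auto_bootstrap=false in
# cassandra.yaml, name the node being replaced, and opt in explicitly:
JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address=10.0.0.3"
JVM_OPTS="$JVM_OPTS -Dcassandra.allow_unsafe_replace=true"
```

Without cassandra.allow_unsafe_replace=true, combining auto_bootstrap=false with cassandra.replace_address is rejected at startup.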
[jira] [Updated] (CASSANDRA-11670) Error while waiting on bootstrap to complete. Bootstrap will have to be restarted. Stream failed
[ https://issues.apache.org/jira/browse/CASSANDRA-11670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anastasia Osintseva updated CASSANDRA-11670: Description: I have in cluster 2 DC, in each DC - 2 Nodes. I wanted to add 1 node to each DC. One node has been added successfully after I had made scrubing. Now I'm trying to add node to another DC, but get error: org.apache.cassandra.streaming.StreamException: Stream failed. After scrubing and repair I get the same error. {noformat} ERROR [StreamReceiveTask:5] 2016-04-27 00:33:21,082 Keyspace.java:492 - Unknown exception caught while attempting to update MaterializedView! messages_dump.messages java.lang.IllegalArgumentException: Mutation of 34974901 bytes is too large for the maxiumum size of 33554432 at org.apache.cassandra.db.commitlog.CommitLog.add(CommitLog.java:264) ~[apache-cassandra-3.0.5.jar:3.0.5] at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:469) [apache-cassandra-3.0.5.jar:3.0.5] at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:384) [apache-cassandra-3.0.5.jar:3.0.5] at org.apache.cassandra.db.Mutation.applyFuture(Mutation.java:205) [apache-cassandra-3.0.5.jar:3.0.5] at org.apache.cassandra.db.Mutation.apply(Mutation.java:217) [apache-cassandra-3.0.5.jar:3.0.5] at org.apache.cassandra.batchlog.BatchlogManager.store(BatchlogManager.java:146) ~[apache-cassandra-3.0.5.jar:3.0.5] at org.apache.cassandra.service.StorageProxy.mutateMV(StorageProxy.java:724) ~[apache-cassandra-3.0.5.jar:3.0.5] at org.apache.cassandra.db.view.ViewManager.pushViewReplicaUpdates(ViewManager.java:149) ~[apache-cassandra-3.0.5.jar:3.0.5] at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:487) [apache-cassandra-3.0.5.jar:3.0.5] at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:384) [apache-cassandra-3.0.5.jar:3.0.5] at org.apache.cassandra.db.Mutation.applyFuture(Mutation.java:205) [apache-cassandra-3.0.5.jar:3.0.5] at org.apache.cassandra.db.Mutation.apply(Mutation.java:217) 
[apache-cassandra-3.0.5.jar:3.0.5] at org.apache.cassandra.db.Mutation.applyUnsafe(Mutation.java:236) [apache-cassandra-3.0.5.jar:3.0.5] at org.apache.cassandra.streaming.StreamReceiveTask$OnCompletionRunnable.run(StreamReceiveTask.java:169) [apache-cassandra-3.0.5.jar:3.0.5] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_11] at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_11] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_11] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_11] at java.lang.Thread.run(Thread.java:745) [na:1.8.0_11] ERROR [StreamReceiveTask:5] 2016-04-27 00:33:21,082 StreamReceiveTask.java:214 - Error applying streamed data: java.lang.IllegalArgumentException: Mutation of 34974901 bytes is too large for the maxiumum size of 33554432 at org.apache.cassandra.db.commitlog.CommitLog.add(CommitLog.java:264) ~[apache-cassandra-3.0.5.jar:3.0.5] at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:469) ~[apache-cassandra-3.0.5.jar:3.0.5] at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:384) ~[apache-cassandra-3.0.5.jar:3.0.5] at org.apache.cassandra.db.Mutation.applyFuture(Mutation.java:205) ~[apache-cassandra-3.0.5.jar:3.0.5] at org.apache.cassandra.db.Mutation.apply(Mutation.java:217) ~[apache-cassandra-3.0.5.jar:3.0.5] at org.apache.cassandra.batchlog.BatchlogManager.store(BatchlogManager.java:146) ~[apache-cassandra-3.0.5.jar:3.0.5] at org.apache.cassandra.service.StorageProxy.mutateMV(StorageProxy.java:724) ~[apache-cassandra-3.0.5.jar:3.0.5] at org.apache.cassandra.db.view.ViewManager.pushViewReplicaUpdates(ViewManager.java:149) ~[apache-cassandra-3.0.5.jar:3.0.5] at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:487) ~[apache-cassandra-3.0.5.jar:3.0.5] at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:384) ~[apache-cassandra-3.0.5.jar:3.0.5] at 
org.apache.cassandra.db.Mutation.applyFuture(Mutation.java:205) ~[apache-cassandra-3.0.5.jar:3.0.5] at org.apache.cassandra.db.Mutation.apply(Mutation.java:217) ~[apache-cassandra-3.0.5.jar:3.0.5] at org.apache.cassandra.db.Mutation.applyUnsafe(Mutation.java:236) ~[apache-cassandra-3.0.5.jar:3.0.5] at org.apache.cassandra.streaming.StreamReceiveTask$OnCompletionRunnable.run(StreamReceiveTask.java:169) ~[apache-cassandra-3.0.5.jar:3.0.5] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_11] at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_11] at java.util.concurrent.ThreadPoolExec
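The 33554432-byte limit in the trace above is Cassandra's maximum mutation size, which is half the commitlog segment size. A small sketch of that arithmetic — the 64 MB segment size is inferred from the reported limit, not stated in the ticket:

```java
public class MutationLimit {
    public static void main(String[] args) {
        // Cassandra rejects mutations larger than half a commitlog segment.
        // A 33554432-byte limit implies commitlog_segment_size_in_mb: 64
        // (the assumed value here; defaults vary between versions).
        int segmentSizeMb = 64;
        long maxMutationBytes = (long) segmentSizeMb * 1024 * 1024 / 2;
        System.out.println(maxMutationBytes); // prints 33554432
    }
}
```

Raising commitlog_segment_size_in_mb (or making the streamed materialized-view batches smaller) is the usual way past this error, though whether that is appropriate here depends on the view design.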
[jira] [Created] (CASSANDRA-11671) Remove check on gossip status from DynamicEndpointSnitch::updateScores
Sam Tunnicliffe created CASSANDRA-11671: --- Summary: Remove check on gossip status from DynamicEndpointSnitch::updateScores Key: CASSANDRA-11671 URL: https://issues.apache.org/jira/browse/CASSANDRA-11671 Project: Cassandra Issue Type: Improvement Components: Coordination Reporter: Sam Tunnicliffe Priority: Minor Fix For: 3.x It seems that historically there were initialization ordering issues that affected DES and StorageService (CASSANDRA-1756), and so a condition was added to DES::updateScores() to ensure that SS had finished setup. In fact, the check was actually testing whether gossip was active or not. CASSANDRA-10134 preserved this behaviour, but it seems likely that the check can be removed from DES completely now. If not, it can at least be switched to use SS::isInitialized(), which post CASSANDRA-10134 actually reports what its name suggests. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (CASSANDRA-10134) Always require replace_address to replace existing address
[ https://issues.apache.org/jira/browse/CASSANDRA-10134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sam Tunnicliffe resolved CASSANDRA-10134. - Resolution: Fixed Fix Version/s: (was: 3.x) 3.6 Committed to trunk in {{2bc5f0c61ddb428b4826d83d42dad473eaeac002}} (with a couple of the log statements emitted during a shadow round switched from trace to debug). I've opened CASSANDRA-11671 for the change to {{DynamicEndpointSnitch::updateScores}}. > Always require replace_address to replace existing address > -- > > Key: CASSANDRA-10134 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10134 > Project: Cassandra > Issue Type: Improvement > Components: Distributed Metadata >Reporter: Tyler Hobbs >Assignee: Sam Tunnicliffe > Labels: docs-impacting > Fix For: 3.6 > > > Normally, when a node is started from a clean state with the same address as > an existing down node, it will fail to start with an error like this: > {noformat} > ERROR [main] 2015-08-19 15:07:51,577 CassandraDaemon.java:554 - Exception > encountered during startup > java.lang.RuntimeException: A node with address /127.0.0.3 already exists, > cancelling join. Use cassandra.replace_address if you want to replace this > node. 
> at > org.apache.cassandra.service.StorageService.checkForEndpointCollision(StorageService.java:543) > ~[main/:na] > at > org.apache.cassandra.service.StorageService.prepareToJoin(StorageService.java:783) > ~[main/:na] > at > org.apache.cassandra.service.StorageService.initServer(StorageService.java:720) > ~[main/:na] > at > org.apache.cassandra.service.StorageService.initServer(StorageService.java:611) > ~[main/:na] > at > org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:378) > [main/:na] > at > org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:537) > [main/:na] > at > org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:626) > [main/:na] > {noformat} > However, if {{auto_bootstrap}} is set to false or the node is in its own seed > list, it will not throw this error and will start normally. The new node > then takes over the host ID of the old node (even if the tokens are > different), and the only message you will see is a warning in the other > nodes' logs: > {noformat} > logger.warn("Changing {}'s host ID from {} to {}", endpoint, storedId, > hostId); > {noformat} > This could cause an operator to accidentally wipe out the token information > for a down node without replacing it. To fix this, we should check for an > endpoint collision even if {{auto_bootstrap}} is false or the node is a seed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
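The behaviour described above reduces to one rule: a colliding address cancels the join unless replacement is explicitly requested. A schematic of that decision — this is NOT the actual StorageService implementation, just the logic the ticket asks to make unconditional:

```java
// Schematic of the endpoint-collision rule from CASSANDRA-10134;
// method and class names here are illustrative, not Cassandra's.
public class JoinCheck {
    static void checkForEndpointCollision(boolean addressAlreadyInRing,
                                          boolean replaceAddressSet) {
        if (addressAlreadyInRing && !replaceAddressSet)
            throw new RuntimeException("A node with this address already exists, "
                + "cancelling join. Use cassandra.replace_address to replace it.");
    }

    public static void main(String[] args) {
        // Pre-10134 this check only ran when auto_bootstrap was true and the
        // node was not a seed; post-10134 it runs unconditionally (unless
        // cassandra.allow_unsafe_join=true is set explicitly).
        checkForEndpointCollision(false, false); // clean address: join proceeds
        try {
            checkForEndpointCollision(true, false); // collision, no replace
        } catch (RuntimeException e) {
            System.out.println("cancelled: " + e.getMessage());
        }
    }
}
```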
[jira] [Created] (CASSANDRA-11672) Upgradesstables errors with "CompoundComposite cannot be cast to org.apache.cassandra.db.composites.CellName"
Simon Ashley created CASSANDRA-11672: Summary: Upgradesstables errors with "CompoundComposite cannot be cast to org.apache.cassandra.db.composites.CellName" Key: CASSANDRA-11672 URL: https://issues.apache.org/jira/browse/CASSANDRA-11672 Project: Cassandra Issue Type: Bug Reporter: Simon Ashley Upgradesstables in C* 2.1 fails on thrift tables originally created on C*1.2 with the following error: {quote} $ nodetool upgradesstables -a error: org.apache.cassandra.db.composites.CompoundComposite cannot be cast to org.apache.cassandra.db.composites.CellName -- StackTrace -- java.lang.ClassCastException: org.apache.cassandra.db.composites.CompoundComposite cannot be cast to org.apache.cassandra.db.composites.CellName at org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:86) at org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:52) at org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:46) at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143) at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) at org.apache.cassandra.io.sstable.SSTableIdentityIterator.hasNext(SSTableIdentityIterator.java:171) at org.apache.cassandra.utils.MergeIterator$OneToOne.computeNext(MergeIterator.java:202) at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143) at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) at com.google.common.collect.Iterators$7.computeNext(Iterators.java:645) at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143) at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) at org.apache.cassandra.db.ColumnIndex$Builder.buildForCompaction(ColumnIndex.java:166) at org.apache.cassandra.db.compaction.LazilyCompactedRow.write(LazilyCompactedRow.java:121) at org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:193) 
at org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:126) at org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:197) at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:73) at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59) at org.apache.cassandra.db.compaction.CompactionManager$4.execute(CompactionManager.java:376) at org.apache.cassandra.db.compaction.CompactionManager$2.call(CompactionManager.java:304) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) {quote} This problem is not seen if the thrift table was originally created in C* 2.0.x The suspicion is that this is related to the use of a CompositeType comparator. The following schema is an example of a cf that will cause this issue. 
{quote} create column family cf1 with column_type = 'Standard' and comparator = 'CompositeType(org.apache.cassandra.db.marshal.ReversedType(org.apache.cassandra.db.marshal.DateType),org.apache.cassandra.db.marshal.UUIDType,org.apache.cassandra.db.marshal.AsciiType,org.apache.cassandra.db.marshal.UUIDType,org.apache.cassandra.db.marshal.UUIDType,org.apache.cassandra.db.marshal.AsciiType,org.apache.cassandra.db.marshal.AsciiType,org.apache.cassandra.db.marshal.AsciiType,org.apache.cassandra.db.marshal.AsciiType,org.apache.cassandra.db.marshal.AsciiType,org.apache.cassandra.db.marshal.AsciiType)' and default_validation_class = 'UTF8Type' and key_validation_class = 'CompositeType(org.apache.cassandra.db.marshal.LongType,org.apache.cassandra.db.marshal.IntegerType)' and read_repair_chance = 1.0 and dclocal_read_repair_chance = 0.0 and populate_io_cache_on_flush = false and gc_grace = 259200 and min_compaction_threshold = 4 and max_compaction_threshold = 32 and replicate_on_write = true and compaction_strategy = 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy' and caching = 'KEYS_ONLY' and compression_options = {'sstable_compression' : 'org.apache.cassandra.io.compress.SnappyCompressor', 'chunk_length_kb' : '64'}; {quote} You can workaround this via the creation of a dummy table and update of schema_columnfamilies for each cf affected. The dummy cf can be deleted afterwards. cassandra-cli [default@unknown] use ks1; [defa
[jira] [Comment Edited] (CASSANDRA-10134) Always require replace_address to replace existing address
[ https://issues.apache.org/jira/browse/CASSANDRA-10134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15259801#comment-15259801 ] Sam Tunnicliffe edited comment on CASSANDRA-10134 at 4/27/16 8:52 AM: -- Committed to trunk in {{2bc5f0c61ddb428b4826d83d42dad473eaeac002}} (with a couple of the log statements emitted during a shadow round switched from trace to debug, at the suggestion of [~brandon.williams]). I've opened CASSANDRA-11671 for the change to {{DynamicEndpointSnitch::updateScores}}. was (Author: beobal): Committed to trunk in {{2bc5f0c61ddb428b4826d83d42dad473eaeac002}} (with a couple of the log statements emitted during a shadow round switched from trace to debug). I've opened CASSANDRA-11671 for the change to {{DynamicEndpointSnitch::updateScores}}. > Always require replace_address to replace existing address > -- > > Key: CASSANDRA-10134 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10134 > Project: Cassandra > Issue Type: Improvement > Components: Distributed Metadata >Reporter: Tyler Hobbs >Assignee: Sam Tunnicliffe > Labels: docs-impacting > Fix For: 3.6 > > > Normally, when a node is started from a clean state with the same address as > an existing down node, it will fail to start with an error like this: > {noformat} > ERROR [main] 2015-08-19 15:07:51,577 CassandraDaemon.java:554 - Exception > encountered during startup > java.lang.RuntimeException: A node with address /127.0.0.3 already exists, > cancelling join. Use cassandra.replace_address if you want to replace this > node. 
> at > org.apache.cassandra.service.StorageService.checkForEndpointCollision(StorageService.java:543) > ~[main/:na] > at > org.apache.cassandra.service.StorageService.prepareToJoin(StorageService.java:783) > ~[main/:na] > at > org.apache.cassandra.service.StorageService.initServer(StorageService.java:720) > ~[main/:na] > at > org.apache.cassandra.service.StorageService.initServer(StorageService.java:611) > ~[main/:na] > at > org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:378) > [main/:na] > at > org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:537) > [main/:na] > at > org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:626) > [main/:na] > {noformat} > However, if {{auto_bootstrap}} is set to false or the node is in its own seed > list, it will not throw this error and will start normally. The new node > then takes over the host ID of the old node (even if the tokens are > different), and the only message you will see is a warning in the other > nodes' logs: > {noformat} > logger.warn("Changing {}'s host ID from {} to {}", endpoint, storedId, > hostId); > {noformat} > This could cause an operator to accidentally wipe out the token information > for a down node without replacing it. To fix this, we should check for an > endpoint collision even if {{auto_bootstrap}} is false or the node is a seed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11137) JSON datetime formatting needs timezone
[ https://issues.apache.org/jira/browse/CASSANDRA-11137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15259837#comment-15259837 ] Alex Petrov commented on CASSANDRA-11137: - I've also branched the changes to 2.2 and 3.0 (they merged mostly seamlessly): |[2.2|https://github.com/ifesdjeen/cassandra/tree/11137-2.2]|[utest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-11137-2.2-testall/]|[dtest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-11137-2.2-dtest/]| |[3.0|https://github.com/ifesdjeen/cassandra/tree/11137-3.0]|[utest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-11137-3.0-testall/]|[dtest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-11137-3.0-dtest/]| And [opened a PR that fixes the tests|https://github.com/riptano/cassandra-dtest/pull/955]. I'll track progress in the corresponding [test team issue|https://issues.apache.org/jira/browse/CASSANDRA-11650]. > JSON datetime formatting needs timezone > --- > > Key: CASSANDRA-11137 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11137 > Project: Cassandra > Issue Type: Bug > Components: CQL >Reporter: Stefania >Assignee: Alex Petrov > Fix For: 3.6 > > > The JSON date time string representation lacks the timezone information: > {code} > cqlsh:events> select toJson(created_at) AS created_at from > event_by_user_timestamp ; > created_at > --- > "2016-01-04 16:05:47.123" > (1 rows) > {code} > vs. > {code} > cqlsh:events> select created_at FROM event_by_user_timestamp ; > created_at > -- > 2016-01-04 15:05:47+0000 > (1 rows) > cqlsh:events> > {code} > To make things even more complicated the JSON timestamp is not returned in > UTC. > At the moment {{DateType}} picks this formatting string {{"yyyy-MM-dd > HH:mm:ss.SSS"}}. Shouldn't we somehow make this configurable by users or at a > minimum add the timezone? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (CASSANDRA-11137) JSON datetime formatting needs timezone
[ https://issues.apache.org/jira/browse/CASSANDRA-11137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alex Petrov resolved CASSANDRA-11137. - Resolution: Fixed > JSON datetime formatting needs timezone > --- > > Key: CASSANDRA-11137 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11137 > Project: Cassandra > Issue Type: Bug > Components: CQL >Reporter: Stefania >Assignee: Alex Petrov > Fix For: 3.6 > > > The JSON date time string representation lacks the timezone information: > {code} > cqlsh:events> select toJson(created_at) AS created_at from > event_by_user_timestamp ; > created_at > --- > "2016-01-04 16:05:47.123" > (1 rows) > {code} > vs. > {code} > cqlsh:events> select created_at FROM event_by_user_timestamp ; > created_at > -- > 2016-01-04 15:05:47+0000 > (1 rows) > cqlsh:events> > {code} > To make things even more complicated the JSON timestamp is not returned in > UTC. > At the moment {{DateType}} picks this formatting string {{"yyyy-MM-dd > HH:mm:ss.SSS"}}. Shouldn't we somehow make this configurable by users or at a > minimum add the timezone? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (CASSANDRA-11137) JSON datetime formatting needs timezone
[ https://issues.apache.org/jira/browse/CASSANDRA-11137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15259837#comment-15259837 ] Alex Petrov edited comment on CASSANDRA-11137 at 4/27/16 9:19 AM: -- I've also branch changes to 2.2 and 3.0 (merged mostly seamlessly), in case we would like to have backported versions: |[2.2|https://github.com/ifesdjeen/cassandra/tree/11137-2.2]|[utest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-11137-2.2-testall/]|[dtest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-11137-2.2-dtest/]| |[3.0|https://github.com/ifesdjeen/cassandra/tree/11137-3.0]|[utest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-11137-3.0-testall/]|[dtest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-11137-3.0-dtest/]| And [opened up an PR that fix tests|https://github.com/riptano/cassandra-dtest/pull/955]. I'll track the progress in corresponding [test team issue|https://issues.apache.org/jira/browse/CASSANDRA-11650]. was (Author: ifesdjeen): I've also branch changes to 2.2 and 3.0 (merged mostly seamlessly) |[2.2|https://github.com/ifesdjeen/cassandra/tree/11137-2.2]|[utest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-11137-2.2-testall/]|[dtest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-11137-2.2-dtest/]| |[3.0|https://github.com/ifesdjeen/cassandra/tree/11137-3.0]|[utest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-11137-3.0-testall/]|[dtest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-11137-3.0-dtest/]| And [opened up an PR that fix tests|https://github.com/riptano/cassandra-dtest/pull/955]. I'll track the progress in corresponding [test team issue|https://issues.apache.org/jira/browse/CASSANDRA-11650]. 
> JSON datetime formatting needs timezone > --- > > Key: CASSANDRA-11137 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11137 > Project: Cassandra > Issue Type: Bug > Components: CQL >Reporter: Stefania >Assignee: Alex Petrov > Fix For: 3.6 > > > The JSON date time string representation lacks the timezone information: > {code} > cqlsh:events> select toJson(created_at) AS created_at from > event_by_user_timestamp ; > created_at > --- > "2016-01-04 16:05:47.123" > (1 rows) > {code} > vs. > {code} > cqlsh:events> select created_at FROM event_by_user_timestamp ; > created_at > -- > 2016-01-04 15:05:47+0000 > (1 rows) > cqlsh:events> > {code} > To make things even more complicated the JSON timestamp is not returned in > UTC. > At the moment {{DateType}} picks this formatting string {{"yyyy-MM-dd > HH:mm:ss.SSS"}}. Shouldn't we somehow make this configurable by users or at a > minimum add the timezone? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
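For reference, the kind of change the ticket asks for can be sketched with java.text.SimpleDateFormat. The exact pattern committed for 3.6 may differ; this only shows a zone-qualified UTC rendering of the timestamp from the description:

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class JsonTimestamp {
    public static void main(String[] args) {
        // Append a zone designator ("Z") to DateType's pattern and format in
        // UTC, so the JSON string is unambiguous whatever the server timezone.
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSSZ");
        fmt.setTimeZone(TimeZone.getTimeZone("UTC"));
        // 1451919947123L is 2016-01-04 15:05:47.123 UTC, as in the ticket.
        System.out.println(fmt.format(new Date(1451919947123L)));
        // prints 2016-01-04 15:05:47.123+0000
    }
}
```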
[jira] [Commented] (CASSANDRA-10783) Allow literal value as parameter of UDF & UDA
[ https://issues.apache.org/jira/browse/CASSANDRA-10783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15259840#comment-15259840 ] Ajeet Singh commented on CASSANDRA-10783: - Thanks Robert Stupp Benjamin Lerer, It will be great if it will be available in 3.6. Signature of my UDF: CREATE OR REPLACE FUNCTION spatial_keyspace.state_group_and_max( state map, type text, pkey int, level int) CQL Query: select spatial_keyspace.group_and_count(quadkey, pkey, %level_bind_parameter%) from spatial_keyspace.businesspoints where longitude >= -179.98333 and longitude <=86 and latitude >= -179.98333 and latitude <= 86 LIMIT 10 ALLOW FILTERING; > Allow literal value as parameter of UDF & UDA > - > > Key: CASSANDRA-10783 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10783 > Project: Cassandra > Issue Type: Improvement > Components: CQL >Reporter: DOAN DuyHai >Assignee: Robert Stupp >Priority: Minor > Labels: CQL3, UDF, client-impacting, doc-impacting > Fix For: 3.x > > > I have defined the following UDF > {code:sql} > CREATE OR REPLACE FUNCTION maxOf(current int, testValue int) RETURNS NULL ON > NULL INPUT > RETURNS int > LANGUAGE java > AS 'return Math.max(current,testValue);' > CREATE TABLE maxValue(id int primary key, val int); > INSERT INTO maxValue(id, val) VALUES(1, 100); > SELECT maxOf(val, 101) FROM maxValue WHERE id=1; > {code} > I got the following error message: > {code} > SyntaxException: message="line 1:19 no viable alternative at input '101' (SELECT maxOf(val1, > [101]...)"> > {code} > It would be nice to allow literal value as parameter of UDF and UDA too. > I was thinking about an use-case for an UDA groupBy() function where the end > user can *inject* at runtime a literal value to select which aggregation he > want to display, something similar to GROUP BY ... HAVING -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11650) dtest failure in json_test.ToJsonSelectTests.complex_data_types_test
[ https://issues.apache.org/jira/browse/CASSANDRA-11650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15259841#comment-15259841 ] Alex Petrov commented on CASSANDRA-11650: - The problem was caused by [11137|https://issues.apache.org/jira/browse/CASSANDRA-11137]. I've opened a [PR to dtest|https://github.com/riptano/cassandra-dtest/pull/955] that fixes the inconsistencies between 3.0 (/2.2) and 3.6. > dtest failure in json_test.ToJsonSelectTests.complex_data_types_test > > > Key: CASSANDRA-11650 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11650 > Project: Cassandra > Issue Type: Test >Reporter: Russ Hatch >Assignee: DS Test Eng > Labels: dtest > > example failure: > http://cassci.datastax.com/job/cassandra-2.2_dtest/585/testReport/json_test/ToJsonSelectTests/complex_data_types_test > Failed on CassCI build cassandra-2.2_dtest #585 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-11650) dtest failure in json_test.ToJsonSelectTests.complex_data_types_test
[ https://issues.apache.org/jira/browse/CASSANDRA-11650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alex Petrov updated CASSANDRA-11650: Status: Patch Available (was: Open) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-11629) java.lang.UnsupportedOperationException when selecting rows with counters
[ https://issues.apache.org/jira/browse/CASSANDRA-11629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alex Petrov updated CASSANDRA-11629: Status: Patch Available (was: Open) Patch for {{3.0}} and {{trunk}}: |[trunk|https://github.com/ifesdjeen/cassandra/tree/11629-trunk]|[utest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-11629-trunk-testall/]|[dtest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-11629-trunk-dtest/]| |[3.0|https://github.com/ifesdjeen/cassandra/tree/11629-3.0]|[utest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-11629-3.0-testall/]|[dtest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-11629-3.0-dtest/]| I have also added paging tests with counter columns in [dtest|https://github.com/riptano/cassandra-dtest/pull/956]. The {{dtest}} failures on 3.0 are "known issues" that existed before the patch: [11650|https://issues.apache.org/jira/browse/CASSANDRA-11650] and [11127|https://issues.apache.org/jira/browse/CASSANDRA-11127]. Tests are passing locally. 
> java.lang.UnsupportedOperationException when selecting rows with counters > - > > Key: CASSANDRA-11629 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11629 > Project: Cassandra > Issue Type: Bug > Environment: Ubuntu 16.04 LTS > Cassandra 3.0.5 Community Edition >Reporter: Arnd Hannemann >Assignee: Alex Petrov > Labels: 3.0.5 > Fix For: 3.6, 3.0.x > > > When selecting a non empty set of rows with counters a exception occurs: > {code} > WARN [SharedPool-Worker-2] 2016-04-21 23:47:47,542 > AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread > Thread[SharedPool-Worker-2,5,main]: {} > java.lang.RuntimeException: java.lang.UnsupportedOperationException > at > org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2449) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > ~[na:1.8.0_45] > at > org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at > org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136) > [apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) > [apache-cassandra-3.0.5.jar:3.0.5] > at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45] > Caused by: java.lang.UnsupportedOperationException: null > at > org.apache.cassandra.db.marshal.AbstractType.compareCustom(AbstractType.java:172) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at > org.apache.cassandra.db.marshal.AbstractType.compare(AbstractType.java:158) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at > org.apache.cassandra.db.marshal.AbstractType.compareForCQL(AbstractType.java:202) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at > org.apache.cassandra.cql3.Operator.isSatisfiedBy(Operator.java:169) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at > 
org.apache.cassandra.db.filter.RowFilter$SimpleExpression.isSatisfiedBy(RowFilter.java:619) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at > org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToRow(RowFilter.java:258) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at > org.apache.cassandra.db.transform.BaseRows.applyOne(BaseRows.java:95) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.transform.BaseRows.add(BaseRows.java:86) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at > org.apache.cassandra.db.transform.UnfilteredRows.add(UnfilteredRows.java:21) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at > org.apache.cassandra.db.transform.Transformation.add(Transformation.java:136) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at > org.apache.cassandra.db.transform.Transformation.apply(Transformation.java:102) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at > org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToPartition(RowFilter.java:246) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at > org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToPartition(RowFilter.java:236) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at > org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:76) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at > org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:295) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at > o
[jira] [Commented] (CASSANDRA-10783) Allow literal value as parameter of UDF & UDA
[ https://issues.apache.org/jira/browse/CASSANDRA-10783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15259895#comment-15259895 ] Benjamin Lerer commented on CASSANDRA-10783: Sorry guys, I underestimated the time I needed for some other tasks. Given that the code freeze for 3.6 is on Monday and that I still have several reviews with higher priority, I do not think this ticket will make it. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-10783) Allow literal value as parameter of UDF & UDA
[ https://issues.apache.org/jira/browse/CASSANDRA-10783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15259922#comment-15259922 ] DOAN DuyHai commented on CASSANDRA-10783: - Ok so it'll be in 3.8 then -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-11655) sstabledump doesn't print out tombstone information for deleted collection column
[ https://issues.apache.org/jira/browse/CASSANDRA-11655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Lohfink updated CASSANDRA-11655: -- Attachment: trunk-11655v2.patch > sstabledump doesn't print out tombstone information for deleted collection > column > - > > Key: CASSANDRA-11655 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11655 > Project: Cassandra > Issue Type: Bug > Components: Tools >Reporter: Wei Deng >Assignee: Chris Lohfink > Labels: Tools > Attachments: CASSANDRA-11655.patch, trunk-11655v2.patch > > > Pretty trivial to reproduce. > {noformat} > echo "CREATE KEYSPACE IF NOT EXISTS testks WITH replication = {'class': > 'SimpleStrategy', 'replication_factor': '1'};" | cqlsh > echo "CREATE TABLE IF NOT EXISTS testks.testcf ( k int, c text, val0_int int, > val1_set_of_int set, PRIMARY KEY (k, c) );" | cqlsh > echo "INSERT INTO testks.testcf (k, c, val0_int, val1_set_of_int) VALUES (1, > 'c1', 100, {1, 2, 3, 4, 5});" | cqlsh > echo "delete val1_set_of_int from testks.testcf where k=1 and c='c1';" | cqlsh > echo "select * from testks.testcf;" | cqlsh > nodetool flush testks testcf > {noformat} > Now if you run sstabledump (even after taking the > [patch|https://github.com/yukim/cassandra/tree/11654-3.0] for > CASSANDRA-11654) against the newly generated SSTable like the following: > {noformat} > ~/cassandra-trunk/tools/bin/sstabledump ma-1-big-Data.db > [ > { > "partition" : { > "key" : [ "1" ], > "position" : 0 > }, > "rows" : [ > { > "type" : "row", > "position" : 18, > "clustering" : [ "c1" ], > "liveness_info" : { "tstamp" : 1461645231352208 }, > "cells" : [ > { "name" : "val0_int", "value" : "100" } > ] > } > ] > } > ] > {noformat} > You will see that the collection-level Deletion Info is nowhere to be found, > so you will not be able to know "markedForDeleteAt" or "localDeletionTime" > for this collection tombstone. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11655) sstabledump doesn't print out tombstone information for deleted collection column
[ https://issues.apache.org/jira/browse/CASSANDRA-11655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15259926#comment-15259926 ] Chris Lohfink commented on CASSANDRA-11655: --- Merged with trunk and (per CASSANDRA-11656) changed timestamps to always print a consistent ISO 8601 string. Added a {{-t}} option to print timestamps as before. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
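The change above prints write timestamps as ISO 8601 strings instead of the raw microsecond values seen in the sample dump (e.g. {{"tstamp" : 1461645231352208}}). A minimal, standalone sketch of that conversion using {{java.time}}; this is an illustration, not sstabledump's actual formatting code, and the helper name is made up:

```java
import java.time.Instant;
import java.time.format.DateTimeFormatter;

public class MicrosToIso {
    // Cell write timestamps are microseconds since the Unix epoch.
    // Split them into seconds plus a nanosecond adjustment for Instant.
    static String toIso8601(long micros) {
        Instant instant = Instant.ofEpochSecond(micros / 1_000_000L,
                                                (micros % 1_000_000L) * 1_000L);
        return DateTimeFormatter.ISO_INSTANT.format(instant);
    }

    public static void main(String[] args) {
        // The "tstamp" value from the ticket's sample output.
        System.out.println(toIso8601(1461645231352208L));
    }
}
```

The same value then reads as an unambiguous UTC instant rather than a bare integer.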
[jira] [Commented] (CASSANDRA-11137) JSON datetime formatting needs timezone
[ https://issues.apache.org/jira/browse/CASSANDRA-11137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15259946#comment-15259946 ] Stefania commented on CASSANDRA-11137: -- I'm +1 on the dtest PR, assuming the test team is also OK with using ellipses to relax the output checks. We shouldn't backport a patch because of test limitations, but I've noticed that this ticket is classified as a bug, so back-porting it might be the correct thing to do after all. Do you agree that it should be back-ported to 2.2 and 3.0, [~iamaleksey] or [~slebresne]? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-10783) Allow literal value as parameter of UDF & UDA
[ https://issues.apache.org/jira/browse/CASSANDRA-10783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15259981#comment-15259981 ] Benjamin Lerer commented on CASSANDRA-10783: My plan is to review it as soon as possible. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11137) JSON datetime formatting needs timezone
[ https://issues.apache.org/jira/browse/CASSANDRA-11137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15259985#comment-15259985 ] Aleksey Yeschenko commented on CASSANDRA-11137: --- It is a bug, and something that should normally be backported. It's also potentially a breaking behaviour change for JSON consumers. That said, I think benefits of fixing the bug outweigh that risk. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
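The fix discussed here boils down to appending zone information to the date pattern. A hedged sketch of the difference between a zone-less pattern and a zone-qualified one; the pattern strings are illustrative and not necessarily the exact ones the patch uses:

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class JsonTimestampFormat {
    // "XXX" appends an ISO 8601 offset ("Z" for UTC), which a plain
    // "yyyy-MM-dd HH:mm:ss.SSS" pattern lacks.
    static String format(long epochMillis, String pattern, TimeZone zone) {
        SimpleDateFormat fmt = new SimpleDateFormat(pattern);
        fmt.setTimeZone(zone);
        return fmt.format(new Date(epochMillis));
    }

    public static void main(String[] args) {
        long ts = 1451919947123L; // 2016-01-04 15:05:47.123 UTC, as in the ticket
        TimeZone utc = TimeZone.getTimeZone("UTC");
        System.out.println(format(ts, "yyyy-MM-dd HH:mm:ss.SSS", utc));    // no zone: ambiguous
        System.out.println(format(ts, "yyyy-MM-dd HH:mm:ss.SSSXXX", utc)); // zone-qualified
    }
}
```

With the offset present, a JSON consumer no longer has to guess the server's local time zone.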
[jira] [Commented] (CASSANDRA-11662) Cassandra 2.0 and later require Java 7u25 or later - java sre 1.7.0_101-b14
[ https://issues.apache.org/jira/browse/CASSANDRA-11662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15260003#comment-15260003 ] William Boutin commented on CASSANDRA-11662: Thank you for the replies. How do I close my duplicate request? > Cassandra 2.0 and later require Java 7u25 or later - java sre 1.7.0_101-b14 > --- > > Key: CASSANDRA-11662 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11662 > Project: Cassandra > Issue Type: Bug > Components: Testing > Environment: cassandra server 2.1.5 and java jdk1.7.0_101-b14 >Reporter: William Boutin > Fix For: 2.1.x > > > We have the Cassandra Server 2.1.5 running. When we applied java patch java > jdk1.7.0_101-b14, cassandra will not start. The cassandra log states > "Cassandra 2.0 and later require Java 7u25 or later". -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11670) Error while waiting on bootstrap to complete. Bootstrap will have to be restarted. Stream failed
[ https://issues.apache.org/jira/browse/CASSANDRA-11670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15260041#comment-15260041 ] Paulo Motta commented on CASSANDRA-11670: - This is strange, can you double check that none of your nodes in any data center have a custom {{commitlog_segment_size_in_mb}} or {{max_mutation_size_in_kb}} configuration set? Also, can you verify during node initialization on {{system.log}} that {{commitlog_segment_size_in_mb=128}} was picked up by configuration when you changed and that {{max_mutation_size_in_kb=null}}? Maybe check that on other nodes as well to see if you find any strange combination. > Error while waiting on bootstrap to complete. Bootstrap will have to be > restarted. Stream failed > > > Key: CASSANDRA-11670 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11670 > Project: Cassandra > Issue Type: Bug > Components: Configuration, Streaming and Messaging >Reporter: Anastasia Osintseva > Fix For: 3.0.5 > > > I have in cluster 2 DC, in each DC - 2 Nodes. I wanted to add 1 node to each > DC. One node has been added successfully after I had made scrubing. > Now I'm trying to add node to another DC, but get error: > org.apache.cassandra.streaming.StreamException: Stream failed. > After scrubing and repair I get the same error. > {noformat} > ERROR [StreamReceiveTask:5] 2016-04-27 00:33:21,082 Keyspace.java:492 - > Unknown exception caught while attempting to update MaterializedView! 
> messages_dump.messages > java.lang.IllegalArgumentException: Mutation of 34974901 bytes is too large > for the maxiumum size of 33554432 > at org.apache.cassandra.db.commitlog.CommitLog.add(CommitLog.java:264) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:469) > [apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:384) > [apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Mutation.applyFuture(Mutation.java:205) > [apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Mutation.apply(Mutation.java:217) > [apache-cassandra-3.0.5.jar:3.0.5] > at > org.apache.cassandra.batchlog.BatchlogManager.store(BatchlogManager.java:146) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at > org.apache.cassandra.service.StorageProxy.mutateMV(StorageProxy.java:724) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at > org.apache.cassandra.db.view.ViewManager.pushViewReplicaUpdates(ViewManager.java:149) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:487) > [apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:384) > [apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Mutation.applyFuture(Mutation.java:205) > [apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Mutation.apply(Mutation.java:217) > [apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Mutation.applyUnsafe(Mutation.java:236) > [apache-cassandra-3.0.5.jar:3.0.5] > at > org.apache.cassandra.streaming.StreamReceiveTask$OnCompletionRunnable.run(StreamReceiveTask.java:169) > [apache-cassandra-3.0.5.jar:3.0.5] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > [na:1.8.0_11] > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > [na:1.8.0_11] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > [na:1.8.0_11] > at > 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > [na:1.8.0_11] > at java.lang.Thread.run(Thread.java:745) [na:1.8.0_11] > ERROR [StreamReceiveTask:5] 2016-04-27 00:33:21,082 > StreamReceiveTask.java:214 - Error applying streamed data: > java.lang.IllegalArgumentException: Mutation of 34974901 bytes is too large > for the maxiumum size of 33554432 > at org.apache.cassandra.db.commitlog.CommitLog.add(CommitLog.java:264) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:469) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:384) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Mutation.applyFuture(Mutation.java:205) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at org.apache.cassandra.db.Mutation.apply(Mutation.java:217) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at > org.apache.cassandra.batchlog.BatchlogManager.store(BatchlogManager.java:146) > ~[apache-cassandra-3.0.5.jar:3.0.5] > at > org.apache.cassandra.service.StorageProxy.mutateMV(StorageProxy
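The 33554432-byte limit in the log above is consistent with the 3.0 default: when {{max_mutation_size_in_kb}} is unset, it is derived as half of {{commitlog_segment_size_in_mb}}, which is why Paulo asks about both settings. A small sketch of that arithmetic (illustrative helper, not Cassandra's configuration code):

```java
public class MutationSizeCheck {
    // Default derivation in 3.0 when max_mutation_size_in_kb is unset:
    // half of the commit log segment size.
    static long defaultMaxMutationBytes(int commitlogSegmentSizeInMb) {
        return commitlogSegmentSizeInMb * 1024L * 1024L / 2;
    }

    public static void main(String[] args) {
        long limit = defaultMaxMutationBytes(64);   // the stock 64 MB segment size
        System.out.println(limit);                  // 33554432, matching the error message
        System.out.println(34974901L > limit);      // the rejected mutation exceeds it
    }
}
```

So a 34974901-byte mutation is rejected under the stock settings, and raising {{commitlog_segment_size_in_mb}} to 128 would roughly double the allowance.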
[jira] [Resolved] (CASSANDRA-11662) Cassandra 2.0 and later require Java 7u25 or later - java sre 1.7.0_101-b14
[ https://issues.apache.org/jira/browse/CASSANDRA-11662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko resolved CASSANDRA-11662. --- Resolution: Duplicate Fix Version/s: (was: 2.1.x) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11502) Fix denseness and column metadata updates coming from Thrift
[ https://issues.apache.org/jira/browse/CASSANDRA-11502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15260110#comment-15260110 ] Sylvain Lebresne commented on CASSANDRA-11502: -- bq. but I, instead, feel more paranoid about leaving it in Fair enough, I'm good getting rid of it. bq. I think we should be safe here b/c of the {{isThriftCompatible()}} guard in {{CassandraServer::system_update_column_family()}}. You're right. I got confused because I helped someone a few days ago with an upgrade problem and was able to do an update on a CQL table, but that was on some 2.0 version, so it must have been from before we introduced that guard. It would still be great to double-check, but +1 on the patch in any case. > Fix denseness and column metadata updates coming from Thrift > > > Key: CASSANDRA-11502 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11502 > Project: Cassandra > Issue Type: Bug > Components: Distributed Metadata >Reporter: Aleksey Yeschenko >Assignee: Aleksey Yeschenko >Priority: Minor > Fix For: 2.2.x, 3.0.x, 3.x > > > It was > [decided|https://issues.apache.org/jira/browse/CASSANDRA-7744?focusedCommentId=14095472&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14095472] > that we'd be recalculating {{is_dense}} for table updates coming from Thrift > on every change. However, due to some oversight, {{is_dense}} can only go > from {{false}} to {{true}}. Once dense, even adding a {{REGULAR}} column will > not reset {{is_dense}} back to {{false}}. > The recalculation fails because no matter what happens, we never remove the > auto-generated {{CLUSTERING}} and {{COMPACT_VALUE}} columns of a dense table. > Which ultimately leads to the issue on 2.2 to 3.0 upgrade (see > CASSANDRA-11315). 
> What we should do is remove the special-case for Thrift in > {{LegacySchemaTables::makeUpdateTableMutation}} and correct the logic in > {{ThriftConversion::internalFromThrift}} to remove those columns when going > from dense to sparse. > This is not enough to fix CASSANDRA-11315, however, as we need to handle > pre-patch upgrades, and upgrades from 2.1. Fixing it in 2.2 means a) getting > proper schema from {{DESCRIBE}} now and b) using the more efficient > {{SparseCellNameType}} when you add columns. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-5863) In process (uncompressed) page cache
[ https://issues.apache.org/jira/browse/CASSANDRA-5863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15260127#comment-15260127 ] Branimir Lambov commented on CASSANDRA-5863: In the latest couple of updates I did some renaming: - {{BufferlessRebufferer}} to {{ChunkReader}} with {{rebuffer}} to {{readChunk}} - {{BaseRebufferer}} to {{ReaderFileProxy}} - {{SharedRebufferer}} to {{RebuffererFactory}} with factory method - {{ReaderCache}} to {{ChunkCache}} and updated some of the documentation. Hopefully this reads better now? Switched to Caffeine as planned in CASSANDRA-11452: - [better cache efficiency|https://docs.google.com/spreadsheets/d/11VcYh8wiCbpVmeix10onalAS4phfREWcxE-RMPTM7cc/edit#gid=0] on CachingBench which includes compaction, scans and collation from multiple sstables - [cstar_perf with everything served off cache|http://cstar.datastax.com/tests/id/b5963866-0b9a-11e6-a761-0256e416528f] shows equivalent performance, i.e. it does not degrade on heavy load - [cstar_perf on smaller cache|http://cstar.datastax.com/tests/id/41b4c650-0c6d-11e6-bf41-0256e416528f] shows better hit rate even with uniformly random access patterns (48.8 vs 45.4% as reported by nodetool info) - unlike LIRS, memory overheads are very controlled and specified [here|https://github.com/ben-manes/caffeine/wiki/Memory-overhead]: at most 112 bytes per chunk including key, i.e. 0.2% for 64k chunks to 3% for 4k chunks. 
And finally rebased to get dtest in sync: |[code|https://github.com/blambov/cassandra/tree/5863-page-cache-caffeine-rebased]|[utest|http://cassci.datastax.com/job/blambov-5863-page-cache-caffeine-rebased-testall/]|[dtest|http://cassci.datastax.com/job/blambov-5863-page-cache-caffeine-rebased-dtest/]| > In process (uncompressed) page cache > > > Key: CASSANDRA-5863 > URL: https://issues.apache.org/jira/browse/CASSANDRA-5863 > Project: Cassandra > Issue Type: Sub-task >Reporter: T Jake Luciani >Assignee: Branimir Lambov > Labels: performance > Fix For: 3.x > > > Currently, for every read, the CRAR reads each compressed chunk into a > byte[], sends it to ICompressor, gets back another byte[] and verifies a > checksum. > This process is where the majority of time is spent in a read request. > Before compression, we would have zero-copy of data and could respond > directly from the page-cache. > It would be useful to have some kind of Chunk cache that could speed up this > process for hot data, possibly off heap. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
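The weight-bounded chunk cache described above can be illustrated with a deliberately simplified, single-threaded, stdlib-only sketch. The actual patch uses Caffeine, tracks memory precisely (including off-heap buffers), and is concurrent; none of that is shown here:

```java
import java.nio.ByteBuffer;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Toy in-process chunk cache: maps file position -> uncompressed chunk,
// bounded by total cached bytes, evicting least-recently-used chunks.
// Purely illustrative of the caching idea, not the real implementation.
public class ChunkCacheSketch {
    private final long maxBytes;
    private long cachedBytes = 0;
    private final LinkedHashMap<Long, ByteBuffer> chunks =
            new LinkedHashMap<>(16, 0.75f, true); // access order, for LRU eviction

    public ChunkCacheSketch(long maxBytes) { this.maxBytes = maxBytes; }

    public ByteBuffer get(long position) { return chunks.get(position); }

    public void put(long position, ByteBuffer chunk) {
        ByteBuffer old = chunks.put(position, chunk);
        cachedBytes += chunk.capacity() - (old == null ? 0 : old.capacity());
        // Evict least-recently-used chunks until back under the weight bound.
        Iterator<Map.Entry<Long, ByteBuffer>> it = chunks.entrySet().iterator();
        while (cachedBytes > maxBytes && it.hasNext()) {
            cachedBytes -= it.next().getValue().capacity();
            it.remove();
        }
    }

    public int size() { return chunks.size(); }
}
```

Caffeine's weigher plays the role of {{chunk.capacity()}} here, with the overheads per cached chunk quoted in the comment above.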
[jira] [Commented] (CASSANDRA-11502) Fix denseness and column metadata updates coming from Thrift
[ https://issues.apache.org/jira/browse/CASSANDRA-11502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15260146#comment-15260146 ] Aleksey Yeschenko commented on CASSANDRA-11502: --- bq. I help someone a few days ago with an upgrade problem and was able to do an update on a CQL table, but that was on some 2.0 version so must have been on some version from before we introduced that. Do you have that table schema handy? I might as well check if the check for that fails in 2.1+ and open a new ticket if so. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-11555) Make prepared statement cache size configurable
[ https://issues.apache.org/jira/browse/CASSANDRA-11555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sylvain Lebresne updated CASSANDRA-11555: - Reviewer: Benjamin Lerer > Make prepared statement cache size configurable > --- > > Key: CASSANDRA-11555 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11555 > Project: Cassandra > Issue Type: Improvement >Reporter: Robert Stupp >Assignee: Robert Stupp >Priority: Minor > > The prepared statement caches in {{org.apache.cassandra.cql3.QueryProcessor}} > are configured using the formula {{Runtime.getRuntime().maxMemory() / 256}}. > Sometimes applications may need more than that. Proposal is to make that > value configurable - probably also distinguish thrift and native CQL3 queries > (new applications don't need the thrift stuff). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
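The formula quoted in the ticket caps each prepared-statement cache at 1/256th of the maximum heap. A sketch of what a configurable version might look like; the {{cassandra.prepared_cache_size_mb}} system property here is hypothetical, standing in for whatever option the patch actually introduces:

```java
public class PreparedCacheSize {
    // QueryProcessor currently sizes its caches as maxMemory / 256.
    // An override hook (property name is made up for illustration)
    // lets applications that prepare many statements raise the cap.
    static long cacheSizeBytes() {
        String override = System.getProperty("cassandra.prepared_cache_size_mb");
        if (override != null)
            return Long.parseLong(override) * 1024L * 1024L;
        return Runtime.getRuntime().maxMemory() / 256;
    }

    public static void main(String[] args) {
        System.out.println(cacheSizeBytes());
    }
}
```

Splitting the setting between Thrift and native CQL3 callers, as the ticket suggests, would just mean two such values instead of one.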
[jira] [Commented] (CASSANDRA-11670) Error while waiting on bootstrap to complete. Bootstrap will have to be restarted. Stream failed
[ https://issues.apache.org/jira/browse/CASSANDRA-11670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15260338#comment-15260338 ] Anastasia Osintseva commented on CASSANDRA-11670: - I had no more Mutation of Y bytes is too large for the maxiumum size of X, but I got again Error: {noformat} ERROR [main] 2016-04-27 17:32:24,714 StorageService.java:1300 - Error while waiting on bootstrap to complete. Bootstrap will have to be restarted. java.util.concurrent.ExecutionException: org.apache.cassandra.streaming.StreamException: Stream failed at com.google.common.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:299) ~[guava-18.0.jar:na] at com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:286) ~[guava-18.0.jar:na] at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:116) ~[guava-18.0.jar:na] at org.apache.cassandra.service.StorageService.bootstrap(StorageService.java:1295) [apache-cassandra-3.0.5.jar:3.0.5] at org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:971) [apache-cassandra-3.0.5.jar:3.0.5] at org.apache.cassandra.service.StorageService.initServer(StorageService.java:745) [apache-cassandra-3.0.5.jar:3.0.5] at org.apache.cassandra.service.StorageService.initServer(StorageService.java:610) [apache-cassandra-3.0.5.jar:3.0.5] at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:333) [apache-cassandra-3.0.5.jar:3.0.5] at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:551) [apache-cassandra-3.0.5.jar:3.0.5] at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:679) [apache-cassandra-3.0.5.jar:3.0.5] Caused by: org.apache.cassandra.streaming.StreamException: Stream failed at org.apache.cassandra.streaming.management.StreamEventJMXNotifier.onFailure(StreamEventJMXNotifier.java:85) ~[apache-cassandra-3.0.5.jar:3.0.5] at 
com.google.common.util.concurrent.Futures$6.run(Futures.java:1310) ~[guava-18.0.jar:na] at com.google.common.util.concurrent.MoreExecutors$DirectExecutor.execute(MoreExecutors.java:457) ~[guava-18.0.jar:na] at com.google.common.util.concurrent.ExecutionList.executeListener(ExecutionList.java:156) ~[guava-18.0.jar:na] at com.google.common.util.concurrent.ExecutionList.execute(ExecutionList.java:145) ~[guava-18.0.jar:na] at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:202) ~[guava-18.0.jar:na] at org.apache.cassandra.streaming.StreamResultFuture.maybeComplete(StreamResultFuture.java:210) ~[apache-cassandra-3.0.5.jar:3.0.5] at org.apache.cassandra.streaming.StreamResultFuture.handleSessionComplete(StreamResultFuture.java:186) ~[apache-cassandra-3.0.5.jar:3.0.5] at org.apache.cassandra.streaming.StreamSession.closeSession(StreamSession.java:430) ~[apache-cassandra-3.0.5.jar:3.0.5] at org.apache.cassandra.streaming.StreamSession.maybeCompleted(StreamSession.java:707) ~[apache-cassandra-3.0.5.jar:3.0.5] at org.apache.cassandra.streaming.StreamSession.taskCompleted(StreamSession.java:668) ~[apache-cassandra-3.0.5.jar:3.0.5] at org.apache.cassandra.streaming.StreamReceiveTask$OnCompletionRunnable.run(StreamReceiveTask.java:210) ~[apache-cassandra-3.0.5.jar:3.0.5] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_11] at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_11] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_11] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[na:1.8.0_11] at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_11] {noformat} > Error while waiting on bootstrap to complete. Bootstrap will have to be > restarted. 
Stream failed > > > Key: CASSANDRA-11670 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11670 > Project: Cassandra > Issue Type: Bug > Components: Configuration, Streaming and Messaging >Reporter: Anastasia Osintseva > Fix For: 3.0.5 > > > I have in cluster 2 DC, in each DC - 2 Nodes. I wanted to add 1 node to each > DC. One node has been added successfully after I had made scrubing. > Now I'm trying to add node to another DC, but get error: > org.apache.cassandra.streaming.StreamException: Stream failed. > After scrubing and repair I get the same error. > {noformat} > ERROR [StreamReceiveTask:5] 2016-04-27 00:33:21,082 Keyspace.java:492 - > Unknown exception caught while attempting to u
[jira] [Commented] (CASSANDRA-10745) Deprecate PropertyFileSnitch
[ https://issues.apache.org/jira/browse/CASSANDRA-10745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15260368#comment-15260368 ] Brandon Williams commented on CASSANDRA-10745: -- I think if people want to continue using PFS, that's fine. I think the best step we can take here is making GPFS not be PFS-compatible unless a -D flag is passed. This way we're optimized for the new-cluster-with-GPFS case instead of the migration case, since the latter is likely in the minority now. > Deprecate PropertyFileSnitch > > > Key: CASSANDRA-10745 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10745 > Project: Cassandra > Issue Type: Improvement > Components: Coordination, Distributed Metadata >Reporter: Paulo Motta >Priority: Minor > > Opening this ticket to discuss deprecating PropertyFileSnitch, since it's > error-prone and adds more snitch code to maintain (see CASSANDRA-10243). Migration > from an existing cluster with PropertyFileSnitch to GossipingPropertyFileSnitch > is straightforward. > Is there any useful use case that can be achieved only with > PropertyFileSnitch? > If there are no objections, we would add deprecation warnings in 2.2.x, 3.0.x, 3.2 and > deprecate in 3.4 or 3.6. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
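The "-D flag" idea in the comment above would make GPFS consult the legacy PFS topology file only on explicit opt-in. A minimal sketch of that gating; the flag name is an assumption for illustration, not an actual Cassandra option:

```java
public class SnitchCompatibilitySketch {
    // Hypothetical flag name; not an actual Cassandra -D option.
    static final String COMPAT_FLAG = "cassandra.gpfs.pfs_compatibility";

    // Sketch: GossipingPropertyFileSnitch would fall back to the legacy
    // cassandra-topology.properties file only when the operator passes
    // -Dcassandra.gpfs.pfs_compatibility=true; by default it relies solely
    // on cassandra-rackdc.properties plus gossip.
    static boolean usePfsFallback() {
        return Boolean.getBoolean(COMPAT_FLAG);
    }

    public static void main(String[] args) {
        System.out.println(usePfsFallback()); // false by default
        System.setProperty(COMPAT_FLAG, "true");
        System.out.println(usePfsFallback()); // true once opted in
    }
}
```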
[jira] [Commented] (CASSANDRA-3486) Node Tool command to stop repair
[ https://issues.apache.org/jira/browse/CASSANDRA-3486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15260388#comment-15260388 ] Nick Bailey commented on CASSANDRA-3486: bq. Do you think a blocking + timeout approach would be preferable? Maybe. My goal in asking would be to know if the repair needs to be canceled on other nodes or not. Right now you need to either just run the abort on all nodes from the start or run it on the coordinator then check the participants to double check that it succeeded there as well. bq. I personally think we should go this route of making repair more stateful I agree, especially with the upcoming coordinated repairs in C* > Node Tool command to stop repair > > > Key: CASSANDRA-3486 > URL: https://issues.apache.org/jira/browse/CASSANDRA-3486 > Project: Cassandra > Issue Type: Improvement > Components: Tools > Environment: JVM >Reporter: Vijay >Assignee: Paulo Motta >Priority: Minor > Labels: repair > Fix For: 2.1.x > > Attachments: 0001-stop-repair-3583.patch > > > After CASSANDRA-1740, If the validation compaction is stopped then the repair > will hang. This ticket will allow users to kill the original repair. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
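The "blocking + timeout" abort semantics discussed above can be sketched with a plain `Future`: the caller blocks until the abort is acknowledged or the timeout fires, and the return value tells it whether participants still need to be checked individually. This is an illustrative model, not Cassandra's repair code:

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class AbortRepairSketch {
    // Returns true if the abort completed within the timeout; false means the
    // caller should fall back to aborting/checking participant nodes directly.
    static boolean abortWithTimeout(ExecutorService executor, Runnable abortTask,
                                    long timeout, TimeUnit unit) {
        Future<?> f = executor.submit(abortTask);
        try {
            f.get(timeout, unit);   // block until the abort is acknowledged
            return true;
        } catch (TimeoutException | InterruptedException | ExecutionException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        // A trivially fast abort task completes well inside the timeout:
        System.out.println(abortWithTimeout(pool, () -> {}, 5, TimeUnit.SECONDS)); // true
        pool.shutdown();
    }
}
```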
[jira] [Commented] (CASSANDRA-11514) trunk compaction performance regression
[ https://issues.apache.org/jira/browse/CASSANDRA-11514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15260406#comment-15260406 ] Michael Shuler commented on CASSANDRA-11514: I was unable to find a concrete method to bisect this - I attempted a good number of variations to find a way to call a commit "good" or "bad", but was unsuccessful. Those are on a private jira [CSTAR-478|https://datastax.jira.com/browse/CSTAR-478], which I'm going to close, since I'm currently unsure of how to proceed. > trunk compaction performance regression > --- > > Key: CASSANDRA-11514 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11514 > Project: Cassandra > Issue Type: Bug > Components: Compaction > Environment: cstar_perf >Reporter: Michael Shuler > Labels: performance > Fix For: 3.x > > Attachments: trunk-compaction_dtcs-op_rate.png, > trunk-compaction_lcs-op_rate.png > > > It appears that a commit between Mar 29-30 has resulted in a drop in > compaction performance. I attempted to get a log list of commits to post > here, but > {noformat} > git log trunk@{2016-03-29}..trunk@{2016-03-31} > {noformat} > appears to be incomplete, since reading through {{git log}} I see netty and > och were upgraded during this time period. > !trunk-compaction_dtcs-op_rate.png! > !trunk-compaction_lcs-op_rate.png! -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-10091) Integrated JMX authn & authz
[ https://issues.apache.org/jira/browse/CASSANDRA-10091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15260458#comment-15260458 ] Nick Bailey commented on CASSANDRA-10091: - I'm curious how this would behave during a bootstrap operation with auth enabled. Would JMX be unavailable until the relevant auth data had been streamed to the system_auth keyspace? > Integrated JMX authn & authz > > > Key: CASSANDRA-10091 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10091 > Project: Cassandra > Issue Type: New Feature >Reporter: Jan Karlsson >Assignee: Sam Tunnicliffe >Priority: Minor > Fix For: 3.x > > > It would be useful to authenticate with JMX through Cassandra's internal > authentication. This would reduce the overhead of keeping passwords in files > on the machine and would consolidate passwords to one location. It would also > allow the possibility to handle JMX permissions in Cassandra. > It could be done by creating our own JMX server and setting custom classes > for the authenticator and authorizer. We could then add some parameters where > the user could specify what authenticator and authorizer to use in case they > want to make their own. > This could also be done by creating a premain method which creates a JMX > server. This would give us the feature without changing the Cassandra code > itself. However, I believe this would be a good feature to have in Cassandra. > I am currently working on a solution which creates a JMX server and uses a > custom authenticator and authorizer. It is currently built as a premain, > however it would be great if we could put this in Cassandra instead. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
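The custom-authenticator idea in the ticket above hangs off the standard `javax.management.remote.JMXAuthenticator` hook. A minimal sketch, with the credential check hard-coded as a placeholder for delegation to Cassandra's internal authentication (the real integration is an assumption here):

```java
import javax.management.remote.JMXAuthenticator;
import javax.security.auth.Subject;

public class CassandraJmxAuthSketch {
    // A JMXAuthenticator receives the client's credentials (for the RMI
    // connector, a String[2] of {username, password}) and either returns an
    // authenticated Subject or throws SecurityException.
    static JMXAuthenticator authenticator() {
        return credentials -> {
            String[] pair = (String[]) credentials;
            if (pair == null || pair.length != 2)
                throw new SecurityException("username/password required");
            // Placeholder check: a real implementation would delegate to
            // Cassandra's internal IAuthenticator instead.
            if (!"cassandra".equals(pair[0]) || !"cassandra".equals(pair[1]))
                throw new SecurityException("bad credentials");
            return new Subject();
        };
    }

    public static void main(String[] args) {
        // The authenticator would be registered with the connector server via
        // the JMXConnectorServer.AUTHENTICATOR ("jmx.remote.authenticator")
        // environment key when the server is created.
        Subject s = authenticator().authenticate(new String[]{"cassandra", "cassandra"});
        System.out.println(s != null); // true
    }
}
```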
[2/8] cassandra git commit: Fix is_dense recalculation for Thrift-updated tables
Fix is_dense recalculation for Thrift-updated tables patch by Aleksey Yeschenko; reviewed by Sylvain Lebresne for CASSANDRA-11502 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e5c40278 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e5c40278 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e5c40278 Branch: refs/heads/cassandra-3.0 Commit: e5c40278001bf3a9582085a58941e5f4765f118c Parents: 3db30aa Author: Aleksey Yeschenko Authored: Fri Apr 1 17:36:14 2016 +0100 Committer: Aleksey Yeschenko Committed: Wed Apr 27 17:47:29 2016 +0100 -- CHANGES.txt | 3 +- .../cql3/statements/AlterTableStatement.java| 2 +- .../cql3/statements/AlterTypeStatement.java | 2 +- .../cql3/statements/CreateIndexStatement.java | 2 +- .../cql3/statements/CreateTriggerStatement.java | 2 +- .../cql3/statements/DropIndexStatement.java | 2 +- .../cql3/statements/DropTriggerStatement.java | 2 +- .../cassandra/schema/LegacySchemaTables.java| 10 +--- .../cassandra/service/MigrationManager.java | 8 +-- .../cassandra/thrift/CassandraServer.java | 2 +- .../cassandra/thrift/ThriftConversion.java | 24 +++- .../config/LegacySchemaTablesTest.java | 60 +++- .../org/apache/cassandra/schema/DefsTest.java | 14 ++--- .../cassandra/triggers/TriggersSchemaTest.java | 4 +- 14 files changed, 103 insertions(+), 34 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/e5c40278/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index e8a301a..3641816 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,9 +1,10 @@ 2.2.7 + * Fix is_dense recalculation for Thrift-updated tables (CASSANDRA-11502) * Remove unnescessary file existence check during anticompaction (CASSANDRA-11660) * Add missing files to debian packages (CASSANDRA-11642) * Avoid calling Iterables::concat in loops during ModificationStatement::getFunctions (CASSANDRA-11621) * cqlsh: COPY FROM should use regular inserts for single statement batches 
and - report errors correctly if workers processes crash on initialization (CASSANDRA-11474) + report errors correctly if workers processes crash on initialization (CASSANDRA-11474) * Always close cluster with connection in CqlRecordWriter (CASSANDRA-11553) Merged from 2.1: * cqlsh COPY FROM fails for null values with non-prepared statements (CASSANDRA-11631) http://git-wip-us.apache.org/repos/asf/cassandra/blob/e5c40278/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java -- diff --git a/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java b/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java index 63a53fa..f4a7b39 100644 --- a/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java +++ b/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java @@ -284,7 +284,7 @@ public class AlterTableStatement extends SchemaAlteringStatement break; } -MigrationManager.announceColumnFamilyUpdate(cfm, false, isLocalOnly); +MigrationManager.announceColumnFamilyUpdate(cfm, isLocalOnly); return true; } http://git-wip-us.apache.org/repos/asf/cassandra/blob/e5c40278/src/java/org/apache/cassandra/cql3/statements/AlterTypeStatement.java -- diff --git a/src/java/org/apache/cassandra/cql3/statements/AlterTypeStatement.java b/src/java/org/apache/cassandra/cql3/statements/AlterTypeStatement.java index 6459e6b..9203cf9 100644 --- a/src/java/org/apache/cassandra/cql3/statements/AlterTypeStatement.java +++ b/src/java/org/apache/cassandra/cql3/statements/AlterTypeStatement.java @@ -113,7 +113,7 @@ public abstract class AlterTypeStatement extends SchemaAlteringStatement for (ColumnDefinition def : copy.allColumns()) modified |= updateDefinition(copy, def, toUpdate.keyspace, toUpdate.name, updated); if (modified) -MigrationManager.announceColumnFamilyUpdate(copy, false, isLocalOnly); +MigrationManager.announceColumnFamilyUpdate(copy, isLocalOnly); } // Other user types potentially using the updated type 
http://git-wip-us.apache.org/repos/asf/cassandra/blob/e5c40278/src/java/org/apache/cassandra/cql3/statements/CreateIndexStatement.java -- diff --git a/src/java/org/apache/cassandra/cql3/statements/CreateIndexStatement.java b/src/java/org/apache/cassandra/cql3/stateme
[8/8] cassandra git commit: Merge branch 'cassandra-3.0' into trunk
Merge branch 'cassandra-3.0' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5c5cc540 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5c5cc540 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5c5cc540 Branch: refs/heads/trunk Commit: 5c5cc540facef9f8645a179e1467ad7edffbda48 Parents: 2bc5f0c 3079ae6 Author: Aleksey Yeschenko Authored: Wed Apr 27 17:58:12 2016 +0100 Committer: Aleksey Yeschenko Committed: Wed Apr 27 17:58:12 2016 +0100 -- CHANGES.txt | 3 ++- .../cql3/statements/AlterTableStatement.java | 2 +- .../cassandra/cql3/statements/AlterTypeStatement.java | 2 +- .../cql3/statements/CreateIndexStatement.java | 2 +- .../cql3/statements/CreateTriggerStatement.java | 2 +- .../cassandra/cql3/statements/DropIndexStatement.java | 2 +- .../cql3/statements/DropTriggerStatement.java | 2 +- .../org/apache/cassandra/schema/SchemaKeyspace.java | 14 ++ .../apache/cassandra/service/MigrationManager.java| 8 .../org/apache/cassandra/thrift/CassandraServer.java | 2 +- test/unit/org/apache/cassandra/schema/DefsTest.java | 14 +++--- .../apache/cassandra/schema/SchemaKeyspaceTest.java | 2 +- .../apache/cassandra/triggers/TriggersSchemaTest.java | 4 ++-- 13 files changed, 25 insertions(+), 34 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/5c5cc540/CHANGES.txt -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/5c5cc540/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/5c5cc540/src/java/org/apache/cassandra/cql3/statements/AlterTypeStatement.java -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/5c5cc540/src/java/org/apache/cassandra/cql3/statements/CreateIndexStatement.java -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/5c5cc540/src/java/org/apache/cassandra/schema/SchemaKeyspace.java -- 
http://git-wip-us.apache.org/repos/asf/cassandra/blob/5c5cc540/src/java/org/apache/cassandra/service/MigrationManager.java -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/5c5cc540/src/java/org/apache/cassandra/thrift/CassandraServer.java --
[3/8] cassandra git commit: Fix is_dense recalculation for Thrift-updated tables
Fix is_dense recalculation for Thrift-updated tables patch by Aleksey Yeschenko; reviewed by Sylvain Lebresne for CASSANDRA-11502 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e5c40278 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e5c40278 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e5c40278 Branch: refs/heads/trunk Commit: e5c40278001bf3a9582085a58941e5f4765f118c Parents: 3db30aa Author: Aleksey Yeschenko Authored: Fri Apr 1 17:36:14 2016 +0100 Committer: Aleksey Yeschenko Committed: Wed Apr 27 17:47:29 2016 +0100 -- CHANGES.txt | 3 +- .../cql3/statements/AlterTableStatement.java| 2 +- .../cql3/statements/AlterTypeStatement.java | 2 +- .../cql3/statements/CreateIndexStatement.java | 2 +- .../cql3/statements/CreateTriggerStatement.java | 2 +- .../cql3/statements/DropIndexStatement.java | 2 +- .../cql3/statements/DropTriggerStatement.java | 2 +- .../cassandra/schema/LegacySchemaTables.java| 10 +--- .../cassandra/service/MigrationManager.java | 8 +-- .../cassandra/thrift/CassandraServer.java | 2 +- .../cassandra/thrift/ThriftConversion.java | 24 +++- .../config/LegacySchemaTablesTest.java | 60 +++- .../org/apache/cassandra/schema/DefsTest.java | 14 ++--- .../cassandra/triggers/TriggersSchemaTest.java | 4 +- 14 files changed, 103 insertions(+), 34 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/e5c40278/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index e8a301a..3641816 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,9 +1,10 @@ 2.2.7 + * Fix is_dense recalculation for Thrift-updated tables (CASSANDRA-11502) * Remove unnescessary file existence check during anticompaction (CASSANDRA-11660) * Add missing files to debian packages (CASSANDRA-11642) * Avoid calling Iterables::concat in loops during ModificationStatement::getFunctions (CASSANDRA-11621) * cqlsh: COPY FROM should use regular inserts for single statement batches and - 
report errors correctly if workers processes crash on initialization (CASSANDRA-11474) + report errors correctly if workers processes crash on initialization (CASSANDRA-11474) * Always close cluster with connection in CqlRecordWriter (CASSANDRA-11553) Merged from 2.1: * cqlsh COPY FROM fails for null values with non-prepared statements (CASSANDRA-11631) http://git-wip-us.apache.org/repos/asf/cassandra/blob/e5c40278/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java -- diff --git a/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java b/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java index 63a53fa..f4a7b39 100644 --- a/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java +++ b/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java @@ -284,7 +284,7 @@ public class AlterTableStatement extends SchemaAlteringStatement break; } -MigrationManager.announceColumnFamilyUpdate(cfm, false, isLocalOnly); +MigrationManager.announceColumnFamilyUpdate(cfm, isLocalOnly); return true; } http://git-wip-us.apache.org/repos/asf/cassandra/blob/e5c40278/src/java/org/apache/cassandra/cql3/statements/AlterTypeStatement.java -- diff --git a/src/java/org/apache/cassandra/cql3/statements/AlterTypeStatement.java b/src/java/org/apache/cassandra/cql3/statements/AlterTypeStatement.java index 6459e6b..9203cf9 100644 --- a/src/java/org/apache/cassandra/cql3/statements/AlterTypeStatement.java +++ b/src/java/org/apache/cassandra/cql3/statements/AlterTypeStatement.java @@ -113,7 +113,7 @@ public abstract class AlterTypeStatement extends SchemaAlteringStatement for (ColumnDefinition def : copy.allColumns()) modified |= updateDefinition(copy, def, toUpdate.keyspace, toUpdate.name, updated); if (modified) -MigrationManager.announceColumnFamilyUpdate(copy, false, isLocalOnly); +MigrationManager.announceColumnFamilyUpdate(copy, isLocalOnly); } // Other user types potentially using the updated type 
http://git-wip-us.apache.org/repos/asf/cassandra/blob/e5c40278/src/java/org/apache/cassandra/cql3/statements/CreateIndexStatement.java -- diff --git a/src/java/org/apache/cassandra/cql3/statements/CreateIndexStatement.java b/src/java/org/apache/cassandra/cql3/statements/Crea
[7/8] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0
Merge branch 'cassandra-2.2' into cassandra-3.0 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3079ae60 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3079ae60 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3079ae60 Branch: refs/heads/cassandra-3.0 Commit: 3079ae60d29baec262a4b05d7082e88091299d26 Parents: 8bfe09f e5c4027 Author: Aleksey Yeschenko Authored: Wed Apr 27 17:55:27 2016 +0100 Committer: Aleksey Yeschenko Committed: Wed Apr 27 17:57:59 2016 +0100 -- CHANGES.txt | 3 ++- .../cql3/statements/AlterTableStatement.java | 2 +- .../cassandra/cql3/statements/AlterTypeStatement.java | 2 +- .../cql3/statements/CreateIndexStatement.java | 2 +- .../cql3/statements/CreateTriggerStatement.java | 2 +- .../cassandra/cql3/statements/DropIndexStatement.java | 2 +- .../cql3/statements/DropTriggerStatement.java | 2 +- .../org/apache/cassandra/schema/SchemaKeyspace.java | 14 ++ .../apache/cassandra/service/MigrationManager.java| 8 .../org/apache/cassandra/thrift/CassandraServer.java | 2 +- test/unit/org/apache/cassandra/schema/DefsTest.java | 14 +++--- .../apache/cassandra/schema/SchemaKeyspaceTest.java | 2 +- .../apache/cassandra/triggers/TriggersSchemaTest.java | 4 ++-- 13 files changed, 25 insertions(+), 34 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/3079ae60/CHANGES.txt -- diff --cc CHANGES.txt index bc15d32,3641816..6b6bc1f --- a/CHANGES.txt +++ b/CHANGES.txt @@@ -1,27 -1,11 +1,28 @@@ -2.2.7 +3.0.6 + * Fix sstabledump not showing cells after tombstone marker (CASSANDRA-11654) + * Ignore all LocalStrategy keyspaces for streaming and other related + operations (CASSANDRA-11627) + * Ensure columnfilter covers indexed columns for thrift 2i queries (CASSANDRA-11523) + * Only open one sstable scanner per sstable (CASSANDRA-11412) + * Option to specify ProtocolVersion in cassandra-stress (CASSANDRA-11410) + * ArithmeticException in 
avgFunctionForDecimal (CASSANDRA-11485) + * LogAwareFileLister should only use OLD sstable files in current folder to determine disk consistency (CASSANDRA-11470) + * Notify indexers of expired rows during compaction (CASSANDRA-11329) + * Properly respond with ProtocolError when a v1/v2 native protocol + header is received (CASSANDRA-11464) + * Validate that num_tokens and initial_token are consistent with one another (CASSANDRA-10120) +Merged from 2.2: + * Fix is_dense recalculation for Thrift-updated tables (CASSANDRA-11502) * Remove unnescessary file existence check during anticompaction (CASSANDRA-11660) * Add missing files to debian packages (CASSANDRA-11642) * Avoid calling Iterables::concat in loops during ModificationStatement::getFunctions (CASSANDRA-11621) * cqlsh: COPY FROM should use regular inserts for single statement batches and - report errors correctly if workers processes crash on initialization (CASSANDRA-11474) +report errors correctly if workers processes crash on initialization (CASSANDRA-11474) * Always close cluster with connection in CqlRecordWriter (CASSANDRA-11553) + * Allow only DISTINCT queries with partition keys restrictions (CASSANDRA-11339) + * CqlConfigHelper no longer requires both a keystore and truststore to work (CASSANDRA-11532) + * Make deprecated repair methods backward-compatible with previous notification service (CASSANDRA-11430) + * IncomingStreamingConnection version check message wrong (CASSANDRA-11462) Merged from 2.1: * cqlsh COPY FROM fails for null values with non-prepared statements (CASSANDRA-11631) * Make cython optional in pylib/setup.py (CASSANDRA-11630) http://git-wip-us.apache.org/repos/asf/cassandra/blob/3079ae60/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java -- diff --cc src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java index 3515c6b,f4a7b39..381971f --- a/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java +++ 
b/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java @@@ -322,61 -284,8 +322,61 @@@ public class AlterTableStatement extend break; } - MigrationManager.announceColumnFamilyUpdate(cfm, false, isLocalOnly); + MigrationManager.announceColumnFamilyUpdate(cfm, isLocalOnly); -return true; + +if (viewUpdates != null) +{ +for (ViewDefinition viewUpdate : viewUpdates) +MigrationManager.announceViewUpdate(viewUpdate, isLocalOnly); +
[1/8] cassandra git commit: Fix is_dense recalculation for Thrift-updated tables
Repository: cassandra Updated Branches: refs/heads/cassandra-2.2 3db30aab9 -> e5c402780 refs/heads/cassandra-3.0 8bfe09f46 -> 3079ae60d refs/heads/trunk 2bc5f0c61 -> 5c5cc540f Fix is_dense recalculation for Thrift-updated tables patch by Aleksey Yeschenko; reviewed by Sylvain Lebresne for CASSANDRA-11502 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e5c40278 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e5c40278 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e5c40278 Branch: refs/heads/cassandra-2.2 Commit: e5c40278001bf3a9582085a58941e5f4765f118c Parents: 3db30aa Author: Aleksey Yeschenko Authored: Fri Apr 1 17:36:14 2016 +0100 Committer: Aleksey Yeschenko Committed: Wed Apr 27 17:47:29 2016 +0100 -- CHANGES.txt | 3 +- .../cql3/statements/AlterTableStatement.java| 2 +- .../cql3/statements/AlterTypeStatement.java | 2 +- .../cql3/statements/CreateIndexStatement.java | 2 +- .../cql3/statements/CreateTriggerStatement.java | 2 +- .../cql3/statements/DropIndexStatement.java | 2 +- .../cql3/statements/DropTriggerStatement.java | 2 +- .../cassandra/schema/LegacySchemaTables.java| 10 +--- .../cassandra/service/MigrationManager.java | 8 +-- .../cassandra/thrift/CassandraServer.java | 2 +- .../cassandra/thrift/ThriftConversion.java | 24 +++- .../config/LegacySchemaTablesTest.java | 60 +++- .../org/apache/cassandra/schema/DefsTest.java | 14 ++--- .../cassandra/triggers/TriggersSchemaTest.java | 4 +- 14 files changed, 103 insertions(+), 34 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/e5c40278/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index e8a301a..3641816 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,9 +1,10 @@ 2.2.7 + * Fix is_dense recalculation for Thrift-updated tables (CASSANDRA-11502) * Remove unnescessary file existence check during anticompaction (CASSANDRA-11660) * Add missing files to debian packages (CASSANDRA-11642) * 
Avoid calling Iterables::concat in loops during ModificationStatement::getFunctions (CASSANDRA-11621) * cqlsh: COPY FROM should use regular inserts for single statement batches and - report errors correctly if workers processes crash on initialization (CASSANDRA-11474) + report errors correctly if workers processes crash on initialization (CASSANDRA-11474) * Always close cluster with connection in CqlRecordWriter (CASSANDRA-11553) Merged from 2.1: * cqlsh COPY FROM fails for null values with non-prepared statements (CASSANDRA-11631) http://git-wip-us.apache.org/repos/asf/cassandra/blob/e5c40278/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java -- diff --git a/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java b/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java index 63a53fa..f4a7b39 100644 --- a/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java +++ b/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java @@ -284,7 +284,7 @@ public class AlterTableStatement extends SchemaAlteringStatement break; } -MigrationManager.announceColumnFamilyUpdate(cfm, false, isLocalOnly); +MigrationManager.announceColumnFamilyUpdate(cfm, isLocalOnly); return true; } http://git-wip-us.apache.org/repos/asf/cassandra/blob/e5c40278/src/java/org/apache/cassandra/cql3/statements/AlterTypeStatement.java -- diff --git a/src/java/org/apache/cassandra/cql3/statements/AlterTypeStatement.java b/src/java/org/apache/cassandra/cql3/statements/AlterTypeStatement.java index 6459e6b..9203cf9 100644 --- a/src/java/org/apache/cassandra/cql3/statements/AlterTypeStatement.java +++ b/src/java/org/apache/cassandra/cql3/statements/AlterTypeStatement.java @@ -113,7 +113,7 @@ public abstract class AlterTypeStatement extends SchemaAlteringStatement for (ColumnDefinition def : copy.allColumns()) modified |= updateDefinition(copy, def, toUpdate.keyspace, toUpdate.name, updated); if (modified) 
-MigrationManager.announceColumnFamilyUpdate(copy, false, isLocalOnly); +MigrationManager.announceColumnFamilyUpdate(copy, isLocalOnly); } // Other user types potentially using the updated type http://git-wip-us.apache.org/repos/asf/cassandra/blob/e5c40278/src/java/org/apache/cassandra/cql3/statements/CreateIndexStatement.java --
[4/8] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0
http://git-wip-us.apache.org/repos/asf/cassandra/blob/3079ae60/src/java/org/apache/cassandra/schema/SchemaKeyspace.java -- diff --cc src/java/org/apache/cassandra/schema/SchemaKeyspace.java index 6e9d44b,000..e3756ec mode 100644,00..100644 --- a/src/java/org/apache/cassandra/schema/SchemaKeyspace.java +++ b/src/java/org/apache/cassandra/schema/SchemaKeyspace.java @@@ -1,1410 -1,0 +1,1400 @@@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.cassandra.schema; + +import java.nio.ByteBuffer; +import java.nio.charset.CharacterCodingException; +import java.security.MessageDigest; +import java.security.NoSuchAlgorithmException; +import java.util.*; +import java.util.concurrent.TimeUnit; +import java.util.stream.Collectors; + +import com.google.common.collect.ImmutableList; +import com.google.common.collect.MapDifference; +import com.google.common.collect.Maps; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import org.apache.cassandra.config.*; +import org.apache.cassandra.config.ColumnDefinition.ClusteringOrder; +import org.apache.cassandra.cql3.*; +import org.apache.cassandra.cql3.functions.*; +import org.apache.cassandra.cql3.statements.SelectStatement; +import org.apache.cassandra.db.*; +import org.apache.cassandra.db.marshal.*; +import org.apache.cassandra.db.partitions.*; +import org.apache.cassandra.db.rows.*; +import org.apache.cassandra.db.view.View; +import org.apache.cassandra.exceptions.ConfigurationException; +import org.apache.cassandra.exceptions.InvalidRequestException; +import org.apache.cassandra.transport.Server; +import org.apache.cassandra.utils.ByteBufferUtil; +import org.apache.cassandra.utils.FBUtilities; +import org.apache.cassandra.utils.Pair; + +import static java.lang.String.format; + +import static java.util.stream.Collectors.toList; +import static org.apache.cassandra.cql3.QueryProcessor.executeInternal; +import static org.apache.cassandra.cql3.QueryProcessor.executeOnceInternal; +import static org.apache.cassandra.schema.CQLTypeParser.parse; + +/** + * system_schema.* tables and methods for manipulating them. 
+ */ +public final class SchemaKeyspace +{ +private SchemaKeyspace() +{ +} + +private static final Logger logger = LoggerFactory.getLogger(SchemaKeyspace.class); + +private static final boolean FLUSH_SCHEMA_TABLES = Boolean.valueOf(System.getProperty("cassandra.test.flush_local_schema_changes", "true")); + +public static final String NAME = "system_schema"; + +public static final String KEYSPACES = "keyspaces"; +public static final String TABLES = "tables"; +public static final String COLUMNS = "columns"; +public static final String DROPPED_COLUMNS = "dropped_columns"; +public static final String TRIGGERS = "triggers"; +public static final String VIEWS = "views"; +public static final String TYPES = "types"; +public static final String FUNCTIONS = "functions"; +public static final String AGGREGATES = "aggregates"; +public static final String INDEXES = "indexes"; + +public static final List<String> ALL = +ImmutableList.of(KEYSPACES, TABLES, COLUMNS, DROPPED_COLUMNS, TRIGGERS, VIEWS, TYPES, FUNCTIONS, AGGREGATES, INDEXES); + +private static final CFMetaData Keyspaces = +compile(KEYSPACES, +"keyspace definitions", +"CREATE TABLE %s (" ++ "keyspace_name text," ++ "durable_writes boolean," ++ "replication frozen<map<text, text>>," ++ "PRIMARY KEY ((keyspace_name)))"); + +private static final CFMetaData Tables = +compile(TABLES, +"table definitions", +"CREATE TABLE %s (" ++ "keyspace_name text," ++ "table_name text," ++ "bloom_filter_fp_chance double," ++ "caching frozen<map<text, text>>," ++ "comment text," ++ "compaction frozen<map<text, text>>," ++ "compression frozen<map<text, text>>," ++ "crc_check_chance double," +
[5/8] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0
Merge branch 'cassandra-2.2' into cassandra-3.0 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3079ae60 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3079ae60 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3079ae60 Branch: refs/heads/trunk Commit: 3079ae60d29baec262a4b05d7082e88091299d26 Parents: 8bfe09f e5c4027 Author: Aleksey Yeschenko Authored: Wed Apr 27 17:55:27 2016 +0100 Committer: Aleksey Yeschenko Committed: Wed Apr 27 17:57:59 2016 +0100 -- CHANGES.txt | 3 ++- .../cql3/statements/AlterTableStatement.java | 2 +- .../cassandra/cql3/statements/AlterTypeStatement.java | 2 +- .../cql3/statements/CreateIndexStatement.java | 2 +- .../cql3/statements/CreateTriggerStatement.java | 2 +- .../cassandra/cql3/statements/DropIndexStatement.java | 2 +- .../cql3/statements/DropTriggerStatement.java | 2 +- .../org/apache/cassandra/schema/SchemaKeyspace.java | 14 ++ .../apache/cassandra/service/MigrationManager.java| 8 .../org/apache/cassandra/thrift/CassandraServer.java | 2 +- test/unit/org/apache/cassandra/schema/DefsTest.java | 14 +++--- .../apache/cassandra/schema/SchemaKeyspaceTest.java | 2 +- .../apache/cassandra/triggers/TriggersSchemaTest.java | 4 ++-- 13 files changed, 25 insertions(+), 34 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/3079ae60/CHANGES.txt -- diff --cc CHANGES.txt index bc15d32,3641816..6b6bc1f --- a/CHANGES.txt +++ b/CHANGES.txt @@@ -1,27 -1,11 +1,28 @@@ -2.2.7 +3.0.6 + * Fix sstabledump not showing cells after tombstone marker (CASSANDRA-11654) + * Ignore all LocalStrategy keyspaces for streaming and other related + operations (CASSANDRA-11627) + * Ensure columnfilter covers indexed columns for thrift 2i queries (CASSANDRA-11523) + * Only open one sstable scanner per sstable (CASSANDRA-11412) + * Option to specify ProtocolVersion in cassandra-stress (CASSANDRA-11410) + * ArithmeticException in avgFunctionForDecimal 
(CASSANDRA-11485) + * LogAwareFileLister should only use OLD sstable files in current folder to determine disk consistency (CASSANDRA-11470) + * Notify indexers of expired rows during compaction (CASSANDRA-11329) + * Properly respond with ProtocolError when a v1/v2 native protocol + header is received (CASSANDRA-11464) + * Validate that num_tokens and initial_token are consistent with one another (CASSANDRA-10120) +Merged from 2.2: + * Fix is_dense recalculation for Thrift-updated tables (CASSANDRA-11502) * Remove unnescessary file existence check during anticompaction (CASSANDRA-11660) * Add missing files to debian packages (CASSANDRA-11642) * Avoid calling Iterables::concat in loops during ModificationStatement::getFunctions (CASSANDRA-11621) * cqlsh: COPY FROM should use regular inserts for single statement batches and - report errors correctly if workers processes crash on initialization (CASSANDRA-11474) +report errors correctly if workers processes crash on initialization (CASSANDRA-11474) * Always close cluster with connection in CqlRecordWriter (CASSANDRA-11553) + * Allow only DISTINCT queries with partition keys restrictions (CASSANDRA-11339) + * CqlConfigHelper no longer requires both a keystore and truststore to work (CASSANDRA-11532) + * Make deprecated repair methods backward-compatible with previous notification service (CASSANDRA-11430) + * IncomingStreamingConnection version check message wrong (CASSANDRA-11462) Merged from 2.1: * cqlsh COPY FROM fails for null values with non-prepared statements (CASSANDRA-11631) * Make cython optional in pylib/setup.py (CASSANDRA-11630) http://git-wip-us.apache.org/repos/asf/cassandra/blob/3079ae60/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java -- diff --cc src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java index 3515c6b,f4a7b39..381971f --- a/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java +++ 
b/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java @@@ -322,61 -284,8 +322,61 @@@ public class AlterTableStatement extend break; } - MigrationManager.announceColumnFamilyUpdate(cfm, false, isLocalOnly); + MigrationManager.announceColumnFamilyUpdate(cfm, isLocalOnly); -return true; + +if (viewUpdates != null) +{ +for (ViewDefinition viewUpdate : viewUpdates) +MigrationManager.announceViewUpdate(viewUpdate, isLocalOnly); +} +
[6/8] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0
[jira] [Commented] (CASSANDRA-11502) Fix denseness and column metadata updates coming from Thrift
[ https://issues.apache.org/jira/browse/CASSANDRA-11502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15260482#comment-15260482 ] Aleksey Yeschenko commented on CASSANDRA-11502: --- Committed as [e5c40278001bf3a9582085a58941e5f4765f118c|https://github.com/apache/cassandra/commit/e5c40278001bf3a9582085a58941e5f4765f118c] to 2.2 and merged with 3.0 and trunk, thanks. Did some manual testing w/ cqlsh/nodetool to make sure sparse CFs w/ clustering columns don't pass {{isThriftCompatibleTest()}}, and it seems like we are all good. > Fix denseness and column metadata updates coming from Thrift > > > Key: CASSANDRA-11502 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11502 > Project: Cassandra > Issue Type: Bug > Components: Distributed Metadata >Reporter: Aleksey Yeschenko >Assignee: Aleksey Yeschenko >Priority: Minor > Fix For: 2.2.x, 3.0.x, 3.x > > > It was > [decided|https://issues.apache.org/jira/browse/CASSANDRA-7744?focusedCommentId=14095472&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14095472] > that we'd be recalculating {{is_dense}} for table updates coming from Thrift > on every change. However, due to some oversight, {{is_dense}} can only go > from {{false}} to {{true}}. Once dense, even adding a {{REGULAR}} column will > not reset {{is_dense}} back to {{false}}. > The recalculation fails because no matter what happens, we never remove the > auto-generated {{CLUSTERING}} and {{COMPACT_VALUE}} columns of a dense table. > Which ultimately leads to the issue on 2.2 to 3.0 upgrade (see > CASSANDRA-11315). > What we should do is remove the special-case for Thrift in > {{LegacySchemaTables::makeUpdateTableMutation}} and correct the logic in > {{ThriftConversion::internalFromThrift}} to remove those columns when going > from dense to sparse. > This is not enough to fix CASSANDRA-11315, however, as we need to handle > pre-patch upgrades, and upgrades from 2.1. 
Fixing it in 2.2 means a) getting > proper schema from {{DESCRIBE}} now and b) using the more efficient > {{SparseCellNameType}} when you add columns. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-11502) Fix denseness and column metadata updates coming from Thrift
[ https://issues.apache.org/jira/browse/CASSANDRA-11502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-11502: -- Resolution: Fixed Fix Version/s: (was: 3.0.x) (was: 2.2.x) (was: 3.x) 2.2.7 3.0.6 3.6 Reproduced In: 2.2.5, 2.1.13 (was: 2.1.13, 2.2.5) Status: Resolved (was: Patch Available) > Fix denseness and column metadata updates coming from Thrift > > > Key: CASSANDRA-11502 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11502 > Project: Cassandra > Issue Type: Bug > Components: Distributed Metadata >Reporter: Aleksey Yeschenko >Assignee: Aleksey Yeschenko >Priority: Minor > Fix For: 3.6, 3.0.6, 2.2.7 > > > It was > [decided|https://issues.apache.org/jira/browse/CASSANDRA-7744?focusedCommentId=14095472&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14095472] > that we'd be recalculating {{is_dense}} for table updates coming from Thrift > on every change. However, due to some oversight, {{is_dense}} can only go > from {{false}} to {{true}}. Once dense, even adding a {{REGULAR}} column will > not reset {{is_dense}} back to {{false}}. > The recalculation fails because no matter what happens, we never remove the > auto-generated {{CLUSTERING}} and {{COMPACT_VALUE}} columns of a dense table. > Which ultimately leads to the issue on 2.2 to 3.0 upgrade (see > CASSANDRA-11315). > What we should do is remove the special-case for Thrift in > {{LegacySchemaTables::makeUpdateTableMutation}} and correct the logic in > {{ThriftConversion::internalFromThrift}} to remove those columns when going > from dense to sparse. > This is not enough to fix CASSANDRA-11315, however, as we need to handle > pre-patch upgrades, and upgrades from 2.1. Fixing it in 2.2 means a) getting > proper schema from {{DESCRIBE}} now and b) using the more efficient > {{SparseCellNameType}} when you add columns. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-10091) Integrated JMX authn & authz
[ https://issues.apache.org/jira/browse/CASSANDRA-10091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15260507#comment-15260507 ] Sam Tunnicliffe commented on CASSANDRA-10091: - Yes, JMX would be unavailable until the node has joined the ring, because it's only at that point that auth setup happens, which initializes the authenticator, authorizer & role manager. (related: CASSANDRA-11381). > Integrated JMX authn & authz > > > Key: CASSANDRA-10091 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10091 > Project: Cassandra > Issue Type: New Feature >Reporter: Jan Karlsson >Assignee: Sam Tunnicliffe >Priority: Minor > Fix For: 3.x > > > It would be useful to authenticate with JMX through Cassandra's internal > authentication. This would reduce the overhead of keeping passwords in files > on the machine and would consolidate passwords in one location. It would also > allow the possibility of handling JMX permissions in Cassandra. > It could be done by creating our own JMX server and setting custom classes > for the authenticator and authorizer. We could then add some parameters where > the user could specify what authenticator and authorizer to use in case they > want to make their own. > This could also be done by creating a premain method which creates a JMX > server. This would give us the feature without changing the Cassandra code > itself. However, I believe this would be a good feature to have in Cassandra. > I am currently working on a solution which creates a JMX server and uses a > custom authenticator and authorizer. It is currently built as a premain, > however it would be great if we could put this in Cassandra instead. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[2/3] cassandra git commit: Don't require HEAP_NEW_SIZE to be set when using G1
Don't require HEAP_NEW_SIZE to be set when using G1 patch by Blake Eggleston; reviewed by Paulo Motta for CASSANDRA-11600 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7a2be8fa Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7a2be8fa Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7a2be8fa Branch: refs/heads/trunk Commit: 7a2be8fa4a539dde2553996d57df02453e213c2f Parents: 3079ae6 Author: Blake Eggleston Authored: Wed Apr 27 18:25:04 2016 +0100 Committer: Aleksey Yeschenko Committed: Wed Apr 27 18:25:04 2016 +0100 -- CHANGES.txt| 1 + conf/cassandra-env.ps1 | 14 +-- conf/cassandra-env.sh | 58 ++--- 3 files changed, 37 insertions(+), 36 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/7a2be8fa/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 6b6bc1f..8877fa9 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 3.0.6 + * Don't require HEAP_NEW_SIZE to be set when using G1 (CASSANDRA-11600) * Fix sstabledump not showing cells after tombstone marker (CASSANDRA-11654) * Ignore all LocalStrategy keyspaces for streaming and other related operations (CASSANDRA-11627) http://git-wip-us.apache.org/repos/asf/cassandra/blob/7a2be8fa/conf/cassandra-env.ps1 -- diff --git a/conf/cassandra-env.ps1 b/conf/cassandra-env.ps1 index a322a4d..794189f 100644 --- a/conf/cassandra-env.ps1 +++ b/conf/cassandra-env.ps1 @@ -133,7 +133,7 @@ Function CalculateHeapSizes return } -if (($env:MAX_HEAP_SIZE -and !$env:HEAP_NEWSIZE) -or (!$env:MAX_HEAP_SIZE -and $env:HEAP_NEWSIZE)) +if ((($env:MAX_HEAP_SIZE -and !$env:HEAP_NEWSIZE) -or (!$env:MAX_HEAP_SIZE -and $env:HEAP_NEWSIZE)) -and ($using_cms -eq $true)) { echo "Please set or unset MAX_HEAP_SIZE and HEAP_NEWSIZE in pairs. Aborting startup." exit 1 @@ -327,12 +327,6 @@ Function SetCassandraEnvironment # times. If in doubt, and if you do not particularly want to tweak, go # 100 MB per physical CPU core. 
-#$env:MAX_HEAP_SIZE="4096M" -#$env:HEAP_NEWSIZE="800M" -CalculateHeapSizes - -ParseJVMInfo - #GC log path has to be defined here since it needs to find CASSANDRA_HOME $env:JVM_OPTS="$env:JVM_OPTS -Xloggc:""$env:CASSANDRA_HOME/logs/gc.log""" @@ -352,6 +346,12 @@ Function SetCassandraEnvironment $defined_xms = $env:JVM_OPTS -like '*Xms*' $using_cms = $env:JVM_OPTS -like '*UseConcMarkSweepGC*' +#$env:MAX_HEAP_SIZE="4096M" +#$env:HEAP_NEWSIZE="800M" +CalculateHeapSizes + +ParseJVMInfo + # We only set -Xms and -Xmx if they were not defined on jvm.options file # If defined, both Xmx and Xms should be defined together. if (($defined_xmx -eq $false) -and ($defined_xms -eq $false)) http://git-wip-us.apache.org/repos/asf/cassandra/blob/7a2be8fa/conf/cassandra-env.sh -- diff --git a/conf/cassandra-env.sh b/conf/cassandra-env.sh index 83fe4c5..0ba0c4e 100644 --- a/conf/cassandra-env.sh +++ b/conf/cassandra-env.sh @@ -121,6 +121,31 @@ case "$jvm" in ;; esac +#GC log path has to be defined here because it needs to access CASSANDRA_HOME +JVM_OPTS="$JVM_OPTS -Xloggc:${CASSANDRA_HOME}/logs/gc.log" + +# Here we create the arguments that will get passed to the jvm when +# starting cassandra. + +# Read user-defined JVM options from jvm.options file +JVM_OPTS_FILE=$CASSANDRA_CONF/jvm.options +for opt in `grep "^-" $JVM_OPTS_FILE` +do + JVM_OPTS="$JVM_OPTS $opt" +done + +# Check what parameters were defined on jvm.options file to avoid conflicts +echo $JVM_OPTS | grep -q Xmn +DEFINED_XMN=$? +echo $JVM_OPTS | grep -q Xmx +DEFINED_XMX=$? +echo $JVM_OPTS | grep -q Xms +DEFINED_XMS=$? +echo $JVM_OPTS | grep -q UseConcMarkSweepGC +USING_CMS=$? +echo $JVM_OPTS | grep -q UseG1GC +USING_G1=$? + # Override these to set the amount of memory to allocate to the JVM at # start-up. For production use you may wish to adjust this for your # environment. 
MAX_HEAP_SIZE is the total amount of memory dedicated @@ -143,42 +168,17 @@ esac #export MALLOC_ARENA_MAX=4 # only calculate the size if it's not set manually -if [ "x$MAX_HEAP_SIZE" = "x" ] && [ "x$HEAP_NEWSIZE" = "x" ]; then +if [ "x$MAX_HEAP_SIZE" = "x" ] && [ "x$HEAP_NEWSIZE" = "x" -o $USING_G1 -eq 0 ]; then calculate_heap_sizes -else -if [ "x$MAX_HEAP_SIZE" = "x" ] || [ "x$HEAP_NEWSIZE" = "x" ]; then -echo "please set or unset MAX_HEAP_SIZE and HEAP_NEWSIZE in pairs (see cassandra-env.sh)" -exit 1 -fi +elif [ "x$MAX_HEAP_SIZE" = "x" ]
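The cassandra-env.sh diff above detects which options the user already set in jvm.options by testing grep's exit status (0 when the pattern matches), and only then decides whether heap sizes must be calculated. A minimal standalone sketch of that pattern follows; the temp file and its sample `-Xmx`/`-XX:+UseG1GC` lines are hypothetical stand-ins for a real jvm.options file, not part of the commit:

```shell
#!/bin/sh
# Build JVM_OPTS from a jvm.options-style file, then record which
# memory/GC flags are present. grep -q prints nothing and exits 0 on
# a match, so $? becomes a cheap boolean (0 = flag present).

JVM_OPTS_FILE=$(mktemp)
printf -- '-Xmx4G\n-XX:+UseG1GC\n' > "$JVM_OPTS_FILE"   # hypothetical user options

JVM_OPTS=""
for opt in $(grep "^-" "$JVM_OPTS_FILE"); do
    JVM_OPTS="$JVM_OPTS $opt"
done

echo "$JVM_OPTS" | grep -q Xmx;               DEFINED_XMX=$?
echo "$JVM_OPTS" | grep -q Xmn;               DEFINED_XMN=$?
echo "$JVM_OPTS" | grep -q UseG1GC;           USING_G1=$?
echo "$JVM_OPTS" | grep -q UseConcMarkSweepGC; USING_CMS=$?

# Mirror the commit's rule: auto-calculate when nothing is set manually,
# or whenever G1 is in use (HEAP_NEWSIZE is ignored under G1 anyway).
if [ "x$MAX_HEAP_SIZE" = "x" ] && [ "x$HEAP_NEWSIZE" = "x" -o $USING_G1 -eq 0 ]; then
    CALCULATE_HEAP=yes   # with MAX_HEAP_SIZE unset, this branch is taken
else
    CALCULATE_HEAP=no
fi

rm -f "$JVM_OPTS_FILE"
```

The exit-status convention is why the later condition reads `$USING_G1 -eq 0`: zero means G1 *was* found, which is inverted from the usual shell-boolean intuition.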
[3/3] cassandra git commit: Merge branch 'cassandra-3.0' into trunk
Merge branch 'cassandra-3.0' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4254de17 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4254de17 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4254de17 Branch: refs/heads/trunk Commit: 4254de17f4416fbd032068f2223ba32c5e8d097b Parents: 5c5cc54 7a2be8f Author: Aleksey Yeschenko Authored: Wed Apr 27 18:26:22 2016 +0100 Committer: Aleksey Yeschenko Committed: Wed Apr 27 18:26:22 2016 +0100 -- CHANGES.txt| 1 + conf/cassandra-env.ps1 | 14 +-- conf/cassandra-env.sh | 58 ++--- 3 files changed, 37 insertions(+), 36 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/4254de17/CHANGES.txt -- diff --cc CHANGES.txt index 50ec72b,8877fa9..6466310 --- a/CHANGES.txt +++ b/CHANGES.txt @@@ -1,64 -1,5 +1,65 @@@ -3.0.6 +3.6 + * Always perform collision check before joining ring (CASSANDRA-10134) + * SSTableWriter output discrepancy (CASSANDRA-11646) + * Fix potential timeout in NativeTransportService.testConcurrentDestroys (CASSANDRA-10756) + * Support large partitions on the 3.0 sstable format (CASSANDRA-11206) + * JSON datetime formatting needs timezone (CASSANDRA-11137) + * Add support to rebuild from specific range (CASSANDRA-10406) + * Optimize the overlapping lookup by calculating all the + bounds in advance (CASSANDRA-11571) + * Support json/yaml output in noetool tablestats (CASSANDRA-5977) + * (stress) Add datacenter option to -node options (CASSANDRA-11591) + * Fix handling of empty slices (CASSANDRA-11513) + * Make number of cores used by cqlsh COPY visible to testing code (CASSANDRA-11437) + * Allow filtering on clustering columns for queries without secondary indexes (CASSANDRA-11310) + * Refactor Restriction hierarchy (CASSANDRA-11354) + * Eliminate allocations in R/W path (CASSANDRA-11421) + * Update Netty to 4.0.36 (CASSANDRA-11567) + * Fix PER PARTITION LIMIT for queries requiring post-query 
ordering (CASSANDRA-11556) + * Allow instantiation of UDTs and tuples in UDFs (CASSANDRA-10818) + * Support UDT in CQLSSTableWriter (CASSANDRA-10624) + * Support for non-frozen user-defined types, updating + individual fields of user-defined types (CASSANDRA-7423) + * Make LZ4 compression level configurable (CASSANDRA-11051) + * Allow per-partition LIMIT clause in CQL (CASSANDRA-7017) + * Make custom filtering more extensible with UserExpression (CASSANDRA-11295) + * Improve field-checking and error reporting in cassandra.yaml (CASSANDRA-10649) + * Print CAS stats in nodetool proxyhistograms (CASSANDRA-11507) + * More user friendly error when providing an invalid token to nodetool (CASSANDRA-9348) + * Add static column support to SASI index (CASSANDRA-11183) + * Support EQ/PREFIX queries in SASI CONTAINS mode without tokenization (CASSANDRA-11434) + * Support LIKE operator in prepared statements (CASSANDRA-11456) + * Add a command to see if a Materialized View has finished building (CASSANDRA-9967) + * Log endpoint and port associated with streaming operation (CASSANDRA-8777) + * Print sensible units for all log messages (CASSANDRA-9692) + * Upgrade Netty to version 4.0.34 (CASSANDRA-11096) + * Break the CQL grammar into separate Parser and Lexer (CASSANDRA-11372) + * Compress only inter-dc traffic by default (CASSANDRA-) + * Add metrics to track write amplification (CASSANDRA-11420) + * cassandra-stress: cannot handle "value-less" tables (CASSANDRA-7739) + * Add/drop multiple columns in one ALTER TABLE statement (CASSANDRA-10411) + * Add require_endpoint_verification opt for internode encryption (CASSANDRA-9220) + * Add auto import java.util for UDF code block (CASSANDRA-11392) + * Add --hex-format option to nodetool getsstables (CASSANDRA-11337) + * sstablemetadata should print sstable min/max token (CASSANDRA-7159) + * Do not wrap CassandraException in TriggerExecutor (CASSANDRA-9421) + * COPY TO should have higher double precision (CASSANDRA-11255) + * Stress 
should exit with non-zero status after failure (CASSANDRA-10340) + * Add client to cqlsh SHOW_SESSION (CASSANDRA-8958) + * Fix nodetool tablestats keyspace level metrics (CASSANDRA-11226) + * Store repair options in parent_repair_history (CASSANDRA-11244) + * Print current leveling in sstableofflinerelevel (CASSANDRA-9588) + * Change repair message for keyspaces with RF 1 (CASSANDRA-11203) + * Remove hard-coded SSL cipher suites and protocols (CASSANDRA-10508) + * Improve concurrency in CompactionStrategyManager (CASSANDRA-10099) + * (cqlsh) interpret CQL type for formatting blobs (CASSANDRA-11274) + * Refuse to start and print txn log infor
[1/3] cassandra git commit: Don't require HEAP_NEW_SIZE to be set when using G1
Repository: cassandra Updated Branches: refs/heads/cassandra-3.0 3079ae60d -> 7a2be8fa4 refs/heads/trunk 5c5cc540f -> 4254de17f Don't require HEAP_NEW_SIZE to be set when using G1 patch by Blake Eggleston; reviewed by Paulo Motta for CASSANDRA-11600 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7a2be8fa Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7a2be8fa Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7a2be8fa Branch: refs/heads/cassandra-3.0 Commit: 7a2be8fa4a539dde2553996d57df02453e213c2f Parents: 3079ae6 Author: Blake Eggleston Authored: Wed Apr 27 18:25:04 2016 +0100 Committer: Aleksey Yeschenko Committed: Wed Apr 27 18:25:04 2016 +0100
[jira] [Updated] (CASSANDRA-11600) Don't require HEAP_NEW_SIZE to be set when using G1
[ https://issues.apache.org/jira/browse/CASSANDRA-11600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-11600: -- Resolution: Fixed Fix Version/s: (was: 3.0.x) 3.0.6 Status: Resolved (was: Ready to Commit) > Don't require HEAP_NEW_SIZE to be set when using G1 > --- > > Key: CASSANDRA-11600 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11600 > Project: Cassandra > Issue Type: Bug >Reporter: Blake Eggleston >Assignee: Blake Eggleston >Priority: Minor > Fix For: 3.6, 3.0.6 > > > Although cassandra-env.sh doesn't set -Xmn (unless set in jvm.options) when > using G1GC, it still requires that you set HEAP_NEW_SIZE and MAX_HEAP_SIZE > together, and won't start until you do. Since we ignore that setting if > you're using G1, we shouldn't require that the user set it. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11600) Don't require HEAP_NEW_SIZE to be set when using G1
[ https://issues.apache.org/jira/browse/CASSANDRA-11600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15260535#comment-15260535 ] Aleksey Yeschenko commented on CASSANDRA-11600: --- Committed as [7a2be8fa4a539dde2553996d57df02453e213c2f|https://github.com/apache/cassandra/commit/7a2be8fa4a539dde2553996d57df02453e213c2f] to 3.0 and merged with trunk, thanks. > Don't require HEAP_NEW_SIZE to be set when using G1 > --- > > Key: CASSANDRA-11600 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11600 > Project: Cassandra > Issue Type: Bug >Reporter: Blake Eggleston >Assignee: Blake Eggleston >Priority: Minor > Fix For: 3.6, 3.0.6 > > > Although cassandra-env.sh doesn't set -Xmn (unless set in jvm.options) when > using G1GC, it still requires that you set HEAP_NEW_SIZE and MAX_HEAP_SIZE > together, and won't start until you do. Since we ignore that setting if > you're using G1, we shouldn't require that the user set it. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-11673) (2.1) dtest failure in bootstrap_test.TestBootstrap.test_cleanup
Russ Hatch created CASSANDRA-11673: -- Summary: (2.1) dtest failure in bootstrap_test.TestBootstrap.test_cleanup Key: CASSANDRA-11673 URL: https://issues.apache.org/jira/browse/CASSANDRA-11673 Project: Cassandra Issue Type: Test Reporter: Russ Hatch Assignee: DS Test Eng This test was originally waiting on CASSANDRA-11179, which I recently removed the 'require' annotation from (since 11179 is committed). Not sure why it is failing on 2.1 now; perhaps the fix didn't get committed there. http://cassci.datastax.com/job/cassandra-2.1_offheap_dtest/339/testReport/bootstrap_test/TestBootstrap/test_cleanup Failed on CassCI build cassandra-2.1_offheap_dtest #339 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (CASSANDRA-11647) Don't use static dataDirectories field in Directories instances
[ https://issues.apache.org/jira/browse/CASSANDRA-11647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15256972#comment-15256972 ] Blake Eggleston edited comment on CASSANDRA-11647 at 4/27/16 6:11 PM: -- | *trunk* | | [branch|https://github.com/bdeggleston/cassandra/tree/11647] | | [dtests|http://cassci.datastax.com/view/Dev/view/bdeggleston/job/bdeggleston-11647-dtest/4/] | | [testall|http://cassci.datastax.com/view/Dev/view/bdeggleston/job/bdeggleston-11647-testall/3/] | was (Author: bdeggleston): | *trunk* | | [branch|https://github.com/bdeggleston/cassandra/tree/11647] | | [dtests|http://cassci.datastax.com/view/Dev/view/bdeggleston/job/bdeggleston-11647-dtest/1/] | | [testall|http://cassci.datastax.com/view/Dev/view/bdeggleston/job/bdeggleston-11647-testall/1/] | > Don't use static dataDirectories field in Directories instances > --- > > Key: CASSANDRA-11647 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11647 > Project: Cassandra > Issue Type: Improvement >Reporter: Blake Eggleston >Assignee: Blake Eggleston > Fix For: 3.6 > > > Some of the changes to Directories by CASSANDRA-6696 use the static > {{dataDirectories}} field, instead of the instance field {{paths}}. This > complicates things for external code creating their own Directories instances. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11363) Blocked NTR When Connecting Causing Excessive Load
[ https://issues.apache.org/jira/browse/CASSANDRA-11363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15260641#comment-15260641 ] Paulo Motta commented on CASSANDRA-11363: - I wasn't able to reproduce this condition so far in a 2.1 [cstar_perf|http://cstar.datastax.com/] cluster with the following spec: 1 stress node, 3 Cassandra nodes; each node 2x Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz (12 cores total), 64G, 3 Samsung SSD 845DC EVO 240GB, mdadm RAID 0. The test consisted of the following sequence of stress steps, followed by {{nodetool tpstats}}:
* {{user profile=https://raw.githubusercontent.com/mesosphere/cassandra-mesos/master/driver-extensions/cluster-loadtest/cqlstress-example.yaml ops\(insert=1\) n=1M -rate threads=300}}
* {{user profile=https://raw.githubusercontent.com/mesosphere/cassandra-mesos/master/driver-extensions/cluster-loadtest/cqlstress-example.yaml ops\(simple1=1\) n=1M -rate threads=300}}
* {{user profile=https://raw.githubusercontent.com/mesosphere/cassandra-mesos/master/driver-extensions/cluster-loadtest/cqlstress-example.yaml ops\(range1=1\) n=1M -rate threads=300}}
At the end of 5 runs, the total number of blocked NTR threads was negligible (0 for all runs, except one with 0.004% blocked). I will try running a larger mixed workload, ramping up the number of stress threads, and also try it on 3.0. Meanwhile, JFR files, reproduction steps, or at least a more detailed description of the environment/workload needed to reproduce this would be greatly appreciated. 
> Blocked NTR When Connecting Causing Excessive Load > -- > > Key: CASSANDRA-11363 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11363 > Project: Cassandra > Issue Type: Bug > Components: Coordination >Reporter: Russell Bradberry >Assignee: Paulo Motta >Priority: Critical > Attachments: cassandra-102-cms.stack, cassandra-102-g1gc.stack > > > When upgrading from 2.1.9 to 2.1.13, we are witnessing an issue where the > machine load increases to very high levels (> 120 on an 8 core machine) and > native transport requests get blocked in tpstats. > I was able to reproduce this in both CMS and G1GC as well as on JVM 7 and 8. > The issue does not seem to affect the nodes running 2.1.9. > The issue seems to coincide with the number of connections OR the number of > total requests being processed at a given time (as the latter increases with > the former in our system) > Currently there is between 600 and 800 client connections on each machine and > each machine is handling roughly 2000-3000 client requests per second. > Disabling the binary protocol fixes the issue for this node but isn't a > viable option cluster-wide. > Here is the output from tpstats: > {code} > Pool NameActive Pending Completed Blocked All > time blocked > MutationStage 0 88387821 0 > 0 > ReadStage 0 0 355860 0 > 0 > RequestResponseStage 0 72532457 0 > 0 > ReadRepairStage 0 0150 0 > 0 > CounterMutationStage 32 104 897560 0 > 0 > MiscStage 0 0 0 0 > 0 > HintedHandoff 0 0 65 0 > 0 > GossipStage 0 0 2338 0 > 0 > CacheCleanupExecutor 0 0 0 0 > 0 > InternalResponseStage 0 0 0 0 > 0 > CommitLogArchiver 0 0 0 0 > 0 > CompactionExecutor2 190474 0 > 0 > ValidationExecutor0 0 0 0 > 0 > MigrationStage0 0 10 0 > 0 > AntiEntropyStage 0 0 0 0 > 0 > PendingRangeCalculator0 0310 0 > 0 > Sampler 0 0 0 0 > 0 > MemtableFlushWriter 110 94 0 > 0 > MemtablePostFlush 134257 0 > 0 > MemtableReclaimMemory 0 0 94 0 > 0 > Native-Transport-Requests 128 156 38795716
[jira] [Updated] (CASSANDRA-11647) Don't use static dataDirectories field in Directories instances
[ https://issues.apache.org/jira/browse/CASSANDRA-11647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Blake Eggleston updated CASSANDRA-11647: Status: Patch Available (was: Open) There were some new failures in testall/dtest that don't appear to be related to the patch, and that I wasn't able to reproduce locally (or on cassci for that matter) > Don't use static dataDirectories field in Directories instances > --- > > Key: CASSANDRA-11647 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11647 > Project: Cassandra > Issue Type: Improvement >Reporter: Blake Eggleston >Assignee: Blake Eggleston > Fix For: 3.6 > > > Some of the changes to Directories by CASSANDRA-6696 use the static > {{dataDirectories}} field, instead of the instance field {{paths}}. This > complicates things for external code creating their own Directories instances. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-11363) Blocked NTR When Connecting Causing Excessive Load
[ https://issues.apache.org/jira/browse/CASSANDRA-11363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-11363: -- Reproduced In: 3.0.3, 2.1.13, 2.1.12 (was: 2.1.12, 2.1.13, 3.0.3) Status: Awaiting Feedback (was: Open) > Blocked NTR When Connecting Causing Excessive Load > -- > > Key: CASSANDRA-11363 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11363 > Project: Cassandra > Issue Type: Bug > Components: Coordination >Reporter: Russell Bradberry >Assignee: Paulo Motta >Priority: Critical > Attachments: cassandra-102-cms.stack, cassandra-102-g1gc.stack > > > When upgrading from 2.1.9 to 2.1.13, we are witnessing an issue where the > machine load increases to very high levels (> 120 on an 8 core machine) and > native transport requests get blocked in tpstats. > I was able to reproduce this in both CMS and G1GC as well as on JVM 7 and 8. > The issue does not seem to affect the nodes running 2.1.9. > The issue seems to coincide with the number of connections OR the number of > total requests being processed at a given time (as the latter increases with > the former in our system) > Currently there is between 600 and 800 client connections on each machine and > each machine is handling roughly 2000-3000 client requests per second. > Disabling the binary protocol fixes the issue for this node but isn't a > viable option cluster-wide. 
> Here is the output from tpstats: > {code} > Pool NameActive Pending Completed Blocked All > time blocked > MutationStage 0 88387821 0 > 0 > ReadStage 0 0 355860 0 > 0 > RequestResponseStage 0 72532457 0 > 0 > ReadRepairStage 0 0150 0 > 0 > CounterMutationStage 32 104 897560 0 > 0 > MiscStage 0 0 0 0 > 0 > HintedHandoff 0 0 65 0 > 0 > GossipStage 0 0 2338 0 > 0 > CacheCleanupExecutor 0 0 0 0 > 0 > InternalResponseStage 0 0 0 0 > 0 > CommitLogArchiver 0 0 0 0 > 0 > CompactionExecutor2 190474 0 > 0 > ValidationExecutor0 0 0 0 > 0 > MigrationStage0 0 10 0 > 0 > AntiEntropyStage 0 0 0 0 > 0 > PendingRangeCalculator0 0310 0 > 0 > Sampler 0 0 0 0 > 0 > MemtableFlushWriter 110 94 0 > 0 > MemtablePostFlush 134257 0 > 0 > MemtableReclaimMemory 0 0 94 0 > 0 > Native-Transport-Requests 128 156 38795716 > 278451 > Message type Dropped > READ 0 > RANGE_SLICE 0 > _TRACE 0 > MUTATION 0 > COUNTER_MUTATION 0 > BINARY 0 > REQUEST_RESPONSE 0 > PAGED_RANGE 0 > READ_REPAIR 0 > {code} > Attached is the jstack output for both CMS and G1GC. > Flight recordings are here: > https://s3.amazonaws.com/simple-logs/cassandra-102-cms.jfr > https://s3.amazonaws.com/simple-logs/cassandra-102-g1gc.jfr > It is interesting to note that while the flight recording was taking place, > the load on the machine went back to healthy, and when the flight recording > finished the load went back to > 100. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
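For reference, the tpstats "All time blocked" counter quoted above can be turned into a blocked-request ratio, which is how the negligible-vs-significant comparison in the comments is made. The helper below is a hypothetical illustration (it is not part of Cassandra or nodetool), applied to the Native-Transport-Requests figures from the reporter's output:

```python
# Hypothetical helper: express "All time blocked" as a percentage of all
# native-transport requests handled (completed + blocked).
def blocked_ratio(all_time_blocked, completed):
    """Return blocked requests as a percentage of all handled requests."""
    total = completed + all_time_blocked
    if total == 0:
        return 0.0
    return 100.0 * all_time_blocked / total

# Figures from the tpstats output in this ticket:
# Native-Transport-Requests  completed=38795716  all-time-blocked=278451
print("%.2f%% blocked" % blocked_ratio(278451, 38795716))  # roughly 0.71%
```

By this measure the reporter's cluster blocked roughly 0.7% of native-transport requests, versus the 0.004% worst case seen in the cstar_perf reproduction attempt.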
[jira] [Created] (CASSANDRA-11674) dtest failure in materialized_views_test.TestMaterializedViews.clustering_column_test
Russ Hatch created CASSANDRA-11674: -- Summary: dtest failure in materialized_views_test.TestMaterializedViews.clustering_column_test Key: CASSANDRA-11674 URL: https://issues.apache.org/jira/browse/CASSANDRA-11674 Project: Cassandra Issue Type: Test Reporter: Russ Hatch Assignee: DS Test Eng Priority: Minor single failure, but might be worth looking into to see if it repros at all. http://cassci.datastax.com/job/cassandra-3.0_dtest/669/testReport/materialized_views_test/TestMaterializedViews/clustering_column_test Failed on CassCI build cassandra-3.0_dtest #669 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-11665) dtest failure in topology_test.TestTopology.decommissioned_node_cant_rejoin_test
[ https://issues.apache.org/jira/browse/CASSANDRA-11665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Philip Thompson updated CASSANDRA-11665: Status: Patch Available (was: Open) https://github.com/riptano/cassandra-dtest/pull/958 > dtest failure in > topology_test.TestTopology.decommissioned_node_cant_rejoin_test > > > Key: CASSANDRA-11665 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11665 > Project: Cassandra > Issue Type: Test >Reporter: Russ Hatch >Assignee: Philip Thompson > Labels: dtest > > intermittent failure, example failure: > failed on trunk no-vnodes job > "True is not false" > http://cassci.datastax.com/job/trunk_novnode_dtest/351/testReport/topology_test/TestTopology/decommissioned_node_cant_rejoin_test > Failed on CassCI build trunk_novnode_dtest #351 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (CASSANDRA-9007) Run stress nightly against trunk in a way that validates
[ https://issues.apache.org/jira/browse/CASSANDRA-9007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Philip Thompson resolved CASSANDRA-9007. Resolution: Fixed A series of open source Jepsen tests written by Joel Knighton fulfill what we wanted here. https://github.com/riptano/jepsen > Run stress nightly against trunk in a way that validates > > > Key: CASSANDRA-9007 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9007 > Project: Cassandra > Issue Type: Task >Reporter: Ariel Weisberg >Assignee: Philip Thompson > Labels: monthly-release > > Stress has some very basic validation functionality when used without > workload profiles. It found a bug on trunk when I first ran it so it has > value even though the validation is basic. > As a beachhead for the kind of blackbox validation that we are missing we can > start by running stress nightly or 24/7 in some rotation. > There should be two jobs. One job has inverted success criteria (C* should > lose some data) and the job should only "pass" if the failure is detected. > This is just to prove that the harness reports failure if failure occurs. > Another would be the real job that runs stress, parses and parses the output > for reports of missing data. > This job is the first pass and basis of what we can point to when a developer > makes a change, implements a feature, or fixes a bug, and say "go add > validation to this job." > Follow on tickets to link to this > * Test multiple configurations > * Get stress to validate more query functionality and APIs (counters, LWT, > batches) > * Parse logs and fail tests on error level logs (great way to improve log > messages over time) > * ? > I am going to hold off on creating a ton of issues until we have a basic > version of the job running. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (CASSANDRA-9049) Run validation harness against a real cluster
[ https://issues.apache.org/jira/browse/CASSANDRA-9049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Philip Thompson resolved CASSANDRA-9049. Resolution: Fixed A series of open source Jepsen tests written by Joel Knighton fulfill what we wanted here. https://github.com/riptano/jepsen > Run validation harness against a real cluster > - > > Key: CASSANDRA-9049 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9049 > Project: Cassandra > Issue Type: Sub-task >Reporter: Philip Thompson >Assignee: Philip Thompson > > Currently we run against CCM nodes. We will get more useful data and feedback > if we run against real C* clusters, whether on dedicated hardware or > provisioned on a cloud. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (CASSANDRA-8187) Create long-running Test Suite
[ https://issues.apache.org/jira/browse/CASSANDRA-8187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Philip Thompson resolved CASSANDRA-8187. Resolution: Fixed A series of open source Jepsen tests written by Joel Knighton fulfill what we wanted here. https://github.com/riptano/jepsen > Create long-running Test Suite > -- > > Key: CASSANDRA-8187 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8187 > Project: Cassandra > Issue Type: Test > Components: Testing >Reporter: Philip Thompson >Assignee: Philip Thompson > > We need to start running tests that run for at least several hours. Our > current dtest suite is inadequate at catching data loss bugs and compaction > problems. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-11665) dtest failure in topology_test.TestTopology.decommissioned_node_cant_rejoin_test
[ https://issues.apache.org/jira/browse/CASSANDRA-11665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Philip Thompson updated CASSANDRA-11665: Resolution: Fixed Status: Resolved (was: Patch Available) > dtest failure in > topology_test.TestTopology.decommissioned_node_cant_rejoin_test > > > Key: CASSANDRA-11665 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11665 > Project: Cassandra > Issue Type: Test >Reporter: Russ Hatch >Assignee: Philip Thompson > Labels: dtest > > intermittent failure, example failure: > failed on trunk no-vnodes job > "True is not false" > http://cassci.datastax.com/job/trunk_novnode_dtest/351/testReport/topology_test/TestTopology/decommissioned_node_cant_rejoin_test > Failed on CassCI build trunk_novnode_dtest #351 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-11539) dtest failure in topology_test.TestTopology.movement_test
[ https://issues.apache.org/jira/browse/CASSANDRA-11539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Philip Thompson updated CASSANDRA-11539: Resolution: Fixed Status: Resolved (was: Patch Available) > dtest failure in topology_test.TestTopology.movement_test > - > > Key: CASSANDRA-11539 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11539 > Project: Cassandra > Issue Type: Test > Components: Testing >Reporter: Michael Shuler >Assignee: Russ Hatch > Labels: dtest > Fix For: 3.x > > > example failure: > {noformat} > Error Message > values not within 16.00% of the max: (335.88, 404.31) () > >> begin captured logging << > dtest: DEBUG: cluster ccm directory: /mnt/tmp/dtest-XGOyDd > dtest: DEBUG: Custom init_config not found. Setting defaults. > dtest: DEBUG: Done setting configuration options: > { 'num_tokens': None, > 'phi_convict_threshold': 5, > 'range_request_timeout_in_ms': 1, > 'read_request_timeout_in_ms': 1, > 'request_timeout_in_ms': 1, > 'truncate_request_timeout_in_ms': 1, > 'write_request_timeout_in_ms': 1} > - >> end captured logging << - > Stacktrace > File "/usr/lib/python2.7/unittest/case.py", line 329, in run > testMethod() > File "/home/automaton/cassandra-dtest/topology_test.py", line 93, in > movement_test > assert_almost_equal(sizes[1], sizes[2]) > File "/home/automaton/cassandra-dtest/assertions.py", line 75, in > assert_almost_equal > assert vmin > vmax * (1.0 - error) or vmin == vmax, "values not within > %.2f%% of the max: %s (%s)" % (error * 100, args, error_message) > "values not within 16.00% of the max: (335.88, 404.31) > ()\n >> begin captured logging << > \ndtest: DEBUG: cluster ccm directory: > /mnt/tmp/dtest-XGOyDd\ndtest: DEBUG: Custom init_config not found. 
Setting > defaults.\ndtest: DEBUG: Done setting configuration options:\n{ > 'num_tokens': None,\n'phi_convict_threshold': 5,\n > 'range_request_timeout_in_ms': 1,\n'read_request_timeout_in_ms': > 1,\n'request_timeout_in_ms': 1,\n > 'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': > 1}\n- >> end captured logging << > -" > {noformat} > http://cassci.datastax.com/job/cassandra-3.5_novnode_dtest/22/testReport/topology_test/TestTopology/movement_test > > I dug through this test's history on the trunk, 3.5, 3.0, and 2.2 branches. > It appears this test is stable and passing on 3.0 & 2.2 (which could be just > luck). On trunk & 3.5, however, this test has flapped a small number of times. > The test's threshold is 16% and I found test failures in the 3.5 branch of > 16.2%, 16.9%, and 18.3%. In trunk I found 17.4% and 23.5% diff failures. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
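The quoted stacktrace shows exactly how the dtest tolerance check decides failure. Below is a minimal Python sketch of that check, simplified to two values (the real `assert_almost_equal` in assertions.py takes varargs and a custom error message), applied to the failing pair from this report:

```python
# Simplified re-implementation of the tolerance check quoted in the
# stacktrace: the smaller value must exceed (1 - error) of the larger.
def within_tolerance(a, b, error=0.16):
    """True when the smaller of a, b is within `error` (fraction) of the larger."""
    vmin, vmax = min(a, b), max(a, b)
    return vmin > vmax * (1.0 - error) or vmin == vmax

# The failing pair from the report, a roughly 16.9% gap against the
# default 16% threshold:
print(within_tolerance(335.88, 404.31))  # → False
```

With 335.88 vs 404.31 the smaller value is about 83.1% of the larger, just under the 84% cutoff, which matches the 16.9% flapping failure noted in the ticket history.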
[jira] [Resolved] (CASSANDRA-11666) dtest failure in topology_test.TestTopology.movement_test
[ https://issues.apache.org/jira/browse/CASSANDRA-11666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Philip Thompson resolved CASSANDRA-11666. - Resolution: Duplicate > dtest failure in topology_test.TestTopology.movement_test > - > > Key: CASSANDRA-11666 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11666 > Project: Cassandra > Issue Type: Test >Reporter: Russ Hatch >Assignee: DS Test Eng > Labels: dtest > > example failure: > http://cassci.datastax.com/job/trunk_novnode_dtest/353/testReport/topology_test/TestTopology/movement_test > Failed on CassCI build trunk_novnode_dtest #353 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11675) multiple dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest
[ https://issues.apache.org/jira/browse/CASSANDRA-11675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15260747#comment-15260747 ] Russ Hatch commented on CASSANDRA-11675: /cc [~Stefania] > multiple dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest > > > Key: CASSANDRA-11675 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11675 > Project: Cassandra > Issue Type: Test >Reporter: Russ Hatch >Assignee: DS Test Eng > Labels: dtest > > these appear to be related, all failed on the same build (but appear to be > passing now). > http://cassci.datastax.com/job/trunk_dtest/1165/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_copy_from_with_brackets_in_UDT/ > http://cassci.datastax.com/job/trunk_dtest/1165/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_undefined_as_null_indicator/ > http://cassci.datastax.com/job/trunk_dtest/1165/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_round_trip_with_sub_second_precision/ > http://cassci.datastax.com/job/trunk_dtest/1165/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_null_as_null_indicator/ > http://cassci.datastax.com/job/trunk_dtest/1165/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_default_null_indicator/ > http://cassci.datastax.com/job/trunk_dtest/1165/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_all_datatypes_write/ -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-11675) multiple dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest
Russ Hatch created CASSANDRA-11675: -- Summary: multiple dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest Key: CASSANDRA-11675 URL: https://issues.apache.org/jira/browse/CASSANDRA-11675 Project: Cassandra Issue Type: Test Reporter: Russ Hatch Assignee: DS Test Eng these appear to be related, all failed on the same build (but appear to be passing now). http://cassci.datastax.com/job/trunk_dtest/1165/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_copy_from_with_brackets_in_UDT/ http://cassci.datastax.com/job/trunk_dtest/1165/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_undefined_as_null_indicator/ http://cassci.datastax.com/job/trunk_dtest/1165/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_round_trip_with_sub_second_precision/ http://cassci.datastax.com/job/trunk_dtest/1165/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_null_as_null_indicator/ http://cassci.datastax.com/job/trunk_dtest/1165/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_default_null_indicator/ http://cassci.datastax.com/job/trunk_dtest/1165/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_all_datatypes_write/ -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (CASSANDRA-11626) cqlsh fails and exists on non-ascii chars
[ https://issues.apache.org/jira/browse/CASSANDRA-11626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15259587#comment-15259587 ] Wei Deng edited comment on CASSANDRA-11626 at 4/27/16 7:03 PM: --- Yeah I don't think it's the same problem as CASSANDRA-11124. See the following using latest trunk build: {noformat} root@node0:~/cassandra-trunk# ~/cassandra-trunk/bin/cqlsh --encoding=utf-8 --debug Using CQL driver: Using connect timeout: 5 seconds Using 'utf-8' encoding Using ssl: False Connected to Test Cluster at 127.0.0.1:9042. [cqlsh 5.0.1 | Cassandra 3.6-SNAPSHOT | CQL spec 3.4.2 | Native protocol v4] Use HELP for help. cqlsh> ä Invalid syntax at line 1, char 1 Traceback (most recent call last): File "/root/cassandra-trunk/bin/cqlsh.py", line 2636, in main(*read_options(sys.argv[1:], os.environ)) File "/root/cassandra-trunk/bin/cqlsh.py", line 2625, in main shell.cmdloop() File "/root/cassandra-trunk/bin/cqlsh.py", line 1114, in cmdloop if self.onecmd(self.statement.getvalue()): File "/root/cassandra-trunk/bin/cqlsh.py", line 1139, in onecmd self.printerr(' %s' % statementline) File "/root/cassandra-trunk/bin/cqlsh.py", line 2314, in printerr self.writeresult(text, color, newline=newline, out=sys.stderr) File "/root/cassandra-trunk/bin/cqlsh.py", line 2303, in writeresult out.write(self.applycolor(str(text), color) + ('\n' if newline else '')) UnicodeEncodeError: 'ascii' codec can't encode character u'\xe4' in position 2: ordinal not in range(128) {noformat} This is easily reproducible on a number of C* 3.x versions (3.0.4 and 3.6). was (Author: weideng): Yeah I don't think it's the same problem as CASSANDRA-11124. See the following using latest trunk build: {noformat} root@node0:~/cassandra-trunk# ~/cassandra-trunk/bin/cqlsh --encoding=utf-8 --debug Using CQL driver: Using connect timeout: 5 seconds Using 'utf-8' encoding Using ssl: False Connected to Test Cluster at 127.0.0.1:9042. 
[cqlsh 5.0.1 | Cassandra 3.6-SNAPSHOT | CQL spec 3.4.2 | Native protocol v4] Use HELP for help. cqlsh> ä Invalid syntax at line 1, char 1 Traceback (most recent call last): File "/root/cassandra-trunk/bin/cqlsh.py", line 2636, in main(*read_options(sys.argv[1:], os.environ)) File "/root/cassandra-trunk/bin/cqlsh.py", line 2625, in main shell.cmdloop() File "/root/cassandra-trunk/bin/cqlsh.py", line 1114, in cmdloop if self.onecmd(self.statement.getvalue()): File "/root/cassandra-trunk/bin/cqlsh.py", line 1139, in onecmd self.printerr(' %s' % statementline) File "/root/cassandra-trunk/bin/cqlsh.py", line 2314, in printerr self.writeresult(text, color, newline=newline, out=sys.stderr) File "/root/cassandra-trunk/bin/cqlsh.py", line 2303, in writeresult out.write(self.applycolor(str(text), color) + ('\n' if newline else '')) UnicodeEncodeError: 'ascii' codec can't encode character u'\xe4' in position 2: ordinal not in range(128) {noformat} This is easily reproducible on a number C* 3.x version (3.0.4 and 3.6). > cqlsh fails and exists on non-ascii chars > - > > Key: CASSANDRA-11626 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11626 > Project: Cassandra > Issue Type: Bug >Reporter: Robert Stupp >Priority: Minor > > Just seen on cqlsh on current trunk: > To repro, copy {{ä}} (german umlaut) to cqlsh and press return. > cqlsh errors out and immediately exits. > {code} > $ bin/cqlsh > Connected to Test Cluster at 127.0.0.1:9042. > [cqlsh 5.0.1 | Cassandra 2.1.13-SNAPSHOT | CQL spec 3.2.1 | Native protocol > v3] > Use HELP for help. 
> cqlsh> ä > Invalid syntax at line 1, char 1 > Traceback (most recent call last): > File "/Users/snazy/devel/cassandra/trunk/bin/cqlsh.py", line 2636, in > > main(*read_options(sys.argv[1:], os.environ)) > File "/Users/snazy/devel/cassandra/trunk/bin/cqlsh.py", line 2625, in main > shell.cmdloop() > File "/Users/snazy/devel/cassandra/trunk/bin/cqlsh.py", line 1114, in > cmdloop > if self.onecmd(self.statement.getvalue()): > File "/Users/snazy/devel/cassandra/trunk/bin/cqlsh.py", line 1139, in onecmd > self.printerr(' %s' % statementline) > File "/Users/snazy/devel/cassandra/trunk/bin/cqlsh.py", line 2314, in > printerr > self.writeresult(text, color, newline=newline, out=sys.stderr) > File "/Users/snazy/devel/cassandra/trunk/bin/cqlsh.py", line 2303, in > writeresult > out.write(self.applycolor(str(text), color) + ('\n' if newline else '')) > UnicodeEncodeError: 'ascii' codec can't encode character u'\xe4' in position > 2: ordinal not in range(128) > $ > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
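The traceback above boils down to coercing a non-ASCII string through the ASCII codec. A small sketch of the failure mode, shown in Python 3 for illustration (cqlsh here runs under Python 2, where the coercion happens implicitly inside the `str(text)` call in `writeresult`):

```python
# What str()'s implicit Python 2 coercion amounts to: encoding the error
# message (which echoes the offending input) with the ASCII codec.
text = "Invalid syntax at line 1, char 1\n  ä"

try:
    text.encode("ascii")
except UnicodeEncodeError as e:
    print("crash reason:", e.reason)  # 'ordinal not in range(128)'

# Encoding with the shell's configured encoding (utf-8) round-trips fine:
assert text.encode("utf-8").decode("utf-8") == text
```

This is why the crash happens even when `--encoding=utf-8` is passed: the error path bypasses the configured encoding and falls back to the default ASCII codec.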
[jira] [Created] (CASSANDRA-11676) dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_copy_from_with_large_cql_rows
Russ Hatch created CASSANDRA-11676: -- Summary: dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_copy_from_with_large_cql_rows Key: CASSANDRA-11676 URL: https://issues.apache.org/jira/browse/CASSANDRA-11676 Project: Cassandra Issue Type: Test Reporter: Russ Hatch Assignee: DS Test Eng Failed on the most recent trunk-offheap job; example failure: http://cassci.datastax.com/job/trunk_offheap_dtest/162/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_copy_from_with_large_cql_rows Failed on CassCI build trunk_offheap_dtest #162 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11675) multiple dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest
[ https://issues.apache.org/jira/browse/CASSANDRA-11675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15260777#comment-15260777 ] Russ Hatch commented on CASSANDRA-11675: CASSANDRA-11676 may be related > multiple dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest > > > Key: CASSANDRA-11675 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11675 > Project: Cassandra > Issue Type: Test >Reporter: Russ Hatch >Assignee: DS Test Eng > Labels: dtest > > these appear to be related, all failed on the same build (but appear to be > passing now). > http://cassci.datastax.com/job/trunk_dtest/1165/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_copy_from_with_brackets_in_UDT/ > http://cassci.datastax.com/job/trunk_dtest/1165/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_undefined_as_null_indicator/ > http://cassci.datastax.com/job/trunk_dtest/1165/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_round_trip_with_sub_second_precision/ > http://cassci.datastax.com/job/trunk_dtest/1165/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_null_as_null_indicator/ > http://cassci.datastax.com/job/trunk_dtest/1165/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_default_null_indicator/ > http://cassci.datastax.com/job/trunk_dtest/1165/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_all_datatypes_write/ -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11676) dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_copy_from_with_large_cql_rows
[ https://issues.apache.org/jira/browse/CASSANDRA-11676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15260775#comment-15260775 ] Russ Hatch commented on CASSANDRA-11676: Seems like it could be related to CASSANDRA-11675, since it's in the same test module and started at about the same time. > dtest failure in > cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_copy_from_with_large_cql_rows > -- > > Key: CASSANDRA-11676 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11676 > Project: Cassandra > Issue Type: Test >Reporter: Russ Hatch >Assignee: DS Test Eng > Labels: dtest > > failed on most recent trunk-offheap job; example failure: > http://cassci.datastax.com/job/trunk_offheap_dtest/162/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_copy_from_with_large_cql_rows > Failed on CassCI build trunk_offheap_dtest #162 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (CASSANDRA-11674) dtest failure in materialized_views_test.TestMaterializedViews.clustering_column_test
[ https://issues.apache.org/jira/browse/CASSANDRA-11674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Philip Thompson reassigned CASSANDRA-11674: --- Assignee: Philip Thompson (was: DS Test Eng) > dtest failure in > materialized_views_test.TestMaterializedViews.clustering_column_test > - > > Key: CASSANDRA-11674 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11674 > Project: Cassandra > Issue Type: Test >Reporter: Russ Hatch >Assignee: Philip Thompson >Priority: Minor > Labels: dtest > > single failure, but might be worth looking into to see if it repros at all. > http://cassci.datastax.com/job/cassandra-3.0_dtest/669/testReport/materialized_views_test/TestMaterializedViews/clustering_column_test > Failed on CassCI build cassandra-3.0_dtest #669 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-11674) dtest failure in materialized_views_test.TestMaterializedViews.clustering_column_test
[ https://issues.apache.org/jira/browse/CASSANDRA-11674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Philip Thompson updated CASSANDRA-11674: Status: Patch Available (was: Open) https://github.com/riptano/cassandra-dtest/pull/959 > dtest failure in > materialized_views_test.TestMaterializedViews.clustering_column_test > - > > Key: CASSANDRA-11674 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11674 > Project: Cassandra > Issue Type: Test >Reporter: Russ Hatch >Assignee: Philip Thompson >Priority: Minor > Labels: dtest > > single failure, but might be worth looking into to see if it repros at all. > http://cassci.datastax.com/job/cassandra-3.0_dtest/669/testReport/materialized_views_test/TestMaterializedViews/clustering_column_test > Failed on CassCI build cassandra-3.0_dtest #669 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (CASSANDRA-11673) (2.1) dtest failure in bootstrap_test.TestBootstrap.test_cleanup
[ https://issues.apache.org/jira/browse/CASSANDRA-11673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Philip Thompson reassigned CASSANDRA-11673: --- Assignee: Philip Thompson (was: DS Test Eng) > (2.1) dtest failure in bootstrap_test.TestBootstrap.test_cleanup > > > Key: CASSANDRA-11673 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11673 > Project: Cassandra > Issue Type: Test >Reporter: Russ Hatch >Assignee: Philip Thompson > Labels: dtest > > This test was originally waiting on CASSANDRA-11179, which I recently removed > the 'require' annotation from (since 11179 is committed). Not sure why it's > failing on 2.1 now; perhaps it didn't get committed. > http://cassci.datastax.com/job/cassandra-2.1_offheap_dtest/339/testReport/bootstrap_test/TestBootstrap/test_cleanup > Failed on CassCI build cassandra-2.1_offheap_dtest #339 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-9766) Bootstrap outgoing streaming speeds are much slower than during repair
[ https://issues.apache.org/jira/browse/CASSANDRA-9766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15260982#comment-15260982 ] T Jake Luciani commented on CASSANDRA-9766: --- [testall | https://cassci.datastax.com/view/Dev/view/tjake/job/tjake-faster-streaming-testall/] [dtest | https://cassci.datastax.com/view/Dev/view/tjake/job/tjake-faster-streaming-dtest/] > Bootstrap outgoing streaming speeds are much slower than during repair > -- > > Key: CASSANDRA-9766 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9766 > Project: Cassandra > Issue Type: Improvement > Components: Streaming and Messaging > Environment: Cassandra 2.1.2. more details in the pdf attached >Reporter: Alexei K >Assignee: T Jake Luciani > Labels: performance > Fix For: 3.x > > Attachments: problem.pdf > > > I have a cluster in Amazon cloud , its described in detail in the attachment. > What I've noticed is that we during bootstrap we never go above 12MB/sec > transmission speeds and also those speeds flat line almost like we're hitting > some sort of a limit ( this remains true for other tests that I've ran) > however during the repair we see much higher,variable sending rates. I've > provided network charts in the attachment as well . Is there an explanation > for this? Is something wrong with my configuration, or is it a possible bug? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
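One possible contributor worth ruling out (an assumption on my part, not something established in this thread) is Cassandra's outbound stream throttle, the cassandra.yaml setting stream_throughput_outbound_megabits_per_sec, which defaulted to 200 Mbit/s in the 2.1 era. Converting that megabit figure to megabytes shows how a flat ceiling near 12 MB/s could plausibly arise once the throttled budget is shared between concurrent transfers:

```python
def mbits_to_mbytes(mbits_per_sec):
    """Convert a megabit/s throttle value to megabytes/s (8 bits per byte)."""
    return mbits_per_sec / 8.0

# Default value of stream_throughput_outbound_megabits_per_sec in that era.
default_throttle = 200

# Total outbound streaming budget per node:
print(mbits_to_mbytes(default_throttle))      # 25.0 MB/s

# If that budget is split across two simultaneous outgoing streams, each
# transfer would flat-line around half of it -- close to the ~12 MB/s observed:
print(mbits_to_mbytes(default_throttle) / 2)  # 12.5
```

The throttle can be raised at runtime with nodetool setstreamthroughput, which would be a quick way to test whether it is the limiting factor.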
[jira] [Commented] (CASSANDRA-11432) Counter values become under-counted when running repair.
[ https://issues.apache.org/jira/browse/CASSANDRA-11432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15261023#comment-15261023 ] Dikang Gu commented on CASSANDRA-11432: --- [~iamaleksey], any ideas about this? Thanks! > Counter values become under-counted when running repair. > > > Key: CASSANDRA-11432 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11432 > Project: Cassandra > Issue Type: Bug >Reporter: Dikang Gu >Assignee: Aleksey Yeschenko > > We are experimenting with Counters in Cassandra 2.2.5. Our setup is that we have 6 > nodes across three different regions, and in each region the replication > factor is 2. Basically, each node holds a full copy of the data. > We are writing to the cluster with CL = 2, and reading with CL = 1. > We are doing 30k/s counter increments/decrements per node, and at the > same time we are double-writing to our mysql tier, so that we can measure > the accuracy of the C* counters compared to mysql. > The experiment results were great at the beginning: the counter values in C* and > mysql were very close, with a difference of less than 0.1%. > But when we started to run repair on one node, the counter values in C* > became much lower than the values in mysql, and the difference grew larger than > 1%. > My question is: is it a known problem that counter values become > under-counted while repair is running? Should we avoid running repair on > counter tables? > Thanks. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-5863) In process (uncompressed) page cache
[ https://issues.apache.org/jira/browse/CASSANDRA-5863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15261121#comment-15261121 ] Pavel Yaskevich commented on CASSANDRA-5863: +1 on the changes, much more readable now. Maybe one more nit from my original comments: is there any way we can change ChunkCache#invalidatePosition so that, instead of doing instance-of checks and redirects to CachedRebufferer, it simply does invalidate(new Key(...))? Since ChunkReader is effectively stateless, maybe we could drop RebuffererFactory and use ChunkReader as the source of all Rebufferers. This way, IMHO, it's clearer that ChunkReader is the source of the data and doesn't do any buffering; if buffering/caching is needed, it can produce a Rebufferer which manages the memory. WDYT? Also, how do you want to proceed with this? After all of the changes, can you squash/rebase so I can push? > In process (uncompressed) page cache > > > Key: CASSANDRA-5863 > URL: https://issues.apache.org/jira/browse/CASSANDRA-5863 > Project: Cassandra > Issue Type: Sub-task >Reporter: T Jake Luciani >Assignee: Branimir Lambov > Labels: performance > Fix For: 3.x > > > Currently, for every read, the CRAR reads each compressed chunk into a > byte[], sends it to ICompressor, gets back another byte[] and verifies a > checksum. > This process is where the majority of time is spent in a read request. > Before compression, we would have zero-copy of data and could respond > directly from the page-cache. > It would be useful to have some kind of Chunk cache that could speed up this > process for hot data, possibly off heap. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
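The design being proposed in the comment above (drop RebuffererFactory, let the stateless ChunkReader hand out Rebufferers directly, and invalidate cache entries by constructing a key instead of doing instance-of checks) can be sketched as follows. This is a hypothetical, heavily simplified Python illustration: the class names loosely mirror the Java ones under discussion, but every signature here is invented for illustration and none of this is Cassandra's actual API.

```python
class ChunkReader:
    """Stateless source of data: knows how to fetch a chunk, holds no buffers."""
    def __init__(self, file_data, chunk_size):
        self.file_data = file_data
        self.chunk_size = chunk_size

    def read_chunk(self, position):
        # Align to a chunk boundary and return (offset, bytes).
        start = (position // self.chunk_size) * self.chunk_size
        return start, self.file_data[start:start + self.chunk_size]

    def instantiate_rebufferer(self, cache=None):
        # The reader itself produces Rebufferers; buffering/caching is an
        # optional layer on top, not baked into the reader.
        if cache is not None:
            return CachingRebufferer(self, cache)
        return SimpleRebufferer(self)


class SimpleRebufferer:
    """Uncached: every rebuffer call goes straight to the reader."""
    def __init__(self, reader):
        self.reader = reader

    def rebuffer(self, position):
        return self.reader.read_chunk(position)


class CachingRebufferer:
    """Manages memory; invalidation is a plain key lookup, no instance-of checks."""
    def __init__(self, reader, cache):
        self.reader = reader
        self.cache = cache  # maps (reader identity, chunk offset) -> chunk

    def _key(self, position):
        offset = (position // self.reader.chunk_size) * self.reader.chunk_size
        return (id(self.reader), offset)

    def rebuffer(self, position):
        k = self._key(position)
        if k not in self.cache:
            self.cache[k] = self.reader.read_chunk(position)
        return self.cache[k]

    def invalidate_position(self, position):
        # The equivalent of invalidate(new Key(...)): build the key, drop it.
        self.cache.pop(self._key(position), None)
```

In this shape the reader is the single source of truth for the data, and whether a read path is cached is decided entirely by which Rebufferer it asked the reader to produce.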
[jira] [Created] (CASSANDRA-11677) Incredibly slow jolokia response times
Andrew Jorgensen created CASSANDRA-11677: Summary: Incredibly slow jolokia response times Key: CASSANDRA-11677 URL: https://issues.apache.org/jira/browse/CASSANDRA-11677 Project: Cassandra Issue Type: Bug Reporter: Andrew Jorgensen I am seeing some very slow jolokia request times on my Cassandra 3.0 cluster. Specifically when running the following: {code} curl 127.0.0.1:8778/jolokia/list {code} On a lightly loaded cluster I am seeing request times of around 30-40 seconds, and on a more heavily loaded cluster I am seeing request times around the 2 minute mark. We are currently using jolokia 1.3.2 and v4 of the diamond collector. I also have a Cassandra 1.1 cluster with the same load and number of nodes, and running the same curl command there comes back in about 1 second. Is there anything I can do to help diagnose this issue and see what is causing the slowdown, or has anyone else experienced this? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11597) dtest failure in upgrade_supercolumns_test.TestSCUpgrade.upgrade_with_counters_test
[ https://issues.apache.org/jira/browse/CASSANDRA-11597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15261211#comment-15261211 ] Russ Hatch commented on CASSANDRA-11597: [~philipthompson] If I understand correctly, this test is always starting on 1.2 and upgrading to 2.0.17 . If another 2.0 release is unlikely, can we just retire this test? > dtest failure in > upgrade_supercolumns_test.TestSCUpgrade.upgrade_with_counters_test > --- > > Key: CASSANDRA-11597 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11597 > Project: Cassandra > Issue Type: Test >Reporter: Jim Witschey >Assignee: DS Test Eng > Labels: dtest > > Looks like a new flap. Example failure: > http://cassci.datastax.com/job/cassandra-2.1_dtest/447/testReport/upgrade_supercolumns_test/TestSCUpgrade/upgrade_with_counters_test > Failed on CassCI build cassandra-2.1_dtest #447 - 2.1.14-tentative > {code} > Error Message > TimedOutException(acknowledged_by=0, paxos_in_progress=None, > acknowledged_by_batchlog=None) > >> begin captured logging << > dtest: DEBUG: cluster ccm directory: /mnt/tmp/dtest-1Fi9qz > dtest: DEBUG: Custom init_config not found. Setting defaults. 
> dtest: DEBUG: Done setting configuration options: > { 'initial_token': None, > 'num_tokens': '32', > 'phi_convict_threshold': 5, > 'range_request_timeout_in_ms': 1, > 'read_request_timeout_in_ms': 1, > 'request_timeout_in_ms': 1, > 'truncate_request_timeout_in_ms': 1, > 'write_request_timeout_in_ms': 1} > dtest: DEBUG: Upgrading to binary:2.0.17 > dtest: DEBUG: Shutting down node: node1 > dtest: DEBUG: Set new cassandra dir for node1: > /home/automaton/.ccm/repository/2.0.17 > dtest: DEBUG: Starting node1 on new version (binary:2.0.17) > - >> end captured logging << - > Stacktrace > File "/usr/lib/python2.7/unittest/case.py", line 329, in run > testMethod() > File "/home/automaton/cassandra-dtest/upgrade_supercolumns_test.py", line > 215, in upgrade_with_counters_test > client.add('Counter1', column_parent, column, ThriftConsistencyLevel.ONE) > File "/home/automaton/cassandra-dtest/thrift_bindings/v22/Cassandra.py", > line 985, in add > self.recv_add() > File "/home/automaton/cassandra-dtest/thrift_bindings/v22/Cassandra.py", > line 1013, in recv_add > raise result.te > "TimedOutException(acknowledged_by=0, paxos_in_progress=None, > acknowledged_by_batchlog=None)\n >> begin captured > logging << \ndtest: DEBUG: cluster ccm directory: > /mnt/tmp/dtest-1Fi9qz\ndtest: DEBUG: Custom init_config not found. Setting > defaults.\ndtest: DEBUG: Done setting configuration options:\n{ > 'initial_token': None,\n'num_tokens': '32',\n'phi_convict_threshold': > 5,\n'range_request_timeout_in_ms': 1,\n > 'read_request_timeout_in_ms': 1,\n'request_timeout_in_ms': 1,\n > 'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': > 1}\ndtest: DEBUG: Upgrading to binary:2.0.17\ndtest: DEBUG: Shutting down > node: node1\ndtest: DEBUG: Set new cassandra dir for node1: > /home/automaton/.ccm/repository/2.0.17\ndtest: DEBUG: Starting node1 on new > version (binary:2.0.17)\n- >> end captured logging << > -" > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11597) dtest failure in upgrade_supercolumns_test.TestSCUpgrade.upgrade_with_counters_test
[ https://issues.apache.org/jira/browse/CASSANDRA-11597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15261213#comment-15261213 ] Philip Thompson commented on CASSANDRA-11597: - I wish :(. After the upgrade to 2.0.17, it then undergoes an upgrade to 2.1. We won't be able to retire this until 2.1 is EOL. > dtest failure in > upgrade_supercolumns_test.TestSCUpgrade.upgrade_with_counters_test > --- > > Key: CASSANDRA-11597 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11597 > Project: Cassandra > Issue Type: Test >Reporter: Jim Witschey >Assignee: DS Test Eng > Labels: dtest > > Looks like a new flap. Example failure: > http://cassci.datastax.com/job/cassandra-2.1_dtest/447/testReport/upgrade_supercolumns_test/TestSCUpgrade/upgrade_with_counters_test > Failed on CassCI build cassandra-2.1_dtest #447 - 2.1.14-tentative > {code} > Error Message > TimedOutException(acknowledged_by=0, paxos_in_progress=None, > acknowledged_by_batchlog=None) > >> begin captured logging << > dtest: DEBUG: cluster ccm directory: /mnt/tmp/dtest-1Fi9qz > dtest: DEBUG: Custom init_config not found. Setting defaults. 
> dtest: DEBUG: Done setting configuration options: > { 'initial_token': None, > 'num_tokens': '32', > 'phi_convict_threshold': 5, > 'range_request_timeout_in_ms': 1, > 'read_request_timeout_in_ms': 1, > 'request_timeout_in_ms': 1, > 'truncate_request_timeout_in_ms': 1, > 'write_request_timeout_in_ms': 1} > dtest: DEBUG: Upgrading to binary:2.0.17 > dtest: DEBUG: Shutting down node: node1 > dtest: DEBUG: Set new cassandra dir for node1: > /home/automaton/.ccm/repository/2.0.17 > dtest: DEBUG: Starting node1 on new version (binary:2.0.17) > - >> end captured logging << - > Stacktrace > File "/usr/lib/python2.7/unittest/case.py", line 329, in run > testMethod() > File "/home/automaton/cassandra-dtest/upgrade_supercolumns_test.py", line > 215, in upgrade_with_counters_test > client.add('Counter1', column_parent, column, ThriftConsistencyLevel.ONE) > File "/home/automaton/cassandra-dtest/thrift_bindings/v22/Cassandra.py", > line 985, in add > self.recv_add() > File "/home/automaton/cassandra-dtest/thrift_bindings/v22/Cassandra.py", > line 1013, in recv_add > raise result.te > "TimedOutException(acknowledged_by=0, paxos_in_progress=None, > acknowledged_by_batchlog=None)\n >> begin captured > logging << \ndtest: DEBUG: cluster ccm directory: > /mnt/tmp/dtest-1Fi9qz\ndtest: DEBUG: Custom init_config not found. Setting > defaults.\ndtest: DEBUG: Done setting configuration options:\n{ > 'initial_token': None,\n'num_tokens': '32',\n'phi_convict_threshold': > 5,\n'range_request_timeout_in_ms': 1,\n > 'read_request_timeout_in_ms': 1,\n'request_timeout_in_ms': 1,\n > 'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': > 1}\ndtest: DEBUG: Upgrading to binary:2.0.17\ndtest: DEBUG: Shutting down > node: node1\ndtest: DEBUG: Set new cassandra dir for node1: > /home/automaton/.ccm/repository/2.0.17\ndtest: DEBUG: Starting node1 on new > version (binary:2.0.17)\n- >> end captured logging << > -" > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (CASSANDRA-11636) dtest failure in auth_test.TestAuth.restart_node_doesnt_lose_auth_data_test
[ https://issues.apache.org/jira/browse/CASSANDRA-11636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Russ Hatch reassigned CASSANDRA-11636: -- Assignee: Russ Hatch (was: DS Test Eng) > dtest failure in auth_test.TestAuth.restart_node_doesnt_lose_auth_data_test > --- > > Key: CASSANDRA-11636 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11636 > Project: Cassandra > Issue Type: Test >Reporter: Michael Shuler >Assignee: Russ Hatch > Labels: dtest > > example failure: > http://cassci.datastax.com/job/cassandra-2.1_dtest/448/testReport/auth_test/TestAuth/restart_node_doesnt_lose_auth_data_test > Failed on CassCI build cassandra-2.1_dtest #448 - 2.1.14-tentative > {noformat} > Error Message > Problem stopping node node1 > >> begin captured logging << > dtest: DEBUG: cluster ccm directory: /mnt/tmp/dtest-sLlSHx > dtest: DEBUG: Custom init_config not found. Setting defaults. > dtest: DEBUG: Done setting configuration options: > { 'initial_token': None, > 'num_tokens': '32', > 'phi_convict_threshold': 5, > 'range_request_timeout_in_ms': 1, > 'read_request_timeout_in_ms': 1, > 'request_timeout_in_ms': 1, > 'truncate_request_timeout_in_ms': 1, > 'write_request_timeout_in_ms': 1} > dtest: DEBUG: Default role created by node1 > - >> end captured logging << - > Stacktrace > File "/usr/lib/python2.7/unittest/case.py", line 329, in run > testMethod() > File "/home/automaton/cassandra-dtest/auth_test.py", line 910, in > restart_node_doesnt_lose_auth_data_test > self.cluster.stop() > File "/home/automaton/ccm/ccmlib/cluster.py", line 376, in stop > if not node.stop(wait, gently=gently): > File "/home/automaton/ccm/ccmlib/node.py", line 677, in stop > raise NodeError("Problem stopping node %s" % self.name) > "Problem stopping node node1\n >> begin captured logging > << \ndtest: DEBUG: cluster ccm directory: > /mnt/tmp/dtest-sLlSHx\ndtest: DEBUG: Custom init_config not found. 
Setting > defaults.\ndtest: DEBUG: Done setting configuration options:\n{ > 'initial_token': None,\n'num_tokens': '32',\n'phi_convict_threshold': > 5,\n'range_request_timeout_in_ms': 1,\n > 'read_request_timeout_in_ms': 1,\n'request_timeout_in_ms': 1,\n > 'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': > 1}\ndtest: DEBUG: Default role created by node1\n- >> > end captured logging << -" > {noformat} > This test was successful in the next build on a commit that does not appear > to be auth-related, and the test does not appear to be flappy. Looping over > the test, I have not gotten a failure. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11636) dtest failure in auth_test.TestAuth.restart_node_doesnt_lose_auth_data_test
[ https://issues.apache.org/jira/browse/CASSANDRA-11636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15261228#comment-15261228 ] Russ Hatch commented on CASSANDRA-11636: trying a bulk run here: http://cassci.datastax.com/view/Parameterized/job/parameterized_dtest_multiplexer/85/ > dtest failure in auth_test.TestAuth.restart_node_doesnt_lose_auth_data_test > --- > > Key: CASSANDRA-11636 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11636 > Project: Cassandra > Issue Type: Test >Reporter: Michael Shuler >Assignee: Russ Hatch > Labels: dtest > > example failure: > http://cassci.datastax.com/job/cassandra-2.1_dtest/448/testReport/auth_test/TestAuth/restart_node_doesnt_lose_auth_data_test > Failed on CassCI build cassandra-2.1_dtest #448 - 2.1.14-tentative > {noformat} > Error Message > Problem stopping node node1 > >> begin captured logging << > dtest: DEBUG: cluster ccm directory: /mnt/tmp/dtest-sLlSHx > dtest: DEBUG: Custom init_config not found. Setting defaults. 
> dtest: DEBUG: Done setting configuration options: > { 'initial_token': None, > 'num_tokens': '32', > 'phi_convict_threshold': 5, > 'range_request_timeout_in_ms': 1, > 'read_request_timeout_in_ms': 1, > 'request_timeout_in_ms': 1, > 'truncate_request_timeout_in_ms': 1, > 'write_request_timeout_in_ms': 1} > dtest: DEBUG: Default role created by node1 > - >> end captured logging << - > Stacktrace > File "/usr/lib/python2.7/unittest/case.py", line 329, in run > testMethod() > File "/home/automaton/cassandra-dtest/auth_test.py", line 910, in > restart_node_doesnt_lose_auth_data_test > self.cluster.stop() > File "/home/automaton/ccm/ccmlib/cluster.py", line 376, in stop > if not node.stop(wait, gently=gently): > File "/home/automaton/ccm/ccmlib/node.py", line 677, in stop > raise NodeError("Problem stopping node %s" % self.name) > "Problem stopping node node1\n >> begin captured logging > << \ndtest: DEBUG: cluster ccm directory: > /mnt/tmp/dtest-sLlSHx\ndtest: DEBUG: Custom init_config not found. Setting > defaults.\ndtest: DEBUG: Done setting configuration options:\n{ > 'initial_token': None,\n'num_tokens': '32',\n'phi_convict_threshold': > 5,\n'range_request_timeout_in_ms': 1,\n > 'read_request_timeout_in_ms': 1,\n'request_timeout_in_ms': 1,\n > 'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': > 1}\ndtest: DEBUG: Default role created by node1\n- >> > end captured logging << -" > {noformat} > This test was successful in the next build on a commit that does not appear > to be auth-related, and the test does not appear to be flappy. Looping over > the test, I have not gotten a failure. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11677) Incredibly slow jolokia response times
[ https://issues.apache.org/jira/browse/CASSANDRA-11677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15261231#comment-15261231 ] Andrew Jorgensen commented on CASSANDRA-11677: -- So this actually appears to be a jolokia problem. I was able to downgrade to jolokia 1.2.3, and now metrics are coming in fine and requests to that endpoint are down to only a couple of seconds. I am not sure what changed between jolokia 1.2.3 and 1.3.3, but it appears to be causing an issue. > Incredibly slow jolokia response times > -- > > Key: CASSANDRA-11677 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11677 > Project: Cassandra > Issue Type: Bug >Reporter: Andrew Jorgensen > > I am seeing some very slow jolokia request times on my Cassandra 3.0 cluster. > Specifically when running the following: > {code} > curl 127.0.0.1:8778/jolokia/list > {code} > On a lightly loaded cluster I am seeing request times of around 30-40 seconds, > and on a more heavily loaded cluster I am seeing request times around the 2 > minute mark. We are currently using jolokia 1.3.2 and v4 of the diamond > collector. I also have a Cassandra 1.1 cluster with the same load and > number of nodes, and running the same curl command there comes back in about 1 > second. > Is there anything I can do to help diagnose this issue and see what is causing > the slowdown, or has anyone else experienced this? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11675) multiple dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest
[ https://issues.apache.org/jira/browse/CASSANDRA-11675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15261283#comment-15261283 ] Stefania commented on CASSANDRA-11675: -- I've merged the dtest PR for CASSANDRA-11631 a few minutes after committing and this caused the intermittent failures. I don't think CASSANDRA-11676 is related, it's the first time I see that failure. > multiple dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest > > > Key: CASSANDRA-11675 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11675 > Project: Cassandra > Issue Type: Test >Reporter: Russ Hatch >Assignee: DS Test Eng > Labels: dtest > Fix For: 3.6 > > > these appear to be related, all failed on the same build (but appear to be > passing now). > http://cassci.datastax.com/job/trunk_dtest/1165/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_copy_from_with_brackets_in_UDT/ > http://cassci.datastax.com/job/trunk_dtest/1165/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_undefined_as_null_indicator/ > http://cassci.datastax.com/job/trunk_dtest/1165/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_round_trip_with_sub_second_precision/ > http://cassci.datastax.com/job/trunk_dtest/1165/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_null_as_null_indicator/ > http://cassci.datastax.com/job/trunk_dtest/1165/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_default_null_indicator/ > http://cassci.datastax.com/job/trunk_dtest/1165/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_all_datatypes_write/ -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (CASSANDRA-11675) multiple dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest
[ https://issues.apache.org/jira/browse/CASSANDRA-11675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stefania resolved CASSANDRA-11675. -- Resolution: Fixed Reviewer: Stefania Fix Version/s: 3.6 > multiple dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest > > > Key: CASSANDRA-11675 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11675 > Project: Cassandra > Issue Type: Test >Reporter: Russ Hatch >Assignee: DS Test Eng > Labels: dtest > Fix For: 3.6 > > > these appear to be related, all failed on the same build (but appear to be > passing now). > http://cassci.datastax.com/job/trunk_dtest/1165/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_copy_from_with_brackets_in_UDT/ > http://cassci.datastax.com/job/trunk_dtest/1165/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_undefined_as_null_indicator/ > http://cassci.datastax.com/job/trunk_dtest/1165/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_round_trip_with_sub_second_precision/ > http://cassci.datastax.com/job/trunk_dtest/1165/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_null_as_null_indicator/ > http://cassci.datastax.com/job/trunk_dtest/1165/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_default_null_indicator/ > http://cassci.datastax.com/job/trunk_dtest/1165/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_all_datatypes_write/ -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11676) dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_copy_from_with_large_cql_rows
[ https://issues.apache.org/jira/browse/CASSANDRA-11676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15261284#comment-15261284 ] Stefania commented on CASSANDRA-11676: -- It is not related to CASSANDRA-11675; I will take a look. > dtest failure in > cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_copy_from_with_large_cql_rows > -- > > Key: CASSANDRA-11676 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11676 > Project: Cassandra > Issue Type: Test >Reporter: Russ Hatch >Assignee: Stefania > Labels: dtest > > failed on most recent trunk-offheap job example failure: > http://cassci.datastax.com/job/trunk_offheap_dtest/162/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_copy_from_with_large_cql_rows > Failed on CassCI build trunk_offheap_dtest #162 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (CASSANDRA-11676) dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_copy_from_with_large_cql_rows
[ https://issues.apache.org/jira/browse/CASSANDRA-11676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stefania reassigned CASSANDRA-11676: Assignee: Stefania (was: DS Test Eng) > dtest failure in > cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_copy_from_with_large_cql_rows > -- > > Key: CASSANDRA-11676 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11676 > Project: Cassandra > Issue Type: Test >Reporter: Russ Hatch >Assignee: Stefania > Labels: dtest > > failed on most recent trunk-offheap job example failure: > http://cassci.datastax.com/job/trunk_offheap_dtest/162/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_copy_from_with_large_cql_rows > Failed on CassCI build trunk_offheap_dtest #162 -- This message was sent by Atlassian JIRA (v6.3.4#6332)