[jira] [Created] (CASSANDRA-14414) Errors in Supercolumn support in 2.0 upgrade
Ken Hancock created CASSANDRA-14414:
---------------------------------------

             Summary: Errors in Supercolumn support in 2.0 upgrade
                 Key: CASSANDRA-14414
                 URL: https://issues.apache.org/jira/browse/CASSANDRA-14414
             Project: Cassandra
          Issue Type: Bug
            Reporter: Ken Hancock

In upgrading from 1.2.18 to 2.0.17, the following exceptions started showing in Cassandra log files when the 2.0.17 node is chosen as the coordinator. CL=ALL reads will fail as a result. The following ccm script creates a 3-node Cassandra cluster and upgrades the 3rd node to Cassandra 2.0.17:

{code}
ccm create -n3 -v1.2.17 test
ccm start
ccm node1 cli -v -x "create keyspace test with placement_strategy='org.apache.cassandra.locator.SimpleStrategy' and strategy_options={replication_factor:3}"
ccm node1 cli -v -x "use test; create column family super with column_type = 'Super' and key_validation_class='IntegerType' and comparator = 'IntegerType' and subcomparator = 'IntegerType' and default_validation_class = 'AsciiType'"
ccm node1 cli -v -x "use test; create column family shadow with column_type = 'Super' and key_validation_class='IntegerType' and comparator = 'IntegerType' and subcomparator = 'IntegerType' and default_validation_class = 'AsciiType'"
ccm node1 cli -v -x "use test;
set super[1][1][1]='1-1-1'; set super[1][1][2]='1-1-2';
set super[1][2][1]='1-2-1'; set super[1][2][2]='1-2-2';
set super[2][1][1]='2-1-1'; set super[2][1][2]='2-1-2';
set super[2][2][1]='2-2-1'; set super[2][2][2]='2-2-2';
set super[3][1][1]='3-1-1'; set super[3][1][2]='3-1-2';
"
ccm flush
ccm node3 stop
ccm node3 setdir -v2.0.17
ccm node3 start
ccm node3 nodetool upgradesstables
{code}

The following Python script uses pycassa to exercise the range_slice Thrift API:

{code}
import pycassa
from pycassa.pool import ConnectionPool
from pycassa.columnfamily import ColumnFamily
from pycassa import ConsistencyLevel

pool = ConnectionPool('test', server_list=['127.0.0.3:9160'], max_retries=0)
super = ColumnFamily(pool, 'super')

print "fails with ClassCastException"
super.get(1, columns=[1,2], read_consistency_level=ConsistencyLevel.ONE)

print "fails with RuntimeException: Cannot convert filter to old super column format..."
super.get(1, column_start=2, column_finish=3, read_consistency_level=ConsistencyLevel.ONE)
{code}

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-14298) cqlshlib tests broken on b.a.o
[ https://issues.apache.org/jira/browse/CASSANDRA-14298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449131#comment-16449131 ]

Patrick Bannister commented on CASSANDRA-14298:
-----------------------------------------------
I agree that porting C* cqlsh to Python 3 is inevitable. This ticket probably isn't a good reason to do it, but there are other, better reasons, such as the pending end of life for Python 2.

[~jasobrown], I suspect you were asking Stefan more than me, but I agree that the question of porting cqlsh to Python 3 should be discussed on the list. It would be worth some discussion on how to do it. For example, do we want to go all the way to Python 3, or would we prefer it to be 2/3 cross-compatible? And how far back are we going to port: would we go back to 3.0, since we're still supporting it until six months after 4.0 is released? I suggest that if we're going to start a Python 3 epic, we should do it in a separate ticket and make this ticket a subticket under it.

Dialing back the scope to just the cqlsh tests: keep in mind that this problem only impacts a third of the cqlsh copy tests. I have everything in cqlsh_tests/cqlsh_tests.py working fine right now; nothing in that set of tests depends on the C* cqlshlib. However, for the impacted copy tests, we can only completely avoid these ugly workarounds if we port cqlshlib to Python 3 not only in trunk, but also in all other supported versions. Any branch of C* left behind on Python 2.7 will either have to be skipped for the copy tests, or else tested through some kind of alternate approach such as the awful hack I'm working on right now.
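For readers following the 2/3 cross-compatibility question above, the usual shape of such code is a small sketch like the following. It is purely illustrative; these helpers are not part of cqlshlib, and the names are hypothetical.

```python
# Minimal sketch of Python 2/3 cross-compatible style (the option discussed
# above): __future__ imports plus explicit bytes/text seams. Illustrative
# only; not actual cqlshlib code.
from __future__ import print_function, unicode_literals


def ensure_text(raw, encoding='utf-8'):
    # Normalize bytes vs. text, the classic 2-vs-3 seam in I/O-heavy code.
    if isinstance(raw, bytes):
        return raw.decode(encoding)
    return raw


def ensure_bytes(raw, encoding='utf-8'):
    # The inverse seam, for APIs that want raw bytes on both interpreters.
    if isinstance(raw, bytes):
        return raw
    return raw.encode(encoding)
```

Under this style the same module runs unmodified on 2.7 and 3.x, at the cost of routing every bytes/text boundary through helpers like these.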
> cqlshlib tests broken on b.a.o
> ------------------------------
>
>                 Key: CASSANDRA-14298
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-14298
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Build, Testing
>            Reporter: Stefan Podkowinski
>            Assignee: Patrick Bannister
>            Priority: Major
>         Attachments: cqlsh_tests_notes.md
>
>
> It appears that cqlsh-tests on builds.apache.org on all branches stopped
> working since we removed nosetests from the system environment. See e.g.
> [here|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-trunk-cqlsh-tests/458/cython=no,jdk=JDK%201.8%20(latest),label=cassandra/console].
> Looks like we either have to make nosetests available again or migrate to
> pytest as we did with dtests. Giving pytest a quick try resulted in many
> errors locally, but I haven't inspected them in detail yet.
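The nose-to-pytest migration mentioned in the ticket mostly means replacing nose's assertion helpers and setUp/tearDown with plain asserts collected by naming convention. A hedged sketch (the test body is hypothetical, not a real cqlshlib test):

```python
# Sketch of the nose-to-pytest shift discussed above. Illustrative only.
#
# nose style (what the dtests moved away from):
#     from nose.tools import assert_equal
#     assert_equal(normalize_ks('Test'), 'test')
#
# pytest style: plain asserts, tests discovered by file/function naming.

def normalize_ks(name):
    # Illustrative helper: unquoted CQL identifiers fold to lower case,
    # double-quoted ones keep their case.
    return name if name.startswith('"') else name.lower()


def test_normalize_ks():
    assert normalize_ks('Test') == 'test'
    assert normalize_ks('"Quoted"') == '"Quoted"'
```

Running `pytest` on a file containing `test_normalize_ks` picks it up automatically; no nose imports are needed, which is exactly why the b.a.o builds broke when nosetests disappeared from the environment.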
cassandra-dtest git commit: add tests for network auth (CASSANDRA-13985)
Repository: cassandra-dtest Updated Branches: refs/heads/master 5afbb7445 -> 0e9388d77 add tests for network auth (CASSANDRA-13985) Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/0e9388d7 Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/0e9388d7 Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/0e9388d7 Branch: refs/heads/master Commit: 0e9388d7783859084925fe6825215374a66206de Parents: 5afbb74 Author: Blake EgglestonAuthored: Wed Apr 18 14:16:22 2018 -0700 Committer: Blake Eggleston Committed: Mon Apr 23 16:33:43 2018 -0700 -- auth_test.py | 178 ++ 1 file changed, 152 insertions(+), 26 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/0e9388d7/auth_test.py -- diff --git a/auth_test.py b/auth_test.py index 34f7212..e7aef05 100644 --- a/auth_test.py +++ b/auth_test.py @@ -1,3 +1,5 @@ +import random +import string import time from collections import namedtuple from datetime import datetime, timedelta @@ -395,20 +397,20 @@ class TestAuth(Tester): self.prepare() session = self.get_session(user='cassandra', password='cassandra') -assert_one(session, "LIST USERS", ['cassandra', True]) +assert_one(session, "LIST USERS", ['cassandra', True] + all_dcs) session.execute("CREATE USER IF NOT EXISTS aleksey WITH PASSWORD 'sup'") session.execute("CREATE USER IF NOT EXISTS aleksey WITH PASSWORD 'ignored'") self.get_session(user='aleksey', password='sup') -assert_all(session, "LIST USERS", [['aleksey', False], ['cassandra', True]]) +assert_all(session, "LIST USERS", [['aleksey', False] + all_dcs, ['cassandra', True] + all_dcs]) session.execute("DROP USER IF EXISTS aleksey") -assert_one(session, "LIST USERS", ['cassandra', True]) +assert_one(session, "LIST USERS", ['cassandra', True] + all_dcs) session.execute("DROP USER IF EXISTS aleksey") -assert_one(session, "LIST USERS", ['cassandra', True]) +assert_one(session, "LIST USERS", 
['cassandra', True] + all_dcs) def test_create_ks_auth(self): """ @@ -1008,13 +1010,15 @@ class TestAuth(Tester): self.cluster.stop() config = {'authenticator': 'org.apache.cassandra.auth.AllowAllAuthenticator', - 'authorizer': 'org.apache.cassandra.auth.AllowAllAuthorizer'} + 'authorizer': 'org.apache.cassandra.auth.AllowAllAuthorizer', + 'network_authorizer': 'org.apache.cassandra.auth.AllowAllNetworkAuthorizer'} self.cluster.set_configuration_options(values=config) self.cluster.start(wait_for_binary_proto=True) self.cluster.stop() config = {'authenticator': 'org.apache.cassandra.auth.PasswordAuthenticator', - 'authorizer': 'org.apache.cassandra.auth.CassandraAuthorizer'} + 'authorizer': 'org.apache.cassandra.auth.CassandraAuthorizer', + 'network_authorizer': 'org.apache.cassandra.auth.CassandraNetworkAuthorizer'} self.cluster.set_configuration_options(values=config) self.cluster.start(wait_for_binary_proto=True) @@ -1073,6 +1077,7 @@ class TestAuth(Tester): """ config = {'authenticator': 'org.apache.cassandra.auth.PasswordAuthenticator', 'authorizer': 'org.apache.cassandra.auth.CassandraAuthorizer', + 'network_authorizer': 'org.apache.cassandra.auth.CassandraNetworkAuthorizer', 'permissions_validity_in_ms': permissions_validity} self.cluster.set_configuration_options(values=config) self.cluster.populate(nodes).start() @@ -1129,12 +1134,15 @@ def data_resource_creator_permissions(creator, resource): # Third value is login status # Fourth value is role options # See CASSANDRA-7653 for explanations of these -Role = namedtuple('Role', ['name', 'superuser', 'login', 'options']) +dcs_field = [] if CASSANDRA_VERSION_FROM_BUILD < '4.0' else ['dcs'] +Role = namedtuple('Role', ['name', 'superuser', 'login', 'options'] + dcs_field) -mike_role = Role('mike', False, True, {}) -role1_role = Role('role1', False, False, {}) -role2_role = Role('role2', False, False, {}) -cassandra_role = Role('cassandra', True, True, {}) +all_dcs = [] if CASSANDRA_VERSION_FROM_BUILD < '4.0' else 
['ALL'] +na_dcs = [] if CASSANDRA_VERSION_FROM_BUILD < '4.0' else ['n/a'] +mike_role = Role('mike', False, True, {}, *all_dcs) +role1_role = Role('role1', False, False, {}, *na_dcs) +role2_role = Role('role2', False, False, {}, *na_dcs)
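The version-gated `Role` shape from the diff above can be sketched standalone. Here `CASSANDRA_VERSION_FROM_BUILD` is stubbed to a constant; in the dtests it comes from the cluster's build metadata.

```python
# Standalone sketch of the version-gated namedtuple pattern in the diff:
# LIST ROLES / LIST USERS grow a datacenters column in 4.0, so the expected
# row shape must vary with the version under test.
from collections import namedtuple

CASSANDRA_VERSION_FROM_BUILD = '4.0'  # stub: pretend we're testing trunk

dcs_field = [] if CASSANDRA_VERSION_FROM_BUILD < '4.0' else ['dcs']
Role = namedtuple('Role', ['name', 'superuser', 'login', 'options'] + dcs_field)

# Splatting an empty list on pre-4.0 keeps the constructor calls identical
# across versions.
all_dcs = [] if CASSANDRA_VERSION_FROM_BUILD < '4.0' else ['ALL']
na_dcs = [] if CASSANDRA_VERSION_FROM_BUILD < '4.0' else ['n/a']

mike_role = Role('mike', False, True, {}, *all_dcs)
role1_role = Role('role1', False, False, {}, *na_dcs)
```

On a pre-4.0 build the same code produces a 4-field `Role` with the same call sites, which is the point of the pattern.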
[jira] [Updated] (CASSANDRA-14413) minor network auth improvements
[ https://issues.apache.org/jira/browse/CASSANDRA-14413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ariel Weisberg updated CASSANDRA-14413:
---------------------------------------
    Status: Ready to Commit  (was: Patch Available)

> minor network auth improvements
> -------------------------------
>
>                 Key: CASSANDRA-14413
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-14413
>             Project: Cassandra
>          Issue Type: Improvement
>            Reporter: Blake Eggleston
>            Assignee: Blake Eggleston
>            Priority: Minor
>             Fix For: 4.0
>
>
> CASSANDRA-13985 has a few minor things that could be improved
[jira] [Commented] (CASSANDRA-14413) minor network auth improvements
[ https://issues.apache.org/jira/browse/CASSANDRA-14413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16448924#comment-16448924 ]

Ariel Weisberg commented on CASSANDRA-14413:
--------------------------------------------
+1

> minor network auth improvements
> -------------------------------
>
>                 Key: CASSANDRA-14413
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-14413
>             Project: Cassandra
>          Issue Type: Improvement
>            Reporter: Blake Eggleston
>            Assignee: Blake Eggleston
>            Priority: Minor
>             Fix For: 4.0
>
>
> CASSANDRA-13985 has a few minor things that could be improved
[jira] [Updated] (CASSANDRA-14413) minor network auth improvements
[ https://issues.apache.org/jira/browse/CASSANDRA-14413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Blake Eggleston updated CASSANDRA-14413:
----------------------------------------
    Reviewer: Ariel Weisberg
      Status: Patch Available  (was: Open)

https://github.com/bdeggleston/cassandra/tree/CASSANDRA-14413

> minor network auth improvements
> -------------------------------
>
>                 Key: CASSANDRA-14413
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-14413
>             Project: Cassandra
>          Issue Type: Improvement
>            Reporter: Blake Eggleston
>            Assignee: Blake Eggleston
>            Priority: Minor
>             Fix For: 4.0
>
>
> CASSANDRA-13985 has a few minor things that could be improved
[jira] [Created] (CASSANDRA-14413) minor network auth improvements
Blake Eggleston created CASSANDRA-14413:
---------------------------------------

             Summary: minor network auth improvements
                 Key: CASSANDRA-14413
                 URL: https://issues.apache.org/jira/browse/CASSANDRA-14413
             Project: Cassandra
          Issue Type: Improvement
            Reporter: Blake Eggleston
            Assignee: Blake Eggleston
             Fix For: 4.0

CASSANDRA-13985 has a few minor things that could be improved
[jira] [Comment Edited] (CASSANDRA-7622) Implement virtual tables
[ https://issues.apache.org/jira/browse/CASSANDRA-7622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16448699#comment-16448699 ]

Dinesh Joshi edited comment on CASSANDRA-7622 at 4/23/18 9:29 PM:
------------------------------------------------------------------
[~blerer] do you have a design or code that you can share? It would be great if you can post it. Is there a timeline around when you'll post it?

{quote}As there is already some effort going on for a proper pluggable storage solution, I came to the conclusion that we should drop that idea of {{Virtual Table}} and simply expose the system information through what we could call {{System Views}}. It will make the transition easier for people coming from the relational world and will help us to focus on what is really important for users, which is the usability of the whole thing.
{quote}

I am not fussy about naming. However, using the same terminology does confuse users, as they may expect the same feature set from Cassandra as they got in their relational database. I would personally avoid it.

was (Author: djoshi3):
[~blerer] do you have a design or code that you can share? It would be great if you can post it. Is there a timeline around when you'll post it?

> Implement virtual tables
> ------------------------
>
>                 Key: CASSANDRA-7622
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-7622
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: CQL
>            Reporter: Tupshin Harper
>            Assignee: Chris Lohfink
>            Priority: Major
>             Fix For: 4.x
>
>
> There are a variety of reasons to want virtual tables, which would be any
> table that would be backed by an API, rather than data explicitly managed and
> stored as sstables.
> One possible use case would be to expose JMX data through CQL as a
> resurrection of CASSANDRA-3527.
> Another is a more general framework to implement the ability to expose yaml
> configuration information. So it would be an alternate approach to
> CASSANDRA-7370.
> A possible implementation would be in terms of CASSANDRA-7443, but I am not
> presupposing.
cassandra git commit: ninja: remove out of date comments
Repository: cassandra Updated Branches: refs/heads/trunk 54de771e6 -> 6970ac215 ninja: remove out of date comments Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6970ac21 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6970ac21 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6970ac21 Branch: refs/heads/trunk Commit: 6970ac2154b77d6856a05183df0ab70fa6d661f8 Parents: 54de771 Author: Blake EgglestonAuthored: Mon Apr 23 14:06:43 2018 -0700 Committer: Blake Eggleston Committed: Mon Apr 23 14:06:43 2018 -0700 -- src/java/org/apache/cassandra/auth/INetworkAuthorizer.java | 3 --- src/java/org/apache/cassandra/auth/NetworkAuthCache.java | 3 --- 2 files changed, 6 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/6970ac21/src/java/org/apache/cassandra/auth/INetworkAuthorizer.java -- diff --git a/src/java/org/apache/cassandra/auth/INetworkAuthorizer.java b/src/java/org/apache/cassandra/auth/INetworkAuthorizer.java index 8ff058e..4582b5e 100644 --- a/src/java/org/apache/cassandra/auth/INetworkAuthorizer.java +++ b/src/java/org/apache/cassandra/auth/INetworkAuthorizer.java @@ -20,9 +20,6 @@ package org.apache.cassandra.auth; import org.apache.cassandra.exceptions.ConfigurationException; -/** - * Not part of the roles hierarchy?? How would that even work? - */ public interface INetworkAuthorizer { /** http://git-wip-us.apache.org/repos/asf/cassandra/blob/6970ac21/src/java/org/apache/cassandra/auth/NetworkAuthCache.java -- diff --git a/src/java/org/apache/cassandra/auth/NetworkAuthCache.java b/src/java/org/apache/cassandra/auth/NetworkAuthCache.java index 1c82460..15b1819 100644 --- a/src/java/org/apache/cassandra/auth/NetworkAuthCache.java +++ b/src/java/org/apache/cassandra/auth/NetworkAuthCache.java @@ -20,9 +20,6 @@ package org.apache.cassandra.auth; import org.apache.cassandra.config.DatabaseDescriptor; -/** - * Created by blakeeggleston on 12/14/17. 
- */ public class NetworkAuthCache extends AuthCache implements AuthCacheMBean { public NetworkAuthCache(INetworkAuthorizer authorizer)
[jira] [Commented] (CASSANDRA-7622) Implement virtual tables
[ https://issues.apache.org/jira/browse/CASSANDRA-7622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16448699#comment-16448699 ]

Dinesh Joshi commented on CASSANDRA-7622:
-----------------------------------------
[~blerer] do you have a design or code that you can share? It would be great if you can post it. Is there a timeline around when you'll post it?

> Implement virtual tables
> ------------------------
>
>                 Key: CASSANDRA-7622
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-7622
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: CQL
>            Reporter: Tupshin Harper
>            Assignee: Chris Lohfink
>            Priority: Major
>             Fix For: 4.x
>
>
> There are a variety of reasons to want virtual tables, which would be any
> table that would be backed by an API, rather than data explicitly managed and
> stored as sstables.
> One possible use case would be to expose JMX data through CQL as a
> resurrection of CASSANDRA-3527.
> Another is a more general framework to implement the ability to expose yaml
> configuration information. So it would be an alternate approach to
> CASSANDRA-7370.
> A possible implementation would be in terms of CASSANDRA-7443, but I am not
> presupposing.
[jira] [Commented] (CASSANDRA-14404) Transient Replication & Cheap Quorums: Decouple storage requirements from consensus group size using incremental repair
[ https://issues.apache.org/jira/browse/CASSANDRA-14404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16448626#comment-16448626 ] Duarte Nunes commented on CASSANDRA-14404: -- Ah, CL is now a function of RF + count(Witnesses). > Transient Replication & Cheap Quorums: Decouple storage requirements from > consensus group size using incremental repair > --- > > Key: CASSANDRA-14404 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14404 > Project: Cassandra > Issue Type: New Feature > Components: Coordination, Core, CQL, Distributed Metadata, Hints, > Local Write-Read Paths, Materialized Views, Repair, Secondary Indexes, > Testing, Tools >Reporter: Ariel Weisberg >Assignee: Ariel Weisberg >Priority: Major > Fix For: 4.0 > > > Transient Replication is an implementation of [Witness > Replicas|http://www2.cs.uh.edu/~paris/MYPAPERS/Icdcs86.pdf] that leverages > incremental repair to make full replicas consistent with transient replicas > that don't store the entire data set. Witness replicas are used in real world > systems such as Megastore and Spanner to increase availability inexpensively > without having to commit to more full copies of the database. Transient > replicas implement functionality similar to upgradable and temporary replicas > from the paper. > With transient replication the replication factor is increased beyond the > desired level of data redundancy by adding replicas that only store data when > sufficient full replicas are unavailable to store the data. These replicas > are called transient replicas. When incremental repair runs transient > replicas stream any data they have received to full replicas and once the > data is fully replicated it is dropped at the transient replicas. > Cheap quorums are a further set of optimizations on the write path to avoid > writing to transient replicas unless sufficient full replicas are available > as well as optimizations on the read path to prefer reading from transient > replicas. 
> When writing at quorum to a table configured to use transient
> replication the quorum will always prefer available full replicas over
> transient replicas so that transient replicas don't have to process writes.
> Rapid write protection (similar to rapid read protection) reduces tail
> latency when full replicas are temporarily late to respond by sending writes
> to additional replicas if necessary.
> Transient replicas can generally service reads faster because they don't have
> to do anything beyond bloom filter checks if they have no data. With vnodes and
> larger size clusters they will not have a large quantity of data even in
> failure cases where transient replicas start to serve a steady amount of
> write traffic for some of their transiently replicated ranges.
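The write-path preference described above can be put in back-of-envelope form. This is illustrative pseudologic under the stated description, not Cassandra's actual replica-selection code: quorum is taken over the whole replica set (full plus transient), but available full replicas are always chosen first, so transient replicas only see writes when full ones are down.

```python
# Illustrative sketch of "cheap quorum" replica selection, assuming a
# simple-majority quorum over full + transient replicas. Not actual
# Cassandra code.

def quorum(full, transient):
    # Simple majority over all replicas, full and transient alike.
    return (full + transient) // 2 + 1


def pick_write_targets(available_full, available_transient, full, transient):
    # Prefer full replicas; only add transient ones to reach quorum.
    need = quorum(full, transient)
    targets = list(available_full)[:need]
    if len(targets) < need:
        targets += list(available_transient)[:need - len(targets)]
    return targets
```

With 3 full and 2 transient replicas the quorum is 3: when all full replicas are up the write touches only them, and a transient replica steps in only once a full replica is unavailable.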
[jira] [Commented] (CASSANDRA-7622) Implement virtual tables
[ https://issues.apache.org/jira/browse/CASSANDRA-7622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16448475#comment-16448475 ]

Chris Lohfink commented on CASSANDRA-7622:
------------------------------------------
I think there are two things here:
# blanket table exposing ALL metrics (so that JMX isn't required)
# provide a good UX for exploring them

I think for #1 we still need a table like the one in the patch; otherwise every time a metric is added a curated table would also need to be updated, which might be prohibitive for one-off metrics that, while critical in some scenarios, are so far off the normal path that it would be wasted effort and overwhelming.

For #2 I was thinking of basically having an equivalent of the nodetool views++ (tablestats, tablehistograms, info, netstats, clientlist etc, but also some new things), but that needs to be broken into subtasks, since working on it before the mechanism for making them is nailed down is kind of a waste of time.

> Implement virtual tables
> ------------------------
>
>                 Key: CASSANDRA-7622
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-7622
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: CQL
>            Reporter: Tupshin Harper
>            Assignee: Chris Lohfink
>            Priority: Major
>             Fix For: 4.x
>
>
> There are a variety of reasons to want virtual tables, which would be any
> table that would be backed by an API, rather than data explicitly managed and
> stored as sstables.
> One possible use case would be to expose JMX data through CQL as a
> resurrection of CASSANDRA-3527.
> Another is a more general framework to implement the ability to expose yaml
> configuration information. So it would be an alternate approach to
> CASSANDRA-7370.
> A possible implementation would be in terms of CASSANDRA-7443, but I am not
> presupposing.
[jira] [Commented] (CASSANDRA-14401) Attempted serializing to buffer exceeded maximum of 65535 bytes
[ https://issues.apache.org/jira/browse/CASSANDRA-14401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16448466#comment-16448466 ] Artem Rokhin commented on CASSANDRA-14401: -- Yes, the issue was in the query we used. Sorry for bothering you. > Attempted serializing to buffer exceeded maximum of 65535 bytes > > > Key: CASSANDRA-14401 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14401 > Project: Cassandra > Issue Type: Bug >Reporter: Artem Rokhin >Priority: Major > > Cassandra version: 3.11.2 > 3 nodes cluster > The following exception appears on all 3 nodes and after awhile cluster > becomes unreposnsive > > {code} > java.lang.AssertionError: Attempted serializing to buffer exceeded maximum of > 65535 bytes: 67661 > at > org.apache.cassandra.utils.ByteBufferUtil.writeWithShortLength(ByteBufferUtil.java:309) > ~[apache-cassandra-3.11.2.jar:3.11.2] > at > org.apache.cassandra.db.filter.RowFilter$Expression$Serializer.serialize(RowFilter.java:547) > ~[apache-cassandra-3.11.2.jar:3.11.2] > at > org.apache.cassandra.db.filter.RowFilter$Serializer.serialize(RowFilter.java:1143) > ~[apache-cassandra-3.11.2.jar:3.11.2] > at > org.apache.cassandra.db.ReadCommand$Serializer.serialize(ReadCommand.java:726) > ~[apache-cassandra-3.11.2.jar:3.11.2] > at > org.apache.cassandra.db.ReadCommand$Serializer.serialize(ReadCommand.java:683) > ~[apache-cassandra-3.11.2.jar:3.11.2] > at > org.apache.cassandra.io.ForwardingVersionedSerializer.serialize(ForwardingVersionedSerializer.java:45) > ~[apache-cassandra-3.11.2.jar:3.11.2] > at org.apache.cassandra.net.MessageOut.serialize(MessageOut.java:120) > ~[apache-cassandra-3.11.2.jar:3.11.2] > at > org.apache.cassandra.net.OutboundTcpConnection.writeInternal(OutboundTcpConnection.java:385) > [apache-cassandra-3.11.2.jar:3.11.2] > at > org.apache.cassandra.net.OutboundTcpConnection.writeConnected(OutboundTcpConnection.java:337) > [apache-cassandra-3.11.2.jar:3.11.2] > at > 
org.apache.cassandra.net.OutboundTcpConnection.run(OutboundTcpConnection.java:263)
> [apache-cassandra-3.11.2.jar:3.11.2]
> {code}
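For context on why the assertion fires at exactly 65535: `writeWithShortLength` prefixes the buffer with an unsigned 16-bit length, so any value longer than 2**16 - 1 bytes cannot be encoded. A standalone illustration of the constraint (not Cassandra's actual code):

```python
# Why the limit is 65535: a 2-byte unsigned length prefix caps the payload
# at 2**16 - 1 bytes. Standalone illustration of the serializer constraint.
import struct

MAX_UNSIGNED_SHORT = 2 ** 16 - 1  # 65535


def write_with_short_length(buf):
    # Mirror the assertion from the stack trace: refuse oversized payloads.
    if len(buf) > MAX_UNSIGNED_SHORT:
        raise AssertionError(
            'Attempted serializing to buffer exceeded maximum of '
            '%d bytes: %d' % (MAX_UNSIGNED_SHORT, len(buf)))
    # Big-endian unsigned short length prefix, then the payload itself.
    return struct.pack('>H', len(buf)) + buf
```

The 67661-byte RowFilter expression in the trace simply cannot fit behind a 2-byte length prefix, which is why the resolution was on the query side rather than in the serializer.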
[jira] [Updated] (CASSANDRA-14360) Allow nodetool toppartitions without specifying table
[ https://issues.apache.org/jira/browse/CASSANDRA-14360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dinesh Joshi updated CASSANDRA-14360:
-------------------------------------
    Reviewer: Dinesh Joshi

> Allow nodetool toppartitions without specifying table
> -----------------------------------------------------
>
>                 Key: CASSANDRA-14360
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-14360
>             Project: Cassandra
>          Issue Type: Improvement
>            Reporter: Chris Lohfink
>            Assignee: Chris Lohfink
>            Priority: Major
>
>
> It can be hard to determine even which table is the one with the most issues, so
> determining whether a single dominant partition is being updated or queried
> would be nicer without having to specify the table.
[jira] [Updated] (CASSANDRA-13985) Support restricting reads and writes to specific datacenters on a per user basis
[ https://issues.apache.org/jira/browse/CASSANDRA-13985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Blake Eggleston updated CASSANDRA-13985:
----------------------------------------
    Resolution: Fixed
        Status: Resolved  (was: Ready to Commit)

committed to trunk as {{54de771e643e9cc64d1f5dd28b5de8a9a91a219e}}

> Support restricting reads and writes to specific datacenters on a per user
> basis
> --------------------------------------------------------------------------
>
>                 Key: CASSANDRA-13985
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-13985
>             Project: Cassandra
>          Issue Type: Improvement
>            Reporter: Blake Eggleston
>            Assignee: Blake Eggleston
>            Priority: Minor
>             Fix For: 4.0
>
>
> There are a few use cases where it makes sense to restrict the operations a
> given user can perform in specific data centers. The obvious use case is the
> production/analytics datacenter configuration. You don’t want the production
> user to be reading or writing to the analytics datacenter, and you don’t want
> the analytics user to be reading from the production datacenter.
> Although we expect users to get this right on the application level, we
> should also be able to enforce this at the database level. The first approach
> that comes to mind would be to support an optional DC parameter when granting
> select and modify permissions to roles. Something like {{GRANT SELECT ON
> some_keyspace TO that_user IN DC dc1}}, statements that omit the dc would
> implicitly be granting permission to all dcs. However, I’m not married to
> this approach.
cassandra git commit: Add network authz
Repository: cassandra
Updated Branches: refs/heads/trunk 63945228f -> 54de771e6

Add network authz

Patch by Blake Eggleston; Reviewed by Sam Tunnicliffe for CASSANDRA-13985

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/54de771e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/54de771e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/54de771e

Branch: refs/heads/trunk
Commit: 54de771e643e9cc64d1f5dd28b5de8a9a91a219e
Parents: 6394522
Author: Blake Eggleston
Authored: Wed Dec 13 13:17:05 2017 -0800
Committer: Blake Eggleston
Committed: Mon Apr 23 09:55:31 2018 -0700

 CHANGES.txt                                     |   1 +
 NEWS.txt                                        |   4 +
 conf/cassandra.yaml                             |  10 +
 doc/source/cql/security.rst                     |  19 ++
 pylib/cqlshlib/cql3handling.py                  |   2 +
 src/antlr/Lexer.g                               |   2 +
 src/antlr/Parser.g                              |  26 +-
 .../auth/AllowAllNetworkAuthorizer.java         |  47
 .../org/apache/cassandra/auth/AuthConfig.java   |  10 +-
 .../org/apache/cassandra/auth/AuthKeyspace.java |  10 +-
 .../cassandra/auth/AuthenticatedUser.java       |   7 +
 .../cassandra/auth/CassandraAuthorizer.java     |  26 +-
 .../auth/CassandraNetworkAuthorizer.java        | 157 +++
 .../cassandra/auth/CassandraRoleManager.java    |  18 +-
 .../apache/cassandra/auth/DCPermissions.java    | 217
 .../cassandra/auth/INetworkAuthorizer.java      |  63 +
 .../apache/cassandra/auth/NetworkAuthCache.java |  41 +++
 .../org/apache/cassandra/config/Config.java     |   1 +
 .../cassandra/config/DatabaseDescriptor.java    |  12 +
 .../cql3/statements/AlterRoleStatement.java     |  16 +-
 .../cql3/statements/CreateRoleStatement.java    |  13 +-
 .../cql3/statements/DropRoleStatement.java      |   1 +
 .../cql3/statements/ListRolesStatement.java     |   5 +-
 .../cql3/statements/ListUsersStatement.java     |   5 +-
 .../org/apache/cassandra/dht/Datacenters.java   |  63 +
 .../locator/NetworkTopologyStrategy.java        |  26 +-
 .../apache/cassandra/service/ClientState.java   |   6 +
 .../cassandra/service/StorageService.java       |   1 +
 .../org/apache/cassandra/utils/FBUtilities.java |  15 ++
 .../unit/org/apache/cassandra/SchemaLoader.java |  20 ++
 .../auth/CassandraNetworkAuthorizerTest.java    | 259 +++
 .../config/DatabaseDescriptorRefTest.java       |   1 +
 .../cql3/statements/AlterRoleStatementTest.java |  73 ++
 .../statements/CreateRoleStatementTest.java     |  72 ++
 .../statements/CreateUserStatementTest.java     |  46
 .../cql3/validation/operations/CreateTest.java  |   5 +-
 36 files changed, 1246 insertions(+), 54 deletions(-)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/54de771e/CHANGES.txt
----------------------------------------------------------------------
diff --git a/CHANGES.txt b/CHANGES.txt
index 4cdd8ba..6976c7f 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 4.0
+ * Add network authz (CASSANDRA-13985)
  * Use the correct IP/Port for Streaming when localAddress is left unbound (CASSANDRA-14389)
  * nodetool listsnapshots is missing local system keyspace snapshots (CASSANDRA-14381)
  * Remove StreamCoordinator.streamExecutor thread pool (CASSANDRA-14402)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/54de771e/NEWS.txt
----------------------------------------------------------------------
diff --git a/NEWS.txt b/NEWS.txt
index 9216bc0..a13f633 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -72,6 +72,10 @@ New features
    See CASSANDRA-13848 for more detail
  - Metric for coordinator writes per table has been added. See CASSANDRA-14232
  - Nodetool cfstats now has options to sort by various metrics as well as limit results.
+ - Operators can restrict login user activity to one or more datacenters. See `network_authorizer`
+   in cassandra.yaml, and the docs for create and alter role statements. CASSANDRA-13985
+ - Roles altered from login=true to login=false will prevent existing connections from executing any
+   statements after the cache has been refreshed. CASSANDRA-13985

Upgrading
---------

http://git-wip-us.apache.org/repos/asf/cassandra/blob/54de771e/conf/cassandra.yaml
----------------------------------------------------------------------
diff --git a/conf/cassandra.yaml b/conf/cassandra.yaml
index d466072..7e4b2c2 100644
--- a/conf/cassandra.yaml
+++ b/conf/cassandra.yaml
@@ -122,6 +122,16 @@ authorizer: AllowAllAuthorizer
 # increase system_auth keyspace replication factor if you use this role manager.
 role_manager: CassandraRoleManager

+# Network
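The NEWS.txt entry above says operators can restrict a login role's activity to one or more datacenters. A minimal Python sketch of the check a network authorizer has to make — the class and method names echo the patch's `DCPermissions`, but this code is an illustrative assumption, not the committed Java implementation:

```python
# Illustrative sketch only: mirrors the idea behind the patch's DCPermissions
# (restrict a role to a set of datacenters), not the actual Java API.

class DCPermissions:
    """Grants access either to all datacenters or to an explicit subset."""

    def __init__(self, dcs=None):
        # dcs=None models "access to all datacenters" (the default for
        # roles that have not been restricted).
        self.dcs = frozenset(dcs) if dcs is not None else None

    def can_access(self, dc):
        return self.dcs is None or dc in self.dcs


def authorize_login(perms, local_dc):
    """Reject a connection when the role may not use the local datacenter."""
    if not perms.can_access(local_dc):
        raise PermissionError("role has no access to datacenter %r" % local_dc)


restricted = DCPermissions({"dc1"})
assert restricted.can_access("dc1")
assert not restricted.can_access("dc2")
assert DCPermissions().can_access("dc2")  # unrestricted role
```

Under this model, a coordinator in an off-limits datacenter refuses the login outright; roles never restricted behave exactly as before.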
[jira] [Updated] (CASSANDRA-14404) Transient Replication & Cheap Quorums: Decouple storage requirements from consensus group size using incremental repair
[ https://issues.apache.org/jira/browse/CASSANDRA-14404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jeremy Hanna updated CASSANDRA-14404:
-------------------------------------
    Description: 
Transient Replication is an implementation of [Witness Replicas|http://www2.cs.uh.edu/~paris/MYPAPERS/Icdcs86.pdf] that leverages incremental repair to make full replicas consistent with transient replicas that don't store the entire data set. Witness replicas are used in real-world systems such as Megastore and Spanner to increase availability inexpensively without having to commit to more full copies of the database. Transient replicas implement functionality similar to upgradable and temporary replicas from the paper.

With transient replication, the replication factor is increased beyond the desired level of data redundancy by adding replicas that only store data when sufficient full replicas are unavailable to store the data. These replicas are called transient replicas. When incremental repair runs, transient replicas stream any data they have received to full replicas, and once the data is fully replicated it is dropped at the transient replicas.

Cheap quorums are a further set of optimizations on the write path to avoid writing to transient replicas unless sufficient full replicas are available, as well as optimizations on the read path to prefer reading from transient replicas. When writing at quorum to a table configured to use transient replication, the quorum will always prefer available full replicas over transient replicas so that transient replicas don't have to process writes. Rapid write protection (similar to rapid read protection) reduces tail latency when full replicas are temporarily late to respond by sending writes to additional replicas if necessary.

Transient replicas can generally service reads faster because they don't have to do anything beyond bloom filter checks if they have no data. With vnodes and larger clusters they will not have a large quantity of data even in failure cases where transient replicas start to serve a steady amount of write traffic for some of their transiently replicated ranges.

  was:
Transient Replication is an implementation of [Witness Replicas|http://www2.cs.uh.edu/~paris/MYPAPERS/Icdcs86.pdf (https://www.google.com/url?sa=t=j==s=web=1=rja=8=0ahUKEwi834a%E2%80%948HaAhWCneAKHdj8DzAQFggpMAA=http%3A%2F%2Fwww2.cs.uh.edu%2F~paris%2FMYPAPERS%2FIcdcs86.pdf=AOvVaw0GfCaaAtdzHiM65du1-qeI)] that leverages incremental repair to make full replicas consistent with transient replicas that don't store the entire data set. Witness replicas are used in real world systems such as Megastore and Spanner to increase availability inexpensively without having to commit to more full copies of the database. Transient replicas implement functionality similar to upgradable and temporary replicas from the paper.

With transient replication the replication factor is increased beyond the desired level of data redundancy by adding replicas that only store data when sufficient full replicas are unavailable to store the data. These replicas are called transient replicas. When incremental repair runs transient replicas stream any data they have received to full replicas and once the data is fully replicated it is dropped at the transient replicas.

Cheap quorums are a further set of optimizations on the write path to avoid writing to transient replicas unless sufficient full replicas are available as well as optimizations on the read path to prefer reading from transient replicas. When writing at quorum to a table configured to use transient replication the quorum will always prefer available full replicas over transient replicas so that transient replicas don't have to process writes.

Rapid write protection (similar to rapid read protection) reduces tail latency when full replicas are temporarily late to respond by sending writes to additional replicas if necessary. Transient replicas can generally service reads faster because they don't have do anything beyond bloom filter checks if they have no data. With vnodes and larger size clusters they will not have a large quantity of data even in failure cases where transient replicas start to serve a steady amount of write traffic for some of their transiently replicated ranges.


> Transient Replication & Cheap Quorums: Decouple storage requirements from
> consensus group size using incremental repair
> -------------------------------------------------------------------------
>
> Key: CASSANDRA-14404
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14404
> Project: Cassandra
> Issue Type: New Feature
> Components: Coordination, Core, CQL, Distributed Metadata, Hints,
> Local Write-Read Paths, Materialized Views, Repair, Secondary Indexes,
> Testing,
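The "cheap quorums" write-path behavior described in the ticket — when writing at quorum, prefer available full replicas over transient ones so that transient replicas only take writes when full replicas are down — can be sketched like this (a hypothetical helper for illustration, not Cassandra's actual replica-selection code):

```python
def select_write_replicas(full, transient, live, quorum):
    """Pick quorum replicas for a write, preferring live full replicas so
    transient replicas only store data when full replicas are unavailable."""
    live_full = [r for r in full if r in live]
    if len(live_full) >= quorum:
        return live_full[:quorum]
    # Not enough live full replicas: top up the quorum with transient ones.
    live_transient = [r for r in transient if r in live]
    chosen = live_full + live_transient[: quorum - len(live_full)]
    if len(chosen) < quorum:
        raise RuntimeError("cannot achieve quorum")
    return chosen


# RF=3 with one transient replica, quorum of 2:
full, transient = ["n1", "n2"], ["n3"]

# All nodes up: the transient replica processes no writes.
assert select_write_replicas(full, transient, {"n1", "n2", "n3"}, 2) == ["n1", "n2"]

# A full replica is down: the transient replica stands in, and incremental
# repair later streams its data back to full replicas.
assert select_write_replicas(full, transient, {"n1", "n3"}, 2) == ["n1", "n3"]
```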
[jira] [Updated] (CASSANDRA-14281) Improve LatencyMetrics performance by reducing write path processing
[ https://issues.apache.org/jira/browse/CASSANDRA-14281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chris Lohfink updated CASSANDRA-14281:
--------------------------------------
    Status: Ready to Commit  (was: Patch Available)

> Improve LatencyMetrics performance by reducing write path processing
> --------------------------------------------------------------------
>
> Key: CASSANDRA-14281
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14281
> Project: Cassandra
> Issue Type: Improvement
> Components: Core
> Reporter: Michael Burman
> Assignee: Michael Burman
> Priority: Major
> Attachments: bench.png, bench2.png, benchmark.html, benchmark2.png
>
> Currently for each write/read/rangequery/CAS touching the CFS we write a
> latency metric, which takes a lot of processing time (up to 66% of the total
> processing time if the update was empty).
> The way latencies are recorded is to use both a dropwizard "Timer" and a
> "Counter". The latter is used for totalLatency and the former is a decaying
> metric for rates and certain percentile metrics. We then replicate all of
> these CFS writes to the KeyspaceMetrics and globalWriteLatencies.
> Instead of doing this in the write phase, we should merge the metrics when
> they're read. Reads of these metrics are a much less common occurrence, so we
> save a lot of CPU time in total. This also speeds up the write path.
> Currently, the DecayingEstimatedHistogramReservoir acquires a lock for each
> update operation, which causes contention when more than one thread is
> updating the histogram. This impacts scalability when using larger machines.
> We should make it as lock-free as possible and also avoid a single
> CAS-update from blocking all the concurrent threads from making an update.

-- 
This message was sent by Atlassian JIRA (v7.6.3#76005)
-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org
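The core idea of the ticket — do only a cheap per-table update on the write path, and compute keyspace/global aggregates lazily when the metrics are read — can be sketched as follows (illustrative only; the real implementation works on dropwizard reservoirs, not plain Python lists):

```python
from collections import defaultdict


class LatencyRecorder:
    """Write path does one cheap histogram-bucket increment per table;
    keyspace-level views are merged on demand at read time."""

    BUCKETS = [1, 10, 100, 1000, 10000]  # bucket upper bounds (micros)

    def __init__(self):
        # One fixed-size bucket array per table; last slot is overflow.
        self.table_hist = defaultdict(lambda: [0] * (len(self.BUCKETS) + 1))

    def record(self, table, latency_micros):
        for i, bound in enumerate(self.BUCKETS):
            if latency_micros <= bound:
                self.table_hist[table][i] += 1
                return
        self.table_hist[table][-1] += 1  # overflow bucket

    def keyspace_view(self, tables):
        # Merging happens here, on the rare read, instead of on every write.
        merged = [0] * (len(self.BUCKETS) + 1)
        for t in tables:
            for i, count in enumerate(self.table_hist[t]):
                merged[i] += count
        return merged


rec = LatencyRecorder()
rec.record("t1", 5)
rec.record("t2", 50)
assert rec.keyspace_view(["t1", "t2"]) == [0, 1, 1, 0, 0, 0]
```

The write-path cost is a single array increment regardless of how many aggregate views exist, which is the scalability win the ticket is after.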
[jira] [Commented] (CASSANDRA-14411) Use Bounds instead of Range to represent sstable first/last token when checking how to anticompact sstables
[ https://issues.apache.org/jira/browse/CASSANDRA-14411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16448415#comment-16448415 ] Blake Eggleston commented on CASSANDRA-14411: - circle seems to be down, but +1 assuming tests look good > Use Bounds instead of Range to represent sstable first/last token when > checking how to anticompact sstables > --- > > Key: CASSANDRA-14411 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14411 > Project: Cassandra > Issue Type: Improvement >Reporter: Marcus Eriksson >Assignee: Marcus Eriksson >Priority: Major > > There is currently a chance of missing marking a token as repaired due to the > fact that we use Range which are (a, b] to represent first/last token in > sstables instead of Bounds which are [a, b]. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-14411) Use Bounds instead of Range to represent sstable first/last token when checking how to anticompact sstables
[ https://issues.apache.org/jira/browse/CASSANDRA-14411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Blake Eggleston updated CASSANDRA-14411: Status: Ready to Commit (was: Patch Available) > Use Bounds instead of Range to represent sstable first/last token when > checking how to anticompact sstables > --- > > Key: CASSANDRA-14411 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14411 > Project: Cassandra > Issue Type: Improvement >Reporter: Marcus Eriksson >Assignee: Marcus Eriksson >Priority: Major > > There is currently a chance of missing marking a token as repaired due to the > fact that we use Range which are (a, b] to represent first/last token in > sstables instead of Bounds which are [a, b]. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
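The distinction the ticket rests on — a half-open Range (a, b] can exclude an sstable's first token, while a closed Bounds [a, b] cannot — is easy to demonstrate with minimal classes mirroring the semantics (hypothetical stand-ins, not Cassandra's actual dht types):

```python
class Range:
    """Half-open interval (left, right], as used for token ranges."""

    def __init__(self, left, right):
        self.left, self.right = left, right

    def contains(self, token):
        return self.left < token <= self.right


class Bounds:
    """Closed interval [left, right]."""

    def __init__(self, left, right):
        self.left, self.right = left, right

    def contains(self, token):
        return self.left <= token <= self.right


# An sstable whose first and last token are both 100: representing it as
# Range(100, 100) makes it look as if it covers nothing, so checks against
# it can miss the token entirely.
assert not Range(100, 100).contains(100)
assert Bounds(100, 100).contains(100)
```

The degenerate single-token sstable is exactly the case where the Range representation loses the first token, which is why the fix switches to Bounds.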
[jira] [Commented] (CASSANDRA-7622) Implement virtual tables
[ https://issues.apache.org/jira/browse/CASSANDRA-7622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16448391#comment-16448391 ] Chris Lohfink commented on CASSANDRA-7622: -- > gauge value are simply returned as TEXT which prevent you to do arithmetic > operations or aggregation will fix that. you can do arithmetic operations, searches and aggregations on them but the type was wrong. > all the C* metrics. Streaming metrics, Cache metrics ... all of them should > be exposed in the best possible way for an admin. Planned on it, but that makes patch too big. once the implementation is done can make sub tasks for others. What would you think the table schema should look like? > Implement virtual tables > > > Key: CASSANDRA-7622 > URL: https://issues.apache.org/jira/browse/CASSANDRA-7622 > Project: Cassandra > Issue Type: Improvement > Components: CQL >Reporter: Tupshin Harper >Assignee: Chris Lohfink >Priority: Major > Fix For: 4.x > > > There are a variety of reasons to want virtual tables, which would be any > table that would be backed by an API, rather than data explicitly managed and > stored as sstables. > One possible use case would be to expose JMX data through CQL as a > resurrection of CASSANDRA-3527. > Another is a more general framework to implement the ability to expose yaml > configuration information. So it would be an alternate approach to > CASSANDRA-7370. > A possible implementation would be in terms of CASSANDRA-7443, but I am not > presupposing. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-7622) Implement virtual tables
[ https://issues.apache.org/jira/browse/CASSANDRA-7622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16448357#comment-16448357 ]

Benjamin Lerer commented on CASSANDRA-7622:
-------------------------------------------
{quote}table metrics (all of them)
{quote}
The schema of this table could be much better in my opinion. The user needs to know which metric type corresponds to the metric he is looking at to be able to fetch only the columns that are meaningful for that metric. Otherwise he will just get a bunch of {{NULL}} values. {{Gauge}} values are simply returned as {{TEXT}}, which prevents you from doing arithmetic operations or aggregations if you want to. As an admin I would expect a more powerful schema.

When I mention all metrics I am referring to all the C* metrics. Streaming metrics, cache metrics ... all of them should be exposed in the best possible way for an admin.

> Implement virtual tables
> ------------------------
>
> Key: CASSANDRA-7622
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7622
> Project: Cassandra
> Issue Type: Improvement
> Components: CQL
> Reporter: Tupshin Harper
> Assignee: Chris Lohfink
> Priority: Major
> Fix For: 4.x
>
> There are a variety of reasons to want virtual tables, which would be any
> table that would be backed by an API, rather than data explicitly managed and
> stored as sstables.
> One possible use case would be to expose JMX data through CQL as a
> resurrection of CASSANDRA-3527.
> Another is a more general framework to implement the ability to expose yaml
> configuration information. So it would be an alternate approach to
> CASSANDRA-7370.
> A possible implementation would be in terms of CASSANDRA-7443, but I am not
> presupposing.

-- 
This message was sent by Atlassian JIRA (v7.6.3#76005)
-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-14404) Transient Replication & Cheap Quorums: Decouple storage requirements from consensus group size using incremental repair
[ https://issues.apache.org/jira/browse/CASSANDRA-14404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16448355#comment-16448355 ]

Ariel Weisberg commented on CASSANDRA-14404:
--------------------------------------------
This is not sloppy quorums. Sloppy quorums don't provide strong consistency. We still enforce strict quorum membership.

From the Dynamo paper:
{quote}To remedy this it does not enforce strict quorum membership and instead it uses a “sloppy quorum”; all read and write operations are performed on the first N healthy nodes from the preference list, which may not always be the first N nodes encountered while walking the consistent hashing ring.
{quote}
We aren't going to allow you to use transient replication with 2i or MV in version 1.

> Transient Replication & Cheap Quorums: Decouple storage requirements from
> consensus group size using incremental repair
> -------------------------------------------------------------------------
>
> Key: CASSANDRA-14404
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14404
> Project: Cassandra
> Issue Type: New Feature
> Components: Coordination, Core, CQL, Distributed Metadata, Hints,
> Local Write-Read Paths, Materialized Views, Repair, Secondary Indexes,
> Testing, Tools
> Reporter: Ariel Weisberg
> Assignee: Ariel Weisberg
> Priority: Major
> Fix For: 4.0
>
> Transient Replication is an implementation of [Witness
> Replicas|http://www2.cs.uh.edu/~paris/MYPAPERS/Icdcs86.pdf
> (https://www.google.com/url?sa=t=j==s=web=1=rja=8=0ahUKEwi834a%E2%80%948HaAhWCneAKHdj8DzAQFggpMAA=http%3A%2F%2Fwww2.cs.uh.edu%2F~paris%2FMYPAPERS%2FIcdcs86.pdf=AOvVaw0GfCaaAtdzHiM65du1-qeI)]
> that leverages incremental repair to make full replicas consistent with
> transient replicas that don't store the entire data set. Witness replicas are
> used in real world systems such as Megastore and Spanner to increase
> availability inexpensively without having to commit to more full copies of
> the database. Transient replicas implement functionality similar to
> upgradable and temporary replicas from the paper.
> With transient replication the replication factor is increased beyond the
> desired level of data redundancy by adding replicas that only store data when
> sufficient full replicas are unavailable to store the data. These replicas
> are called transient replicas. When incremental repair runs transient
> replicas stream any data they have received to full replicas and once the
> data is fully replicated it is dropped at the transient replicas.
> Cheap quorums are a further set of optimizations on the write path to avoid
> writing to transient replicas unless sufficient full replicas are available
> as well as optimizations on the read path to prefer reading from transient
> replicas. When writing at quorum to a table configured to use transient
> replication the quorum will always prefer available full replicas over
> transient replicas so that transient replicas don't have to process writes.
> Rapid write protection (similar to rapid read protection) reduces tail
> latency when full replicas are temporarily late to respond by sending writes
> to additional replicas if necessary.
> Transient replicas can generally service reads faster because they don't have
> to do anything beyond bloom filter checks if they have no data. With vnodes
> and larger size clusters they will not have a large quantity of data even in
> failure cases where transient replicas start to serve a steady amount of
> write traffic for some of their transiently replicated ranges.

-- 
This message was sent by Atlassian JIRA (v7.6.3#76005)
-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org
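The contrast Ariel draws — Dynamo's sloppy quorum walks past failures to any first-N healthy nodes, while strict quorum membership only ever counts the fixed natural replicas — can be sketched with two toy selection functions (assumed helper names, illustration only):

```python
def strict_quorum(preference_list, healthy, n):
    """Strict membership: only the first N nodes of the preference list may
    serve the operation; a down member simply shrinks what responds."""
    members = preference_list[:n]
    return [node for node in members if node in healthy]


def sloppy_quorum(preference_list, healthy, n):
    """Dynamo-style: walk the ring past failures and take the first N healthy
    nodes, even if they are not the natural replicas for the key."""
    return [node for node in preference_list if node in healthy][:n]


ring = ["a", "b", "c", "d", "e"]
healthy = {"a", "c", "d", "e"}  # "b" is down

assert strict_quorum(ring, healthy, 3) == ["a", "c"]       # only 2 of 3 respond
assert sloppy_quorum(ring, healthy, 3) == ["a", "c", "d"]  # "d" stands in for "b"
```

Because the sloppy quorum may contain stand-ins that never held the authoritative data, two overlapping sloppy quorums need not intersect in a node that saw the latest write, which is why it cannot provide the strong consistency that strict membership does.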
[jira] [Comment Edited] (CASSANDRA-7622) Implement virtual tables
[ https://issues.apache.org/jira/browse/CASSANDRA-7622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16448330#comment-16448330 ] Benjamin Lerer edited comment on CASSANDRA-7622 at 4/23/18 3:45 PM: {quote}You do not get {{CREATE TABLE USING CompactionStats compaction_stats}}, it still displays the schema how you would expect with a {{create table}} output because thats the way cqlsh is created to take a schema and display it.{quote} Do not get me wrong. Both approach have some issues in my opinion even if CREATE TABLE USING CompactionStats compaction_stats is less misleading. The problem of displaying the table as a normal one is that it let the user believe that it is a normal table which is not the case in many ways. {quote} While I think its possible to use that shim to do other things I very much doubt it would ever be used as such.{quote} Then we do not need that logic. It is wrong in my opinion to add some logic just in case we might need it one day. was (Author: blerer): {quote}You do not get {{CREATE TABLE USING CompactionStats compaction_stats}}, it still displays the schema how you would expect with a {{create table}} output because thats the way cqlsh is created to take a schema and display it.\{quote} Do not get me wrong. Both approach have some issues in my opinion even if CREATE TABLE USING CompactionStats compaction_stats is less misleading. The problem of displaying the table as a normal one is that it let the user believe that it is a normal table which is not the case in many ways. {quote} While I think its possible to use that shim to do other things I very much doubt it would ever be used as such.\{quote} Then we do not need that logic. It is wrong in my opinion to add some logic just in case we might need it one day. 
> Implement virtual tables > > > Key: CASSANDRA-7622 > URL: https://issues.apache.org/jira/browse/CASSANDRA-7622 > Project: Cassandra > Issue Type: Improvement > Components: CQL >Reporter: Tupshin Harper >Assignee: Chris Lohfink >Priority: Major > Fix For: 4.x > > > There are a variety of reasons to want virtual tables, which would be any > table that would be backed by an API, rather than data explicitly managed and > stored as sstables. > One possible use case would be to expose JMX data through CQL as a > resurrection of CASSANDRA-3527. > Another is a more general framework to implement the ability to expose yaml > configuration information. So it would be an alternate approach to > CASSANDRA-7370. > A possible implementation would be in terms of CASSANDRA-7443, but I am not > presupposing. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Comment Edited] (CASSANDRA-7622) Implement virtual tables
[ https://issues.apache.org/jira/browse/CASSANDRA-7622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16448330#comment-16448330 ] Benjamin Lerer edited comment on CASSANDRA-7622 at 4/23/18 3:45 PM: {quote}You do not get {{CREATE TABLE USING CompactionStats compaction_stats}}, it still displays the schema how you would expect with a {{create table}} output because thats the way cqlsh is created to take a schema and display it.{quote} Do not get me wrong. Both approach have some issues in my opinion even if {{CREATE TABLE USING CompactionStats compaction_stats}} is less misleading. The problem of displaying the table as a normal one is that it let the user believe that it is a normal table which is not the case in many ways. {quote} While I think its possible to use that shim to do other things I very much doubt it would ever be used as such.{quote} Then we do not need that logic. It is wrong in my opinion to add some logic just in case we might need it one day. was (Author: blerer): {quote}You do not get {{CREATE TABLE USING CompactionStats compaction_stats}}, it still displays the schema how you would expect with a {{create table}} output because thats the way cqlsh is created to take a schema and display it.{quote} Do not get me wrong. Both approach have some issues in my opinion even if CREATE TABLE USING CompactionStats compaction_stats is less misleading. The problem of displaying the table as a normal one is that it let the user believe that it is a normal table which is not the case in many ways. {quote} While I think its possible to use that shim to do other things I very much doubt it would ever be used as such.{quote} Then we do not need that logic. It is wrong in my opinion to add some logic just in case we might need it one day. 
> Implement virtual tables > > > Key: CASSANDRA-7622 > URL: https://issues.apache.org/jira/browse/CASSANDRA-7622 > Project: Cassandra > Issue Type: Improvement > Components: CQL >Reporter: Tupshin Harper >Assignee: Chris Lohfink >Priority: Major > Fix For: 4.x > > > There are a variety of reasons to want virtual tables, which would be any > table that would be backed by an API, rather than data explicitly managed and > stored as sstables. > One possible use case would be to expose JMX data through CQL as a > resurrection of CASSANDRA-3527. > Another is a more general framework to implement the ability to expose yaml > configuration information. So it would be an alternate approach to > CASSANDRA-7370. > A possible implementation would be in terms of CASSANDRA-7443, but I am not > presupposing. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-7622) Implement virtual tables
[ https://issues.apache.org/jira/browse/CASSANDRA-7622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16448330#comment-16448330 ] Benjamin Lerer commented on CASSANDRA-7622: --- {quote}You do not get {{CREATE TABLE USING CompactionStats compaction_stats}}, it still displays the schema how you would expect with a {{create table}} output because thats the way cqlsh is created to take a schema and display it.\{quote} Do not get me wrong. Both approach have some issues in my opinion even if CREATE TABLE USING CompactionStats compaction_stats is less misleading. The problem of displaying the table as a normal one is that it let the user believe that it is a normal table which is not the case in many ways. {quote} While I think its possible to use that shim to do other things I very much doubt it would ever be used as such.\{quote} Then we do not need that logic. It is wrong in my opinion to add some logic just in case we might need it one day. > Implement virtual tables > > > Key: CASSANDRA-7622 > URL: https://issues.apache.org/jira/browse/CASSANDRA-7622 > Project: Cassandra > Issue Type: Improvement > Components: CQL >Reporter: Tupshin Harper >Assignee: Chris Lohfink >Priority: Major > Fix For: 4.x > > > There are a variety of reasons to want virtual tables, which would be any > table that would be backed by an API, rather than data explicitly managed and > stored as sstables. > One possible use case would be to expose JMX data through CQL as a > resurrection of CASSANDRA-3527. > Another is a more general framework to implement the ability to expose yaml > configuration information. So it would be an alternate approach to > CASSANDRA-7370. > A possible implementation would be in terms of CASSANDRA-7443, but I am not > presupposing. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-7622) Implement virtual tables
[ https://issues.apache.org/jira/browse/CASSANDRA-7622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16448295#comment-16448295 ]

Chris Lohfink commented on CASSANDRA-7622:
------------------------------------------
You do not get {{CREATE TABLE USING CompactionStats compaction_stats}}; it still displays the schema how you would expect with a {{create table}} output, because that's the way cqlsh is built to take a schema and display it.

This ticket does not add pluggable storage. It creates a system_info keyspace that has some pre-created (with more to come if it ever gets through) tables displaying internal information, like compaction stats, table metrics (all of them), all configuration settings (with some settable), and ring state. It makes it (hopefully) easy to make more of these as well, with any kind of querying you want (normal CQL query limitations do not apply, with the exception of some of the ORDER BY ones).

This patch creates a shim that sits between the parsing of the query and actually executing it. The only implementation of that shim is the one for system_info. While I think it's possible to use that shim to do other things, I very much doubt it would ever be used as such.

> Implement virtual tables
> ------------------------
>
> Key: CASSANDRA-7622
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7622
> Project: Cassandra
> Issue Type: Improvement
> Components: CQL
> Reporter: Tupshin Harper
> Assignee: Chris Lohfink
> Priority: Major
> Fix For: 4.x
>
> There are a variety of reasons to want virtual tables, which would be any
> table that would be backed by an API, rather than data explicitly managed and
> stored as sstables.
> One possible use case would be to expose JMX data through CQL as a
> resurrection of CASSANDRA-3527.
> Another is a more general framework to implement the ability to expose yaml
> configuration information. So it would be an alternate approach to
> CASSANDRA-7370.
> A possible implementation would be in terms of CASSANDRA-7443, but I am not
> presupposing.
-- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
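The shim Chris describes — intercepting a parsed query and serving rows from an internal API instead of from sstables — can be sketched in a few lines (hypothetical names and a toy metric source; the actual patch wires this into CQL statement execution in Java):

```python
class VirtualTable:
    """A table backed by a callable rather than by stored data."""

    def __init__(self, name, columns, row_source):
        self.name = name
        self.columns = columns
        self.row_source = row_source  # callable returning tuples of values

    def select(self):
        # Every read re-queries the backing API, so results are always live
        # and nothing is ever written to disk.
        return [dict(zip(self.columns, row)) for row in self.row_source()]


# A toy "compaction_stats" virtual table backed by an in-process dict
# standing in for the real compaction manager.
pending = {"ks1.t1": 2, "ks1.t2": 0}
compaction_stats = VirtualTable(
    "compaction_stats",
    ("table_name", "pending_tasks"),
    lambda: sorted(pending.items()),
)

assert compaction_stats.select() == [
    {"table_name": "ks1.t1", "pending_tasks": 2},
    {"table_name": "ks1.t2", "pending_tasks": 0},
]
```

Because the row source is consulted per read, a subsequent change to `pending` is visible on the next `select()` with no storage involved — the property that distinguishes a virtual table from a normal one.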
[jira] [Commented] (CASSANDRA-14400) Subrange repair doesn't always mark as repaired
[ https://issues.apache.org/jira/browse/CASSANDRA-14400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16448244#comment-16448244 ] Blake Eggleston commented on CASSANDRA-14400: - Nope, it's exclusively compaction. They were probably just compacted away on startup. > Subrange repair doesn't always mark as repaired > --- > > Key: CASSANDRA-14400 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14400 > Project: Cassandra > Issue Type: Bug >Reporter: Kurt Greaves >Priority: Major > > So was just messing around with subrange repair on trunk and found that if I > generated an SSTable with a single token and then tried to repair that > SSTable using subrange repairs it wouldn't get marked as repaired. > > Before repair: > {code:java} > First token: -9223362383595311662 (derphead4471291) > Last token: -9223362383595311662 (derphead4471291) > Repaired at: 0 > Pending repair: 862395e0-4394-11e8-8f20-3b8ee110d005 > {code} > Repair command: > {code} > ccm node1 nodetool "repair -st -9223362383595311663 -et -9223362383595311661 > aoeu" > [2018-04-19 05:44:42,806] Starting repair command #7 > (c23f76c0-4394-11e8-8f20-3b8ee110d005), repairing keyspace aoeu with repair > options (parallelism: parallel, primary range: false, incremental: true, job > threads: 1, ColumnFamilies: [], dataCenters: [], hosts: [], previewKind: > NONE, # of ranges: 1, pull repair: false, force repair: false, optimise > streams: false) > [2018-04-19 05:44:42,843] Repair session c242d220-4394-11e8-8f20-3b8ee110d005 > for range [(-9223362383595311663,-9223362383595311661]] finished (progress: > 20%) > [2018-04-19 05:44:43,139] Repair completed successfully > [2018-04-19 05:44:43,140] Repair command #7 finished in 0 seconds > {code} > After repair SSTable hasn't changed and sstablemetadata outputs: > {code} > First token: -9223362383595311662 (derphead4471291) > Last token: -9223362383595311662 (derphead4471291) > Repaired at: 0 > Pending repair: 
862395e0-4394-11e8-8f20-3b8ee110d005 > {code} > And parent_repair_history states that the repair is complete/range was > successful: > {code} > select * from system_distributed.parent_repair_history where > parent_id=862395e0-4394-11e8-8f20-3b8ee110d005 ; > parent_id| columnfamily_names | > exception_message | exception_stacktrace | finished_at | > keyspace_name | options > > > | requested_ranges > | started_at | successful_ranges > --++---+--+-+---++-+-+- > 862395e0-4394-11e8-8f20-3b8ee110d005 | {'aoeu'} | > null | null | 2018-04-19 05:43:14.578000+ | aoeu > | {'dataCenters': '', 'forceRepair': 'false', 'hosts': '', 'incremental': > 'true', 'jobThreads': '1', 'optimiseStreams': 'false', 'parallelism': > 'parallel', 'previewKind': 'NONE', 'primaryRange': 'false', 'pullRepair': > 'false', 'sub_range_repair': 'true', 'trace': 'false'} | > {'(-9223362383595311663,-9223362383595311661]'} | 2018-04-19 > 05:43:01.952000+ | {'(-9223362383595311663,-9223362383595311661]'} > {code} > Subrange repairs seem to work fine over large ranges and set {{Repaired at}} > as expected, but I haven't figured out why it works for a large range versus > a small range so far. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Comment Edited] (CASSANDRA-14298) cqlshlib tests broken on b.a.o
[ https://issues.apache.org/jira/browse/CASSANDRA-14298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16448114#comment-16448114 ] Jason Brown edited comment on CASSANDRA-14298 at 4/23/18 1:08 PM: -- bq. Time's probably better spend migrating code to Python 3, instead of creating workarounds to make the tests run with Python 2 again. I agree with this, only casually observing this ticket. Is it worth bringing up on the dev@ ML? Or should we just make this ticket "bring all cqlsh-related things up to Python 3?" was (Author: jasobrown): bq. Time's probably better spend migrating code to Python 3, instead of creating workarounds to make the tests run with Python 2 again. I agree with this, only casually observing this ticket. Is it worth bringing up on the dev@ ML? Or should be just make this ticket "bring all cqlsh-related things up to Python 3?" > cqlshlib tests broken on b.a.o > -- > > Key: CASSANDRA-14298 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14298 > Project: Cassandra > Issue Type: Bug > Components: Build, Testing >Reporter: Stefan Podkowinski >Assignee: Patrick Bannister >Priority: Major > Attachments: cqlsh_tests_notes.md > > > It appears that cqlsh-tests on builds.apache.org on all branches stopped > working since we removed nosetests from the system environment. See e.g. > [here|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-trunk-cqlsh-tests/458/cython=no,jdk=JDK%201.8%20(latest),label=cassandra/console]. > Looks like we either have to make nosetests available again or migrate to > pytest as we did with dtests. Giving pytest a quick try resulted in many > errors locally, but I haven't inspected them in detail yet. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-14298) cqlshlib tests broken on b.a.o
[ https://issues.apache.org/jira/browse/CASSANDRA-14298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16448114#comment-16448114 ] Jason Brown commented on CASSANDRA-14298: - bq. Time's probably better spent migrating code to Python 3, instead of creating workarounds to make the tests run with Python 2 again. I agree with this, only casually observing this ticket. Is it worth bringing up on the dev@ ML? Or should we just make this ticket "bring all cqlsh-related things up to Python 3?" > cqlshlib tests broken on b.a.o > -- > > Key: CASSANDRA-14298 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14298 > Project: Cassandra > Issue Type: Bug > Components: Build, Testing >Reporter: Stefan Podkowinski >Assignee: Patrick Bannister >Priority: Major > Attachments: cqlsh_tests_notes.md > > > It appears that cqlsh-tests on builds.apache.org on all branches stopped > working since we removed nosetests from the system environment. See e.g. > [here|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-trunk-cqlsh-tests/458/cython=no,jdk=JDK%201.8%20(latest),label=cassandra/console]. > Looks like we either have to make nosetests available again or migrate to > pytest as we did with dtests. Giving pytest a quick try resulted in many > errors locally, but I haven't inspected them in detail yet. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-14298) cqlshlib tests broken on b.a.o
[ https://issues.apache.org/jira/browse/CASSANDRA-14298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16448100#comment-16448100 ] Stefan Podkowinski commented on CASSANDRA-14298: "It's broken right now because cqlshlib isn't compatible with Python 3." We'll have to address this at some point anyway. Time's probably better spent migrating code to Python 3, instead of creating workarounds to make the tests run with Python 2 again. > cqlshlib tests broken on b.a.o > -- > > Key: CASSANDRA-14298 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14298 > Project: Cassandra > Issue Type: Bug > Components: Build, Testing >Reporter: Stefan Podkowinski >Assignee: Patrick Bannister >Priority: Major > Attachments: cqlsh_tests_notes.md > > > It appears that cqlsh-tests on builds.apache.org on all branches stopped > working since we removed nosetests from the system environment. See e.g. > [here|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-trunk-cqlsh-tests/458/cython=no,jdk=JDK%201.8%20(latest),label=cassandra/console]. > Looks like we either have to make nosetests available again or migrate to > pytest as we did with dtests. Giving pytest a quick try resulted in many > errors locally, but I haven't inspected them in detail yet. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-14412) Restore automatic snapshot of system keyspace during upgrade
[ https://issues.apache.org/jira/browse/CASSANDRA-14412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16448077#comment-16448077 ] Sam Tunnicliffe commented on CASSANDRA-14412: - Pushed a branch [here|https://github.com/beobal/cassandra/tree/14412] which adds back the call to {{SystemKeyspace::snapshotOnVersionChange}}, minus the call to {{SystemKeyspace::migrateDataDirs}} that it was previously guarding, but which is genuinely no longer necessary. As snapshots are reasonably cheap and an upgrade should be a rare event anyway, I've extended the original method to also snapshot {{system_schema}}. CI here: https://circleci.com/workflow-run/677d3b4e-85e5-4b90-bb41-309cd4b361f2 > Restore automatic snapshot of system keyspace during upgrade > > > Key: CASSANDRA-14412 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14412 > Project: Cassandra > Issue Type: Bug > Components: Lifecycle >Reporter: Sam Tunnicliffe >Assignee: Sam Tunnicliffe >Priority: Major > Fix For: 4.0 > > > Since 2.2, the installed version is compared with the version persisted in > system.local (if any) at startup. If these versions differ, the system > keyspace is snapshotted before proceeding in order to enable a rollback if > any other issue prevents startup from completing. Although the method to > perform this check & snapshot is still present in {{SystemKeyspace}}, its > only callsite was mistakenly removed from {{CassandraDaemon}} in > CASSANDRA-12716. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
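For readers unfamiliar with the mechanism being restored: the check amounts to comparing the release version persisted in system.local with the running version, and snapshotting the system keyspaces before proceeding when they differ. A minimal Python sketch of that decision logic (illustrative only; the names are hypothetical, and the real implementation is the Java code in {{SystemKeyspace}}):

```python
# Illustrative sketch of the snapshot-on-version-change check described
# in this ticket. Function and constant names are hypothetical; the real
# logic lives in SystemKeyspace::snapshotOnVersionChange (Java).
def should_snapshot_before_startup(persisted_version, installed_version):
    """Snapshot when the version stored in system.local differs from the
    running version; a fresh node (no persisted version) skips it."""
    return persisted_version is not None and persisted_version != installed_version

# Per the comment above, the patch extends the snapshot to system_schema
# as well as the system keyspace.
KEYSPACES_TO_SNAPSHOT = ("system", "system_schema")

def snapshot_targets(persisted_version, installed_version):
    if should_snapshot_before_startup(persisted_version, installed_version):
        return list(KEYSPACES_TO_SNAPSHOT)
    return []

print(snapshot_targets("3.11.2", "4.0"))  # upgrade: snapshot both keyspaces
print(snapshot_targets("4.0", "4.0"))     # same version: no snapshot
```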
[jira] [Updated] (CASSANDRA-14412) Restore automatic snapshot of system keyspace during upgrade
[ https://issues.apache.org/jira/browse/CASSANDRA-14412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sam Tunnicliffe updated CASSANDRA-14412: Status: Patch Available (was: Open) > Restore automatic snapshot of system keyspace during upgrade > > > Key: CASSANDRA-14412 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14412 > Project: Cassandra > Issue Type: Bug > Components: Lifecycle >Reporter: Sam Tunnicliffe >Assignee: Sam Tunnicliffe >Priority: Major > Fix For: 4.0 > > > Since 2.2, the installed version is compared with the version persisted in > system.local (if any) at startup. If these versions differ, the system > keyspace is snapshotted before proceeding in order to enable a rollback if > any other issue prevents startup from completing. Although the method to > perform this check & snapshot is still present in {{SystemKeyspace}}, its > only callsite was mistakenly removed from {{CassandraDaemon}} in > CASSANDRA-12716. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Assigned] (CASSANDRA-14412) Restore automatic snapshot of system keyspace during upgrade
[ https://issues.apache.org/jira/browse/CASSANDRA-14412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sam Tunnicliffe reassigned CASSANDRA-14412: --- Assignee: Sam Tunnicliffe > Restore automatic snapshot of system keyspace during upgrade > > > Key: CASSANDRA-14412 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14412 > Project: Cassandra > Issue Type: Bug > Components: Lifecycle >Reporter: Sam Tunnicliffe >Assignee: Sam Tunnicliffe >Priority: Major > Fix For: 4.0 > > > Since 2.2, the installed version is compared with the version persisted in > system.local (if any) at startup. If these versions differ, the system > keyspace is snapshotted before proceeding in order to enable a rollback if > any other issue prevents startup from completing. Although the method to > perform this check & snapshot is still present in {{SystemKeyspace}}, its > only callsite was mistakenly removed from {{CassandraDaemon}} in > CASSANDRA-12716. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-7622) Implement virtual tables
[ https://issues.apache.org/jira/browse/CASSANDRA-7622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16448021#comment-16448021 ] Benjamin Lerer commented on CASSANDRA-7622: --- I have always been frustrated by this JIRA ticket, and that frustration has only grown with time. The root of my frustration is in fact the {{Virtual Table}} name. What every administrator would like to have when they start using C* is a way to access system information through CQL (ask [~pmcfadin] about it ;)). Every major relational database provides that functionality. Oracle has [Dynamic Performance Views|https://docs.oracle.com/cd/B19306_01/server.102/b14237/dynviews_1.htm], MSSQL has [System Views/Dynamic Management Views|https://technet.microsoft.com/en-us/library/ms177862(v=sql.110).aspx] and PostgreSQL has [System Views|https://www.postgresql.org/docs/9.5/static/views-overview.html]. {{Virtual Table}} is a form of pluggable storage as used by [SQLite|https://www.sqlite.org/vtab.html]. The idea of this ticket was: let's add some form of pluggable storage and use it to expose system information. At first glance it looks like killing two birds with one stone, but after some time thinking about it, I tend to believe that we could simply end up with two broken features. Adding support for pluggable storage is a complex challenge that requires a large number of changes all around the code base, as outlined in the {{Rocksandra}} discussion. On the other hand, since system information (configuration, metrics, ring state ...) is local, exposing it to the user requires far fewer changes to the code base, but more thought about usability. It should be easy for administrators to navigate through system information tables/views, but the {{Virtual Table}} syntax does not play well with that.
If an administrator wants an idea of what columns are returned by the {{compaction_stats}} table and uses {{DESCRIBE}} on it, all they will get is {{CREATE TABLE USING CompactionStats compaction_stats}}. The tables should also be meaningful and easy to use. As we work a lot with the different metrics (counters, gauges, timers, histograms...) we tend to assume that users do too, but that is not the case, so we should not rely on users knowing them. As there is already some effort going on towards a proper pluggable storage solution, I came to the conclusion that we should drop the {{Virtual Table}} idea and simply expose system information through what we could call {{System Views}}. That will make the transition easier for people coming from the relational world and will help us focus on what is really important for users, which is the usability of the whole thing. [~adelapena] and I have been working on a patch on our side to expose most of the system information through system views. We have done our best to make the exposed views as useful as possible. Given that metrics have been added without any constraint, making them fit into tables was not always easy :(. I do not want to hijack this ticket, but if our approach makes sense and people are interested, I am willing to port our patch to C*. > Implement virtual tables > > > Key: CASSANDRA-7622 > URL: https://issues.apache.org/jira/browse/CASSANDRA-7622 > Project: Cassandra > Issue Type: Improvement > Components: CQL >Reporter: Tupshin Harper >Assignee: Chris Lohfink >Priority: Major > Fix For: 4.x > > > There are a variety of reasons to want virtual tables, which would be any > table that would be backed by an API, rather than data explicitly managed and > stored as sstables. > One possible use case would be to expose JMX data through CQL as a > resurrection of CASSANDRA-3527.
> Another is a more general framework to implement the ability to expose yaml > configuration information. So it would be an alternate approach to > CASSANDRA-7370. > A possible implementation would be in terms of CASSANDRA-7443, but I am not > presupposing. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Created] (CASSANDRA-14412) Restore automatic snapshot of system keyspace during upgrade
Sam Tunnicliffe created CASSANDRA-14412: --- Summary: Restore automatic snapshot of system keyspace during upgrade Key: CASSANDRA-14412 URL: https://issues.apache.org/jira/browse/CASSANDRA-14412 Project: Cassandra Issue Type: Bug Components: Lifecycle Reporter: Sam Tunnicliffe Fix For: 4.0 Since 2.2, the installed version is compared with the version persisted in system.local (if any) at startup. If these versions differ, the system keyspace is snapshotted before proceeding in order to enable a rollback if any other issue prevents startup from completing. Although the method to perform this check & snapshot is still present in {{SystemKeyspace}}, its only callsite was mistakenly removed from {{CassandraDaemon}} in CASSANDRA-12716. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-13985) Support restricting reads and writes to specific datacenters on a per user basis
[ https://issues.apache.org/jira/browse/CASSANDRA-13985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16447929#comment-16447929 ] Sam Tunnicliffe commented on CASSANDRA-13985: - LGTM, +1 > Support restricting reads and writes to specific datacenters on a per user > basis > > > Key: CASSANDRA-13985 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13985 > Project: Cassandra > Issue Type: Improvement >Reporter: Blake Eggleston >Assignee: Blake Eggleston >Priority: Minor > Fix For: 4.0 > > > There are a few use cases where it makes sense to restrict the operations a > given user can perform in specific data centers. The obvious use case is the > production/analytics datacenter configuration. You don’t want the production > user to be reading/or writing to the analytics datacenter, and you don’t want > the analytics user to be reading from the production datacenter. > Although we expect users to get this right on that application level, we > should also be able to enforce this at the database level. The first approach > that comes to mind would be to support an optional DC parameter when granting > select and modify permissions to roles. Something like {{GRANT SELECT ON > some_keyspace TO that_user IN DC dc1}}, statements that omit the dc would > implicitly be granting permission to all dcs. However, I’m not married to > this approach. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-13985) Support restricting reads and writes to specific datacenters on a per user basis
[ https://issues.apache.org/jira/browse/CASSANDRA-13985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sam Tunnicliffe updated CASSANDRA-13985: Status: Ready to Commit (was: Patch Available) > Support restricting reads and writes to specific datacenters on a per user > basis > > > Key: CASSANDRA-13985 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13985 > Project: Cassandra > Issue Type: Improvement >Reporter: Blake Eggleston >Assignee: Blake Eggleston >Priority: Minor > Fix For: 4.0 > > > There are a few use cases where it makes sense to restrict the operations a > given user can perform in specific data centers. The obvious use case is the > production/analytics datacenter configuration. You don’t want the production > user to be reading/or writing to the analytics datacenter, and you don’t want > the analytics user to be reading from the production datacenter. > Although we expect users to get this right on that application level, we > should also be able to enforce this at the database level. The first approach > that comes to mind would be to support an optional DC parameter when granting > select and modify permissions to roles. Something like {{GRANT SELECT ON > some_keyspace TO that_user IN DC dc1}}, statements that omit the dc would > implicitly be granting permission to all dcs. However, I’m not married to > this approach. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-14404) Transient Replication & Cheap Quorums: Decouple storage requirements from consensus group size using incremental repair
[ https://issues.apache.org/jira/browse/CASSANDRA-14404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16447871#comment-16447871 ] Duarte Nunes commented on CASSANDRA-14404: -- Still haven't read the linked paper, but this is pretty much sloppy quorums, no? Also, out of curiosity, how will this intersect with materialized views? Will a transient replica have a paired transient view replica, will it use the paired view replica of the base replica on whose behalf it is accepting a write, or will it simply not call into the view write path? > Transient Replication & Cheap Quorums: Decouple storage requirements from > consensus group size using incremental repair > --- > > Key: CASSANDRA-14404 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14404 > Project: Cassandra > Issue Type: New Feature > Components: Coordination, Core, CQL, Distributed Metadata, Hints, > Local Write-Read Paths, Materialized Views, Repair, Secondary Indexes, > Testing, Tools >Reporter: Ariel Weisberg >Assignee: Ariel Weisberg >Priority: Major > Fix For: 4.0 > > > Transient Replication is an implementation of [Witness > Replicas|http://www2.cs.uh.edu/~paris/MYPAPERS/Icdcs86.pdf] > that leverages incremental repair to make full replicas consistent with > transient replicas that don't store the entire data set. Witness replicas are > used in real world systems such as Megastore and Spanner to increase > availability inexpensively without having to commit to more full copies of > the database. Transient replicas implement functionality similar to > upgradable and temporary replicas from the paper.
> With transient replication the replication factor is increased beyond the > desired level of data redundancy by adding replicas that only store data when > sufficient full replicas are unavailable to store the data. These replicas > are called transient replicas. When incremental repair runs, transient > replicas stream any data they have received to full replicas, and once the > data is fully replicated it is dropped at the transient replicas. > Cheap quorums are a further set of optimizations on the write path to avoid > writing to transient replicas unless sufficient full replicas are available, > as well as optimizations on the read path to prefer reading from transient > replicas. When writing at quorum to a table configured to use transient > replication, the quorum will always prefer available full replicas over > transient replicas so that transient replicas don't have to process writes. > Rapid write protection (similar to rapid read protection) reduces tail > latency when full replicas are temporarily late to respond by sending writes > to additional replicas if necessary. > Transient replicas can generally service reads faster because they don't have > to do anything beyond bloom filter checks if they have no data. With vnodes and > larger size clusters they will not have a large quantity of data even in > failure cases where transient replicas start to serve a steady amount of > write traffic for some of their transiently replicated ranges. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
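The "cheap quorums" write-path preference described above — fill the write quorum from available full replicas first, drafting transient replicas in only when full replicas fall short — can be sketched roughly like this (an illustrative reading of the ticket text, not the actual replica-selection code):

```python
# Rough sketch of cheap-quorum replica selection as described in the
# ticket: available full replicas are always preferred, and transient
# replicas only receive a write when full replicas are insufficient.
def select_write_replicas(replicas, quorum):
    """replicas: list of (name, is_full, is_up) tuples; returns the
    replica names chosen to receive the write."""
    full_up = [r for r in replicas if r[1] and r[2]]
    transient_up = [r for r in replicas if not r[1] and r[2]]
    chosen = full_up[:quorum]
    if len(chosen) < quorum:
        # Not enough full replicas: draft transient replicas in.
        chosen += transient_up[:quorum - len(chosen)]
    return [name for name, _, _ in chosen]

replicas = [("n1", True, True), ("n2", True, False),
            ("n3", True, True), ("n4", False, True)]
# Quorum of 3 with one full replica down: one transient replica is used.
print(select_write_replicas(replicas, 3))  # ['n1', 'n3', 'n4']
```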
[jira] [Commented] (CASSANDRA-14167) IndexOutOfBoundsException when selecting column counter and consistency quorum
[ https://issues.apache.org/jira/browse/CASSANDRA-14167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16447829#comment-16447829 ] Francisco Fernandez commented on CASSANDRA-14167: - [Here|https://circleci.com/gh/fcofdez/cassandra/8] it is. If you need something else, just tell me. Thanks for the review! > IndexOutOfBoundsException when selecting column counter and consistency quorum > -- > > Key: CASSANDRA-14167 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14167 > Project: Cassandra > Issue Type: Bug > Components: Coordination > Environment: Cassandra 3.11.1 > Ubuntu 14-04 >Reporter: Tristan Last >Assignee: Francisco Fernandez >Priority: Major > Fix For: 3.0.x, 3.11.x > > > This morning I upgraded my cluster from 3.11.0 to 3.11.1 and it appears when > I perform a query on a counter specifying the column name cassandra throws > the following exception: > {code:java} > WARN [ReadStage-1] 2018-01-15 10:58:30,121 > AbstractLocalAwareExecutorService.java:167 - Uncaught exception on thread > Thread[ReadStage-1,5,main]: {} > java.lang.IndexOutOfBoundsException: null > java.nio.Buffer.checkIndex(Buffer.java:546) ~[na:1.8.0_144] > java.nio.HeapByteBuffer.getShort(HeapByteBuffer.java:314) ~[na:1.8.0_144] > org.apache.cassandra.db.context.CounterContext.headerLength(CounterContext.java:173) > ~[apache-cassandra-3.11.1.jar:3.11.1] > org.apache.cassandra.db.context.CounterContext.updateDigest(CounterContext.java:696) > ~[apache-cassandra-3.11.1.jar:3.11.1] > org.apache.cassandra.db.rows.AbstractCell.digest(AbstractCell.java:126) > ~[apache-cassandra-3.11.1.jar:3.11.1] > org.apache.cassandra.db.rows.AbstractRow.digest(AbstractRow.java:73) > ~[apache-cassandra-3.11.1.jar:3.11.1] > org.apache.cassandra.db.rows.UnfilteredRowIterators.digest(UnfilteredRowIterators.java:181) > ~[apache-cassandra-3.11.1.jar:3.11.1] > org.apache.cassandra.db.partitions.UnfilteredPartitionIterators.digest(UnfilteredPartitionIterators.java:263) > 
~[apache-cassandra-3.11.1.jar:3.11.1] > org.apache.cassandra.db.ReadResponse.makeDigest(ReadResponse.java:120) > ~[apache-cassandra-3.11.1.jar:3.11.1] > org.apache.cassandra.db.ReadResponse.createDigestResponse(ReadResponse.java:87) > ~[apache-cassandra-3.11.1.jar:3.11.1] > org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:345) > ~[apache-cassandra-3.11.1.jar:3.11.1] > org.apache.cassandra.db.ReadCommandVerbHandler.doVerb(ReadCommandVerbHandler.java:50) > ~[apache-cassandra-3.11.1.jar:3.11.1] > org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:66) > ~[apache-cassandra-3.11.1.jar:3.11.1] > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > ~[na:1.8.0_144] > org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:162) > ~[apache-cassandra-3.11.1.jar:3.11.1] > org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:134) > [apache-cassandra-3.11.1.jar:3.11.1] > org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) > [apache-cassandra-3.11.1.jar:3.11.1] > java.lang.Thread.run(Thread.java:748) [na:1.8.0_144] > {code} > Query works completely find on consistency level ONE but not on QUORUM. > Is this possibly related to CASSANDRA-11726? -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
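For context on the trace above: {{CounterContext.headerLength}} reads a 16-bit header-element count from the start of the counter cell's value, so a context shorter than two bytes makes that read blow up, surfacing as the {{IndexOutOfBoundsException}} from {{HeapByteBuffer.getShort}}. A small Python analogue of the failing read (illustrative only; the layout arithmetic is a simplification, not Cassandra's exact serialization):

```python
import struct

def header_length(counter_context: bytes) -> int:
    # Analogue of CounterContext.headerLength: read a big-endian short
    # element count at offset 0, then compute the header size from it.
    (count,) = struct.unpack_from(">h", counter_context, 0)
    return 2 + abs(count) * 2  # simplified layout for illustration

print(header_length(b"\x00\x03rest"))  # well-formed context: prints 8
try:
    header_length(b"\x00")  # truncated context: the 2-byte read fails
except struct.error as exc:
    print("short buffer, analogous to the IndexOutOfBoundsException:", exc)
```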
[jira] [Commented] (CASSANDRA-5836) Seed nodes should be able to bootstrap without manual intervention
[ https://issues.apache.org/jira/browse/CASSANDRA-5836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16447709#comment-16447709 ] Oleksandr Shulgin commented on CASSANDRA-5836: -- [~KurtG] What is SR? > Seed nodes should be able to bootstrap without manual intervention > -- > > Key: CASSANDRA-5836 > URL: https://issues.apache.org/jira/browse/CASSANDRA-5836 > Project: Cassandra > Issue Type: Bug >Reporter: Bill Hathaway >Priority: Minor > > The current logic doesn't allow a seed node to be bootstrapped. If a user > wants to bootstrap a node configured as a seed (for example to replace a seed > node via replace_token), they first need to remove the node's own IP from the > seed list, and then start the bootstrap process. This seems like an > unnecessary step since a node never uses itself as a seed. > I think it would be a better experience if the logic was changed to allow a > seed node to bootstrap without manual intervention when there are other seed > nodes up in a ring. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-14167) IndexOutOfBoundsException when selecting column counter and consistency quorum
[ https://issues.apache.org/jira/browse/CASSANDRA-14167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sylvain Lebresne updated CASSANDRA-14167: - Component/s: Coordination > IndexOutOfBoundsException when selecting column counter and consistency quorum > -- > > Key: CASSANDRA-14167 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14167 > Project: Cassandra > Issue Type: Bug > Components: Coordination > Environment: Cassandra 3.11.1 > Ubuntu 14-04 >Reporter: Tristan Last >Assignee: Francisco Fernandez >Priority: Major > Fix For: 3.0.x, 3.11.x > > > This morning I upgraded my cluster from 3.11.0 to 3.11.1 and it appears when > I perform a query on a counter specifying the column name cassandra throws > the following exception: > {code:java} > WARN [ReadStage-1] 2018-01-15 10:58:30,121 > AbstractLocalAwareExecutorService.java:167 - Uncaught exception on thread > Thread[ReadStage-1,5,main]: {} > java.lang.IndexOutOfBoundsException: null > java.nio.Buffer.checkIndex(Buffer.java:546) ~[na:1.8.0_144] > java.nio.HeapByteBuffer.getShort(HeapByteBuffer.java:314) ~[na:1.8.0_144] > org.apache.cassandra.db.context.CounterContext.headerLength(CounterContext.java:173) > ~[apache-cassandra-3.11.1.jar:3.11.1] > org.apache.cassandra.db.context.CounterContext.updateDigest(CounterContext.java:696) > ~[apache-cassandra-3.11.1.jar:3.11.1] > org.apache.cassandra.db.rows.AbstractCell.digest(AbstractCell.java:126) > ~[apache-cassandra-3.11.1.jar:3.11.1] > org.apache.cassandra.db.rows.AbstractRow.digest(AbstractRow.java:73) > ~[apache-cassandra-3.11.1.jar:3.11.1] > org.apache.cassandra.db.rows.UnfilteredRowIterators.digest(UnfilteredRowIterators.java:181) > ~[apache-cassandra-3.11.1.jar:3.11.1] > org.apache.cassandra.db.partitions.UnfilteredPartitionIterators.digest(UnfilteredPartitionIterators.java:263) > ~[apache-cassandra-3.11.1.jar:3.11.1] > org.apache.cassandra.db.ReadResponse.makeDigest(ReadResponse.java:120) > ~[apache-cassandra-3.11.1.jar:3.11.1] > 
org.apache.cassandra.db.ReadResponse.createDigestResponse(ReadResponse.java:87) > ~[apache-cassandra-3.11.1.jar:3.11.1] > org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:345) > ~[apache-cassandra-3.11.1.jar:3.11.1] > org.apache.cassandra.db.ReadCommandVerbHandler.doVerb(ReadCommandVerbHandler.java:50) > ~[apache-cassandra-3.11.1.jar:3.11.1] > org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:66) > ~[apache-cassandra-3.11.1.jar:3.11.1] > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > ~[na:1.8.0_144] > org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:162) > ~[apache-cassandra-3.11.1.jar:3.11.1] > org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:134) > [apache-cassandra-3.11.1.jar:3.11.1] > org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) > [apache-cassandra-3.11.1.jar:3.11.1] > java.lang.Thread.run(Thread.java:748) [na:1.8.0_144] > {code} > Query works completely find on consistency level ONE but not on QUORUM. > Is this possibly related to CASSANDRA-11726? -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-14167) IndexOutOfBoundsException when selecting column counter and consistency quorum
[ https://issues.apache.org/jira/browse/CASSANDRA-14167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sylvain Lebresne updated CASSANDRA-14167: - Fix Version/s: 3.11.x 3.0.x Status: Patch Available (was: Open) > IndexOutOfBoundsException when selecting column counter and consistency quorum > -- > > Key: CASSANDRA-14167 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14167 > Project: Cassandra > Issue Type: Bug > Environment: Cassandra 3.11.1 > Ubuntu 14-04 >Reporter: Tristan Last >Assignee: Francisco Fernandez >Priority: Major > Fix For: 3.0.x, 3.11.x > > > This morning I upgraded my cluster from 3.11.0 to 3.11.1 and it appears when > I perform a query on a counter specifying the column name cassandra throws > the following exception: > {code:java} > WARN [ReadStage-1] 2018-01-15 10:58:30,121 > AbstractLocalAwareExecutorService.java:167 - Uncaught exception on thread > Thread[ReadStage-1,5,main]: {} > java.lang.IndexOutOfBoundsException: null > java.nio.Buffer.checkIndex(Buffer.java:546) ~[na:1.8.0_144] > java.nio.HeapByteBuffer.getShort(HeapByteBuffer.java:314) ~[na:1.8.0_144] > org.apache.cassandra.db.context.CounterContext.headerLength(CounterContext.java:173) > ~[apache-cassandra-3.11.1.jar:3.11.1] > org.apache.cassandra.db.context.CounterContext.updateDigest(CounterContext.java:696) > ~[apache-cassandra-3.11.1.jar:3.11.1] > org.apache.cassandra.db.rows.AbstractCell.digest(AbstractCell.java:126) > ~[apache-cassandra-3.11.1.jar:3.11.1] > org.apache.cassandra.db.rows.AbstractRow.digest(AbstractRow.java:73) > ~[apache-cassandra-3.11.1.jar:3.11.1] > org.apache.cassandra.db.rows.UnfilteredRowIterators.digest(UnfilteredRowIterators.java:181) > ~[apache-cassandra-3.11.1.jar:3.11.1] > org.apache.cassandra.db.partitions.UnfilteredPartitionIterators.digest(UnfilteredPartitionIterators.java:263) > ~[apache-cassandra-3.11.1.jar:3.11.1] > org.apache.cassandra.db.ReadResponse.makeDigest(ReadResponse.java:120) > 
~[apache-cassandra-3.11.1.jar:3.11.1] > org.apache.cassandra.db.ReadResponse.createDigestResponse(ReadResponse.java:87) > ~[apache-cassandra-3.11.1.jar:3.11.1] > org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:345) > ~[apache-cassandra-3.11.1.jar:3.11.1] > org.apache.cassandra.db.ReadCommandVerbHandler.doVerb(ReadCommandVerbHandler.java:50) > ~[apache-cassandra-3.11.1.jar:3.11.1] > org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:66) > ~[apache-cassandra-3.11.1.jar:3.11.1] > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > ~[na:1.8.0_144] > org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:162) > ~[apache-cassandra-3.11.1.jar:3.11.1] > org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:134) > [apache-cassandra-3.11.1.jar:3.11.1] > org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) > [apache-cassandra-3.11.1.jar:3.11.1] > java.lang.Thread.run(Thread.java:748) [na:1.8.0_144] > {code} > Query works completely find on consistency level ONE but not on QUORUM. > Is this possibly related to CASSANDRA-11726? -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-14167) IndexOutOfBoundsException when selecting column counter and consistency quorum
[ https://issues.apache.org/jira/browse/CASSANDRA-14167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16447701#comment-16447701 ]

Sylvain Lebresne commented on CASSANDRA-14167:
----------------------------------------------

+1 on that patch. As mentioned by [~dhawalmody1] above, it is very similar to CASSANDRA-11726 (and indeed ultimately due to the handling of CASSANDRA-10657). [~fcofdezc], would you have time to quickly set up CircleCI (http://cassandra.apache.org/doc/latest/development/testing.html#circleci) and run it on this? I can take care of it if you prefer.

> IndexOutOfBoundsException when selecting column counter and consistency quorum
> ------------------------------------------------------------------------------
>
> Key: CASSANDRA-14167
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14167
> Project: Cassandra
> Issue Type: Bug
> Environment: Cassandra 3.11.1, Ubuntu 14.04
> Reporter: Tristan Last
> Assignee: Francisco Fernandez
> Priority: Major
> Fix For: 3.0.x, 3.11.x
>
> This morning I upgraded my cluster from 3.11.0 to 3.11.1, and it appears that when I perform a query on a counter specifying the column name, Cassandra throws the following exception:
> {code:java}
> WARN [ReadStage-1] 2018-01-15 10:58:30,121 AbstractLocalAwareExecutorService.java:167 - Uncaught exception on thread Thread[ReadStage-1,5,main]: {}
> java.lang.IndexOutOfBoundsException: null
> java.nio.Buffer.checkIndex(Buffer.java:546) ~[na:1.8.0_144]
> java.nio.HeapByteBuffer.getShort(HeapByteBuffer.java:314) ~[na:1.8.0_144]
> org.apache.cassandra.db.context.CounterContext.headerLength(CounterContext.java:173) ~[apache-cassandra-3.11.1.jar:3.11.1]
> org.apache.cassandra.db.context.CounterContext.updateDigest(CounterContext.java:696) ~[apache-cassandra-3.11.1.jar:3.11.1]
> org.apache.cassandra.db.rows.AbstractCell.digest(AbstractCell.java:126) ~[apache-cassandra-3.11.1.jar:3.11.1]
> org.apache.cassandra.db.rows.AbstractRow.digest(AbstractRow.java:73) ~[apache-cassandra-3.11.1.jar:3.11.1]
> org.apache.cassandra.db.rows.UnfilteredRowIterators.digest(UnfilteredRowIterators.java:181) ~[apache-cassandra-3.11.1.jar:3.11.1]
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators.digest(UnfilteredPartitionIterators.java:263) ~[apache-cassandra-3.11.1.jar:3.11.1]
> org.apache.cassandra.db.ReadResponse.makeDigest(ReadResponse.java:120) ~[apache-cassandra-3.11.1.jar:3.11.1]
> org.apache.cassandra.db.ReadResponse.createDigestResponse(ReadResponse.java:87) ~[apache-cassandra-3.11.1.jar:3.11.1]
> org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:345) ~[apache-cassandra-3.11.1.jar:3.11.1]
> org.apache.cassandra.db.ReadCommandVerbHandler.doVerb(ReadCommandVerbHandler.java:50) ~[apache-cassandra-3.11.1.jar:3.11.1]
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:66) ~[apache-cassandra-3.11.1.jar:3.11.1]
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_144]
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:162) ~[apache-cassandra-3.11.1.jar:3.11.1]
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:134) [apache-cassandra-3.11.1.jar:3.11.1]
> org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) [apache-cassandra-3.11.1.jar:3.11.1]
> java.lang.Thread.run(Thread.java:748) [na:1.8.0_144]
> {code}
> The query works completely fine at consistency level ONE but not at QUORUM.
> Is this possibly related to CASSANDRA-11726?

--
This message was sent by Atlassian JIRA (v7.6.3#76005)
-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org
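The ONE-vs-QUORUM asymmetry in the trace above follows from how digest reads work: at QUORUM, replicas hash every cell (`updateDigest`), which forces the counter context header to be parsed, so a truncated context that a plain data read never inspects blows up in `headerLength`. The sketch below is a deliberately simplified, hypothetical model of that parse step (the real on-disk layout lives in `CounterContext.java`); it only illustrates why an out-of-range read surfaces at parse time:

```python
import struct

def header_length(context: bytes) -> int:
    """Hypothetical simplification of CounterContext.headerLength:
    assume the context starts with a signed big-endian short giving
    the number of header elements."""
    # unpack_from on a buffer shorter than 2 bytes raises struct.error,
    # analogous to java.nio.Buffer.checkIndex throwing
    # IndexOutOfBoundsException in the reported stack trace.
    (count,) = struct.unpack_from(">h", context, 0)
    return 2 + 2 * abs(count)

# A well-formed (if tiny) context parses:
print(header_length(b"\x00\x01\x00\x00"))  # -> 4

# An empty/truncated counter context fails only when something actually
# parses it -- e.g. while computing a digest for a QUORUM read:
try:
    header_length(b"")
except struct.error as exc:
    print("parse failed:", exc)
```

The point is not the exact format (which is assumed here) but that digest computation is an extra deserialization path that data reads at ONE can skip.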
[jira] [Assigned] (CASSANDRA-14167) IndexOutOfBoundsException when selecting column counter and consistency quorum
[ https://issues.apache.org/jira/browse/CASSANDRA-14167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sylvain Lebresne reassigned CASSANDRA-14167:
--------------------------------------------

Assignee: Francisco Fernandez

> IndexOutOfBoundsException when selecting column counter and consistency quorum
> ------------------------------------------------------------------------------
[jira] [Updated] (CASSANDRA-14167) IndexOutOfBoundsException when selecting column counter and consistency quorum
[ https://issues.apache.org/jira/browse/CASSANDRA-14167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sylvain Lebresne updated CASSANDRA-14167:
-----------------------------------------

Reviewer: Sylvain Lebresne

> IndexOutOfBoundsException when selecting column counter and consistency quorum
> ------------------------------------------------------------------------------
[jira] [Commented] (CASSANDRA-14411) Use Bounds instead of Range to represent sstable first/last token when checking how to anticompact sstables
[ https://issues.apache.org/jira/browse/CASSANDRA-14411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16447671#comment-16447671 ]

Marcus Eriksson commented on CASSANDRA-14411:
---------------------------------------------

Minimal patches for 2.2 -> 3.11:
https://github.com/krummas/cassandra/commits/marcuse/14411-2.2
https://github.com/krummas/cassandra/commits/marcuse/14411-3.0
https://github.com/krummas/cassandra/commits/marcuse/14411-3.11

Tests for 3.11: https://circleci.com/gh/krummas/cassandra/tree/marcuse%2F14411-3%2E11

> Use Bounds instead of Range to represent sstable first/last token when
> checking how to anticompact sstables
> ----------------------------------------------------------------------
>
> Key: CASSANDRA-14411
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14411
> Project: Cassandra
> Issue Type: Improvement
> Reporter: Marcus Eriksson
> Assignee: Marcus Eriksson
> Priority: Major
>
> There is currently a chance of failing to mark a token as repaired, because we use Range, which is (a, b], to represent the first/last token of an sstable, instead of Bounds, which is [a, b].
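The interval semantics behind this ticket can be sketched in a few lines. `Range` in Cassandra is half-open, (a, b], so an sstable's own first token falls outside a `Range` built from its first/last tokens, while `Bounds` is closed, [a, b]. The classes below are minimal stand-ins (not Cassandra's actual `dht` classes) that show how the first token can be missed:

```python
class Range:
    """Stand-in for a half-open token interval (left, right]:
    excludes its left endpoint, like org.apache.cassandra.dht.Range."""
    def __init__(self, left: int, right: int):
        self.left, self.right = left, right

    def contains(self, token: int) -> bool:
        return self.left < token <= self.right


class Bounds:
    """Stand-in for a closed token interval [left, right]:
    includes both endpoints, like org.apache.cassandra.dht.Bounds."""
    def __init__(self, left: int, right: int):
        self.left, self.right = left, right

    def contains(self, token: int) -> bool:
        return self.left <= token <= self.right


# An sstable whose first and last tokens are 10 and 20:
first, last = 10, 20

# Modelled as a Range, the sstable's own first token is excluded, so an
# intersection check against a repaired range can miss it; as Bounds it
# is included:
print(Range(first, last).contains(first))   # -> False
print(Bounds(first, last).contains(first))  # -> True
```

This is exactly the off-by-one-endpoint the patch addresses: only the representation of the interval changes, not the tokens themselves.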
[jira] [Commented] (CASSANDRA-5836) Seed nodes should be able to bootstrap without manual intervention
[ https://issues.apache.org/jira/browse/CASSANDRA-5836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16447629#comment-16447629 ]

Kurt Greaves commented on CASSANDRA-5836:
-----------------------------------------

{quote}
OK, but this implies that you have to start the very first node differently from the rest of the cluster. If you want to have 3 seed nodes, what you do currently is just list all of them in the configuration and deploy the nodes one by one, starting with the seeds, with identical config, and you're done. With your proposed approach, there are two extra steps:
1. Deploy the very first seed node with a different config, i.e. only itself in the seeds list.
2. After the other seed nodes are there (or all nodes are there), restart the first node with the complete seeds list.
{quote}
Getting back to this after being distracted for a while: actually, I'm not sure what I was thinking there. It doesn't matter how many seeds the first node has in its seed list, so there is no special case. If SR ends, no seed can be contacted, and the node is currently uninitialised but has itself as a seed, the node creates a cluster. This is the existing behaviour and should work perfectly well with the other changes I've mentioned, where any seed bootstraps.

> Seed nodes should be able to bootstrap without manual intervention
> -------------------------------------------------------------------
>
> Key: CASSANDRA-5836
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5836
> Project: Cassandra
> Issue Type: Bug
> Reporter: Bill Hathaway
> Priority: Minor
>
> The current logic doesn't allow a seed node to be bootstrapped. If a user wants to bootstrap a node configured as a seed (for example, to replace a seed node via replace_token), they first need to remove the node's own IP from the seed list and then start the bootstrap process. This seems like an unnecessary step, since a node never uses itself as a seed.
> I think it would be a better experience if the logic were changed to allow a seed node to bootstrap without manual intervention when there are other seed nodes up in the ring.
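The startup decision Kurt describes can be summarised as a small decision table. The sketch below is purely illustrative (the function name, return values, and flags are hypothetical, and "SR" is assumed to refer to the gossip shadow round); it is not Cassandra's API:

```python
def on_shadow_round_end(is_seed: bool, seed_reachable: bool,
                        initialised: bool) -> str:
    """Hypothetical sketch of the startup decision discussed above.

    After the shadow round (SR):
      - if some seed answered, the node can join/bootstrap normally
        (under the proposed change, even if it is itself a seed);
      - if no seed answered but this uninitialised node lists itself
        as a seed, it creates a new cluster (the existing behaviour
        for the very first node);
      - otherwise startup should fail rather than silently form a
        second cluster.
    """
    if seed_reachable:
        return "bootstrap"
    if is_seed and not initialised:
        return "create-cluster"
    return "fail"


# The very first node: lists itself as a seed, nothing else answers.
print(on_shadow_round_end(is_seed=True, seed_reachable=False,
                          initialised=False))  # -> create-cluster

# A replacement seed node: other seeds are up, so it just bootstraps.
print(on_shadow_round_end(is_seed=True, seed_reachable=True,
                          initialised=False))  # -> bootstrap
```

Note how there is no special case for the number of entries in the first node's seed list: only reachability and initialisation state matter.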
[jira] [Updated] (CASSANDRA-14411) Use Bounds instead of Range to represent sstable first/last token when checking how to anticompact sstables
[ https://issues.apache.org/jira/browse/CASSANDRA-14411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Marcus Eriksson updated CASSANDRA-14411:
----------------------------------------

Reviewer: Blake Eggleston
Status: Patch Available (was: Open)

https://github.com/krummas/cassandra/commits/marcuse/14411
tests: https://circleci.com/gh/krummas/cassandra/tree/marcuse%2F14411

> Use Bounds instead of Range to represent sstable first/last token when
> checking how to anticompact sstables
> ----------------------------------------------------------------------
[jira] [Created] (CASSANDRA-14411) Use Bounds instead of Range to represent sstable first/last token when checking how to anticompact sstables
Marcus Eriksson created CASSANDRA-14411:
----------------------------------------

Summary: Use Bounds instead of Range to represent sstable first/last token when checking how to anticompact sstables
Key: CASSANDRA-14411
URL: https://issues.apache.org/jira/browse/CASSANDRA-14411
Project: Cassandra
Issue Type: Improvement
Reporter: Marcus Eriksson
Assignee: Marcus Eriksson

There is currently a chance of failing to mark a token as repaired, because we use Range, which is (a, b], to represent the first/last token of an sstable, instead of Bounds, which is [a, b].