Re: What is the safest way to enable authorization?
Hi,

Before changing the configuration from `AllowAllAuthorizer` to `CassandraAuthorizer`, you need to grant the user permissions on all the tables that user accesses. I think that should fix the problem.

Thanks

On Thu, May 9, 2019 at 12:02 PM Laxmikant Upadhyay wrote:
> Let's say I have a 3 node cluster on 3.11.4 on which authentication is
> enabled but authorization is disabled. It has one non-super login user
> 'user1' and the default super user 'cassandra'.
> In cassandra.yaml:
>   authenticator: PasswordAuthenticator
>   authorizer: AllowAllAuthorizer
>
> So to enable authorization we change the cassandra.yaml of a node 'node1'
> from
>   authorizer: AllowAllAuthorizer
> to
>   authorizer: CassandraAuthorizer
>
> Your client application's db operations on node1 start failing as soon
> as Cassandra restarts on that node, with the error below, until you run a
> GRANT for user1 after connecting as the cassandra user:
> UnauthorizedException: User user1 has no SELECT permission on testtable
>
> Is there a way to avoid this error at all in the above situation?
>
> --
> regards,
> Laxmikant Upadhyay
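For example, connected as the `cassandra` superuser, the grants might look like the following. `testtable` comes from the error message in the thread; the keyspace name `ks1` is a placeholder, since the original keyspace isn't named:

```sql
-- Grant user1 the access it needs on the table from the error message.
-- 'ks1' is a placeholder keyspace name.
GRANT SELECT ON TABLE ks1.testtable TO user1;
GRANT MODIFY ON TABLE ks1.testtable TO user1;

-- Or cover every table in the keyspace with one statement:
GRANT SELECT ON KEYSPACE ks1 TO user1;
```

Note that permissions are stored in the `system_auth` keyspace, so they can be populated ahead of flipping the authorizer on each node.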
Optimizing for connections
Hi Guys,

I have a couple of questions regarding connections to Cassandra:

1. What is the recommended number of client connections per Cassandra node?
2. Is it a good idea to create dedicated coordinator nodes (with `num_tokens: 0`) and whitelist only those hosts on the client side, so that the main worker nodes don't have to spend threads servicing client connections?
3. Does the request time measured on the client side include the connect time?
4. Is there any hard limit on the number of connections that can be set on Cassandra?

Thanks a lot for your help
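The client-side whitelisting in question 2 is normally done through the driver's load-balancing policy (for instance, the DataStax Python driver ships a `WhiteListRoundRobinPolicy`). A minimal driver-independent sketch of the idea, with hypothetical host addresses:

```python
# Sketch of client-side host whitelisting: only the designated coordinator
# hosts are handed to the driver as contact points. The addresses and
# function names are illustrative, not from any real driver API.

COORDINATOR_HOSTS = {"10.0.0.11", "10.0.0.12"}  # hypothetical coordinator IPs

def filter_contact_points(all_hosts, allowed=COORDINATOR_HOSTS):
    """Return only the hosts the client is allowed to connect to."""
    return [h for h in all_hosts if h in allowed]

cluster_hosts = ["10.0.0.11", "10.0.0.12", "10.0.0.13", "10.0.0.14"]
print(filter_contact_points(cluster_hosts))  # only the two coordinators remain
```

In a real deployment the filtering happens inside the driver's policy object rather than before connecting, so the driver never opens connections to the non-whitelisted nodes discovered via gossip.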
Help in understanding strange cassandra CPU usage
Hi Guys,

Since the start of our org, Cassandra has been a SPOF. Due to recent priorities we changed our code base so that Cassandra is no longer a SPOF, and as part of that work we built a kill switch into the code (PHP). The kill switch ensures that no connection is made to Cassandra for any query.

During the testing phase of the kill switch we identified some strange behaviour: CPU usage and load average drop from ~400% CPU and a load of 14-20 (on a 16-core machine) to ~20% CPU and a load of 2-3. Even if the kill switch is activated for only 30 seconds, CPU drops from 400% to 20% and stays at 20% for at least 24 hours before climbing back to 400% and staying consistent from then on. And this happens on all the nodes, not just a few.

Details:
* Cassandra version: 2.2.4
* Number of nodes: 8
* AWS instance type: c4.4xlarge
* Number of open files: 30k to 50k (depending on the number of auto-scaled PHP nodes)

Would be grateful for any explanation of this strange behaviour.

Thanks & Regards
Srinivas Devaki
SRE/SDE at Zomato
Cassandra Upgrade Plan 2.2.4 to 3.11.3
Hi everyone,

I have planned out our org's Cassandra upgrade and want to make sure it seems fine.

Existing cluster:
* Cassandra 2.2.4
* 8 nodes with 32G RAM and a 12G max heap allocated to Cassandra
* 4 nodes in each rack

Plan:
1. Ensure all clients use LOCAL_* consistency levels and send all traffic to the "old" dc.
2. Add a new cluster as the "new" dc, running Cassandra 2.2.4:
   2.1 update the conf on all nodes in the "old" dc
   2.2 rolling restart the "old" dc
3. Alter the keyspaces so they replicate to the "new" dc with a similar replication factor.
4. Run repair on all nodes in the "new" dc.
5. Upgrade each node in the "new" dc to Cassandra 3.11.3 (and run upgradesstables).
6. Switch all clients to connect to the new cluster.
7. Repair all new nodes once more.
8. Alter the keyspaces to replicate only to the "new" dc.
9. Remove the "old" dc.

And I have some doubts about this plan:
D1. Can I just join a 3.11.3 cluster as the "new" dc in the 2.2.4 cluster?
D2. How does a rolling upgrade work? Within the same cluster, how can 2 versions coexist?

Will be grateful if you could review this plan.

PS: I'm following this plan to ensure that I can revert to the old behaviour at any step.

Thanks
Srinivas Devaki
SRE/SDE at Zomato
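For reference, the replication changes in steps 3 and 8 are per-keyspace `ALTER KEYSPACE` statements. The keyspace name, dc names, and replication factors below are placeholders, since the actual values aren't given in the plan:

```sql
-- Step 3: add replicas in the new dc ('my_ks', 'old_dc', 'new_dc' are placeholders).
ALTER KEYSPACE my_ks WITH replication = {
  'class': 'NetworkTopologyStrategy',
  'old_dc': 3,
  'new_dc': 3
};

-- Step 8: once all clients have moved, drop replication to the old dc.
ALTER KEYSPACE my_ks WITH replication = {
  'class': 'NetworkTopologyStrategy',
  'new_dc': 3
};
```

Note that `ALTER KEYSPACE` only changes metadata; the repair in step 4 is what actually streams the existing data to the new replicas.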