Re: gc throughput

2021-11-17 Thread Kiran mk
G1GC would be the most suitable option; it gives better control over pauses and is well optimized. Best Regards, Kiran M K On Wed, Nov 17, 2021, 10:27 PM Elliott Sims wrote: > CMS has a higher risk of a long stop-the-world full GC that will cause a burst of timeouts, but if you're not getting that or
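For reference, a minimal sketch of the G1 settings involved (as they would appear in jvm.options on 3.x/4.x, or as JVM_OPTS entries in cassandra-env.sh on older versions; the pause target shown is illustrative, not a recommendation):

  -XX:+UseG1GC
  -XX:MaxGCPauseMillis=300
  -XX:+ParallelRefProcEnabled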

Re: Enabling SSL on a live cluster

2021-11-12 Thread Kiran mk
Hi Andy, internode encryption is not possible without downtime prior to Apache Cassandra 4.0. There is no optional setting before 4.0 under server_encryption_options; if you try to enable it, Cassandra running on version 3.x won't start, as the property isn't available. optional is only avai
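For reference, a rough sketch of the 4.0-style cassandra.yaml block being described (keystore paths and passwords are placeholders; optional: true is what lets encrypted and plain connections coexist during a rolling enable):

  server_encryption_options:
    internode_encryption: all
    optional: true
    keystore: /path/to/keystore.jks
    keystore_password: <secret>
    truststore: /path/to/truststore.jks
    truststore_password: <secret>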

Re: Cqlsh copy command on a larger data set

2020-07-13 Thread Kiran mk
I wouldn't say it's a good approach for that size, but you can try the dsbulk approach too. Try to split the output into multiple files. Best Regards, Kiran M K On Tue, Jul 14, 2020, 5:17 AM Jai Bheemsen Rao Dhanwada <jaibheem...@gmail.com> wrote: > Hello, > I would like to copy some data from one ca
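For reference, a rough sketch of the dsbulk route (keyspace, table, and paths are illustrative); dsbulk unload typically splits its output across several CSV files on its own:

  dsbulk unload -k my_keyspace -t my_table -url /data/export/my_table
  dsbulk load   -k my_keyspace -t my_table -url /data/export/my_table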

Opscenter not opening due to GC Allocation error.

2020-04-08 Thread Kiran mk
Hi All, OpsCenter has stopped opening and takes a long time; it is throwing a lot of GC allocation errors, as follows. I tried increasing the Xmx and Xms values but it's not helping. Can someone advise on this? 2020-04-08T14:16:09.557-0700: 464.551: [GC (Allocation Failure) 2020-04

Re: How to find which table partitions having the more reads per sstables ?

2020-03-16 Thread Kiran mk
unt` should give you the value. >> In the future you should try to collect the Cassandra metrics via JMX (or another method), but OpsCenter is probably able to do it for you. >> On Mon, Mar 16, 2020 at 10:12 AM Kiran mk wrote: >

To find top 10 tables with top 10 sstables per read and top 10 tables with top tombstones per read ?

2020-03-16 Thread Kiran mk
Hi All, is there a way to find the top 10 tables by SSTables per read and the top 10 tables by tombstones per read in Cassandra? In OpsCenter we have to select the tables one at a time to find the tombstones per read, and there is a chance that we might miss considering the tables which
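For reference, a rough sketch outside OpsCenter (keyspace/table names are placeholders): nodetool tablehistograms shows the SSTables-per-read distribution for a table, and nodetool tablestats reports the average tombstones per slice, so looping over the tables of interest and comparing those figures approximates a top-N list:

  for t in ks.table_a ks.table_b ks.table_c; do
    echo "== $t =="
    nodetool tablehistograms ${t/./ }            # "SSTables" column = SSTables touched per read
    nodetool tablestats $t | grep -i tombstones   # average/maximum tombstones per slice
  done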

Re: How to find which table partitions having the more reads per sstables ?

2020-03-16 Thread Kiran mk
ia jmx) > You will have one metric per table; try to find the biggest one. You can find more info here: http://cassandra.apache.org/doc/latest/operating/metrics.html#table-metrics > On Mon, Mar 16, 2020 at 9:11 AM Kiran mk wrote: >> Hi All,

How to find which table partitions having the more reads per sstables ?

2020-03-16 Thread Kiran mk
Hi All, I am trying to understand reads per SSTable. How can I find which table partitions have the most reads per SSTable in Cassandra? -- Best Regards, Kiran.M.K.

Re: Cassandra | Cross Data Centre Replication Status

2018-10-31 Thread Kiran mk
Run the repair with the -pr option on each node, which will repair only the primary ranges owned by that node: nodetool repair -pr On Wed, Oct 31, 2018 at 7:04 PM Surbhi Gupta wrote: > Nodetool repair will take way more time than nodetool rebuild. How much data do you have in your original data center? Repair should be
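For reference, a rough sketch of the "on each node" part (hostnames are placeholders and SSH access is assumed); since -pr only repairs the ranges a node owns as primary, it has to run on every node for full coverage:

  for host in cass1 cass2 cass3; do
    ssh "$host" nodetool repair -pr
  done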

Re: Data copy problem

2018-09-26 Thread Kiran mk
Please try the COPY TO command to dump the data in CSV or another delimited format, then run COPY FROM on the target cluster after copying the exported file over. Best Regards, Kiran.M.K On Thu, 27 Sep 2018 at 4:05 AM, rajasekhar kommineni wrote: > Hi All, > I have a requirement to cop
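For reference, a minimal sketch of the two steps (keyspace, table, hostnames, and file paths are placeholders):

  cqlsh source-host -e "COPY my_ks.my_table TO '/tmp/my_table.csv' WITH HEADER = TRUE"
  # copy /tmp/my_table.csv to the target cluster, then:
  cqlsh target-host -e "COPY my_ks.my_table FROM '/tmp/my_table.csv' WITH HEADER = TRUE"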

Re: Maximum and recommended storage per node

2017-07-28 Thread Kiran mk
Recommended is 4 TB per node. Best regards, Kiran.M.K On 28-Jul-2017 1:57 PM, "CPC" wrote: > Hi all, > Is there any recommended and maximum storage per node? In old articles 1 TB per node was the maximum, but does that still apply? Or does it just depend on our latency requirements? Can you share you

Re: Quick question to config Prometheus to monitor Cassandra cluster

2017-07-20 Thread Kiran mk
You have to download the Prometheus JMX exporter agent jar and its Cassandra config yaml, and set the JMX port (7199) in that config. Run the agent on a specific port on all the Cassandra nodes. After this, go to your Prometheus server and add a scrape config to pull metrics from all the clients.
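For reference, a rough sketch of that setup (jar path, agent port 7070, and hostnames are illustrative; the jar is the Prometheus JMX exporter java agent with its Cassandra sample config):

  # on every Cassandra node, e.g. appended to cassandra-env.sh:
  JVM_OPTS="$JVM_OPTS -javaagent:/opt/jmx_prometheus_javaagent.jar=7070:/opt/cassandra.yml"

  # prometheus.yml on the Prometheus server:
  scrape_configs:
    - job_name: cassandra
      static_configs:
        - targets: ['cass1:7070', 'cass2:7070', 'cass3:7070']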

Re: Compaction And Write performance

2015-11-25 Thread Kiran mk
Yes, to an extent, if you have decent machines but are not making use of their resources. By default the compaction throughput is 16 MB/s, which makes compaction very slow; it runs for hours, compactions lag behind, and the number of pending compaction jobs grows. You can inc
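For reference, the throughput can be raised on a live node with nodetool or persistently in cassandra.yaml (64 MB/s here is only an illustrative value):

  nodetool setcompactionthroughput 64      # takes effect immediately, reverts on restart
  # cassandra.yaml, for a permanent change:
  #   compaction_throughput_mb_per_sec: 64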

Re: java.lang.IllegalArgumentException: Mutation of X bytes is too large for the maxiumum size of Y

2015-10-06 Thread Kiran mk
Do you see more dropped mutation messages in the nodetool tpstats output? On Oct 6, 2015 7:51 PM, "George Sigletos" wrote: > Hello, > I have been frequently receiving those warnings: > java.lang.IllegalArgumentException: Mutation of 35141120 bytes is too large for the maxiumum size of 33554432
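For reference, a quick way to check the dropped-mutation question, plus the setting that drives the limit in the warning (the 64 MB value is illustrative):

  nodetool tpstats | grep -A 20 "Message type"   # the MUTATION row shows the dropped count
  # cassandra.yaml:
  #   commitlog_segment_size_in_mb: 64           # the max mutation size defaults to half of this value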

Is HEAP_NEWSIZE configuration is no more useful from cassandra 2.1 ?

2015-10-04 Thread Kiran mk
Is the HEAP_NEWSIZE configuration no longer useful from Cassandra 2.1 onwards? Best Regards, Kiran.M.K.

Compaction Error

2015-07-16 Thread Kiran mk
Hi All, I am trying to run compaction on the same node but am getting the error below; any suggestions? Port 9160 is already open. nodetool -h testserv1 -p 9160 compact Error connecting to remote JMX agent! java.io.IOException: Failed to retrieve RMIServer stub: javax.naming.CommunicationE
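The error is consistent with pointing nodetool at the Thrift/client port rather than JMX; for reference, the default JMX port is 7199, so the likely-intended invocation is:

  nodetool -h testserv1 -p 7199 compact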

Re: Is there a way to remove a node with Opscenter?

2015-07-07 Thread Kiran mk
Yes, if your intention is to decommission a node, you can do that by clicking on the node and choosing decommission. Best Regards, Kiran.M.K. On Jul 8, 2015 1:00 AM, "Sid Tantia" wrote: > I know you can use `nodetool removenode` from the command line but is there a way to remove a node from a cluster

Re: Decommission datacenter - repair?

2015-06-05 Thread Kiran mk
ou correctly that a decommissioning node only will hand over its data to a single node? If it would hand it over to all other replica nodes, I see that essentially as an implicit repair. Am I wrong? > Thanks, > Jens > On Fri, Jun 5, 2015 at 2:27 PM, Kiran mk wrote:

Re: Decommission datacenter - repair?

2015-06-05 Thread Kiran mk
Hi Jens, if you decommission a data center, the data residing in the data center you are planning to decommission has to be rebalanced onto the nodes of the other data center to satisfy the RF; hence a repair is required. Best Regards, Kiran.M.K. On Fri, Jun 5, 2015 at 5:45 PM, Jens Rantil wrote

Regarding JIRA

2015-06-01 Thread Kiran mk
Hi, I am using the Apache Cassandra Community Edition for learning and practice. Can I raise doubts, issues, and clarification requests as JIRA tickets against Cassandra, and will there be any charge for that? As far as I know we can create a free JIRA account. Can anyone advise me on this? -- Best Regard

Re: After running nodetool clean up, the used disk space was increased

2015-05-15 Thread Kiran mk
Run cleanup on all the nodes and wait till it completes. On May 15, 2015 10:47 PM, "Analia Lorenzatto" wrote: > Hello guys, > I have a cassandra cluster = 2.1.0-2 comprised of 3 nodes. I successfully added the third node last week. After that, I ran nodetool cleanup on one of the other tw
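For reference, a rough sketch of running it node by node (hostnames are placeholders and SSH access is assumed); going one node at a time keeps only one node rewriting its SSTables at once:

  for host in node1 node2 node3; do
    ssh "$host" nodetool cleanup
  done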

Re: After running nodetool clean up, the used disk space was increased

2015-05-15 Thread Kiran mk
What is the replication factor? What does the ring status say? On May 16, 2015 12:32 AM, "Kiran mk" wrote: > What is the data distribution status across the nodes? What is the RP? On May 16, 2015 12:30 AM, "Analia Lorenzatto" wrote: >> Thanks Ki
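For reference, both questions can be checked from the command line (the keyspace name is a placeholder):

  nodetool status my_keyspace                                    # per-node load and ownership for that keyspace
  cqlsh -e "DESCRIBE KEYSPACE my_keyspace" | grep replication    # shows the replication settings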

Re: After running nodetool clean up, the used disk space was increased

2015-05-15 Thread Kiran mk
On Fri, May 15, 2015 at 3:37 PM, Kiran mk wrote: >> Did you try running nodetool cleanup on all the nodes? >> On May 15, 2015 10:47 PM, "Analia Lorenzatto" wrote: >>> Hello guys, >>> I have a cassa

Re: After running nodetool clean up, the used disk space was increased

2015-05-15 Thread Kiran mk
Did you try running nodetool cleanup on all the nodes? On May 15, 2015 10:47 PM, "Analia Lorenzatto" wrote: > Hello guys, > I have a cassandra cluster = 2.1.0-2 comprised of 3 nodes. I successfully added the third node last week. After that, I ran nodetool cleanup on one of the other two

Re: Cluster imbalance caused due to #Num_Tokens

2015-04-22 Thread Kiran mk
Bring down the second node using nodetool removenode or decommission. Add the node back with num_tokens set and run nodetool repair. Finally, run nodetool cleanup on both nodes (one after the other) and observe after some time using nodetool status. On Apr 23, 2015 12:39 AM, "Robert Coli" wrote:
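For reference, a rough outline of that sequence (the num_tokens value shown is illustrative):

  nodetool decommission            # on the node being removed (or: nodetool removenode <host-id> from another node)
  # in cassandra.yaml on the node before re-adding it:
  #   num_tokens: 256
  nodetool repair                  # once the node has rejoined
  nodetool cleanup                 # then on each node, one after the other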

Re: COPY command to export a table to CSV file

2015-04-19 Thread Kiran mk
s > Neha > On Mon, Apr 20, 2015 at 11:39 AM, Kiran mk wrote: >> Hi, >> Check the MAX_HEAP_SIZE configuration in the cassandra-env.sh environment file. >> Also HEAP_NEWSIZE? >> What is the Consistency Level you are u

Re: COPY command to export a table to CSV file

2015-04-19 Thread Kiran mk
Hi, check the MAX_HEAP_SIZE configuration in the cassandra-env.sh environment file, and also HEAP_NEWSIZE. What is the consistency level you are using? Best Regards, Kiran.M.K. On Mon, Apr 20, 2015 at 11:13 AM, Kiran mk wrote: > Seems like this is related to Java heap memory. > What is
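For reference, both settings live in cassandra-env.sh on 2.x (the values below are purely illustrative, not a sizing recommendation):

  MAX_HEAP_SIZE="8G"
  HEAP_NEWSIZE="800M"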

Re: COPY command to export a table to CSV file

2015-04-19 Thread Kiran mk
Seems like this is related to Java heap memory. What is the count of records in the column family? What is the Cassandra version? Best Regards, Kiran.M.K. On Mon, Apr 20, 2015 at 11:08 AM, Neha Trivedi wrote: > Hello all, > We are getting the OutOfMemoryError on one of the Node and the No