will ensure my table is empty. I could not truncate from CQL in the
first place as one of the nodes was not up.
Regards,
Kunal
On Tue, Aug 11, 2020 at 8:45 AM Jeff Jirsa wrote:
> The data probably came from either hints or commitlog replay.
>
> If you use `truncate` from CQL, it solve
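(A minimal sketch of the CQL route being discussed, assuming a hypothetical
keyspace ks and table tbl; TRUNCATE requires all replicas to be reachable,
and with auto_snapshot enabled it leaves a snapshot behind that still holds
the disk space:

    cqlsh> TRUNCATE ks.tbl;
    $ nodetool clearsnapshot    # reclaim the auto-snapshot; on 4.0+ add --all
)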
appreciated.
Regards,
Kunal
Thanks for the responses. Appreciate it.
@Dor, so you are saying if we add "memlock unlimited" in limits.conf, the
entire heap (Xms=Xmx) can be locked at startup? Will this be applied to
all Java processes? We have a couple of Java programs running with the same
owner.
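(For reference, a hedged sketch of the limits.conf entry being discussed,
assuming Cassandra runs as user 'cassandra'; limits.conf entries are per
user, so other Java processes under the same owner get the same limit, but
the limit only matters for processes that actually call mlock:

    # /etc/security/limits.conf
    cassandra  -  memlock  unlimited

Cassandra attempts mlockall via JNA at startup; other JVMs under the same
user are not locked automatically unless they do the same.)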
Thanks
Kunal
check and
confirm?
Also, can I set the memlock parameter to unlimited (64kB is the default), so the
entire heap (Xms = Xmx) can be locked at node startup? Will that help?
Or if you have any other suggestions, please let me know.
Regards,
Kunal
Any help is
appreciated.
Regards,
Kunal Vaid
Thanks Sandeep for your reply. Let me try out the steps you suggested. I
will let you know. Appreciate your help.
Regards,
Kunal Vaid
On Mon, Apr 22, 2019 at 4:18 PM Sandeep Nethi
wrote:
> Hi Kunal,
>
> The simple solution for this case would be as follows,
>
> 1. Run *Full re
to be the same.
In datacenter B, we are changing the seed nodes. In datacenter A, we are
changing the seed nodes in cassandra.yaml, but that will only be picked up
on a Cassandra restart, and we cannot have downtime for datacenter A. It
has to be up all the time.
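(For illustration, the relevant cassandra.yaml block, with hypothetical
addresses. The seed list is read only at startup, so the running nodes in
datacenter A keep their in-memory list until they restart; that is generally
harmless, since seeds are mainly consulted when a node first joins gossip:

    seed_provider:
        - class_name: org.apache.cassandra.locator.SimpleSeedProvider
          parameters:
              - seeds: "10.0.1.10,10.0.2.10"
)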
Regards,
Kunal Vaid
On Mon, A
Thanks in advance.
Regards,
Kunal Vaid
Thanks everyone for your valuable suggestion. Really appreciate it
Regards,
Kunal Vaid
On Mon, Apr 8, 2019 at 7:41 PM Nitan Kainth wrote:
> Valid suggestion. Stick to the plan: avoid downtime of a node longer than
> the hinted handoff window, or increase the window to a larger value, if you k
to write code to
track the time and run repair when the node comes back online after 3 hrs.
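(A sketch of the two knobs involved, assuming Cassandra's defaults;
max_hint_window_in_ms defaults to 3 hours, after which hints stop
accumulating for the dead node and a repair is needed:

    # cassandra.yaml
    max_hint_window_in_ms: 10800000    # 3 hours (the default)

    # if a node was down longer than the window, on the recovered node:
    $ nodetool repair -pr
)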
Thanks in anticipation.
Regards,
Kunal Vaid
Hi Dinesh,
We have a very small setup and the size of the data is also very small. Max data
size is around 2GB. The latency expectation is around 10-15ms.
Regards,
Kunal
On Wed, Feb 6, 2019 at 11:27 PM dinesh.jo...@yahoo.com.INVALID
wrote:
> You also want to use Cassandra with a minimum of 3 nodes
.
Regards,
Kunal Vaid
No, this is a different cluster.
Kunal
On 13-Mar-2018 6:27 AM, "Kenneth Brotman"
wrote:
Kunal,
Is this the GCE cluster you are speaking of in the “Adding new DC?” thread?
Kenneth Brotman
*From:* Kunal Gangakhedkar [mailto:kgangakhed...@gmail.com]
*Sent:* Sunday, March 11,
Yes, that's correct. The customer wants us to migrate the Cassandra setup
into their AWS account.
Thanks,
Kunal
On 13 March 2018 at 04:56, Kenneth Brotman
wrote:
> I didn’t understand something. Are you saying you are using one data
> center on Google and one on Amazon?
>
>
>
On 13 March 2018 at 04:54, Kenneth Brotman
wrote:
> Kunal,
>
>
>
> Please provide the following setting from the yaml files you are using:
>
>
>
> seeds:
>
In GCE: seeds: "10.142.14.27"
In AWS (new node being added): seeds:
"35.196.96.247,35.227.127
has been serving the purpose.
Kunal
>
> Kenneth Brotman
>
>
>
> *From:* Durity, Sean R [mailto:sean_r_dur...@homedepot.com]
> *Sent:* Monday, March 12, 2018 11:36 AM
> *To:* user@cassandra.apache.org
> *Subject:* RE: [EXTERNAL] RE: Adding new DC?
>
>
>
> Yo
of nodes and size of data). The
> preferred method might depend on how much data needs to move. Is any
> application outage acceptable?
>
No. of nodes: 5
RF: 3
Data size (as reported by the load factor in nodetool status output): ~30GB
per node
Thanks,
Kunal
>
>
> Sean Durity
>
Hi Kenneth,
Replies inline below.
On 12-Mar-2018 3:40 AM, "Kenneth Brotman"
wrote:
Hi Kunal,
That version of Cassandra is too far before me, so I’ll let others answer.
I was wondering why you wouldn’t want to end up on 3.0.x if you’re going
through all the trouble of migrat
ing 2.1.20 - just in case that's relevant.
Thanks,
Kunal
Finally, got a chance to work on it over the weekend.
It worked as advertised. :)
Thanks a lot, Chris.
Kunal
On 8 March 2018 at 10:47, Kunal Gangakhedkar
wrote:
> Thanks a lot, Chris.
>
> Will try it today/tomorrow and update here.
>
> Thanks,
> Kunal
>
> On 7 Ma
Thanks a lot, Chris.
Will try it today/tomorrow and update here.
Thanks,
Kunal
On 7 March 2018 at 00:25, Chris Lohfink wrote:
> While it's off you can delete the files in the directory, yeah
>
> Chris
>
>
> On Mar 6, 2018, at 2:35 AM, Kunal Gangakhedkar
> wrote:
>
>
Hi Chris,
I checked for snapshots and backups - none found.
Also, we're not using OpsCenter, Hadoop, Spark, or any such tool.
So, do you think we can just remove the cf and restart the service?
Thanks,
Kunal
On 5 March 2018 at 21:52, Chris Lohfink wrote:
> Any chance space used by s
s chugging along - shows only 25MiB consumed
by size_estimates (du -sh output).
Any idea why there is this discrepancy?
Is it safe to remove the size_estimates sstables from the affected node and
restart the service?
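(For what it's worth, a hedged sketch of the cleanup being asked about;
size_estimates is a node-local system table that Cassandra repopulates on
its own, so one common route is to remove it with the node stopped. The
data path is an assumption; check data_file_directories in cassandra.yaml:

    $ nodetool drain                 # flush memtables, stop accepting writes
    $ sudo service cassandra stop
    $ rm -rf /var/lib/cassandra/data/system/size_estimates-*/
    $ sudo service cassandra start
)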
Thanks,
Kunal
Great, thanks a lot for the help, guys.
I just did the truncation + clearsnapshot - it worked smoothly. :)
Freed up 400GB, yay \o/
Really appreciate your help.
Thanks once again.
Kunal
On 21 April 2017 at 15:04, Nicolas Guyomar
wrote:
> Hi Kunal,
>
> Timeout usually occur
ssandra.apache.org/msg48958.html>
exception. Does that stop the truncate operation?
Is there any other safe way to clean up the CF?
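(If the failure is a timeout, one knob sometimes raised for large truncates
is the coordinator's truncate timeout; a sketch, not a recommendation:

    # cassandra.yaml
    truncate_request_timeout_in_ms: 600000    # default is 60000 (60s)

cqlsh's own client-side timeout may also need raising, e.g.
cqlsh --request-timeout=600.)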
Thanks,
Kunal
Hi all,
Is it safe to delete the backup folders from various CFs from 'system'
keyspace too?
I seem to have missed them in the last cleanup - and now, the
size_estimates and compactions_in_progress seem to have grown large (>200G
and ~6G respectively).
Can I remove them too?
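(As a quick check before deleting anything, the per-table backup sizes can
be listed; the data path is an assumption, as above:

    $ du -sh /var/lib/cassandra/data/system/*/backups
)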
Than
This ended up freeing up almost 350GB on each node - yay :)
Again, thanks a lot for the help, guys.
Kunal
On 12 January 2017 at 21:15, Khaja, Raziuddin (NIH/NLM/NCBI) [C] <
raziuddin.kh...@nih.gov> wrote:
> snapshots are slightly different than backups.
>
>
>
> In my explana
> directory?
Or, could removing any files that are currently hard-linked inside backups
potentially cause any issues?
Thanks,
Kunal
On 11 January 2017 at 01:06, Khaja, Raziuddin (NIH/NLM/NCBI) [C] <
raziuddin.kh...@nih.gov> wrote:
> Hello Kunal,
>
>
>
> I would take a loo
backups?
This is my first production deployment - so, still trying to learn.
Thanks,
Kunal
On 10 January 2017 at 21:36, Jonathan Haddad wrote:
> You can just delete them off the filesystem (rm)
>
> On Tue, Jan 10, 2017 at 8:02 AM Kunal Gangakhedkar <
> kgangakhed...@gmail.com>
output shows the data volumes consuming around 850GB of
space.
I checked the keyspace directory structures - most of the space goes in
/data///backups.
We have never manually run snapshots.
What is the typical procedure to clear the backups?
Can it be done without taking the node offline?
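(A hedged sketch of doing this online; incremental backup files are hard
links, so removing them does not touch live sstables, and disabling the
feature stops new ones from appearing. Paths and service name are
assumptions:

    # cassandra.yaml - stop creating new incremental backups (needs restart;
    # newer versions can also use: nodetool disablebackup)
    incremental_backups: false

    # remove existing backup hard links while the node stays up
    $ find /var/lib/cassandra/data -depth -type d -name backups -exec rm -rf {} +
)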
Thanks,
Kunal
Unsubscribe
Regards,
Kunal Gaikwad
Hi,
I want to set up a Cassandra cluster of about 3-5 nodes. Can anyone
suggest what hardware configuration I should consider, given an RF
of 3? The data size should be around 100GB in the DT environment.
Regards,
Kunal Gaikwad
her 5k of corresponding secondary index) files. :(
Oh, did I mention I'm new to Cassandra?
Thanks,
Kunal
Kunal
On 11 July 2015 at 03:29, Sebastian Estevez
wrote:
> #1
>
>> There is one table - daily_challenges - which shows compacted partition
>> max bytes as ~460M and
And here is my cassandra-env.sh
https://gist.github.com/kunalg/2c092cb2450c62be9a20
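(For comparison, the two settings usually tuned in cassandra-env.sh on an
8GB box; the values here are illustrative, not a recommendation:

    MAX_HEAP_SIZE="4G"      # roughly half of RAM on a small node
    HEAP_NEWSIZE="800M"     # often sized at ~100MB per CPU core under CMS
)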
Kunal
On 11 July 2015 at 00:04, Kunal Gangakhedkar
wrote:
> From jhat output, top 10 entries for "Instance Count for All Classes
> (excluding platform)" shows:
>
> 2088
Total of 8739510 instances occupying 193607512 bytes.
JFYI.
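(In case it helps anyone following along, the dump/analysis pair used here
is typically, with a hypothetical PID:

    $ jmap -dump:format=b,file=heap.hprof <pid>
    $ jhat heap.hprof     # then browse http://localhost:7000
)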
Kunal
On 10 July 2015 at 23:49, Kunal Gangakhedkar
wrote:
> Thanks for quick reply.
>
> 1. I don't know what are the thresholds that I should look for. So, to
> save this back-and-forth, I'm attaching the cfstats output
speculative_retry = '99.0PERCENTILE';
CREATE INDEX idx_deleted ON app_10001.daily_challenges (deleted);
2. I don't know - how do I check? As I mentioned, I just installed the
dsc21 update from DataStax's Debian repo (ver 2.1.7).
Really appreciate your help.
Thanks,
Kunal
On 10 July
JVM
environment. So, please bear with me as I would need a lot of hand-holding.
Should I just copy+paste the settings you gave and try to restart the
failing cassandra server?
Thanks,
Kunal
On 10 July 2015 at 22:35, Sebastian Estevez
wrote:
> #1 You need more information.
>
> a) Take a loo
I've already restarted cassandra service 4 times with 8GB heap.
No clue what's going on.. :(
Kunal
On 10 July 2015 at 17:45, Jack Krupansky wrote:
> You, and only you, are responsible for knowing your data and data model.
>
> If columns per row or rows per partition can be large,
re segmentation data - with segment type
as partition key. That is again divided into two separate column families,
but they have a similar structure.
Columns per row can be fairly large - each segment type is the row key, with
the associated user ids and timestamps as column values.
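(A hypothetical CQL shape matching that description, with names invented for
illustration; one wide partition per segment type:

    CREATE TABLE segments (
        segment_type text,
        user_id      bigint,
        added_at     timestamp,
        PRIMARY KEY (segment_type, user_id)
    );
)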
Thanks,
Kunal
On 10 Jul
Attaching the stack dump captured from the last OOM.
Kunal
On 10 July 2015 at 13:32, Kunal Gangakhedkar
wrote:
> Forgot to mention: the data size is not that big - it's barely 10GB in all.
>
> Kunal
>
> On 10 July 2015 at 13:29, Kunal Gangakhedkar
> wrote:
>
>>
Forgot to mention: the data size is not that big - it's barely 10GB in all.
Kunal
On 10 July 2015 at 13:29, Kunal Gangakhedkar
wrote:
> Hi,
>
> I have a 2 node setup on Azure (east us region) running Ubuntu server
> 14.04LTS.
> Both nodes have 8GB RAM.
>
> One of
o why it's happening?
Thanks,
Kunal