Hi Mike,
If you will, share your compaction settings. More than likely, your issue stems
from one of two causes:
1. You have read_repair_chance set to anything other than 0
2. You’re running repairs on the TWCS CF
Or both….
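For what it’s worth, on 3.x the read-repair chances are per-table options you can inspect and zero out directly in cqlsh. A minimal sketch, assuming a hypothetical TWCS table ks.events (keyspace and table names are illustrative, not from this thread):

```sql
-- Inspect the current settings (Cassandra 3.x schema tables):
SELECT table_name, read_repair_chance, dclocal_read_repair_chance
FROM system_schema.tables
WHERE keyspace_name = 'ks';

-- Zero out both kinds of probabilistic read repair on the TWCS table,
-- so old cells stop being rewritten into new time windows:
ALTER TABLE ks.events
WITH read_repair_chance = 0.0
AND dclocal_read_repair_chance = 0.0;
```

The reason this matters for TWCS: read repair copies old cells into newly flushed sstables, and those new sstables then overlap the older time windows and block fully expired sstables from being dropped.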
From: Mike Torra [mailto:mto...@salesforce.com.INVALID]
Sent: Friday, May
Hi Mike,
Have you checked to make sure you’re not a victim of timestamp overlap?
From: Mike Torra [mailto:mto...@salesforce.com.INVALID]
Sent: Thursday, May 02, 2019 11:09 AM
To: user@cassandra.apache.org
Subject: Re: TWCS sstables not dropping even though all data is expired
I'm pretty stumped
Just curious but, did you make sure to run the sstable upgrade after you
completed the move from 2.x to 3.x ?
From: Evgeny Inberg [mailto:evg...@gmail.com]
Sent: Thursday, May 02, 2019 1:31 AM
To: user@cassandra.apache.org
Subject: Re: Cassandra taking very long to start and server under heavy
2019-04-03 16:25:43,628
Gossiper.java:1029 - InetAddress /192.168.1.18 is now DOWN
INFO [RMI TCP Connection(10)-127.0.0.1] 2019-04-03 16:25:43,631
StorageService.java:2324 - Removing tokens [..] for
/192.168.1.18
On 03.04.2019 17:10, Nick Hatfield wrote:
Run assassinate the old way. It works very well...
wget -q -O jmxterm.jar
http://downloads.sourceforge.net/cyclops-group/jmxterm-1.0-alpha-4-uber.jar
java -jar ./jmxterm.jar
$>open localhost:7199
$>bean org.apache.cassandra.net:type=Gossiper
$>run unsafeAssassinateEndpoint 192.168.1.18
Once that happens I suspect it'll happen quite quickly, but I'm not sure.
On Wed, Mar 27, 2019 at 7:30 AM Nick Hatfield
<nick.hatfi...@metricly.com> wrote:
Awesome, thank you Jeff. Sorry I had not seen this yet. So we have this
enabled, I guess it will just take time to finally chew through
rahul.xavier.si...@gmail.com
http://cassandra.link
On Tue, Mar 26, 2019 at 8:01 AM Nick Hatfield <nick.hatfi...@metricly.com> wrote:
How does one properly get rid of sstables that have fallen victim to overlapping
timestamps? I realized that we had TWCS set on our CF, which also had
read_repair = 0.1, and after correcting this to 0.0 I can clearly see the
effects over time on the new sstables. However, I still have old sstables
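To see why those old sstables survive, it may help to sketch the check Cassandra runs before dropping a fully expired sstable. This is a simplified, hypothetical model (the dataclass fields and function name are invented for illustration; the real logic lives in CompactionController.getFullyExpiredSSTables): an sstable whose every cell is past TTL + gc_grace still cannot be dropped while any live sstable overlaps its timestamp range, which is exactly the overlap that read-repaired cells create.

```python
from dataclasses import dataclass

@dataclass
class SSTable:
    name: str
    min_timestamp: int      # oldest cell write time in this sstable
    max_timestamp: int      # newest cell write time in this sstable
    max_deletion_time: int  # when the last cell's TTL expires (seconds)

def fully_expired(candidate, sstables, now, gc_grace):
    """Simplified sketch of the 'can this sstable drop?' check."""
    # 1) Every cell must be past its TTL plus gc_grace.
    if candidate.max_deletion_time + gc_grace >= now:
        return False
    # 2) No still-live sstable may overlap the candidate's timestamp
    #    range; otherwise dropping it could resurrect shadowed data.
    for other in sstables:
        if other is candidate:
            continue
        still_live = other.max_deletion_time + gc_grace >= now
        if still_live and other.min_timestamp <= candidate.max_timestamp:
            return False  # blocked by an overlapping live sstable
    return True

# A read-repaired cell keeps its OLD write timestamp but lands in a NEW
# sstable, making that sstable overlap the old window and block the drop:
old = SSTable("old-window", 100, 200, 1_000)
repaired = SSTable("new-window", 150, 900, 10**9)  # min_timestamp 150 overlaps
print(fully_expired(old, [old, repaired], now=5_000, gc_grace=0))  # False
clean = SSTable("new-window", 300, 900, 10**9)     # no overlap
print(fully_expired(old, [old, clean], now=5_000, gc_grace=0))     # True
```

Under this model, the old sstables will only drop once every newer sstable that overlaps their timestamp range has itself expired or been compacted away.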
Maybe others will have a different or better solution but, in my experience, to
accomplish HA we simply dual-write from our application to the new cluster. You
then export the data from the old cluster, using cql2json or any method you
choose, to the new cluster. That will cover all live (current) data.
Hey guys,
Can someone give me some idea or link some good material for determining a good
/ aggressive tombstone strategy? I want to make sure my tombstones are getting
purged as soon as possible to reclaim disk space.
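Not an authoritative recipe, but the usual knobs are gc_grace_seconds plus the per-table compaction subproperties. A sketch against a hypothetical table (the values are illustrative; lowering gc_grace_seconds is only safe if repairs reliably complete inside that window):

```sql
ALTER TABLE ks.events
WITH gc_grace_seconds = 3600   -- only safe if repairs finish well within 1h
AND compaction = {
  'class': 'TimeWindowCompactionStrategy',
  'compaction_window_unit': 'DAYS',
  'compaction_window_size': '1',
  'tombstone_threshold': '0.2',             -- recompact sstables >20% tombstones
  'unchecked_tombstone_compaction': 'true'  -- allow single-sstable tombstone compactions
};
```

The trade-off to keep in mind: gc_grace_seconds shorter than your repair cadence risks resurrecting deleted data, so tighten the compaction subproperties first and gc_grace only once repairs are predictable.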
Thanks
!
Jon
On Fri, Mar 15, 2019 at 9:48 AM Nick Hatfield
<nick.hatfi...@metricly.com> wrote:
It seems that running a repair works really well, quickly and efficiently, when
repairing a column family that does not use TWCS. Has anyone else had a similar
experience? Wondering if running TWCS is doing more harm than good, as it chews
up a lot of CPU for extended periods of time in
Awesome! Thank you!
On 3/14/19, 9:29 AM, "Jeff Jirsa" wrote:
>SSTableReader and CQLSSTableWriter if you’re comfortable with Java
>
>
>--
>Jeff Jirsa
>
>
>> On Mar 14, 2019, at 1:28 PM, Nick Hatfield
>>wrote:
>>
>> Bummer but, reaso
>
>The data gets an expiration timestamp when you write it. Changing the
>default only impacts newly written data
>
>If you need to change the expiration time on existing data, you must
>update it
>
>
>--
>Jeff Jirsa
>
>
>> On Mar 14, 2019, at 1:16 PM,
Hello,
Can anyone tell me if setting a default TTL will affect existing data? I would
like to enable a default TTL and have Cassandra add that TTL to any rows that
don’t currently have a TTL set.
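A default TTL is applied at write time only, so it never back-fills rows that already exist. A short sketch with a hypothetical table ks.events (names and values are illustrative):

```sql
-- New writes pick up the table default from now on:
ALTER TABLE ks.events WITH default_time_to_live = 86400;

-- Existing rows keep whatever expiration they were written with;
-- check per cell with the TTL() function (null = never expires):
SELECT id, TTL(value) FROM ks.events;

-- To attach a TTL to pre-existing rows you must rewrite them,
-- e.g. re-issue each row's current value with an explicit TTL:
UPDATE ks.events USING TTL 86400 SET value = 'current-value' WHERE id = 1;
```

The rewrite step is the expensive part: every old row has to be read and written back, which is why changing the default alone does not reclaim anything.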
Thanks,
data on disk is filling up quickly because we are unable to
successfully evict this data. Is there a way to
1. Cleanup what is currently taking up so much disk space
2. Mitigate this entirely in the future
Any help would be greatly appreciated!!
Thanks,
Nick Hatfield
From: Surbhi
Use this email to get some insight on how to fix database issues in our cluster?