Hi Mike,
If you will, share your compaction settings. More than likely, your issue comes
down to one of two causes:
1. You have read repair chance set to anything other than 0
2. You’re running repairs on the TWCS CF
Or both….
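If it does turn out to be the read repair chance, clearing it is a quick change.
A rough sketch, assuming a keyspace/table of ks.cf (substitute your own names):

cqlsh -e "ALTER TABLE ks.cf WITH read_repair_chance = 0.0 AND dclocal_read_repair_chance = 0.0;"
# double-check it took effect
cqlsh -e "DESCRIBE TABLE ks.cf;" | grep read_repair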
From: Mike Torra [mailto:mto...@salesforce.com.INVALID]
Sent: Friday, May 03,
Hi Mike,
Have you checked to make sure you’re not a victim of timestamp overlap?
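One way to check, assuming keyspace/table placeholders of ks and cf:
sstableexpiredblockers (shipped in the Cassandra tools/bin directory) lists which
SSTables are holding fully expired ones back.

# run on a node that owns the data; ks and cf are placeholders
sstableexpiredblockers ks cf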
From: Mike Torra [mailto:mto...@salesforce.com.INVALID]
Sent: Thursday, May 02, 2019 11:09 AM
To: user@cassandra.apache.org
Subject: Re: TWCS sstables not dropping even though all data is expired
I'm pretty stumped
Just curious, but did you make sure to run the sstable upgrade after you
completed the move from 2.x to 3.x?
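If not, something along these lines rewrites everything to the current format
(run per node, one node at a time; ks and cf below are placeholders if you want
to limit it to a single table):

nodetool upgradesstables            # rewrites only SSTables not already on the current version
nodetool upgradesstables -a ks cf   # or force a rewrite of every SSTable for one table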
From: Evgeny Inberg [mailto:evg...@gmail.com]
Sent: Thursday, May 02, 2019 1:31 AM
To: user@cassandra.apache.org
Subject: Re: Cassandra taking very long to start and server under heavy load
Hello,
Scenario:
I am joining a new host to my cluster, and while it was joining, one of the hosts
streaming data to it fell over because its disk filled up. Once the node was
marked DN by the rest of the cluster, I gracefully restarted it. I can see all
appears to be fine when it came back up:
INFO [RMI TCP Connection(10)-127.0.0.1] 2019-04-03 16:25:43,628 Gossiper.java:1029 - InetAddress /192.168.1.18 is now DOWN
INFO [RMI TCP Connection(10)-127.0.0.1] 2019-04-03 16:25:43,631 StorageService.java:2324 - Removing tokens [..] for /192.168.1.18
On 03.04.2019
Run assassinate the old way. It works very well...
wget -q -O jmxterm.jar http://downloads.sourceforge.net/cyclops-group/jmxterm-1.0-alpha-4-uber.jar
java -jar ./jmxterm.jar
$>open localhost:7199
$>bean org.apache.cassandra.net:type=Gossiper
$>run unsafeAssassinateEndpoint 192.168.1.18
$>quit
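If I remember right, 2.2 and later also expose the same call directly through
nodetool, which saves the jmxterm step:

nodetool assassinate 192.168.1.18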
When that happens I suspect it'll happen quite quickly, but I'm not sure.
On Wed, Mar 27, 2019 at 7:30 AM Nick Hatfield <nick.hatfi...@metricly.com> wrote:
Awesome, thank you Jeff. Sorry I had not seen this yet. So we have this
enabled, I guess it will just take time to finally
rahul.xavier.si...@gmail.com
http://cassandra.link
On Tue, Mar 26, 2019 at 8:01 AM Nick Hatfield <nick.hatfi...@metricly.com> wrote:
How does one properly get rid of sstables that have fallen victim to overlapping
timestamps? I realized that we had TWCS set in our CF, which also had
read_repair = 0.1…
If there is little chance that it has any new data, you can just remove the
SSTables. You can do a rolling restart -- take down a node, remove mc-254400-*
and then start it up.
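Roughly, the per-node steps would look like this (the data path, service name,
and the mc-254400 generation below are assumptions based on the example above --
adjust to your layout):

nodetool drain                                    # flush memtables and stop accepting writes
sudo service cassandra stop
rm /var/lib/cassandra/data/ks/cf-*/mc-254400-*    # the generation you want gone; ks/cf are placeholders
sudo service cassandra start
nodetool status                                   # wait for UN before moving to the next node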
rahul.xavier.si...@gmail.com
http://cassandra.link
On Tue, Mar 26, 2019 at 8:01 AM Nick Hatfield <nick.hatfi...@metricly.com> wrote:
How does one properly get rid of sstables that have fallen victim to overlapping
timestamps? I realized that we had TWCS set in our CF, which also had
read_repair = 0.1, and after correcting this to 0.0 I can clearly see the
effects over time on the new sstables. However, I still have old sstables…
Maybe others will have a different or better solution but, in my experience, to
accomplish HA we simply write from our application to the new cluster. You then
export the data from the old cluster to the new cluster using cql2json or any
method you choose. That will cover all live (now) data…
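For the export/import step, cqlsh's built-in COPY is one alternative to
cql2json; a rough sketch, with ks.cf and the host names as placeholders:

# on a node of the old cluster
cqlsh old-host -e "COPY ks.cf TO 'cf.csv' WITH HEADER = true;"
# then load it into the new cluster
cqlsh new-host -e "COPY ks.cf FROM 'cf.csv' WITH HEADER = true;"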
Hey guys,
Can someone give me some idea or link some good material for determining a good
/ aggressive tombstone strategy? I want to make sure my tombstones are getting
purged as soon as possible to reclaim disk.
Thanks
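For reference, the knobs that usually matter here are the table's
gc_grace_seconds plus the compaction subproperties; a sketch of a fairly
aggressive setup, with ks.cf and all of the values as placeholders to tune for
your own repair schedule:

cqlsh -e "ALTER TABLE ks.cf
  WITH gc_grace_seconds = 3600
  AND compaction = {'class': 'TimeWindowCompactionStrategy',
                    'compaction_window_unit': 'DAYS',
                    'compaction_window_size': '1',
                    'unchecked_tombstone_compaction': 'true',
                    'tombstone_threshold': '0.2'};"

The usual caveat: a gc_grace_seconds shorter than your repair interval risks
resurrecting deleted data, so only shrink it that far for TTL-only tables.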
Hope that helps!
Jon
On Fri, Mar 15, 2019 at 9:48 AM Nick Hatfield <nick.hatfi...@metricly.com> wrote:
It seems that running a repair works really well, quickly and efficiently, when
repairing a column family that does not use TWCS. Has anyone else had a similar
experience? Wondering if running TWCS is doing more harm than good…
It seems that running a repair works really well, quickly and efficiently, when
repairing a column family that does not use TWCS. Has anyone else had a similar
experience? Wondering if running TWCS is doing more harm than good, as it chews
up a lot of cpu for extended periods of time in comparison.
Awesome! Thank you!
On 3/14/19, 9:29 AM, "Jeff Jirsa" wrote:
>SSTableReader and CQLSSTableWriter if you’re comfortable with Java
>
>
>--
>Jeff Jirsa
>
>
>> On Mar 14, 2019, at 1:28 PM, Nick Hatfield
>> wrote:
>>
>> Bummer but, reaso
>
>The data gets an expiration time stamp when you write it. Changing the
>default only impacts newly written data
>
>If you need to change the expiration time on existing data, you must
>update it
>
>
>--
>Jeff Jirsa
>
>
>> On Mar 14, 2019, at 1:16 PM,
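In other words, something like this; ks.cf, the 30-day TTL, and the column/key
names are all placeholders:

# new writes pick up the default from now on
cqlsh -e "ALTER TABLE ks.cf WITH default_time_to_live = 2592000;"
# existing rows only get a TTL if you rewrite them with their current values,
# e.g. row by row from a small script or Spark job:
cqlsh -e "UPDATE ks.cf USING TTL 2592000 SET some_col = 'current value' WHERE pk = 'some_key';"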
Hello,
Can anyone tell me if setting a default TTL will affect existing data? I would
like to enable a default TTL and have cassandra add that TTL to any rows that
don’t currently have a TTL set.
Thanks,
Max: 12/30/2018 Min: 12/29/2018 Estimated droppable tombstones: 0.6116207911770754  6.3G Mar 4 21:52 mc-230801-big-Data.db
Max: 12/31/2018 Min: 12/30/2018 Estimated droppable tombstones: 0.6156449592384619  6.6G Mar 5 09:48 mc-231332-big-Data.db
Currently our data on disk is filling up
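For anyone wanting to produce that kind of listing, it can be pulled per SSTable
with sstablemetadata; a rough loop, assuming the default data directory and
ks/cf placeholders:

for f in /var/lib/cassandra/data/ks/cf-*/mc-*-big-Data.db; do
  echo "$f"
  sstablemetadata "$f" | grep -E 'Minimum timestamp|Maximum timestamp|Estimated droppable tombstones'
done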