We have seen this issue while using LCS: around 100K sstables were
generated, compactions were not able to catch up, and the node became
unresponsive. The reason was that one of the sstables got corrupted and
compaction was effectively hanging on that sstable while further sstables
kept being flushed.
I have not read the entire thread, so sorry if this is already mentioned.
You should review your logs; a potential problem could be a corrupted
sstable.
In a situation like this you will notice that the system is repeatedly
trying to compact a given sstable. The compaction fails and, based on the …
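Grepping the log for that corruption signature confirms it quickly. A minimal sketch run against an invented sample log (the paths, timestamps, and keyspace/table names are fabricated for illustration; on a real node you would grep system.log itself):

```shell
# Fabricated sample of what the corruption signature looks like in
# system.log; on a real node grep the actual log instead.
cat > system.log.sample <<'EOF'
ERROR [CompactionExecutor:4] 2016-10-25 08:12:01 CassandraDaemon.java - Exception in thread
org.apache.cassandra.io.sstable.CorruptSSTableException: Corrupted: /data/ks/tbl/ks-tbl-ka-1234-Data.db
ERROR [CompactionExecutor:5] 2016-10-25 08:13:07 CassandraDaemon.java - Exception in thread
org.apache.cassandra.io.sstable.CorruptSSTableException: Corrupted: /data/ks/tbl/ks-tbl-ka-1234-Data.db
EOF

# How many corruption errors, and which sstable keeps reappearing:
grep -c 'CorruptSSTableException' system.log.sample
grep -o '[^ ]*-Data\.db' system.log.sample | sort | uniq -c | sort -rn
```

The same -Data.db path repeating across compaction attempts is the tell that compaction is stuck on one file.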
What are your disk hardware specs?
On Tue, Oct 25, 2016 at 8:47 AM, Lahiru Gamathige wrote:
> Hi Users,
>
> I have a single server code deployed with multiple environments (staging,
> dev etc) but they all use a single Cassandra cluster but keyspaces are
> prefixed with the environment name, so …
+1, definitely upgrade to 2.1.16. You shouldn't see any compatibility issues
client side when upgrading from 2.1.0. If scrub removed 500 SSTables, that's
quite worrying. If the mass of SSTables is causing issues, you can disconnect
the node from the cluster using:
nodetool disablegossip && nodetool disa…
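For completeness, a sketch of the isolation commands in full, echoed as a dry run so it is safe to paste anywhere; drop the echo on a real node. The command names are from the nodetool that ships with the 2.x line:

```shell
# Dry-run sketch: take a node out of client and cluster traffic without
# stopping the JVM, so compactions can catch up undisturbed.
isolate_node() {
  echo nodetool disablegossip   # stop gossiping with the rest of the cluster
  echo nodetool disablethrift   # stop serving Thrift clients
  echo nodetool disablebinary   # stop serving native-protocol (CQL) clients
}
isolate_node
```

Re-enabling is the mirror image (enablegossip, enablethrift, enablebinary) once compactions have drained.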
Hi Lahiru,
2.1.0 is also quite old (Sep 2014), and just from memory I remember
there was an issue we had with cold_reads_to_omit:
http://grokbase.com/t/cassandra/user/1523sm4y0r/how-to-deal-with-too-many-sstables
https://www.mail-archive.com/search?l=user@cassandra.apache.org&q=sub
Hi Jan,
Thanks for the response. My SSTables are < 3MB and I have 3500+ SSTables in
the folder. When you say "if they are small", do you mean file sizes like
mine? I ran nodetool compact and nothing happened; then I ran nodetool
scrub, which removed 500 SSTables and then stopped.
Thanks for that tip.
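To put numbers on "lots of small sstables", counting the -Data.db files and their sizes is enough, since there is one -Data.db file per sstable. A sketch against a fabricated demo directory (the path and file names are invented; on a real node the directory would be the keyspace/table folder under the data file location):

```shell
# Stand-in for /var/lib/cassandra/data/<keyspace>/<table> on a real node.
DATA_DIR=./sstable-demo
mkdir -p "$DATA_DIR"
# Fabricate a few tiny -Data.db files for the demo.
for i in 1 2 3; do
  head -c 1024 /dev/zero > "$DATA_DIR/ks-tbl-ka-$i-Data.db"
done

# One -Data.db file per sstable, so counting them counts sstables.
find "$DATA_DIR" -name '*-Data.db' | wc -l

# Per-file sizes; thousands of sub-3MB files points at flush pressure
# rather than normal compaction behaviour.
du -h "$DATA_DIR"/*-Data.db
```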
Hi Lahiru,
maybe your node was running out of memory before. I have seen this
behaviour when available heap is low, forcing memtables to be flushed
out to sstables quite often.
If that is what is hitting you, you should see that the sstables
are really small.
To clean up, nodetool compact should do the trick.
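If low heap forcing frequent flushes is the suspicion, these are the 2.1-era cassandra.yaml settings that govern memtable flush pressure. A sketch with illustrative values only, not recommendations; check them against your actual heap size and your own yaml:

```yaml
# 2.1-era memtable knobs worth reviewing when heap pressure forces
# frequent flushes (values below are illustrative placeholders).
# memtable_heap_space_in_mb: 2048        # on-heap memtable budget
# memtable_offheap_space_in_mb: 2048     # off-heap memtable budget
# memtable_cleanup_threshold: 0.11       # dirty fraction that triggers a flush
memtable_allocation_type: heap_buffers
```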
Hi Users,
I have a single server codebase deployed to multiple environments (staging,
dev, etc.), but they all use a single Cassandra cluster; keyspaces are
prefixed with the environment name, so each server has its own keyspace to
store data. I am using Cassandra 2.1.0 and using it to store timeseries
data.