Re: Large number of tiny sstables flushed constantly

2021-08-11 Thread Jiayong Sun
 Hi Erick,
The nodes have 4 SSDs (1 TB each) in RAID0, but we only use 2.4 TB of that space; current disk usage is about 50%. Based on the number of disks we increased memtable_flush_writers to 4 instead of the default of 2.
For the settings you asked about:
- max heap size: 31 GB
- memtable_heap_space_in_mb: default
- memtable_offheap_space_in_mb: default
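For reference, here is roughly how those settings look in our config (paths are the standard ones for our install and Xms/Xmx are set equal as we usually do, so treat this as approximate):

  # cassandra.yaml (relevant lines only)
  memtable_flush_writers: 4
  # memtable_heap_space_in_mb:     (left commented out, so the default of 1/4 of the heap applies)
  # memtable_offheap_space_in_mb:  (left commented out, so the default of 1/4 of the heap applies)

  # jvm.options (max heap of 31 GB)
  -Xms31G
  -Xmx31G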
In the logs, we also noticed that the system.sstable_activity table has hundreds of MB or even GB of data and is constantly flushing:
DEBUG [NativePoolCleaner] ColumnFamilyStore.java:932 - Enqueuing flush of sstable_activity: 0.293KiB (0%) on-heap, 0.107KiB (0%) off-heap
DEBUG [NonPeriodicTasks:1] SSTable.java:105 - Deleting sstable: /app/cassandra/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/md-103645-big
DEBUG [NativePoolCleaner] ColumnFamilyStore.java:1322 - Flushing largest CFS(Keyspace='system', ColumnFamily='sstable_activity') to free up room. Used total: 0.06/1.00, live: 0.00/0.00, flushing: 0.02/0.29, this: 0.00/0.00
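We can also pull per-table stats on one of the nodes to see the flush and SSTable sizes for that table; something along these lines (assuming plain nodetool with default JMX settings):

  nodetool tablestats system.sstable_activity
  nodetool tablehistograms system sstable_activity
  nodetool compactionstats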
Thanks,
Jiayong Sun

On Wednesday, August 11, 2021, 12:06:27 AM PDT, Erick Ramirez wrote:
 4 flush writers isn't bad since the default is 2. It doesn't make a difference 
if you have fast disks (like NVMe SSDs) because only 1 thread gets used.
But if flushes are slow, the work gets distributed to 4 flush writers so you 
end up with smaller flush sizes although it's difficult to tell how tiny the 
SSTables would be without analysing the logs and overall performance of your 
cluster.
Was there a specific reason you decided to bump it up to 4? I'm just trying to
get a sense of why you did it since it might provide some clues. Out of
curiosity, what do you have set for the following?
- max heap size
- memtable_heap_space_in_mb
- memtable_offheap_space_in_mb

Re: Large number of tiny sstables flushed constantly

2021-08-11 Thread Jiayong Sun
 Hi Erick,
Thanks for your response. Actually, we did not set memtable_cleanup_threshold in cassandra.yaml. However, we do have memtable_flush_writers: 4 defined in the yaml, and each VM node has 16 cores. Any advice on this parameter's value?
Thanks again.
Jiayong Sun

On Tuesday, August 10, 2021, 09:20:12 PM PDT, Erick Ramirez wrote:
Is it possible that you've got memtable_cleanup_threshold set in cassandra.yaml with a low value? It's been deprecated in C* 3.10 (CASSANDRA-12228). If you do have it configured, I'd recommend removing it completely and restarting C* when you can. Cheers!
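A quick way to confirm whether it's set is to grep the yaml on one of the nodes (adjust the path to wherever your cassandra.yaml lives):

  grep -n memtable_cleanup_threshold /path/to/cassandra.yaml

If that only returns a commented-out line (or nothing), you're already on the default.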

Re: Large number of tiny sstables flushed constantly

2021-08-11 Thread Erick Ramirez
4 flush writers isn't bad since the default is 2. It doesn't make a
difference if you have fast disks (like NVMe SSDs) because only 1 thread
gets used.

But if flushes are slow, the work gets distributed to 4 flush writers so
you end up with smaller flush sizes although it's difficult to tell how
tiny the SSTables would be without analysing the logs and overall
performance of your cluster.
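As a rough illustration (based on the default derivation described in cassandra.yaml, so the exact numbers are approximate): when memtable_cleanup_threshold isn't set explicitly it's computed from the flush writer count, so more writers means each flush is triggered at a smaller fraction of the memtable space:

  # default when memtable_cleanup_threshold is not set:
  #   memtable_cleanup_threshold = 1 / (memtable_flush_writers + 1)
  # memtable_flush_writers: 2  ->  1 / 3 ~= 0.33
  # memtable_flush_writers: 4  ->  1 / 5  = 0.20
  # i.e. the largest memtable gets flushed once roughly 20% of the memtable
  # space is in use instead of roughly 33%, which tends to produce smaller SSTables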

Was there a specific reason you decided to bump it up to 4? I'm just trying
to get a sense of why you did it since it might provide some clues. Out of
curiosity, what do you have set for the following?
- max heap size
- memtable_heap_space_in_mb
- memtable_offheap_space_in_mb
