yifan-c commented on code in PR #4399:
URL: https://github.com/apache/cassandra/pull/4399#discussion_r2414946121


##########
conf/cassandra.yaml:
##########
@@ -617,6 +617,54 @@ counter_cache_save_period: 7200s
 # Disabled by default, meaning all keys are going to be saved
 # counter_cache_keys_to_save: 100
 
+# Dictionary compression settings for ZSTD dictionary-based compression
+# These settings control the automatic training and caching of compression dictionaries
+# for tables that use ZSTD dictionary compression.
+
+# How often to refresh compression dictionaries across the cluster.
+# During refresh, nodes will check for newer dictionary versions and update their caches.
+# Min unit: s
+compression_dictionary_refresh_interval: 3600s
+
+# Initial delay before starting the first dictionary refresh cycle after node startup.
+# This prevents all nodes from refreshing simultaneously when the cluster starts.

Review Comment:
   I do not think starting all nodes at the same moment is a requirement. The refresh interval just defines how frequently each node checks the system table. When a new dictionary is trained, the node broadcasts a message to all live nodes to notify them that the dictionary is available; this happens outside of the periodic polling.
   Combining both, you can consider the interval as an upper bound on how long it takes a dictionary to be refreshed (e.g. when broadcasting fails at some nodes).
   Even when some nodes are still running with the prior dictionary, or with no dictionary at all, it is not a huge issue from the perspective of reading and writing SSTables: eventually, all nodes will pick up the latest.
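   The two refresh paths described above (broadcast as the fast path, periodic polling as the fallback bound) can be sketched as follows. This is a minimal illustration, not the PR's actual implementation; all class and method names here are hypothetical:

```java
// Hedged sketch of the refresh model: a node updates its cached dictionary
// version either when it receives a broadcast from the training node, or at
// the latest on its next periodic poll of the system table. The poll interval
// is therefore an upper bound on staleness when a broadcast is missed.
import java.util.concurrent.atomic.AtomicLong;

public class DictionaryRefreshSketch
{
    // Hypothetical state: the newest dictionary version this node has cached.
    private final AtomicLong cachedVersion = new AtomicLong(0);

    // Fast path: another node announces a freshly trained dictionary.
    public void onBroadcast(long announcedVersion)
    {
        refreshIfNewer(announcedVersion);
    }

    // Fallback path: the periodic task (every
    // compression_dictionary_refresh_interval) reads the system table.
    public void onPeriodicPoll(long latestVersionInSystemTable)
    {
        refreshIfNewer(latestVersionInSystemTable);
    }

    private void refreshIfNewer(long version)
    {
        // Only move forward; stale announcements and no-op polls are ignored.
        cachedVersion.updateAndGet(current -> Math.max(current, version));
    }

    public long cachedVersion()
    {
        return cachedVersion.get();
    }

    public static void main(String[] args)
    {
        DictionaryRefreshSketch node = new DictionaryRefreshSketch();
        node.onBroadcast(3);     // broadcast arrives: refreshed immediately
        node.onPeriodicPoll(3);  // poll sees the same version: no-op
        node.onPeriodicPoll(5);  // broadcast was missed: poll catches up
        System.out.println(node.cachedVersion());
    }
}
```

   Either path converges to the latest version, which is why nodes temporarily on an older (or no) dictionary remain correct for SSTable reads and writes.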



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

