Hi, I have an unusual case here: I'm wondering what will happen if I
set min_index_interval to 1.

Here's the logic. Suppose I have a table that I really want to squeeze as
many reads/sec out of as possible, and where the row data is much larger
than the keys, e.g. the keys are a few bytes and each row is ~500KB.

This table would be a great candidate for key caching. Let's suppose I have
enough memory to have every key cached. However, it's a lot of data, and
the reads are very random. So it would take a very long time for that cache
to warm up.
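
By "every key cached" I mean roughly the following sketch (assuming
Cassandra 2.1+ style table caching options and the DataStax Python driver;
my_ks and my_table are placeholder names):

    # Sketch: cache every partition key for the table, but not the large rows.
    # Assumes Cassandra 2.1+ map-style caching options; my_ks/my_table are placeholders.
    from cassandra.cluster import Cluster

    cluster = Cluster(["127.0.0.1"])
    session = cluster.connect()
    session.execute(
        "ALTER TABLE my_ks.my_table "
        "WITH caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}"
    )
    cluster.shutdown()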

One solution is to write a little app that goes through every key to warm
the cache manually, and to make sure key_cache_keys_to_save is set so that
Cassandra saves the whole cache and reloads it on restart. (Does anyone
know of a better way to do this?)
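
For reference, the little warm-up app would be something like this sketch
(DataStax Python driver; my_ks, my_table, and id are placeholder names):

    # Sketch of a key-cache warmer; my_ks, my_table and id are placeholders.
    from cassandra.cluster import Cluster

    cluster = Cluster(["127.0.0.1"])
    session = cluster.connect("my_ks")

    # Enumerate every partition key; only the key column is fetched, so the
    # ~500KB rows never cross the wire.
    lookup = session.prepare("SELECT id FROM my_table WHERE id = ?")
    warmed = 0
    for row in session.execute("SELECT id FROM my_table"):
        # A single-partition read is what actually populates the key cache.
        session.execute(lookup, (row.id,))
        warmed += 1

    print("warmed %d keys" % warmed)
    cluster.shutdown()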

The other is the crazy idea I started with: setting min_index_interval to
1. My guess is that this would cause Cassandra to read every index entry
into the index summary, effectively keeping them all cached permanently,
and to read them straight out of the SSTables on every restart. Would this
work? Other than a probably very long startup time, are there any issues
with it?
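
If it matters, I'd be setting it per table, roughly like this sketch
(again placeholder names; I'm assuming a version where min_index_interval
is a per-table property):

    # Sketch: sample (nearly) every key into the index summary for one table.
    # my_ks/my_table are placeholders; assumes min_index_interval is a
    # per-table property (Cassandra 2.1+).
    from cassandra.cluster import Cluster

    cluster = Cluster(["127.0.0.1"])
    session = cluster.connect()
    session.execute("ALTER TABLE my_ks.my_table WITH min_index_interval = 1")
    cluster.shutdown()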

Thanks,
-dan
