Hello guys.
I'm investigating the causes of a performance degradation in my usage
scenario, which is as follows:

- I have a column family in which each row is filled with thousands of
columns (varying between 10k and 200k). I also have thousands of rows,
though not much more than 15k.
- These rows are constantly updated, but the write load is not that
intensive; I estimate it at about 100 writes/sec on the column family.
- Each column represents a message that is read and processed by another
process. After it is read, the column is deleted so that it is excluded
from the next query on that row (the consumer loop is sketched below).
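
To make the read/delete cycle concrete, here is roughly what the consumer
does. This is only a sketch using the pycassa Thrift client; the keyspace,
column family, and row key names below are placeholders, not my real schema:

    import pycassa
    from pycassa import NotFoundException

    pool = pycassa.ConnectionPool('MyKeyspace', server_list=['localhost:9160'])
    cf = pycassa.ColumnFamily(pool, 'Messages')

    row_key = 'queue1'  # one row holding 10k ~ 200k message columns

    def process(payload):
        # Placeholder for the consumer's real work.
        print('processed: %s' % payload)

    try:
        # Column slice query: fetch the next batch of live columns.
        batch = cf.get(row_key, column_count=100)
    except NotFoundException:
        batch = {}  # no live columns left in this row

    for name, payload in batch.items():
        process(payload)

    # Delete the consumed columns; each delete writes a tombstone.
    if batch:
        cf.remove(row_key, columns=list(batch.keys()))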

Ok, so, I've found that after many insertions plus deletion updates, my
queries (column slice queries) take longer and longer to perform, even
when only a few columns remain, fewer than 100.

So it looks like the greater the number of deleted columns in a row, the
longer a query on that row takes.
-> Internally, does C* range over deleted columns (tombstones) during a
column slice query? If so, how can I mitigate their impact on my queries?
Or how can I avoid those deleted columns altogether?
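
For instance, is something like sharding each logical row into time buckets
a sane way to avoid them? Again, just a sketch, and the bucketing scheme is
my own invention, not an established recipe:

    import time
    import pycassa

    pool = pycassa.ConnectionPool('MyKeyspace', server_list=['localhost:9160'])
    cf = pycassa.ColumnFamily(pool, 'Messages')

    def bucket_key(base_key, window_secs=3600):
        # Shard the logical row into hourly buckets. Consumers slice only
        # the current bucket, so older buckets full of tombstones are
        # never scanned again.
        return '%s:%d' % (base_key, int(time.time()) // window_secs)

    # Producer writes to the current bucket...
    cf.insert(bucket_key('queue1'), {'msg-0001': 'payload'})

    # ...and the consumer slices only that same bucket.
    batch = cf.get(bucket_key('queue1'), column_count=100)

Would that actually keep slice queries away from the tombstones, or am I
missing something?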
