If you increase the number of nodes to 3, with an RF of 3, then you should be able to read/delete using a QUORUM consistency level, which I believe will help here. Also, make sure the clocks on your servers are in sync, using NTP, as drifting time between your client and servers could cause updates to be mistakenly dropped for being old.
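
As a minimal sketch of what that looks like from the client side (assuming the DataStax Java driver; the "demo" keyspace, "events" table, and column names below are placeholders rather than your actual Thrift-style column family), you'd set QUORUM on both the read and the delete:

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.ConsistencyLevel;
    import com.datastax.driver.core.ResultSet;
    import com.datastax.driver.core.Row;
    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.SimpleStatement;

    public class QuorumDeleteExample {
        public static void main(String[] args) {
            Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
            Session session = cluster.connect("demo");   // placeholder keyspace

            // Read pending events at QUORUM: with RF=3, a QUORUM read (2 of 3 replicas)
            // is guaranteed to overlap a previous QUORUM write/delete.
            SimpleStatement read = new SimpleStatement(
                    "SELECT event_id FROM events WHERE queue_id = 'queue1'");
            read.setConsistencyLevel(ConsistencyLevel.QUORUM);
            ResultSet rs = session.execute(read);

            for (Row row : rs) {
                // ... process the event ...

                // Delete it at QUORUM as well, so the tombstone reaches a majority of replicas.
                SimpleStatement delete = new SimpleStatement(
                        "DELETE FROM events WHERE queue_id = 'queue1' AND event_id = ?",
                        row.getUUID("event_id"));
                delete.setConsistencyLevel(ConsistencyLevel.QUORUM);
                session.execute(delete);
            }

            cluster.close();
        }
    }

The point is that QUORUM reads plus QUORUM deletes always intersect on at least one replica, so a reader can't miss a tombstone the way it can at CL ONE.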

Also, make sure you are running with a gc_grace_seconds value that is high enough. The default is 10 days (864000 seconds).
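
If you need to change it, something like this works through the same driver session (the table name is again a placeholder):

    // 864000 seconds = 10 days, which is the default gc_grace_seconds.
    session.execute("ALTER TABLE events WITH gc_grace_seconds = 864000");

Just keep it longer than the time it takes you to run repair across the cluster, otherwise tombstones can be collected before they reach every replica and deleted data may reappear.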

Hope this helps,
-Mike

On 2/15/2013 1:13 PM, Víctor Hugo Oliveira Molinar wrote:
Hello everyone!

I have a column family filled with event objects which need to be processed by query threads. Once each thread queries for those objects (spread among columns below a row), it performs a delete operation in Cassandra for each object.
This is done in order to ensure that these events won't be processed again.
Some tests have shown me that it works, but sometimes those events are not getting deleted. I checked it through cassandra-cli, etc.

So, reading this (http://wiki.apache.org/cassandra/DistributedDeletes), I came to the conclusion that I may be reading old data.
My cluster is currently configured as: 2 nodes, RF 1, CL ONE.
In that case, what should I do?

- Increase the consistency level for the write operations (in this case, the deletions), in order to ensure that those deletions are stored on all nodes.
or
- Increase the consistency level for the read operations, in order to ensure that I'm reading only events that haven't already been processed (deleted).

?

-
Thanks in advance


