> Hi,
>
> Assume I have two memcached nodes (A and B) at the beginning. When I 
> add a new node C, a portion of the keys are remapped, and thanks to 
> consistent hashing, only some of them.
>
> Let's assume a value with key "foo", originally on server A, is now 
> mapped to server C.
>
> When I later remove node C, the key is mapped to node A again, but by 
> that time node A only contains stale data.
>
> So, is flushing the data the only way to solve this issue?

If you're quickly adding/removing servers, that's presently the only way
to ensure it.

You can mitigate the issue by keeping your cache expiration times short.
Then either the stale entries expire on their own soon after the change,
or you wait at least as long as your longest expiration time before
reverting the server list.

For example:

Key "foo" has an expiration time of 10 minutes. It gets set on A.

You add node C to nodes A, B.

"foo" is re-stored on C.

You wait at least ten minutes after adding node C.

"foo" on A is now expired.

Remove node C.

A now has a cache miss for "foo".
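The remapping behavior itself can be sketched with a toy consistent-hash
ring (illustration only; real clients such as ketama-style hashers use
many virtual points per server, and which exact keys move depends on the
hash values):

```python
import bisect
import hashlib

class Ring:
    """Toy consistent-hash ring: one point per node, keys map to the
    next node clockwise from the key's hash."""

    def __init__(self, nodes=()):
        self.points = []  # sorted list of (hash, node)
        for n in nodes:
            self.add(n)

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add(self, node):
        bisect.insort(self.points, (self._hash(node), node))

    def remove(self, node):
        self.points.remove((self._hash(node), node))

    def node_for(self, key):
        h = self._hash(key)
        idx = bisect.bisect(self.points, (h, ""))
        return self.points[idx % len(self.points)][1]

ring = Ring(["A", "B"])
before = ring.node_for("foo")    # the node "foo" starts on
ring.add("C")
during = ring.node_for("foo")    # may or may not have moved to C
ring.remove("C")
reverted = ring.node_for("foo")  # back to the original node
assert before == reverted        # removal restores the old mapping
```

The final assertion is the crux of the question: removing C puts "foo"
back on its original node, which is why any data that node cached before
the change must have expired (or been flushed) by then.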

Are you doing some cloud use case where you're frequently
adding/removing servers? Most memcached server lists should stay fairly
static.
