To be a little clearer, a simplified version of what I'm asking is:

Let's say you add 1K columns with timestamps 1 to 1000. Then, at an arbitrarily 
distant point in the future, you call remove on that CF with timestamp 500 (so 
the timestamps are logically out of order relative to when the writes actually 
happened). Will it delete exactly half of the columns, or is there something 
going on under the covers that makes this not work as you might expect?
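
Roughly, in code, I mean something like the following (a pycassa sketch; the 
keyspace/CF/row names are made up and the exact method signatures are from 
memory, so take it as illustrative rather than exact):

    import pycassa

    pool = pycassa.ConnectionPool('MyKeyspace', ['localhost:9160'])  # made-up keyspace
    cf = pycassa.ColumnFamily(pool, 'Logs')                          # made-up CF

    # Write 1000 columns whose mutation timestamps are the logical values 1..1000,
    # not the wall-clock time of the insert.
    for ts in range(1, 1001):
        cf.insert('some-row', {'col-%04d' % ts: 'value'}, timestamp=ts)

    # Much later, remove the row with timestamp 500 (assuming the client exposes
    # the Thrift remove timestamp; pycassa's remove() takes an optional timestamp
    # as far as I know). Does the resulting tombstone shadow exactly the columns
    # written with timestamps <= 500, leaving 501..1000 untouched?
    cf.remove('some-row', timestamp=500)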

-Jeffrey

-----Original Message-----
From: Jeffrey Wang [mailto:jw...@palantir.com] 
Sent: Thursday, February 03, 2011 3:03 PM
To: user@cassandra.apache.org
Subject: RE: rolling window of data

Thanks for the response, but unfortunately a TTL is not enough for us. We would 
like to be able to dynamically control the window in case there is an unusually 
large amount of data, so that we don't run out of disk space.

One question I have in particular: if I use the timestamp of my log entries 
(not necessarily correlated at all with the time of insert) as the timestamp on 
my mutations, will Cassandra do the right thing when I delete? We don't have any 
need for conflict resolution, so we are currently just using the current time.

It seems like there is a possibility, depending on Cassandra's implementation 
details, that I could call remove with a timestamp and have everything written 
with an earlier timestamp get deleted. Like I said before, this seems a bit 
hacky to me, but would it get the job done?
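
Concretely, the idea would be something like this (again a pycassa sketch with 
made-up names and example values; the point is just stamping writes with the 
log entry's own time and then removing with a cutoff timestamp):

    import pycassa

    pool = pycassa.ConnectionPool('MyKeyspace', ['localhost:9160'])  # made-up names
    cf = pycassa.ColumnFamily(pool, 'Logs')
    row_key = 'app-server-1'

    # On write: stamp the mutation with the log entry's own time (microseconds),
    # not the wall-clock time of the insert.
    log_entry_micros = 1296770400000000
    cf.insert(row_key, {'entry-0001': 'log line...'}, timestamp=log_entry_micros)

    # On cleanup: remove with the cutoff as the timestamp, the hope being that the
    # tombstone covers everything written with an earlier-or-equal timestamp.
    cutoff_micros = 1288000000000000
    cf.remove(row_key, timestamp=cutoff_micros)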

-Jeffrey

-----Original Message-----
From: sc...@scode.org [mailto:sc...@scode.org] On Behalf Of Peter Schuller
Sent: Thursday, February 03, 2011 8:48 AM
To: user@cassandra.apache.org
Subject: Re: rolling window of data

> The correct way to accomplish what you describe is the new (in 0.7)
> per-column TTL.  Simply set this to 60 * 60 * 24 * 90 (90 days' worth of
> seconds) and your columns will magically disappear after that length of
> time.

Although that assumes it's okay to lose data, or that there is some other 
mechanism in place to prevent losing it should the data not yet have been 
processed to whatever extent is required.

TTLs would be a great way to achieve the windowing efficiently, but they do 
remove the ability to control exactly when data is deleted (such as only after 
certain batch processing of it has completed).
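
For reference, the per-column TTL suggested above is just an extra argument at 
write time; a rough pycassa sketch (made-up keyspace/CF/row names, illustrative 
only):

    import pycassa

    pool = pycassa.ConnectionPool('MyKeyspace', ['localhost:9160'])  # made-up names
    cf = pycassa.ColumnFamily(pool, 'Logs')

    NINETY_DAYS = 60 * 60 * 24 * 90  # 90 days' worth of seconds, as above

    # Each column expires on its own ~90 days after it was written; there is no
    # explicit remove to schedule, but also no way to postpone expiry if some
    # batch processing of the data has not finished yet.
    cf.insert('some-row', {'entry-0001': 'log line...'}, ttl=NINETY_DAYS)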

-- 
/ Peter Schuller
