JC wrote:
> any better chance for 1.3 if I make it a possible zero-extra-cost
> feature, like CAS? ;-)
I would say that it is too late for inclusion in the initial release of
1.3. We should only apply bugfixes to 1.3 right now and get it into a
stable state.
any better chance for 1.3 if I make it a possible zero-extra-cost
feature, like CAS? ;-)
Jean-Charles
On Mar 20, 8:20 am, dormando wrote:
We're going to pass on the patch for now. It's a bit much of a corner case
for the general release.
thanks to folks for taking the time to propose the patch and discuss it
though - it's certainly going to be kept in mind.
-Dormando
On Mon, 9 Mar 2009, Colin Pitrat wrote:
So what's the final status on this patch?
2009/2/23 Jean-Charles Redoutey
this looks like a pretty efficient summary of the whole thread!
To come back to Dormando's point: any global consistency issue can be solved
by a full flush, so I can't honestly say it is an absolutely necessary
feature. However, by dramatically reducing the DB cost of such flush
operations,
On Feb 22, 5:02 pm, dormando wrote:
> It feels excessive if the only real benefit is being able to do a full
> data flush in less time? Is there anything I'm missing?
This is kind of how I see it:
Pros:
* It's consistent with flush_all [n] for positive values of n if you
consider flush_all
Not totally stopped, but able to lose one or two instances and
continue to function. That's a defined requirement for all subsystems of
an operation: redundant DBs so you can lose one, several gearmands so
you can lose one, enough memcacheds so losing one or two is fine, etc.
Anything else
You mean something like "In n seconds, remove all items that have not been
updated until then"? That's the way I first thought flush with TTL would
work, and I was quite disappointed when I understood that it wasn't the
case. When you give it a thought, that's the same thing as waiting n seconds
I like the idea, but would probably never use it on my production systems,
because if I'm making extensive architecture changes (which is the only time
this would really be useful), I like to do a complete restart of the
memcached process just to ensure that I don't have slabs of memory allocated
On Mon, Feb 23, 2009 at 10:02 AM, dormando wrote:
Yo,
I'm a little confused by this thread... It appears that the point is to
reduce pain or reduce the time required in a full restart of a memcached
cluster.
This request looks like it would encourage folks to get themselves into
positions where a full restart of a memcached instance is too much
ok, if you put the future flush in the same basket, I am not *offended* ;-)
imho, the main bone of contention is that we don't look at the age of an
item the same way.
As I understand it, for you this is somehow the "content age", i.e. the time
the oldest part used to construct the item has been
On Feb 20, 10:29 am, Jean-Charles Redoutey wrote:
> If we go for 2, the *right* way to use the delayed flush would be something
> like flush +10 on server a and flush +20 on server b.
I've also argued for the removal of flush with delay. It was
semantically confusing with delete with reserv
I have the slight impression we are entering a never-ending discussion ;-)
Anyway, what you say is right ... as long as there is only one server in the
memcached cluster.
If we go for 2, the *right* way to use the delayed flush would be something
like flush +10 on server a and flush +20 on server b
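[Editor's note: a minimal sketch of the staggered pattern described above, assuming memcached's text-protocol "flush_all [delay]" command; the server names and helper function are made up for illustration.]

```python
def staggered_flush_schedule(servers, step=10):
    """Pair each server with a flush_all command whose delay grows by
    `step` seconds, so the cluster is never cold all at once.
    Actually sending the bytes over each server's connection is omitted."""
    return [(srv, b"flush_all %d\r\n" % ((i + 1) * step))
            for i, srv in enumerate(servers)]
```

With servers ["a", "b"] this yields "flush_all 10" for a and "flush_all 20" for b, matching the +10/+20 example above.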
On Feb 20, 2009, at 4:27, colin.pit...@gmail.com wrote:
flush_all effectively removes everything from the cache.
> flush_all +10 <=> flush_all executed in 10 seconds
Effectively removes everything from the cache as a sort of time-bomb.
> flush_all -10 <=> flush_all executed 10 seconds ago
I don't understand why you say the semantic would change:
flush_all +10 <=> flush_all executed in 10 seconds
flush_all -10 <=> flush_all executed 10 seconds ago
Or am I missing a subtle difference?
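[Editor's note: to make the two forms concrete: "flush_all [delay]" is memcached's actual text-protocol command, while the negative delay is the extension proposed in this thread, not stock behavior. A sketch, with a hypothetical helper:]

```python
import socket

def flush_all_cmd(delay=None):
    """Build the text-protocol flush_all line. A positive delay means
    "flush in `delay` seconds"; a negative one would mean "as if the
    flush had happened `delay` seconds ago" (the proposal above)."""
    if delay is None:
        return b"flush_all\r\n"
    return b"flush_all %d\r\n" % delay

def flush_all(host, port, delay=None):
    """Send the command to one server; memcached answers OK."""
    with socket.create_connection((host, port)) as s:
        s.sendall(flush_all_cmd(delay))
        return s.recv(128).strip()
```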
On Feb 14, 5:32 pm, Didier wrote:
IMO, usage of the negative value is debatable, since it means the semantics
of this parameter change with its sign. But the feature itself is
definitely useful.
Let's suppose the "flush_all -num" operation only deprecates a low
number of items, and it is required to guarantee that no item is
I find this feature a must-have!
Currently, if you need for any reason to ensure your cache only
contains fresh data (say, for the final phase of a modification of your
data format), you have two possibilities:
- having set a TTL when updating items in your application from the
very first version,
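[Editor's note: a toy in-process model of that first option, set a TTL on every write so stale entries age out on their own; the class and its interface are invented for illustration. A real deployment would use a memcached client's set(key, value, expire=ttl).]

```python
import time

class TTLCache:
    """Minimal model: every set carries a TTL, so after a data-format
    change you only need to wait out the TTL for old-format entries to
    start reading as misses."""
    def __init__(self, ttl):
        self.ttl = ttl
        self._d = {}

    def set(self, key, value, now=None):
        now = time.time() if now is None else now
        self._d[key] = (value, now + self.ttl)  # value + expiry instant

    def get(self, key, now=None):
        now = time.time() if now is None else now
        hit = self._d.get(key)
        if hit is None or now >= hit[1]:
            return None  # expired entries read as misses
        return hit[0]
```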
On Wed, Feb 11, 2009 at 02:46, Dustin wrote:
>
> By overlapping, I meant building things from cache based on other
> things found from cache. Where today's flush destroys everything, the
> negative flush would destroy *some* of everything.
>
> It seems like a cache generation might be more eas
On Jan 26, 1:36 am, Jean-Charles Redoutey wrote:
> On Sat, Jan 24, 2009 at 02:49, Dustin wrote:
> > That is, it feels like it could lead to a lot of confusion when the
> > time ends up overlapping due to values newer than the flush timeline
> > being built upon items that are older than the
I agree that the behavior has to be clearly documented, and negative TTL may
not be obvious for everyone, but I don't feel this is too misleading: it is
really exactly the same behavior as the flush, only the reference time for
item drop is in the past.
Also, if functionally this does not change a lot
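[Editor's note: a simplified model of that point. Stock memcached implements flush_all by recording a cutoff instant and treating items set at or before it as dead on their next access; a negative delay just places that cutoff in the past. This sketches the semantics, not the real implementation.]

```python
def flush_cutoff(now, delay=0):
    """The instant a flush establishes: items set at or before it are
    treated as expired on their next access."""
    return now + delay

def item_is_live(item_set_time, cutoff):
    """True if the item was written strictly after the flush cutoff."""
    return item_set_time > cutoff
```

So "flush_all -10" keeps anything written within the last 10 seconds and drops the rest: the smooth flush with no empty-cache step that this message argues for.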
On Jan 23, 7:31 am, JC wrote:
> The main idea is to flush the cache in a smooth way: i.e. you ensure
> that after this flush operation, no data older than what you defined
> is in the cache but you didn’t go through the empty cache step.
> Basically, this can be done with a flush command that
Hi,
When I first read the TTL feature of the flush too quickly, I didn't
understand it as it actually is, but as something that could be a nice
feature.
The main idea is to flush the cache in a smooth way: i.e. you ensure
that after this flush operation, no data older than what you defined
is in the