Ha ha ha ...

To be honest, I couldn't say off the top of my head, but it's not something
you can depend on.

Another thing you could consider is using backend instances. Those are more
or less guaranteed to stick around, though you might not be able to store as
much data in them.
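
Very roughly, that could look something like this on a single resident
backend (all of the names below are placeholders, and this sketch assumes
the instance is single-threaded):

# Counters kept in instance memory; they survive as long as this backend
# instance does.
from google.appengine.ext import db, webapp

_counters = {}

class CounterTotal(db.Model):
    count = db.IntegerProperty(default=0)

class Increment(webapp.RequestHandler):
    def get(self):
        name = self.request.get("name")
        _counters[name] = _counters.get(name, 0) + 1

class Persist(webapp.RequestHandler):
    # hit periodically to fold the in-memory totals into the datastore
    def get(self):
        for name, value in _counters.items():
            total = CounterTotal.get_or_insert(name)
            total.count += value
            total.put()
        _counters.clear()

application = webapp.WSGIApplication([
    ("/_backend/incr", Increment),
    ("/_backend/persist", Persist),
])

You'd lose whatever hasn't been persisted yet if the backend is restarted,
so it trades one kind of volatility for another.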

Ikai Lan
Developer Programs Engineer, Google App Engine
Blog: http://googleappengine.blogspot.com
Twitter: http://twitter.com/app_engine
Reddit: http://www.reddit.com/r/appengine



On Fri, Jul 22, 2011 at 2:08 PM, MiuMeet Support <r...@miumeet.com> wrote:

> Thanks for getting back to me so quickly!
>
> RE: Your last email, you wrote:
> "B can have a value, but A would be unset."
>
> That wouldn't be a problem, since I only want the implication A=>B ("If A
> exists then B exists").
> But I assume you meant:
> A can have a value, but B would be unset. Right?
>
> Can you (*really roughly*) say how many memcache instances you are running?
> Let's say I created 1000 "A"s, would I hit all memcache instances (with high
> probability)? Then the assumption "if all A's exist, then B exists" would
> hold...
> Or can I influence the hash somehow to end up on the same instance
> (something other than finding hash collisions :))?
>
>
> Thanks again, for getting back to me so quickly.
>
> -Andrin
>
> On Fri, Jul 22, 2011 at 7:55 PM, Ikai Lan (Google) <ika...@google.com> wrote:
>
>> One more thing to be aware of: there are times when Memcache needs to be
>> flushed. If a flush happens at some point, B can have a value, but A would
>> be unset.
>>
>> Ikai Lan
>> Developer Programs Engineer, Google App Engine
>> Blog: http://googleappengine.blogspot.com
>> Twitter: http://twitter.com/app_engine
>> Reddit: http://www.reddit.com/r/appengine
>>
>>
>>
>> On Fri, Jul 22, 2011 at 1:53 PM, Ikai Lan (Google) <ika...@google.com> wrote:
>>
>>> Oh, I see what you're getting at: you're asking me if B will still be in
>>> the cache if A is still in the cache. That depends on whether or not the
>>> keys hash to the same Memcache instance.
>>>
>>> FYI - in general, we don't make any guarantees about this behavior, so it
>>> could become problematic down the line if this changes.
>>>
>>>
>>> Ikai Lan
>>> Developer Programs Engineer, Google App Engine
>>> Blog: http://googleappengine.blogspot.com
>>> Twitter: http://twitter.com/app_engine
>>> Reddit: http://www.reddit.com/r/appengine
>>>
>>>
>>>
>>> On Fri, Jul 22, 2011 at 1:52 PM, Ikai Lan (Google) <ika...@google.com> wrote:
>>>
>>>> Memcache works like an LRU cache, but I don't see why A would force out
>>>> B unless you ran out of space.
>>>>
>>>> Also, App Engine's Memcache has 2 LRU structures: an app-specific LRU
>>>> and a global LRU for that Memcache instance.
>>>>
>>>> Ikai Lan
>>>> Developer Programs Engineer, Google App Engine
>>>> Blog: http://googleappengine.blogspot.com
>>>> Twitter: http://twitter.com/app_engine
>>>> Reddit: http://www.reddit.com/r/appengine
>>>>
>>>>
>>>>
>>>> On Fri, Jul 22, 2011 at 1:05 PM, Andrin von Rechenberg <
>>>> andri...@gmail.com> wrote:
>>>>
>>>>> Hey there
>>>>>
>>>>> I'm building something like "Google Analytics" for App Engine, but in
>>>>> real time (including qps, hourly & daily graphs, backend counters,
>>>>> monitors with alerts, etc.).
>>>>> The cool thing is that it only uses memcache to increment counters/stats,
>>>>> so it's really quick to use in prod code. Every minute I gather all
>>>>> counters and write them to the datastore.
>>>>> It seems to work perfectly for my app (~250 qps with about 1000 different
>>>>> counters, and about 1000 counter increments per second).
>>>>> I can also measure how correct my data is (in case stuff gets flushed
>>>>> from memcache, but so far that has never happened), but it's all based on
>>>>> one assumption:
>>>>>
>>>>> If I call:
>>>>>
>>>>> memcache.incr("a", intitail_value=0)
>>>>> ...
>>>>> memcache.incr("b", initial_value=0)
>>>>> ....
>>>>> memcache.incr("b", initial_value=0)
>>>>> ....
>>>>>
>>>>> *if "a" is still in the memcache "b" will also be in the memcache and
>>>>> wont have been flushed, correct?*
>>>>> *
>>>>> *
>>>>> or in other words: If the entity size for two items in the memcache is
>>>>> the same,
>>>>> does the memcache work like either a LRU or FIFO cache?
>>>>>
>>>>> Any response is greatly appreciated...
>>>>>
>>>>> -Andrin
>>>>>
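>>>>> PS: In case it's useful, the per-minute gather step I described looks
>>>>> roughly like this (the model and handler names here are just
>>>>> placeholders, not my real code):
>>>>>
>>>>> from google.appengine.api import memcache
>>>>> from google.appengine.ext import db, webapp
>>>>>
>>>>> COUNTER_NAMES = ["a", "b"]  # in reality ~1000 known counter names
>>>>>
>>>>> class CounterSample(db.Model):
>>>>>     # one row per counter per minute
>>>>>     name = db.StringProperty()
>>>>>     value = db.IntegerProperty()
>>>>>
>>>>> def bump(name):
>>>>>     # called from prod code; only touches memcache, so it is cheap
>>>>>     memcache.incr(name, initial_value=0)
>>>>>
>>>>> class FlushCounters(webapp.RequestHandler):
>>>>>     # hit by cron every minute
>>>>>     def get(self):
>>>>>         values = memcache.get_multi(COUNTER_NAMES)
>>>>>         for name, value in values.items():
>>>>>             if value:
>>>>>                 CounterSample(name=name, value=value).put()
>>>>>                 # subtract what was just written; this is not atomic
>>>>>                 # with the read, so a few increments may slip into the
>>>>>                 # next minute's sample
>>>>>                 memcache.decr(name, delta=value)
>>>>>
>>>>> application = webapp.WSGIApplication([("/tasks/flush", FlushCounters)])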
>>>>
>>>>
>>>
>

-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To post to this group, send email to google-appengine@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en.
