Can you give me more background on your workload?

Correct me if I'm wrong, but your aim was to use a local cache while checking
the memcached server on every fetch to find out whether the local copy is
still valid?
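Just so we're looking at the same thing, here's roughly that pattern as I
understand it. It's a Python-ish sketch against a made-up client object:
`mc.gets(key)` returning `(value, cas)` and `mc.get_cas(key)` returning only
the cas token are stand-ins for whatever your client exposes, not a real API.

    local_cache = {}  # key -> (value, cas)

    def fetch(mc, key):
        # Validate the local copy against memcached on *every* fetch.
        cached = local_cache.get(key)
        if cached is not None:
            value, cas = cached
            if mc.get_cas(key) == cas:  # one round trip just to check freshness
                return value
        # Local miss or stale cas: pull the full value and its cas token.
        value, cas = mc.gets(key)
        local_cache[key] = (value, cas)
        return value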

If your objects are small (say, under a kilobyte), that's actually slower
than not having the local cache at all. I think you need to be in the 32k+
range before the local cache starts paying off.

At any rate, it's best to time-bound the validation: at most one validation
per second for hot keys that are each getting X requests per second.
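Something like this, extending the sketch above with a per-key timestamp so
the freshness probe happens at most once per second (same made-up `mc`
interface as before; `VALIDATE_INTERVAL` is just an illustrative knob):

    import time

    VALIDATE_INTERVAL = 1.0  # at most one validation per key per second

    local_cache = {}  # key -> (value, cas, last_validated)

    def fetch(mc, key):
        now = time.monotonic()
        cached = local_cache.get(key)
        if cached is not None:
            value, cas, last_validated = cached
            if now - last_validated < VALIDATE_INTERVAL:
                return value                  # within the window: no round trip
            # Lightweight probe: cas token only, no value (with mget you'd ask
            # for just the cas, no-bump, no value).
            if mc.get_cas(key) == cas:
                local_cache[key] = (value, cas, now)
                return value
        # Local miss or stale cas: full fetch, remember the new cas and time.
        value, cas = mc.gets(key)
        local_cache[key] = (value, cas, now)
        return value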

I'm going to draft two possible mget flags to see whether they'd be overly
complex:

- Return the value only if the CAS matches
- Return the value only if the CAS does not match

... the latter of which would do what you want, but let's just take a quick
look at your workload first so you don't end up with something more
complicated and slower :) I'm not sure I can get the flags to make sense,
but I'll try.
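For illustration only: if a "value if CAS does not match" flag existed, the
client side would collapse the probe and the re-fetch into a single round
trip. The `mc.mget_if_cas_differs()` call below is imaginary (the flag
doesn't exist yet), reusing `local_cache` from the sketches above:

    def fetch(mc, key):
        # Hypothetical: mc.mget_if_cas_differs(key, cas) returns
        # (new_value, new_cas) when the server's cas differs, or None when
        # the local copy is still current.
        cached = local_cache.get(key)
        if cached is None:
            value, cas = mc.gets(key)
            local_cache[key] = (value, cas)
            return value
        value, cas = cached
        result = mc.mget_if_cas_differs(key, cas)  # one round trip either way
        if result is not None:
            value, cas = result
            local_cache[key] = (value, cas)
        return value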

On Sat, 21 Sep 2019, John Reilly wrote:

> Hi dormando,
>
> Sorry for the delayed response.  I finally got a chance to read through
> https://github.com/memcached/memcached/pull/484 .  It sounds great.
>
> In my case, I was thinking about using a local cache to mitigate the
> network impact of hot keys rather than for per-request performance reasons,
> but I was hoping to do that without the clients potentially using stale
> data from their local cache.  It might still be nice to have a flag on
> mget to fetch the value if it does not match a provided cas, but in the
> absence of this flag I think it would work fine using mget to only get
> the cas, and doing a full fetch on cas mismatch.
>
> Cheers,
> John
>
>
>
> On Tue, Sep 17, 2019 at 5:43 PM dormando <dorma...@rydia.net> wrote:
>       Hey,
>
>       Check this out: https://github.com/memcached/memcached/pull/484
>
>       You can't quite do this with the way metaget is now, though it's
>       feasible to add some "value if cas match on mget" flag. I'd have to
>       think it through first.
>
>       For local caches though, unless your object is huge, simply waiting on a
>       round trip to memcached to see if it's up to date removes most of the
>       value of having the local cache. With a local cache you have to check it
>       first, then check if it's fresh, then use it. It's likely the same speed
>       to just not have the local cache at that point so you can avoid the CPU
>       burn of the initial hash/test or trade it for CPU/network used in
>       pulling in the value and having a simple system.
>
>       However! If you have a limited size "hot cache" and you can
>       asynchronously test if they need to update, you could (say once per
>       second or whatever
>       makes sense for how hot your objects are), kick off an async test which
>       runs mget with options for no-bump (optionally), no value, and cas (no
>       flags, size, etc) for a lightweight response of just the cas value.
>
>       If the cas doesn't match, re-issue for a full fetch. This works okay for
>       high frequency items since an update would only leave them out of sync
>       briefly. Polling kind of sucks but you'd only do this when it would
>       reduce the number of requests back to origin anyway :)
>
>       I'm hoping to get metaget in mainline ASAP. Been hunting around for
>       feedback :) Should be finishing up the code very soon, with merge once a
>       bit more confident.
>
>       On Tue, 17 Sep 2019, John Reilly wrote:
>
>       > Hi all,
>       >
>       > I was just thinking it would be great to be able to cache the most
>       > used items in a local cache on the client side, and I think this
>       > would be possible if there was a way for the client to send a
>       > request to get a key, but only if the cas value is not the same as
>       > the cas token of the value I already know about locally in the
>       > client.  I don't think this is possible with either protocol today,
>       > but would be happy if told otherwise :)
>       >
>       > Also, can anyone think of a reason why this would not work - if it
>       > is not supported today, it might be a nice feature to add.
>       >
>       > Thanks,
>       > John
>       >
