Re: Why memcached doesn't give an option or have the capability to ignore older values with same key based on some version.

2018-04-25 Thread dormando
You're way, way overthinking this.

I originally gave two examples: one was literally just the ADD and SET
commands, and the other is a wiki page that solves further issues, which
you should ignore for now.

Again, the memcached server is *atomic* for key manipulations. The server
works exactly as I described, no matter what you do or how you access it.
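
As a quick illustration (a minimal sketch, assuming the Python pymemcache
client and a memcached on 127.0.0.1:11211, neither of which is specified in
this thread): two completely separate connections, standing in for two
different processes, still see the same atomic behaviour for the same key,
because the serialization happens in the server, not in either client.

# Sketch: two independent connections, one server, one key.
from pymemcache.client.base import Client

proc1 = Client(("127.0.0.1", 11211))   # "thread 1" in one process
proc2 = Client(("127.0.0.1", 11211))   # "thread 2" in another process

KEY = "some:key"

proc2.set(KEY, b"fresh-db-value", expire=60)   # process 2 SETs after its DB write
stored = proc1.add(KEY, b"stale-db-value", expire=60, noreply=False)

print(stored)           # False: the key already exists, so the ADD is refused
print(proc1.get(KEY))   # b'fresh-db-value': process 2's SET still wins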

On Wed, 25 Apr 2018, sachin shetty wrote:

> Thank you.
> Is this memcached 'add' lock or 'set' lock defined in the memcached client or
> the memcached server? The reason I ask is: if thread 1 from one process and
> thread 2 from another process try to add and set simultaneously, does this
> locking happen at the memcached server or at the individual clients?
>
> On Thursday, 26 April 2018 09:26:30 UTC+5:30, Dormando wrote:
>   Memcached is internally atomic on key operations. If you add and set at
>   the same time, the set will effectively always win since they are
>   serialized.
>
>   1) add goes first. set overwrites it.
>   2) set goes first. add will fail.
>
>   On Wed, 25 Apr 2018, sachin shetty wrote:
>
>   > Cool.
>   > So let me assume the below scenario and correct me if I'm wrong here.
>   >
>   > Say thread 1 always does add and thread 2 always does set. Will there be
>   > any race conditions when both these threads do add and set simultaneously?
>   > What I mean is, say thread 1 does add and holds the 'add' lock; if at the
>   > same time thread 2 comes in for the set operation, how are the 'set' lock
>   > and 'add' lock handled here?
>   >
>   >
>   > On Thursday, 26 April 2018 06:58:27 UTC+5:30, Dormando wrote:
>   >       Hey,
>   >
>   >       ADD sets an item *only if it doesn't currently exist*.
>   >
>   >       If you want thread 2 to be authoritative after updating the DB, you
>   >       need to use a SET. If you don't care and only ever want the first
>   >       thread to win, you can always use ADD.
>   >
>   >       On Wed, 25 Apr 2018, sachin shetty wrote:
>   >
>   >       > Thank you for the reply.
>   >       > Can this add be used always, I mean during an update as well?
>   >       > What could be the potential disadvantage of this?
>   >       > So if two threads do an update using add, does the lock still hold
>   >       > well in this scenario?
>   >       >
>   >       > Thanks,
>   >       > Sachin
>   >       >
>   >       >
>   >       > On Wednesday, 25 April 2018 14:13:40 UTC+5:30, Dormando wrote:
>   >       >       Hey,
>   >       >
>   >       >       Two short answers:
>   >       >
>   >       >       1) thread 1 uses 'add' instead of 'set'
>   >       >       2) thread 2 uses 'set'.
>   >       >
>   >       >       via add, a thread recaching an object can't overwrite one
>   >       >       already there.
>   >       >
>   >       >       https://github.com/memcached/memcached/wiki/ProgrammingTricks#avoiding-stampeding-herd
>   >       >
>   >       >       for related issues. using an advisory lock would change the
>   >       >       flow:
>   >       >
>   >       >       a. thread 1 gets a miss.
>   >       >       b. thread 1 runs 'add lock:key'
>   >       >       c. thread 1 wins, goes to db
>   >       >       d. thread 2 updates db. tries to grab key lock
>   >       >       e. thread 2 fails to grab key lock, waits and retries
>   >       >
>   >       >       etc. bit more chatter but with added benefit of reducing
>   >       >       stampeding herd if that's an issue.
>   >       >
>   >       >       On Wed, 25 Apr 2018, sachin shetty wrote:
>   >       >
>   >       >       > There is a scenario where a cache gets updated by two
>   >       >       > threads like the instance mentioned below
>   >       >       >
>   >       >       >  a. thread 1 looks at the memcache key and gets a miss
>   >       >       >  b. thread 1 falls back to the database
>   >       >       >  c. thread 2 changes the database value
>   >       >       >  d. thread 2 updates the memcache key with the new value
>   >       >       >  e. thread 1 sets the old database value into memcache
>   >       >       >
>   >       >       > I know this scenario is application specific. But the
>   >       >       > question I have is whether there is an option to say: if the
>   >       >       > current value's timestamp is older than the one already in
>   >       >       > cache, then memcached should ignore the new entry. This
>   >       >       > could solve the race condition mentioned above. Suppose I
>   >       >       > take the timestamp as the version; then the memcached server
>   >       >       > could use it to verify whether the new entry coming in is
>   >       >       > older than the one already present.

Re: Why memcached doesn't give an option or have the capability to ignore older values with same key based on some version.

2018-04-25 Thread sachin shetty
Thank you.

Is this memcached 'add' lock or 'set' lock defined in the memcached client
or the memcached server? The reason I ask is: if thread 1 from one process
and thread 2 from another process try to add and set simultaneously, does
this locking happen at the memcached server or at the individual clients?

On Thursday, 26 April 2018 09:26:30 UTC+5:30, Dormando wrote:
>
> Memcached is internally atomic on key operations. If you add and set at 
> the same time, the set will effectively always win since they are 
> serialized. 
>
> 1) add goes first. set overwrites it. 
> 2) set goes first. add will fail. 
>
> On Wed, 25 Apr 2018, sachin shetty wrote: 
>
> > Cool. 
> > So let me assume the below scenario and correct me if I'm wrong here. 
> > 
> > Say thread 1 always does add and thread 2 always does set. Will there be 
> any race conditions when both these threads do add and set simultaneously? 
> What 
> > I mean is say thread1 does add and holds 'add' lock and if at the same 
> time thread 2 comes for the set operation, how 'set' lock and 'add' lock is 
> > handled here? 
> > 
> > 
> > On Thursday, 26 April 2018 06:58:27 UTC+5:30, Dormando wrote: 
> >   Hey, 
> > 
> >   ADD sets an item *only if it doesn't currently exist*. 
> > 
> >   If you want thread 2 to be authoritative after updating the DB, 
> you need 
> >   to use a SET. If you don't care and only ever want the first 
> thread to 
> >   win, you can always use ADD. 
> > 
> >   On Wed, 25 Apr 2018, sachin shetty wrote: 
> > 
> >   > Thank you for the reply. 
> >   > Can this add be used always, I mean during an update as well? 
> >   > What could be the potential disadvantage of this? 
> >   > So if two threads do an update using add, does the lock still hold well
> >   > in this scenario?
> >   > 
> >   > Thanks, 
> >   > Sachin 
> >   > 
> >   > 
> >   > On Wednesday, 25 April 2018 14:13:40 UTC+5:30, Dormando wrote: 
> >   >   Hey, 
> >   > 
> >   >   Two short answers: 
> >   > 
> >   >   1) thread 1 uses 'add' instead of 'set' 
> >   >   2) thread 2 uses 'set'. 
> >   > 
> >   >   via add, a thread recaching an object can't overwrite one 
> already there. 
> >   > 
> >   >   
> https://github.com/memcached/memcached/wiki/ProgrammingTricks#avoiding-stampeding-herd
>  
> >   > 
> >   >   for related issues. using an advisory lock would change 
> the flow: 
> >   > 
> >   >   a. thread 1 gets a miss. 
> >   >   b. thread 1 runs 'add lock:key' 
> >   >   c. thread 1 wins, goes to db 
> >   >   d. thread 2 updates db. tries to grab key lock 
> >   >   e. thread 2 fails to grab key lock, waits and retries 
> >   > 
> >   >   etc. bit more chatter but with added benefit of reducing 
> stampeding herd 
> >   >   if that's an issue. 
> >   > 
> >   >   On Wed, 25 Apr 2018, sachin shetty wrote: 
> >   > 
> >   >   > There is a scenario where a cache gets updated by two 
> threads like the instance 
> >   >   > mentioned below 
> >   >   > 
> >   >   >  a. thread 1 looks at the memcache key and gets a miss 
> >   >   >   b. thread 1 falls back to the database 
> >   >   >   c. thread 2 changes the database value 
> >   >   >   d. thread 2 updates the memcache key with the new 
> value 
> >   >   >   e. thread 1 sets the old database value into 
> memcache   
> >   >   > 
> >   >   > I know this scenario is application specific. But the 
> question I have is if possible 
> >   >   > there is an option to say the current value's timestamp 
> is older than the one already in 
> >   >   > cache, then memcached should ignore the new entry. This 
> could solve race condition as 
> >   >   > mentioned above. Suppose I say take the timestamp as the 
> version then memcached server 
> >   >   > could make use of this to verify whether new entry 
> coming is older than the already 
> >   >   > current one present. 
> >   >   > 
> >   >   > Handling at the client would be performance intensive 
> because of every time fetching an 
> >   >   > existing value from the cache to check the timestamp. 
> >   >   > 
> >   >   > Are there any handlers for this to solve. Would be very 
> helpful if you could provide any 
> >   >   > inputs on this. 
> >   >   > 
> >   >   > 
> >   >   > Thanks, 
> >   >   > Sachin 
> >   >   > 
> >   >   > 

Re: Why memcached doesn't give an option or have the capability to ignore older values with same key based on some version.

2018-04-25 Thread dormando
Memcached is internally atomic on key operations. If you add and set at
the same time, the set will effectively always win since they are
serialized.

1) add goes first. set overwrites it.
2) set goes first. add will fail.
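
For example, the two orderings look like this from a client's point of view
(a sketch assuming the Python pymemcache client against a local memcached;
the key and values are made up):

from pymemcache.client.base import Client

mc = Client(("127.0.0.1", 11211))

# 1) add goes first, set overwrites it:
mc.delete("k")
print(mc.add("k", b"from-add", noreply=False))   # True: key was empty
print(mc.set("k", b"from-set", noreply=False))   # True: set always stores
print(mc.get("k"))                               # b'from-set'

# 2) set goes first, add will fail:
mc.delete("k")
print(mc.set("k", b"from-set", noreply=False))   # True
print(mc.add("k", b"from-add", noreply=False))   # False: key already exists
print(mc.get("k"))                               # b'from-set'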

On Wed, 25 Apr 2018, sachin shetty wrote:

> Cool.
> So let me assume the below scenario and correct me if I'm wrong here.
>
> Say thread 1 always does add and thread 2 always does set. Will there be any 
> race conditions when both these threads do add and set simultaneously? What
> I mean is say thread1 does add and holds 'add' lock and if at the same time 
> thread 2 comes for the set operation, how 'set' lock and 'add' lock is
> handled here?
>
>
> On Thursday, 26 April 2018 06:58:27 UTC+5:30, Dormando wrote:
>   Hey,
>
>   ADD sets an item *only if it doesn't currently exist*.
>
>   If you want thread 2 to be authoritative after updating the DB, you need
>   to use a SET. If you don't care and only ever want the first thread to
>   win, you can always use ADD.
>
>   On Wed, 25 Apr 2018, sachin shetty wrote:
>
>   > Thank you for the reply.
>   > Can this add be used always, I mean during an update as well?
>   > What could be the potential disadvantage of this?
>   > So if two threads do an update using add, does the lock still hold well in
>   > this scenario?
>   >
>   > Thanks,
>   > Sachin
>   >
>   >
>   > On Wednesday, 25 April 2018 14:13:40 UTC+5:30, Dormando wrote:
>   >       Hey,
>   >
>   >       Two short answers:
>   >
>   >       1) thread 1 uses 'add' instead of 'set'
>   >       2) thread 2 uses 'set'.
>   >
>   >       via add, a thread recaching an object can't overwrite one 
> already there.
>   >
>   >       
> https://github.com/memcached/memcached/wiki/ProgrammingTricks#avoiding-stampeding-herd
>   >
>   >       for related issues. using an advisory lock would change the 
> flow:
>   >
>   >       a. thread 1 gets a miss.
>   >       b. thread 1 runs 'add lock:key'
>   >       c. thread 1 wins, goes to db
>   >       d. thread 2 updates db. tries to grab key lock
>   >       e. thread 2 fails to grab key lock, waits and retries
>   >
>   >       etc. bit more chatter but with added benefit of reducing 
> stampeding herd
>   >       if that's an issue.
>   >
>   >       On Wed, 25 Apr 2018, sachin shetty wrote:
>   >
>   >       > There is a scenario where a cache gets updated by two threads 
> like the instance
>   >       > mentioned below
>   >       >
>   >       >  a. thread 1 looks at the memcache key and gets a miss
>   >       >   b. thread 1 falls back to the database
>   >       >   c. thread 2 changes the database value
>   >       >   d. thread 2 updates the memcache key with the new value
>   >       >   e. thread 1 sets the old database value into memcache  
>   >       >
>   >       > I know this scenario is application specific. But the 
> question I have is if possible
>   >       > there is an option to say the current value's timestamp is 
> older than the one already in
>   >       > cache, then memcached should ignore the new entry. This could 
> solve race condition as
>   >       > mentioned above. Suppose I say take the timestamp as the 
> version then memcached server
>   >       > could make use of this to verify whether new entry coming is 
> older than the already
>   >       > current one present.
>   >       >
>   >       > Handling at the client would be performance intensive because 
> of every time fetching an
>   >       > existing value from the cache to check the timestamp.
>   >       >
>   >       > Are there any handlers for this to solve. Would be very 
> helpful if you could provide any
>   >       > inputs on this.
>   >       >
>   >       >
>   >       > Thanks,
>   >       > Sachin
>   >       >
>   >       >

Re: Why memcached doesn't give an option or have the capability to ignore older values with same key based on some version.

2018-04-25 Thread sachin shetty
Cool.

So let me assume the below scenario and correct me if I'm wrong here.

Say thread 1 always does add and thread 2 always does set. Will there be
any race conditions when both these threads do add and set simultaneously?
What I mean is, say thread 1 does add and holds the 'add' lock; if at the
same time thread 2 comes in for the set operation, how are the 'set' lock
and 'add' lock handled here?


On Thursday, 26 April 2018 06:58:27 UTC+5:30, Dormando wrote:
>
> Hey, 
>
> ADD sets an item *only if it doesn't currently exist*. 
>
> If you want thread 2 to be authoritative after updating the DB, you need 
> to use a SET. If you don't care and only ever want the first thread to 
> win, you can always use ADD. 
>
> On Wed, 25 Apr 2018, sachin shetty wrote: 
>
> > Thank you for the reply. 
> > Can this add be used always, I mean during an update as well? 
> > What could be the potential disadvantage of this? 
> > So if two threads do an update using add, does the lock still hold well in
> > this scenario?
> > 
> > Thanks, 
> > Sachin 
> > 
> > 
> > On Wednesday, 25 April 2018 14:13:40 UTC+5:30, Dormando wrote: 
> >   Hey, 
> > 
> >   Two short answers: 
> > 
> >   1) thread 1 uses 'add' instead of 'set' 
> >   2) thread 2 uses 'set'. 
> > 
> >   via add, a thread recaching an object can't overwrite one already 
> there. 
> > 
> >   
> https://github.com/memcached/memcached/wiki/ProgrammingTricks#avoiding-stampeding-herd
>  
> > 
> >   for related issues. using an advisory lock would change the flow: 
> > 
> >   a. thread 1 gets a miss. 
> >   b. thread 1 runs 'add lock:key' 
> >   c. thread 1 wins, goes to db 
> >   d. thread 2 updates db. tries to grab key lock 
> >   e. thread 2 fails to grab key lock, waits and retries 
> > 
> >   etc. bit more chatter but with added benefit of reducing 
> stampeding herd 
> >   if that's an issue. 
> > 
> >   On Wed, 25 Apr 2018, sachin shetty wrote: 
> > 
> >   > There is a scenario where a cache gets updated by two threads 
> like the instance 
> >   > mentioned below 
> >   > 
> >   >  a. thread 1 looks at the memcache key and gets a miss 
> >   >   b. thread 1 falls back to the database 
> >   >   c. thread 2 changes the database value 
> >   >   d. thread 2 updates the memcache key with the new value 
> >   >   e. thread 1 sets the old database value into memcache   
> >   > 
> >   > I know this scenario is application specific. But the question I 
> have is if possible 
> >   > there is an option to say the current value's timestamp is older 
> than the one already in 
> >   > cache, then memcached should ignore the new entry. This could 
> solve race condition as 
> >   > mentioned above. Suppose I say take the timestamp as the version 
> then memcached server 
> >   > could make use of this to verify whether new entry coming is 
> older than the already 
> >   > current one present. 
> >   > 
> >   > Handling at the client would be performance intensive because of 
> every time fetching an 
> >   > existing value from the cache to check the timestamp. 
> >   > 
> >   > Are there any handlers for this to solve. Would be very helpful 
> if you could provide any 
> >   > inputs on this. 
> >   > 
> >   > 
> >   > Thanks, 
> >   > Sachin 
> >   > 
> >   > 



Re: Why memcached doesn't give an option or have the capability to ignore older values with same key based on some version.

2018-04-25 Thread dormando
Hey,

ADD sets an item *only if it doesn't currently exist*.

If you want thread 2 to be authoritative after updating the DB, you need
to use a SET. If you don't care and only ever want the first thread to
win, you can always use ADD.
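
In code, the two roles look roughly like this (a sketch assuming the Python
pymemcache client; db_read/db_write and the DB dict are stand-ins for a real
database layer):

from pymemcache.client.base import Client

mc = Client(("127.0.0.1", 11211))

DB = {}  # stand-in for the real database

def db_read(key):
    return DB.get(key, b"")

def db_write(key, value):
    DB[key] = value

def read_path(key):
    """Thread 1: on a cache miss, read the DB and recache with ADD.

    ADD only stores if the key is absent, so a possibly stale DB read can
    never overwrite a value a writer has already SET.
    """
    value = mc.get(key)
    if value is None:
        value = db_read(key)
        mc.add(key, value, expire=60, noreply=False)
    return value

def write_path(key, new_value):
    """Thread 2: update the DB, then SET so the writer stays authoritative."""
    db_write(key, new_value)
    mc.set(key, new_value, expire=60)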

On Wed, 25 Apr 2018, sachin shetty wrote:

> Thank you for the reply.
> Can this add be used always, I mean during an update as well?
> What could be the potential disadvantage of this?
> So if two threads do an update using add, does the lock still hold well in
> this scenario?
>
> Thanks,
> Sachin
>
>
> On Wednesday, 25 April 2018 14:13:40 UTC+5:30, Dormando wrote:
>   Hey,
>
>   Two short answers:
>
>   1) thread 1 uses 'add' instead of 'set'
>   2) thread 2 uses 'set'.
>
>   via add, a thread recaching an object can't overwrite one already there.
>
>   
> https://github.com/memcached/memcached/wiki/ProgrammingTricks#avoiding-stampeding-herd
>
>   for related issues. using an advisory lock would change the flow:
>
>   a. thread 1 gets a miss.
>   b. thread 1 runs 'add lock:key'
>   c. thread 1 wins, goes to db
>   d. thread 2 updates db. tries to grab key lock
>   e. thread 2 fails to grab key lock, waits and retries
>
>   etc. bit more chatter but with added benefit of reducing stampeding herd
>   if that's an issue.
>
>   On Wed, 25 Apr 2018, sachin shetty wrote:
>
>   > There is a scenario where a cache gets updated by two threads like 
> the instance
>   > mentioned below
>   >
>   >  a. thread 1 looks at the memcache key and gets a miss
>   >   b. thread 1 falls back to the database
>   >   c. thread 2 changes the database value
>   >   d. thread 2 updates the memcache key with the new value
>   >   e. thread 1 sets the old database value into memcache  
>   >
>   > I know this scenario is application specific. But the question I have 
> is if possible
>   > there is an option to say the current value's timestamp is older than 
> the one already in
>   > cache, then memcached should ignore the new entry. This could solve 
> race condition as
>   > mentioned above. Suppose I say take the timestamp as the version then 
> memcached server
>   > could make use of this to verify whether new entry coming is older 
> than the already
>   > current one present.
>   >
>   > Handling at the client would be performance intensive because of 
> every time fetching an
>   > existing value from the cache to check the timestamp.
>   >
>   > Are there any handlers for this to solve. Would be very helpful if 
> you could provide any
>   > inputs on this.
>   >
>   >
>   > Thanks,
>   > Sachin
>   >
>   >



Re: Why memcached doesn't give an option or have the capability to ignore older values with same key based on some version.

2018-04-25 Thread suneel singh
I want to join this group. Please add me to this group.

On Wednesday, April 25, 2018 at 2:05:41 PM UTC+5:30, sachin shetty wrote:
>
> There is a scenario where a cache gets updated by two threads like the 
> instance mentioned below
>
>  a. thread 1 looks at the memcache key and gets a miss
>   b. thread 1 falls back to the database
>   c. thread 2 changes the database value
>   d. thread 2 updates the memcache key with the new value
>   e. thread 1 sets the old database value into memcache  
>
> I know this scenario is application specific. But the question I have is 
> if possible there is an option to say the current value's timestamp is 
> older than the one already in cache, then memcached should ignore the new 
> entry. This could solve race condition as mentioned above. Suppose I say 
> take the timestamp as the version then memcached server could make use of 
> this to verify whether new entry coming is older than the already current 
> one present.
>
> Handling at the client would be performance intensive because of every 
> time fetching an existing value from the cache to check the timestamp.
>
> Are there any handlers for this to solve. Would be very helpful if you 
> could provide any inputs on this.
>
>
> Thanks,
> Sachin
>
>
>



Re: Why memcached doesn't give an option or have the capability to ignore older values with same key based on some version.

2018-04-25 Thread sachin shetty
Thank you for the reply.

Can this add be used always, I mean during an update as well?
What could be the potential disadvantage of this?
So if two threads do an update using add, does the lock still hold well in
this scenario?

Thanks,
Sachin


On Wednesday, 25 April 2018 14:13:40 UTC+5:30, Dormando wrote:
>
> Hey, 
>
> Two short answers: 
>
> 1) thread 1 uses 'add' instead of 'set' 
> 2) thread 2 uses 'set'. 
>
> via add, a thread recaching an object can't overwrite one already there. 
>
>
> https://github.com/memcached/memcached/wiki/ProgrammingTricks#avoiding-stampeding-herd
>  
>
> for related issues. using an advisory lock would change the flow: 
>
> a. thread 1 gets a miss. 
> b. thread 1 runs 'add lock:key' 
> c. thread 1 wins, goes to db 
> d. thread 2 updates db. tries to grab key lock 
> e. thread 2 fails to grab key lock, waits and retries 
>
> etc. bit more chatter but with added benefit of reducing stampeding herd 
> if that's an issue. 
>
> On Wed, 25 Apr 2018, sachin shetty wrote: 
>
> > There is a scenario where a cache gets updated by two threads like the 
> instance 
> > mentioned below 
> > 
> >  a. thread 1 looks at the memcache key and gets a miss 
> >   b. thread 1 falls back to the database 
> >   c. thread 2 changes the database value 
> >   d. thread 2 updates the memcache key with the new value 
> >   e. thread 1 sets the old database value into memcache   
> > 
> > I know this scenario is application specific. But the question I have is 
> if possible 
> > there is an option to say the current value's timestamp is older than 
> the one already in 
> > cache, then memcached should ignore the new entry. This could solve race 
> condition as 
> > mentioned above. Suppose I say take the timestamp as the version then 
> memcached server 
> > could make use of this to verify whether new entry coming is older than 
> the already 
> > current one present. 
> > 
> > Handling at the client would be performance intensive because of every 
> time fetching an 
> > existing value from the cache to check the timestamp. 
> > 
> > Are there any handlers for this to solve. Would be very helpful if you 
> could provide any 
> > inputs on this. 
> > 
> > 
> > Thanks, 
> > Sachin 
> > 
> > 



Re: Why memcached doesn't give an option or have the capability to ignore older values with same key based on some version.

2018-04-25 Thread dormando
Hey,

Two short answers:

1) thread 1 uses 'add' instead of 'set'
2) thread 2 uses 'set'.

via add, a thread recaching an object can't overwrite one already there.

https://github.com/memcached/memcached/wiki/ProgrammingTricks#avoiding-stampeding-herd

for related issues. using an advisory lock would change the flow:

a. thread 1 gets a miss.
b. thread 1 runs 'add lock:key'
c. thread 1 wins, goes to db
d. thread 2 updates db. tries to grab key lock
e. thread 2 fails to grab key lock, waits and retries

etc. bit more chatter but with added benefit of reducing stampeding herd
if that's an issue.
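
A rough sketch of the miss/recache side of that flow (assuming the Python
pymemcache client; the lock-key prefix, TTLs and retry numbers are made-up
values):

import time
from pymemcache.client.base import Client

mc = Client(("127.0.0.1", 11211))

def acquire_lock(key, ttl=30):
    """ADD only succeeds if nobody else currently holds the advisory lock."""
    return mc.add("lock:" + key, b"1", expire=ttl, noreply=False)

def release_lock(key):
    mc.delete("lock:" + key)

def fetch(key, recache_from_db, retries=10, wait=0.05):
    value = mc.get(key)
    if value is not None:
        return value                        # cache hit: nothing to do
    for _ in range(retries):
        if acquire_lock(key):               # we won the lock: go to the db
            try:
                value = recache_from_db(key)    # assumed to return bytes
                mc.set(key, value, expire=60)
                return value
            finally:
                release_lock(key)
        time.sleep(wait)                    # someone else holds the lock:
        value = mc.get(key)                 # wait, then re-check the cache
        if value is not None:
            return value
    return recache_from_db(key)             # lock never freed up: hit the db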

On Wed, 25 Apr 2018, sachin shetty wrote:

> There is a scenario where a cache gets updated by two threads like the 
> instance
> mentioned below
>
>  a. thread 1 looks at the memcache key and gets a miss
>   b. thread 1 falls back to the database
>   c. thread 2 changes the database value
>   d. thread 2 updates the memcache key with the new value
>   e. thread 1 sets the old database value into memcache  
>
> I know this scenario is application specific. But the question I have is if 
> possible
> there is an option to say the current value's timestamp is older than the one 
> already in
> cache, then memcached should ignore the new entry. This could solve race 
> condition as
> mentioned above. Suppose I say take the timestamp as the version then 
> memcached server
> could make use of this to verify whether new entry coming is older than the 
> already
> current one present.
>
> Handling at the client would be performance intensive because of every time 
> fetching an
> existing value from the cache to check the timestamp.
>
> Are there any handlers for this to solve. Would be very helpful if you could 
> provide any
> inputs on this.
>
>
> Thanks,
> Sachin
>
>



Why memcached doesn't give an option or have the capability to ignore older values with same key based on some version.

2018-04-25 Thread sachin shetty
There is a scenario where a cache gets updated by two threads like the 
instance mentioned below

 a. thread 1 looks at the memcache key and gets a miss
 b. thread 1 falls back to the database
 c. thread 2 changes the database value
 d. thread 2 updates the memcache key with the new value
 e. thread 1 sets the old database value into memcache

I know this scenario is application specific. But the question I have is
whether there is an option to say: if the current value's timestamp is older
than the one already in cache, then memcached should ignore the new entry.
This could solve the race condition mentioned above. Suppose I take the
timestamp as the version; then the memcached server could use it to verify
whether the new entry coming in is older than the one already present.

Handling this at the client would be performance intensive, because it would
mean fetching the existing value from the cache every time just to check the
timestamp.
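
For instance, the client-side check would look roughly like the sketch below
(assuming the Python pymemcache client; the pickle-based (timestamp, value)
encoding is purely illustrative). It costs an extra GET on every write, and
the window between the get and the set still leaves the race described above
open:

import pickle
import time
from pymemcache.client.base import Client

mc = Client(("127.0.0.1", 11211))

def set_if_newer(key, value, timestamp=None):
    """Store (timestamp, value), but only if the cached copy isn't newer."""
    timestamp = time.time() if timestamp is None else timestamp
    current = mc.get(key)                   # extra round trip on every write
    if current is not None:
        current_ts, _ = pickle.loads(current)
        if current_ts >= timestamp:
            return False                    # cached copy is newer: ignore ours
    mc.set(key, pickle.dumps((timestamp, value)), expire=60)
    return True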

Are there any handlers to solve this? It would be very helpful if you could
provide any inputs on this.


Thanks,
Sachin

