Any operation that transfers a value between memcache and the datastore
is non-atomic. The original design caches a value from the datastore
into memcache; an equivalent way to model that implementation is a
transaction that can throw an exception at a random point.

I feel a non-sharded counter plus memcache would be a more robust
solution.

Updating the datastore from memcache at a reasonable frequency
eliminates the need for a sharded implementation entirely. A
high-frequency memcache-to-datastore implementation will be very
unreliable, because it performs the non-atomic transfer many more
times.
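To make the pattern concrete, here is a minimal sketch of a memcache-buffered counter that flushes to the datastore at a controlled frequency. Plain dicts stand in for memcache and the datastore, and all names (`FLUSH_EVERY`, `increment`, `flush`) are hypothetical, not the real App Engine API:

```python
# Sketch: buffer counter increments in memcache, flush to the
# datastore only after FLUSH_EVERY increments have accumulated.
# Dicts stand in for memcache and the datastore.

FLUSH_EVERY = 10

memcache = {}   # fast, volatile buffer
datastore = {}  # durable store


def increment(name, delta=1):
    """Buffer an increment; flush once enough has accumulated."""
    buffered = memcache.get(name, 0) + delta
    memcache[name] = buffered
    if buffered >= FLUSH_EVERY:
        flush(name)


def flush(name):
    """Move the buffered delta into the durable counter.

    In the real system the datastore write and the memcache reset
    are separate steps that can each fail -- exactly the
    non-atomic transfer discussed above.
    """
    delta = memcache.pop(name, 0)
    datastore[name] = datastore.get(name, 0) + delta


def total(name):
    """Current value: durable count plus whatever is buffered."""
    return datastore.get(name, 0) + memcache.get(name, 0)


for _ in range(25):
    increment("hits")
print(total("hits"))  # 25, after only two datastore writes
```

Note that with a low flush frequency a single unsharded datastore entity can absorb the writes, which is why buffering removes the need for sharding.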

On Nov 11, 4:12 pm, Bill <[EMAIL PROTECTED]> wrote:
> I probably should've piped up on the thread earlier.  I'm currently
> looking at yejun's fork and will merge pending some questions I have
> on his optimizations.
>
> Here's one:
> My old code stored the counter name in each shard so I could get all
> shards with a single fetch.  If you have 20 shards, you could have any
> number of actually created shards.  In a very high transaction system,
> probably all 20 shards exist.
> In yejun's optimization, he's iterating through each shard using
> get_by_key_name and checking if it exists.  Which is likely to be
> faster?
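The two lookup strategies in question can be sketched side by side, with a dict standing in for the datastore and a hypothetical key-name pattern of "<counter_name><shard_index>"; neither function is the actual fork's code:

```python
# Sketch contrasting the two shard-lookup strategies: one query
# over all shards storing the counter name, versus probing each
# possible shard by key name. A dict stands in for the datastore.

NUM_SHARDS = 20

# Only some of the 20 possible shards have actually been created.
datastore = {"hits3": 7, "hits11": 5, "hits17": 2}


def total_by_query(name):
    """Old approach: one fetch of every shard tagged with the
    counter name (simulated here as a prefix scan)."""
    return sum(v for k, v in datastore.items() if k.startswith(name))


def total_by_key_name(name):
    """Optimized approach: look up each possible shard by key
    name and skip the ones that were never created."""
    total = 0
    for i in range(NUM_SHARDS):
        shard = datastore.get("%s%d" % (name, i))
        if shard is not None:
            total += shard
    return total


print(total_by_query("hits"), total_by_key_name("hits"))
```

Both return the same total; which is faster in production depends on how many of the possible shards exist, as Bill notes.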
>
> A nice optimization by yejun is making count a TextProperty.  This
> will prevent indexing and would probably save some cycles.
>
> Josh, you said "lines 144-145 should be under the try: on line 137.
> That way, the delayed counter count won't get reset to zero even in
> the case of a failed transaction."
>
> I thought any datastore errors are handled via the db.Error exception
> which forces a return before the delayed counter count is reset.
> (db.Error is defined in appengine/api/datastore_errors.py)
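The control flow being debated can be sketched like this: the buffered ("delayed") count is reset only after the transaction succeeds, and a caught error returns before the reset. A local exception class stands in for `db.Error`, and the function names are hypothetical:

```python
# Sketch of the try/except pattern: a failed transaction must not
# reset the delayed counter count. DatastoreError stands in for
# google.appengine.ext.db.Error.

delayed_count = 5
committed = 0


class DatastoreError(Exception):
    """Stand-in for db.Error."""


def run_in_transaction(delta, fail):
    """Pretend datastore transaction; raises on failure."""
    global committed
    if fail:
        raise DatastoreError("transaction failed")
    committed += delta


def flush(fail=False):
    global delayed_count
    try:
        run_in_transaction(delayed_count, fail)
    except DatastoreError:
        return  # the delayed count survives a failed transaction
    delayed_count = 0  # reset only after a successful commit


flush(fail=True)   # delayed count is preserved
flush(fail=False)  # delayed count is committed, then reset
```

Under this structure the reset is unreachable when `db.Error` fires, which matches Bill's reading of the code.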
>
> On Josh's memcached buffering scheme, I can definitely see the utility
> if you're willing to sacrifice some reliability (we're down to one
> point of failure -- memcache) for possibly a lot of speed.  Using
> memcache buffers for counters makes sense because it's easy to
> accumulate requests while for other model puts, like comments or other
> text input, the amount of memcache buffer could grow pretty large
> quickly.
>
> Does it make sense to use a sharded backend to a memcache buffer?
> Depends on the frequency of the final datastore writes, as mentioned
> above.  (I'm not as concerned with complexity as yejun because I think
> this buffering is reasonably simple for counters.)   So I think this
> would be a good thing to add onto the sharded counter code through a
> fork.  Then people who want that kind of speed can opt for it.
>
> -Bill
--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To post to this group, send email to google-appengine@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en
-~----------~----~----~----~------~----~------~--~---