Not sure from your wording whether this is the same bug you noticed,
but as I see it, lines 144-145 should be under the try: on line 137.
That way, the delayed counter count won't get reset to zero in the
case of a failed transaction.
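To make the fix concrete, here's a minimal sketch of the pattern being described (the names flush_to_datastore and write_shard are hypothetical stand-ins, and a plain dict stands in for memcache so the example is self-contained):

```python
# Sketch: only reset the cached delayed count AFTER the datastore
# transaction succeeds.  If the reset sits outside the try, a failed
# write silently discards the accumulated increments.

cache = {"delayed_count": 7}

def write_shard(amount):
    """Stand-in for the sharded datastore write; raises on failure."""
    if amount < 0:
        raise RuntimeError("transaction failed")

def flush_to_datastore():
    delayed = cache.get("delayed_count", 0)
    try:
        write_shard(delayed)
        # Reset ONLY here, once the write has succeeded.
        cache["delayed_count"] = 0
    except RuntimeError:
        # Failed transaction: leave the delayed count intact so the
        # increments are retried on the next flush.
        pass

flush_to_datastore()
```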

I think I'm going to rewrite this a bit, possibly from scratch, to
bring in the concept of forcing a delayed/memcache counter if
requested.  So you'd pass the increment method not only the increment
amount, but also a minimum delay integer before it even attempts to
write to a shard.  Under the assumption that your memcache object will
stick around for a few hundred increments, you could set this to a
relatively safe low number of 10 or so; then it would only attempt to
write to the datastore shard (incrementing it by ~10) once the delayed
count was >= 10.  This would drastically reduce the number of
attempted datastore writes, and thus the average response time
(especially in situations like mine, where I'm working with multiple
counters per HTTP request).
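Roughly what I have in mind (a hedged sketch, not the real code: write_shard is a hypothetical stand-in for the sharded datastore write, and a dict stands in for memcache):

```python
# Increments accumulate in a fast cache; the datastore shard is only
# written once the accumulated count reaches min_delay.

datastore_writes = []

def write_shard(amount):
    datastore_writes.append(amount)  # stand-in for a sharded db write

def increment(cache, key, amount=1, min_delay=10):
    delayed = cache.get(key, 0) + amount
    if delayed >= min_delay:
        write_shard(delayed)   # flush the whole accumulated amount
        cache[key] = 0
    else:
        cache[key] = delayed   # cheap cache-only update

cache = {}
for _ in range(100):           # 100 increments...
    increment(cache, "hits", min_delay=10)
# ...but only 10 datastore writes of ~10 each
```

With min_delay=10, 100 increments produce 10 datastore writes instead of 100, which is where the response-time savings come from.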

The issue here is that, even more so than Bill's code, you're relying
on memcache not disappearing all the time.  In general, I think this
is a safe bet.  And if it happens only occasionally (e.g. once every
few days), then I don't care - it doesn't matter to me if my total
counts are off by a tiny fraction of a percent (and of course, for
counters that are more important, just set the minimum delay integer
lower).  More of an issue, however, is your counter's memcache entry
disappearing because it's outdated, due to tons of other memcache
items having been created since your original counter was created.
This is because I _think_ memcache items are evicted on a
first-created, first-out basis - not last-modified, last-out.  To
solve this, I'm also planning to destroy and recreate the memcache
object upon each successful datastore write (when the associated
memcache delay is reset to zero).
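The destroy-and-recreate step might look something like this (again a sketch under my assumptions: FakeCache mimics the memcache delete/add calls, write_shard is a hypothetical datastore write, and created_order just models "creation age" for illustration):

```python
def write_shard(amount):
    pass  # stand-in for the real sharded datastore write

class FakeCache:
    """Toy cache that tracks creation order, to model a
    first-created, first-out eviction policy."""
    def __init__(self):
        self.store = {}
        self.created_order = []
    def get(self, key):
        return self.store.get(key)
    def add(self, key, value):
        self.store[key] = value
        self.created_order.append(key)  # fresh creation time
    def delete(self, key):
        self.store.pop(key, None)

def flush(cache, key):
    delayed = cache.get(key) or 0
    write_shard(delayed)   # persist the accumulated count
    cache.delete(key)      # destroy the old entry...
    cache.add(key, 0)      # ...and recreate it at zero, so its
                           # "creation time" is refreshed

cache = FakeCache()
cache.add("hits", 9)
cache.add("other", 1)      # a later item ages "hits" in FIFO terms
flush(cache, "hits")
# after the flush, "hits" is the most recently created entry again
```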

Any comments on this approach are appreciated.

  -Josh


On Nov 1, 11:39 pm, yejun <[EMAIL PROTECTED]> wrote:
> I found a bug in that code. If you do an increment after memcache
> exhaustion and without read value, it will reset value to 0.
>
> On Nov 2, 1:11 am, Bill <[EMAIL PROTECTED]> wrote:
>
> > > As I see it, any code that uses your new counter object will need to
> > > deal with instantiating it somewhere
> > ...
> > > others might have a
> > > similar question when wondering how best to actually implement it and
> > > ensure the variable name they wish to use for the counter is not
> > > taken, and that they're not unnecessarily instantiating multiple
> > > Counter objects for no reason.
>
> > The way I'm using counters, they are hardwired into the code right at
> > the point where counter info needs to be retrieved or set.  I do the
> > instantiation right at that point.  For my app, I guarantee variable
> > names are OK just via code review, and by making sure each group of
> > counters has a special prefix name so there can't be conflicts across
> > groups of counters.
>
> > I'm still relatively new to Python web apps, so I can use some
> > educating here as well, but my understanding was that a request
> > creates a limited lifecycle for the request handler and instantiated
> > models, unless you put them in the global space.  So if you really
> > want to be efficient, you could create the counters globally so they
> > are sometimes kept across requests.  But even if you are dynamically
> > creating multiple instances of Counter objects, there shouldn't be
> > much of a performance penalty or access issue, because these objects
> > are so lightweight, and by the very nature of a web app the whole
> > response gets recreated each request.
>
> > Best,
> > Bill
--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To post to this group, send email to google-appengine@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en
-~----------~----~----~----~------~----~------~--~---