Re: [google-appengine] Poor performance since the past 2-3 days

2011-05-24 Thread Denis Volokhovskiy
Hi Sarang,


This definitely looks like something going wrong during Django initialization.

Maybe you have too many registered applications in settings.py and they take 
ages to initialize,

or there is some cyclic bug in middleware?

You may want to remove all middleware/apps from settings.py and see what 
happens then.


Or are you doing some cyclic imports? (But in that case some error-reporting 
recursion should be raised...)

What Django version do you use?  Or is this Django-nonrel? app-engine-patch?

(Django has a very large code base. Personally, I'd now go with lighter 
frameworks like Tipfy, with some isolation layer over the GAE services.)


Or, your problem could be solved by postponing some of the imports in your 
code: moving them 

from the global scope of the module into a function body, so they happen at 
use time, not at initialization.
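A rough sketch of what I mean (the "heavy_module" name is made up just for 
illustration):

# Eager style pays the import cost at instance startup:
#   from myapp import heavy_module

def handle_report(request):
    # Deferred style: the import runs on the first request that needs it,
    # so instance startup (and Django initialization) stays fast.
    # Python caches modules, so later calls don't re-import.
    from myapp import heavy_module  # hypothetical heavy dependency
    return heavy_module.build_report(request)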


Try adding some logging to the Django core manually, as it initializes,

then examine the log and see what is actually going on.

Maybe you can just strip or disable some parts of the Django distribution, etc.


 





[google-appengine] Re: Rudimentary New Pricing Analysis

2011-05-31 Thread Denis Volokhovskiy
- "Also remember a GAE instance allows you to handle exactly one
request at a
time."
It is not true for JVM application instance - it handles requests
concurrently.

Besides, GAE memcache is globally visible to all instances, but in AWS
you need to setup your own per instance or 3rd party thru REST.
If you have 8 instances up, then you'll have to have 8x more memory
for your
caching - separate record for each instance.
Plus, with global cache, you may store temporary flags/signals/locks
etc.
Memcache requests will be faster inside GAE than to 3rd party.
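For example, the shared memcache can serve as a crude cross-instance lock.
Just a sketch; the key name and the rebuild_index() job are made up:

from google.appengine.api import memcache

LOCK_KEY = 'rebuild-index-lock'  # hypothetical key name

def try_start_rebuild():
    # memcache.add() only succeeds if the key is absent, and the cache is
    # shared by all instances, so it works as a simple advisory lock.
    if memcache.add(LOCK_KEY, 'locked', time=60):
        try:
            rebuild_index()  # hypothetical long-running job
        finally:
            memcache.delete(LOCK_KEY)
    # else: another instance is already doing the rebuild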




On May 31, 9:10 am, "Raymond C."  wrote:
> By "GAE provides you much more", you mean much more *limitation* right?
>
> The cost on setup is trivial for a long run application when compared to the
> hosting cost.  I dont think the high price worth that cost for long term.
>
> Also remember a GAE instance allows you to handle exactly one request at a
> time.  The cost is not just 4X or 8X, but 40X or 80X if one instance on EC2
> can handle 10 requests at a time, which is normally much higher than this
> number.




[google-appengine] Re: Updated App Engine Pricing FAQ!

2011-06-27 Thread Denis Volokhovskiy
Hi Sylvain,

If the critical point in your pricing is only 1GB of extra datastore,
you could shift some data into Amazon S3 or any other storage service 
provider,
and then use the GAE DS only as an "index" to it (btw, this is the way Amazon 
recommends using their SimpleDB: as an index for S3).
Of course, that only helps unless you simply have a huge number of small records.
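A minimal sketch of such an "index" entity (the field names are just 
illustrative):

from google.appengine.ext import db

class BlobIndex(db.Model):
    # Small, queryable metadata lives in the datastore...
    owner = db.StringProperty()
    created = db.DateTimeProperty(auto_now_add=True)
    size_bytes = db.IntegerProperty()
    # ...while the large payload lives in S3; we keep only its location.
    s3_bucket = db.StringProperty()
    s3_key = db.StringProperty()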

I think the $9 minimum is intentional: while providing a free quota, Google 
wants a healthy eco-system, but at the same time wants to deal with real 
businesses rather than become an almost-free hosting provider.
Indeed, it is too expensive to run small free hosting on such a complex, 
scalable infrastructure, maybe several times more than shared hosting on a 
plain server.




[google-appengine] Re: Since Monday, these simple SQL operations translate to expensive DataStore operations, why?

2011-11-14 Thread Denis Volokhovskiy
Hi,

You should use a sharded counter rather than the count() function.
count() simply traverses all entities of the kind until it reaches the last 
one, so it scales linearly with your data size.
I suppose your entity count has simply grown a lot since Monday, and that is 
why you noticed it.

The datastore is different from a SQL database here; count() is not a cheap 
aggregate.
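A minimal sharded-counter sketch, along the lines of the App Engine 
documentation (the shard count and names are just illustrative):

import random
from google.appengine.ext import db

NUM_SHARDS = 20  # illustrative

class CounterShard(db.Model):
    name = db.StringProperty(required=True)
    count = db.IntegerProperty(default=0)

def increment(name):
    def txn():
        index = random.randint(0, NUM_SHARDS - 1)
        key_name = '%s-%d' % (name, index)
        shard = CounterShard.get_by_key_name(key_name)
        if shard is None:
            shard = CounterShard(key_name=key_name, name=name)
        shard.count += 1
        shard.put()
    # Each shard is its own entity group, so concurrent increments
    # rarely collide.
    db.run_in_transaction(txn)

def get_count(name):
    # Reading the total touches NUM_SHARDS entities instead of scanning
    # every entity of the kind.
    total = 0
    for shard in CounterShard.all().filter('name =', name):
        total += shard.count
    return total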




[google-appengine] Sudden redirect to Turing anti-bot test during remote api session

2011-01-25 Thread Denis Volokhovskiy
Hi Google Team,

I was performing datastore maintenance from the remote API console for a
Python app (likeourselvesapp);
many records were processed in a loop of single requests (I think about
100),
and suddenly I ran into an automatic redirect (HTTP 302) to a Turing
test page (with a captcha to enter)
right from the remote shell.
Then I exited and tried to deploy, and here are the traces for the same
denial:

Application: likeourselvesapp; version: v132-2.
Server: appengine.google.com.
Scanning files on local disk.
Scanned 500 files.
Initiating update.
Error 302: --- begin server output ---

302 Moved
302 Moved
The document has moved
http://sorry.google.com/sorry/?continue=http://
appengine.google.com/api/appversion/create%3Fversion%3Dv132-2%26app_id
%3Dlikeourselvesapp">here.

--- end server output ---

I tried to open the specified URL with the Turing test and entered the
captcha, but it still rejected my further requests.

After several minutes the problem was gone,
but such delays are harmful for normal maintenance from the remote console.

We are a billing-enabled app, and I have been a registered developer for a
long time.
Why does such a problem happen, and is there any solution to avoid it?

Thank you




[google-appengine] Re: Sudden redirect to Turing anti-bot test during remote api session

2011-01-25 Thread Denis Volokhovskiy
After some time I retried the same session, without this issue.

Looks like it is a rare case.










[google-appengine] Server error 500 and Error 139 on admin site

2011-01-26 Thread Denis Volokhovskiy
Dear Google Team!

The App Engine admin site (appengine.google.com) has become
very unresponsive over the last few days.

I have to refresh up to 10 times to get one page to load.

Server error 500 and
Error 139 (net::ERR_TEMPORARILY_THROTTLED): "Requests to the server
have been temporarily throttled"
keep appearing.

It is very harmful to our work, mainly because of the logs being unavailable.

Is this issue temporary?

Could I access at least the traces in some other way, say via some
backend library hosted on my app instance?
That might be a good temporary solution.

Many thanks,
Denis.




[google-appengine] Re: Server error 500 and Error 139 on admin site

2011-01-26 Thread Denis Volokhovskiy
I have noticed that such errors
happen only on my app with high development activity and a massive
history (likeourselvesapp);
my almost inactive apps are fine, and the admin console works for all views.








[google-appengine] Re: GAE Python: Passing variables from one handler to another without use of 'global'

2011-02-25 Thread Denis Volokhovskiy
Hi outlaw,

If you want just a common constant, you may
add a Python module like properties.py:
...
FOO = 'bar'
...

and then, at the place where your handlers are defined, do

from properties import FOO


But if you want, as Philip said, to pass some variable between
requests,
you have 2 options:
a) Pass it to the user side with the response from one handler,
   and then read it from the request inside another handler.
   This may be achieved by:
   a1) what Philip proposed: a hidden form field, but then you
       should use post()
   a2) cookies
b) Put the value into memcache or the datastore.
   Memcache is not reliable though; entries may suddenly be evicted within
   hours or even minutes.
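A minimal sketch of option b) with memcache, reusing the handler names from
your example (the key name, routes and fallback value are just illustrative):

from google.appengine.api import memcache
from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app

class Congo(webapp.RequestHandler):
    def get(self):
        foo = 'bar'
        # Store the value where any instance can read it.
        memcache.set('foo', foo, time=3600)
        self.response.out.write('stored')

class Guando(webapp.RequestHandler):
    def get(self):
        foo = memcache.get('foo')
        if foo is None:
            foo = 'default'  # handle eviction / cold cache, or fall back to the datastore
        self.response.out.write(foo)

application = webapp.WSGIApplication([('/congo', Congo), ('/guando', Guando)])

def main():
    run_wsgi_app(application)

if __name__ == '__main__':
    main()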

You may NOT use global variables between requests at all (probably you
are aware of this already),
because request handlers may run inside different App Engine
instances;
that would not be scalable.

Best regards,
Denis




On Feb 25, 8:14 pm, Philip  wrote:
> I might misunderstood your question but take a look at 
> this:http://pastebin.com/58c1Ltxr
>
> Best Regards
> Philip
>
> On Feb 25, 3:41 pm, outlaw  wrote:
> > Greetings,
> > Simple newb question wrt Python. Brief Sample:
>
> > class Congo(webapp.RequestHandler):
> >     def get(self):
> >        # snip
> >         FOO = "bar"
> >         self.response.out.write("""
> >                 
> >                 
> >                 
> >                 """)
>
> > class Guando(webapp.RequestHandler):
> >     def get(self):
> >         # I want to use FOO here without using 'global'
>
> > def main():
> >     run_wsgi_app(application)
>
> > if __name__ == "__main__":
> >     main()
>
> > I would like to see FOO in Guando, what's a clean way of doing this?
>
> > With thanks!




[google-appengine] Re: High Replication Datastore and consistency problem.

2011-02-25 Thread Denis Volokhovskiy
Hello,

The High Replication DS solves the issue of latency
between a requested get/put/delete operation and its actual completion,

but it still has limits on how many put requests
you can do per second: currently only 1 guaranteed write per second
per entity group.

(See the bottom of 
http://code.google.com/appengine/docs/python/datastore/hr/overview.html)

Any entity may be considered a single-item entity group, so this
applies to a series of puts to the same entity.

As I understand from your example, a particular user is supposed to
write into the same entity instance
several times, so this is potentially a contention error.
I have run into this situation on frequently updated entity groups.

Maybe you could consider several shards for the Email entity?
Then you can examine the update times and join the chunks in the proper order
(provided you have some field like mtime =
db.DateTimeProperty(auto_now=True))
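A minimal sketch of that sharding idea (model and field names are just
illustrative; ordering ties are possible if two appends land in the same
millisecond):

from google.appengine.ext import db

class EmailChunk(db.Model):
    email_id = db.StringProperty(required=True)    # which logical email
    content = db.TextProperty()                    # the appended piece
    mtime = db.DateTimeProperty(auto_now_add=True)

def append_content(email_id, piece):
    # Each append creates a brand-new root entity, so there is no
    # contention on a single Email entity group.
    EmailChunk(email_id=email_id, content=piece).put()

def full_content(email_id):
    # Needs a composite index on (email_id, mtime).
    chunks = EmailChunk.all().filter('email_id =', email_id).order('mtime')
    return ''.join(chunk.content for chunk in chunks)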

Or, even better, collect intermediate results in memcache
and, say, every 1 or 2 seconds do the actual write into the datastore?

Best regards,
Den



On Feb 25, 11:43 am, de Witte  wrote:
> Hello,
>
> We are developing the following datastore for a big application.
>
> One table with 10.000.000 records.
>
> The application has at any given time 40.000 active users.
> These users make frequently adjustments to these records.
> In rare cases, more than one user writes to a single record.
>
> What happens if a specific user does four requests in a row to adjust a
> single record.
>
> For example:
>
> Entity Email with property content;
>
> Request 1: get Email, add 'hello' to content, put Email.
> Request 2: get Email, add ',how' to content, put Email.
> Request 3: get Email, add ' are y' to content, put Email.
> Request 4: get Email, add ' doing' to content, put Email.
>
> How can I ensure that the content property has the value 'hello, how are y
> doing' after the fourth request?
>
> The entity group Email, has no parent.
>
> Is this scenario strongly consistent?




[google-appengine] Re: How to debug python script?

2011-03-01 Thread Denis Volokhovskiy
There is an additional option.

If you are using some remote API command shell
implementation (e.g. app-engine-patch and Django-nonrel have one)
you may mix remote calls with debugging of
local code.
This is achieved by running the remote shell script
under a Python debugger.

My choice: install winpdb - really nice though minimalistic.
Then run
winpdb <your remote shell script>
(for app-engine-patch: winpdb ./manage.py shell --remote)

The winpdb window will come up,
and you may set some breakpoints in your local code,
then continue execution.
After continuing, you'll be dropped into the remote shell;

there you do the required imports and call some functions manually
to trigger the places with the breakpoints.

(Note that you may need to place some additional __init__.py files
in the root directories of your project,
so you can do
import src.foo.bar
rather than
import foo.bar
to give the debugger a clue where your sources are)
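To make the "call some functions manually" part concrete, here is the kind
of local helper module you would import from the remote shell and step into.
Everything here (module name, model, field) is made up for illustration:

# maintenance.py -- hypothetical local module
from google.appengine.ext import db
from models import UserProfile  # hypothetical model from your project

def fix_profiles(batch_size=50):
    # Set a winpdb breakpoint on the next line, then from the remote shell
    # run:  import maintenance; maintenance.fix_profiles()
    profiles = UserProfile.all().fetch(batch_size)
    for profile in profiles:
        profile.normalized_name = profile.name.strip().lower()
    db.put(profiles)  # goes to the production datastore via remote_api
    return len(profiles)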




On Mar 1, 7:44 am, Calvin  wrote:
> The simplest way to debug a GAE app is to just use tons of logging 
> calls:http://docs.python.org/library/logging.html
>
> These will show up in the logs area of your app's dashboard.
>
> If you are running a local dev_appserver.py you can use an IDE like PyDev to
> set up breakpoints and watch variables.  It's not too hard to set up and
> will save you lots of time in the long run.




[google-appengine] Re: How to debug python script?

2011-03-01 Thread Denis Volokhovskiy
Also, you may of course run the GAE local dev server under a debugger,
but the local server only simulates the datastore, memcache and other
services,
and you're working with a fake copy of the real data on the production server;
this is not always enough, and it may skew some results.

The advantage of running the remote API console under a debugger is
that you are working exactly with the production server's datastore, real
memcache, etc.








[google-appengine] Re: Is the 1 write per second to an entity group limitation only for HR datastore or for Master/slave too?

2011-03-03 Thread Denis Volokhovskiy
Also, you may consider "sharding in time" for some cases:
when many tasks are to be generated at one point,
you may add to the *countdown* parameter some random amount of time in,
say, a 0..2 second interval,
so you "shard" the tasks' start times within those 2 seconds.




On Mar 3, 5:22 am, mcilrain  wrote:
> For counting, you should either use a sharded counter or the taskqueue to
> increment it.




[google-appengine] Re: Is the 1 write per second to an entity group limitation only for HR datastore or for Master/slave too?

2011-03-03 Thread Denis Volokhovskiy
"distributing in time" - better wording here








[google-appengine] Re: Solutions to an Exploding Index Problem

2011-03-11 Thread Denis Volokhovskiy
Hi Aaron,

You may roll your Keywords model down
to a KeywordsOwnerTopic model:

class KeywordsOwnerTopic(db.Model):
    owner_saved = db.ReferenceProperty()  # though I prefer to use StringProperty, store the key_name, and query by it
    topic = db.ReferenceProperty()
    sort1property = db.IntegerProperty()

Does the original "Keywords" model store some more data?

Then you may avoid duplication by referencing it:

class KeywordsOwnerTopic(db.Model):
    owner_saved = db.ReferenceProperty()  # though I prefer to use StringProperty, store the key_name, and query by it
    topic = db.ReferenceProperty()
    sort1property = db.IntegerProperty()
    keywords_ref = db.ReferenceProperty()  # Keywords model key

class Keywords(db.Model):
    some_other_data = db.SomeProperty()
    ...


Also, if you query some model only by properties, not by keys/key_names,
you have the option of storing short data (under 500 bytes) in the key_name
(plus some unique number as a prefix)
and querying with keys_only=True - this will be much faster.
This may be used, e.g., if a particular Keywords entity represents only one
keyword and its length is < 500
(maybe I misunderstood your data model though); see the sketch below.
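A rough sketch of that key_name trick (the '|' packing scheme and the helper
names are just made up for illustration):

from google.appengine.ext import db

class Keyword(db.Model):
    # No properties needed for the lookup; the keyword text is packed into
    # the key_name together with a unique owner/topic prefix.
    pass

def save_keyword(owner_id, topic_id, word):
    key_name = '%s|%s|%s' % (owner_id, topic_id, word)
    Keyword(key_name=key_name).put()

def keywords_for(owner_id, topic_id):
    prefix = '%s|%s|' % (owner_id, topic_id)
    # Key-range scan with keys_only=True: only keys are fetched,
    # never the entity data, so it is much cheaper.
    query = db.Query(Keyword, keys_only=True)
    query.filter('__key__ >=', db.Key.from_path('Keyword', prefix))
    query.filter('__key__ <', db.Key.from_path('Keyword', prefix + u'\ufffd'))
    return [key.name().split('|', 2)[2] for key in query]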

Best regards,
Den




On Mar 11, 11:23 pm, Aaron  wrote:
> Hi Jay,
>
> Exactly, I need to query for keywords based on user and topic.
>
> That's an interesting suggestion, but the list of saved keywords can
> get longer than 5000.  I also need to be able to sort by a number of
> different integer properties that I didn't explicitly include in the
> model.
>
> On Mar 11, 9:40 am, Jay  wrote:
> > So you have the user and the topic, and want the key words? Do I have that
> > right?
>
> > What about moving the saved keywords to the user? You might do something
> > like cat them together in a list on user. How big do you expect the list to
> > get per user?
