[google-appengine] DeadlineExceeded on cold hits.
Hi,

In the past week, I've seen an alarming number of DeadlineExceeded exceptions on cold hits to my applications.

Most of the stack traces are shallow -- things blow up well before my code is hit. See http://pastie.org/988269 for a stack trace.

The `bootstrap.py` file is more-or-less a direct copy of the `main.py` from Rietveld.

Can someone on the App Engine team please point me in the right direction here? This is a big change in GAE's behavior in the past week, and it is affecting many of my applications (citygoround, which has been in production for half a year; code-doctor, which is under development; etc.)

Cheers,
Dave Peck

--
You received this message because you are subscribed to the Google Groups "Google App Engine" group. To post to this group, send email to google-appeng...@googlegroups.com. To unsubscribe from this group, send email to google-appengine+unsubscr...@googlegroups.com. For more options, visit this group at http://groups.google.com/group/google-appengine?hl=en.
[google-appengine] Re: DeadlineExceeded on cold hits.
Can someone from the App Engine team help us understand whether these issues are related to recent datastore problems?

In theory, I don't think they should be, because at least for me I'm seeing DeadlineExceeded very early on -- somewhere in Django before any of my code is run. But that's theory... I was hoping for a comment on practice. ;-)

Cheers,
Dave

On Jun 2, 2:19 pm, scarlac wrote:
> +1
> We've been having way too many of these errors for two weeks. Downtime
> is acceptable but this is ridiculous. This can't keep up or we'll have
> to rewrite our app and stop using App Engine. We chose Google and
> expected stability and scalability, so I'm very disappointed at the
> moment.
[google-appengine] Re: DeadlineExceeded on cold hits.
Ping! Keeping this thread alive -- seems this has hit several people. Anyone have answers?

Thanks,
Dave

On Jun 3, 7:03 pm, nischalshetty wrote:
> +1
>
> The deadline exceptions are beyond me. It's definitely not the 30 sec
> limit thingy happening.
>
> Seriously, Google App Engine is an excellent platform. I started using
> it because it was free to start with. But now I'm paying for usage and
> I think GAE should have some provision for paid users. A different
> stack for paid users would really help.
>
> My argument is: until users enter billing, they really aren't
> getting enough traffic to be bothered by these exceptions. But once
> you start getting billed, it obviously means your traffic has
> increased, which means the app is doing pretty well and you definitely
> do not want these errors!
>
> -Nischal
[google-appengine] Tracking down soft memory limit errors.
I have an app that regularly logs "critical" soft memory limit errors after roughly 1k requests to a given process.

I've looked at all the obvious potential causes (global variables, etc.) but see nothing in my code that should lead to a memory leak. All cross-request state ends up either in memcache or the datastore.

I have a few questions:

1. My understanding is that this is about leaks. But can a single request that consumes a lot of memory at once cause this error?

2. App Engine logs this as "critical", but it seems to me to be "warning"-level information: we're basically looking at a performance issue, right?

3. Most importantly, can anyone recommend tools to track down memory leaks on my local `dev_appserver.py` instance? I've been looking at both Heapy (part of Guppy) and Dowser, but getting them into a running dev_appserver instance seems tricky -- they both require native extensions.

Thanks,
Dave
[google-appengine] Re: Tracking down soft memory limit errors.
Python. Raw webapp.

On Aug 16, 3:26 pm, Jeff Schwartz wrote:
> It would probably help if you provided language, framework, etc.
[google-appengine] Re: 500 Server Error on https://appengine.google.com
Me too. 500 errors and can't update my app. Google team?
Re: [google-appengine] Re: 500 Server Error on https://appengine.google.com
I'd like to add: as much as I admire the App Engine team for being so transparent with their status dashboard, it doesn't always seem to reflect on-the-ground reality. It would be good to have the current outage reflected there, for example.

Thanks,
Dave
[google-appengine] HTTPS CPU cost?
Does using HTTPS imply higher CPU cost? Or is the hard work done outside of metering?

(Also, the documentation about secure quotas seems a little vague. My impression is: at least up front, they're the same as the non-secure quotas, and "secure bandwidth" costs the same as regular bandwidth?)

Thanks,
Dave
[google-appengine] Transactions and user accounts.
I have a User model, and I've placed the email address in the key_name. This makes it easy to ensure uniqueness without race conditions.

_Changing_ emails becomes a problem, though: it involves creating a new account and switching all references. Which makes me wonder whether the email address really should be the key.

How have others approached this simple problem? You want transactions on all sides if you can get them...

Cheers,
Dave
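One common answer to the question above: don't make the email the User key at all. Give User an opaque, immutable key and keep a separate index entity whose key_name *is* the email, pointing back at the user. Uniqueness still comes from key existence (insert-if-absent inside a transaction), but changing an email only touches the two small index entities, never the User key or its references. The sketch below is schematic -- plain dicts stand in for the datastore, and `EmailIndex`-style names are illustrative, not App Engine APIs:

```python
# Stand-ins for the datastore: User keyed by opaque id, plus an email index
# whose "key_name" is the email itself.
users = {}        # user_id -> {"email": ...}
email_index = {}  # email -> user_id (uniqueness enforced by key existence)

def create_user(user_id, email):
    """Create a user, claiming the email via the index entity."""
    if email in email_index:
        raise ValueError("email already taken")
    email_index[email] = user_id
    users[user_id] = {"email": email}

def change_email(user_id, new_email):
    """Swap index entities; the User key and all references stay put.
    On App Engine the insert-if-absent check and writes would run in a
    transaction on the index entities to stay race-free."""
    if new_email in email_index:
        raise ValueError("email already taken")
    old_email = users[user_id]["email"]
    email_index[new_email] = user_id
    del email_index[old_email]
    users[user_id]["email"] = new_email

create_user("u1", "dave@example.com")
change_email("u1", "dave@newhost.com")
# "u1" is unchanged as a key; only the index entries moved.
```

The trade-off is one extra get (email -> user_id -> User) on login-style lookups, in exchange for cheap, reference-safe email changes.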
[google-appengine] "Request was aborted after waiting too long" followed by random DeadlineExceededError on import.
Hello,

I have an app (citygoround.org) that, especially in the morning, often has 10-15 minutes of outright downtime due to server errors.

Looking into it, I see that right before the downtime starts, a few requests log the following warning message:

> Request was aborted after waiting too long to attempt to service your request.
> Most likely, this indicates that you have reached your simultaneous dynamic request limit.

I'm certainly not over my limit, but I can believe that the request in question could take a while. (I'll get to the details of that request in a moment.)

Immediately after these warnings, my app has a long stretch of time (10+ minutes) where *all requests* -- no matter how unthreatening -- raise a DeadlineExceededError. Usually this is raised during the import of an innocuous module like "re" or "time", or perhaps a Django 1.1 module. (We use use_library.)

My best theory at the moment is that:

1. It's a cold start, so nothing is cached.
2. App Engine encounters the high-latency request and bails.
3. We probably inadvertently catch the DeadlineExceededError, so the runtime doesn't clean up properly.
4. Future requests are left in a busted state.

Does this sound at all reasonable? I see a few related issues (2396, 2266, and 1409) but no firm/completely clear discussion of what's happening in any of them.

Thanks,
Dave

PS: The specifics of our high-latency request are *not* strictly relevant to the larger problem I'm having, but I will include them because I have a second "side" question to ask about it.

The "high latency" request is serving an image. Our app lets users upload images, and we store them in the datastore. When serving an image, our handler:

1. Checks to see if the bytes for the image are in memcache, and if so returns them immediately.
2. Otherwise grabs the image from the datastore, and if it is smaller than 64K, adds the bytes to memcache.
3. Returns the result.

I'm wondering if using memcache in this way is a smart idea -- it may very well be the cause of our latency issues. It's hard to tell.

Alternatively, the issue could be this: we have a page that shows a large number (~100) of such images. If someone requests this page, we may have a lot of simultaneous image-producing requests happening at the same time. Perhaps _this_ is the root cause of the original "Request was aborted" issue?

Just not sure here...
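For reference, the three-step handler described in the PS is the classic cache-aside pattern. A minimal sketch, with plain dicts standing in for memcache and the datastore (`fetch_image` is an illustrative name, not part of the App Engine SDK):

```python
MAX_CACHEABLE = 64 * 1024  # only cache images under 64K, as described

cache = {}      # stand-in for memcache
datastore = {}  # stand-in for the stored image entities

def fetch_image(key):
    # 1. Serve straight from the cache on a hit.
    data = cache.get(key)
    if data is not None:
        return data
    # 2. Miss: load from the datastore, and populate the cache
    #    only if the image is small enough.
    data = datastore[key]
    if len(data) < MAX_CACHEABLE:
        cache[key] = data
    # 3. Return the bytes either way.
    return data

datastore["small"] = b"x" * 1024
datastore["huge"] = b"x" * (128 * 1024)
fetch_image("small")  # miss: gets cached for next time
fetch_image("huge")   # miss: too big, never cached
```

Note one property relevant to the ~100-image page: on a cold cache, every one of those simultaneous requests misses and hits the datastore at once (a thundering herd), which by itself could produce the latency spike described above.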
[google-appengine] Re: "Request was aborted after waiting too long" followed by random DeadlineExceededError on import.
Hi Ikai,

The app id is "citygoround". We had a number of stretches of "badness" this morning. An example stretch:

6:07AM 33.867 ("Request was aborted...")
6:07AM 49.672 through 7:12AM 24.470 ("DeadlineExceededError" and/or "ImproperlyConfiguredError" -- looks like it depends on which imports fail.)

And another:

8:17AM 37.620 ("Request was aborted...")
8:17AM 54.348 through 8:46AM 51.478 ("DeadlineExceededError" and/or "ImproperlyConfiguredError")

One last thing: the app is open source. If it helps, you can find the exact code that we're running in production at http://github.com/davepeck/CityGoRound/ -- the screenshot handler in question is in ./citygoround/views/app.py, line 115.

Cheers,
Dave

On Dec 14, 1:32 pm, "Ikai L (Google)" wrote:
> Do you see that it's consistent at the same times? What's your application
> ID? I'll look into it.
[google-appengine] Re: "Request was aborted after waiting too long" followed by random DeadlineExceededError on import.
Hi Ikai,

Any further details on your end? I get the feeling we're not the only ones, and we've experienced very serious downtime in the last ~48 hours.

This is a critical issue for us to resolve, but at the same time we lack key pieces of data that would help us solve it on our own...

Thanks,
Dave

On Dec 15, 9:14 am, Jason C wrote:
> Ikai,
>
> We see daily DeadlineExceededErrors on app id 'steprep' from 6.30am to
> 7.30am (log time).
>
> Can you look into that as well?
>
> Thanks,
> j
[google-appengine] Re: "Request was aborted after waiting too long" followed by random DeadlineExceededError on import.
Ikai,

We'll keep an eye on our app for the next ~24 hours and report back. At what time did you make the changes to our instance? We had substantial downtime earlier today, alas.

Can you provide any details about what sort of change was made?

Thanks,
Dave

On Dec 15, 11:26 am, "Ikai L (Google)" wrote:
> Dave,
>
> You're correct that this is likely affecting other applications, but it's
> not a global issue. There are hotspots in the cloud that we notice are being
> especially impacted during certain times of the day. We're actively working
> on addressing these issues, but in the meantime, there are manual steps we
> can try to prevent your applications from becoming resource starved. We do
> these on a one-off basis and reserve them only for applications that seem to
> exhibit the behavior of seeing DeadlineExceeded on simple actions (not
> initial JVM startup), and at fairly predictable intervals during the day.
> I've taken these steps to try to remedy your application. Can you let us
> know if these seem to help? If not, they may indicate that something is
> going on with your application code, though that does not seem like the case
> here.
[google-appengine] Re: Introducing App Engine SDK 1.3.0
Well, that gets us partway there. Looking at the docs, it looks like the output image must still be less than 1MB -- certainly fine for thumbnailing, but possibly not for all types of tasks.

Also: right now (unless I've missed an API somewhere), to validate images you must pass them to the Images API with a "no-op" transform and see if execute_transforms() succeeds. So if I want to validate a >1MB image, I still have the issue with the output side of the Images API.

It would be great if we could execute_transforms() directly back to a blob and get a BlobInfo back?

Cheers,
Dave

On Dec 15, 11:18 am, Matthew Blain wrote:
> While the limit for passing data directly to the Images (or other)
> APIs has not changed, you can pass a Blob key to the Images API to do
> exactly what you want: convert a 50MB uploaded image to a smaller
> image.
>
> More information here:
> http://code.google.com/appengine/docs/python/images/overview.html#Tra...
> http://code.google.com/appengine/docs/java/images/overview.html#Trans...
>
> --Matthew
>
> On Dec 15, 10:14 am, trung wrote:
> > This is awesome.
> >
> > But the Image API limit is still capped at 1MB!!!
> >
> > I would still rather be able to resize a 50MB uploaded image down to 1MB
> > or less to cut down the time and bandwidth.
> >
> > I assume that increasing the Image API limit is the next logical step. :)
> >
> > On Dec 14, 8:00 pm, "Jason (Google)" wrote:
> > > Hi everyone. We just released version 1.3.0 of the App Engine SDK for
> > > both Python and Java. The most notable change is the new experimental
> > > Blobstore API, which allows billed apps to store files up to 50 MB. The
> > > release also includes some performance tweaks to the Java runtime.
> > >
> > > Blog post: http://googleappengine.blogspot.com/2009/12/app-engine-sdk-130-releas...
> > >
> > > Release notes:
> > > Python: http://code.google.com/p/googleappengine/wiki/SdkReleaseNotes
> > > Java: http://code.google.com/p/googleappengine/wiki/SdkForJavaReleaseNotes
> > >
> > > Cheers!
> > > - Jason
[google-appengine] Re: Introducing App Engine SDK 1.3.0
> and see if execute_transforms() succeeds. So if I want to validate a >1MB
> image, I still have the issue with the output side of the Images API.

(I realize that, when validating, you can always resize the image so that it's likely to be less than 1MB when finished. I just wish there were a straightforward "is_valid_image" API, too...)

Cheers,
Dave
[google-appengine] Blob Store Post/Redirect/Get & Django Forms
Hi,

This morning I started to modify the code to CityGoRound to use the blobstore for user-uploaded screenshots.

We use Django forms in our app. One of our forms (http://citygoround.org/apps/add/) allows users to upload a new "transit app" to our app gallery. They must include one screenshot; they can include up to five.

Blobstore handlers must issue a 30x-series redirect once they're done with their work. Understandable.

Unfortunately, Post/Redirect/Get (PRG) makes handling blobs in the context of Django forms fairly tricky. Especially for a complex form like ours, we want to provide good feedback if the user does something wrong elsewhere in the form. It appears that, in order to do this, we must now redirect to a GET URL with the form contents part of the URL string itself.

If you're familiar with Django, you'll see why this doesn't fit the typical form pattern. Does anyone have suggestions about how this can be cleanly handled?

PS: I notice that if a user fails to attach a file, a zero-length blob with mimetype 'text/plain' is created anyway. Is this really desirable? I'm just going to turn around and delete that blob...

Thanks,
Dave
[google-appengine] Re: Blob Store Post/Redirect/Get & Django Forms
I suppose the clean way to do this is to add stuff to the user's session in the blob handler, and then pick it up in the GET request that we redirect to. Not too hard, though from the perspective of the Django forms API not a natural fit.

Cheers,
Dave

On Dec 15, 2:17 pm, Dave Peck wrote:
> Unfortunately, PRG makes handling blobs in the context of Django forms fairly tricky. [...]
> Does anyone have suggestions about how this can be cleanly handled?
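The session-stash idea can be sketched in plain Python. Everything here is illustrative, not Blobstore or Django API: a dict stands in for whatever session backend the app uses, and the handler names and `t=` query parameter are made-up assumptions.

```python
import uuid

# A plain dict standing in for the app's real session backend
# (memcache-backed sessions, etc.) -- purely a sketch.
SESSION_STORE = {}

def handle_upload_post(form_errors, blob_key):
    """Runs on the blobstore upload handler's POST side.

    Rather than packing form state into the redirect URL, stash it in
    the session under a one-time token and redirect with just the token.
    """
    token = uuid.uuid4().hex
    SESSION_STORE[token] = {"errors": form_errors, "blob_key": blob_key}
    return "/apps/add/?t=" + token  # the 302 Location target

def handle_form_get(token):
    """The GET view the redirect lands on: pop the stashed state once."""
    return SESSION_STORE.pop(token, {"errors": {}, "blob_key": None})

# Simulated round trip through POST -> redirect -> GET:
redirect_url = handle_upload_post({"title": "This field is required."}, "blob123")
token = redirect_url.split("t=")[1]
state = handle_form_get(token)
```

Popping (rather than reading) the token makes the stash single-use, so a refresh of the GET page shows a fresh form instead of stale errors.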
[google-appengine] Bug with blobstore internal redirect in dev_appserver.py
On the local server, when the blobstore code performs the internal redirect to whatever URL you specified in create_upload_url(), the POST contents are not properly encoded. According to the RFCs, you must end lines with CRLF, but dev_appserver (and, perhaps, the production environment?) ends lines only with LF.

This causes Django 1.1's multipart parser to fail (in parse_boundary_stream), since it is hardcoded to look for \r\n\r\n at the end of each part's header.

As a result, I'm blocked on django+blobstore integration work... I've logged this as issue 2515.

Thanks,
Dave
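Until the bug is fixed, one blunt workaround is to normalize the payload's line endings before Django's parser sees it. The helper below is a sketch, not part of any SDK; it is fine for the small header/boundary sections the parser chokes on, but apply it with care if binary part bodies can legitimately contain bare LF bytes.

```python
import re

def fix_bare_lf(payload):
    """Rewrite bare LF line breaks as CRLF without doubling existing CRLFs.

    Uses a negative lookbehind so an existing \r\n pair is left alone,
    while a lone \n becomes \r\n.
    """
    return re.sub(b"(?<!\r)\n", b"\r\n", payload)
```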
[google-appengine] PyCrypto and user passwords.
PyCrypto offers the blowfish cipher, but not the bcrypt hash.

What's the best way to store passwords on App Engine with PyCrypto?

Thanks,
Dave
[google-appengine] Re: PyCrypto and user passwords.
Really? No suggestions?

I settled on multiple iterations of SHA256. But I note that the version of PyCrypto on App Engine doesn't include the Crypto.Random submodule, so it's impossible to even generate a cryptographically satisfactory salt.

Can someone on the App Engine team comment on whether Crypto.Random will ever be available?

Thanks,
Dave

On Mar 10, 4:39 pm, Dave Peck wrote:
> PyCrypto offers the blowfish cipher, but not the bcrypt hash.
>
> What's the best way to store passwords on App Engine with PyCrypto?
Re: [google-appengine] Re: PyCrypto and user passwords.
> Can someone on the App Engine team comment on whether Crypto.Random
> will ever be available?
>
> Did you try to include your own PyCrypto version?

No, because you can't. PyCrypto requires native code, and thus must be supported directly by the App Engine team.

App Engine does provide PyCrypto on production, but it does not provide the Crypto.Random submodule. Thus my question: might we ever expect to see it?

Thanks,
Dave
[google-appengine] SPF Records for App Engine?
My sign-up validation emails are ending up in users' junk mail folders.

What's the scoop with SPF records for App Engine? I found a few older posts on this group about it, but nothing from a GOOG employee or pointing to official GOOG documentation.

Could someone point me in the right direction here?

Thanks,
Dave
[google-appengine] Re: SPF Records for App Engine?
I've seen some claims from potentially reliable sources that the correct SPF record is in fact:

v=spf1 include:aspmx.googlemail.com ~all

But this seems wrong given the link you just pointed me to?

Thanks!

Cheers,
Dave

On Mar 27, 2:36 pm, Chris Copeland wrote:
> If you are sending from an address f...@yourdomain.com and yourdomain.com is setup with Google Apps, then here are the instructions: http://www.google.com/support/a/bin/answer.py?answer=178723
>
> If that doesn't help (Yahoo may still be a problem for you), then just use Postmark or Amazon.
>
> -Chris
>
> On Sun, Mar 27, 2011 at 4:28 PM, Dave Peck wrote:
> > What's the scoop with SPF records for App Engine? [...]
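For reference, an SPF policy is published as a DNS TXT record on the sending domain. The include target in the fragment below is the one quoted in this thread; treat it as an assumption and verify against Google's current documentation before publishing, since the recommended target has changed over the years.

```
; Illustrative zone-file fragment -- verify the include target first.
yourdomain.com.  IN  TXT  "v=spf1 include:aspmx.googlemail.com ~all"
```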
[google-appengine] TypeError in urlfetch_stub.py for SDK 1.4.3?
I just upgraded to 1.4.3, and now see this error when performing URL fetches:

TypeError
Exception Value: escape_encode() argument 1 must be string, not unicode
Exception Location: /Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/api/urlfetch_stub.py in _RetrieveURL, line 283

This is happening inside App Engine SDK code, which is in turn being tickled by Braintree's payment API. This was not a problem in previous versions of the App Engine SDK, though potentially it is a problem with Braintree, not the SDK.

Could someone investigate and advise?

Thanks,
Dave
[google-appengine] Re: TypeError in urlfetch_stub.py for SDK 1.4.3?
Looking at this further, it looks like Braintree is using httplib.HTTPSConnection().request() with a unicode body. My read of the documentation is that the body should be bytes by the time you call request(), so I think this is a Braintree API error rather than an App Engine SDK error.

Does this sound like a reasonable conclusion?

Thanks,
Dave

On Mar 30, 4:54 pm, Dave Peck wrote:
> I just upgraded to 1.4.3, and now see this error when performing URL fetches:
>
> TypeError: escape_encode() argument 1 must be string, not unicode [...]
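If that diagnosis is right, the fix on the caller's side is simply to encode the body before calling request(). A minimal sketch, written in Python 3 syntax; under Python 2 (the runtime of this thread) the type check would be against `unicode` instead of `str`:

```python
def ensure_bytes(body, encoding="utf-8"):
    """Encode a text request body before handing it to httplib /
    http.client, which expects an already-encoded payload."""
    if isinstance(body, str):
        return body.encode(encoding)
    return body
```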
[google-appengine] Data Store Down?
Title says it all.

I have numerous apps. The datastore appears to be failing to write, but doing so silently: no exceptions, no nothing.

What's up?

Thanks,
Dave
[google-appengine] Re: Data Store Down?
Well, we're back to working again. But I have logs that pretty clearly show that (1) the datastore was failing to write my entities, and (2) it was failing silently. Not good.

-Dave

On May 1, 6:30 pm, Dave Peck wrote:
> The datastore appears to be failing to write, but doing so silently: no exceptions, no nothing.
[google-appengine] Data Store Down... writes offline.
I just launched my new app (www.getcloak.com). And, wouldn't you know it, just as I'm sending out invite codes App Engine's datastore goes down. Hard. Users can't sign up because writes appear to be disabled.

So, um, is this going to be fixed soon? This is Murphy's law in action with App Engine! I can't even appcfg update my app to give users a prettier error message...

Cheers,
Dave
[google-appengine] Re: Data Store Down... writes offline.
Ah, I see that scheduled downtime got moved. So I assume this is planned and we'll be back soon? We've been down for a while now...

On May 3, 6:09 pm, Dave Peck wrote:
> Users can't sign up because writes appear to be disabled. [...]
[google-appengine] Huge number of datastore reads?
My production application went over its daily budget today. Let's just say that my daily budget is roughly 3x what I've ever actually needed on my busiest day. Today was not my busiest day by a long shot.

It appears that I serviced a small number of requests today. However, the dashboard claims that I performed roughly 10,000x the number of datastore read ops as requests, at a cost of $LOTS_OF_MONEY_FOR_ONE_DAY.

This seems like nonsense. My code hasn't changed in a while. I've never reached a quota quite like this. And there's no way my average request requires 10,000 read ops. Just no.

Google team -- is there someone I can speak with? I'd like to understand in detail what happened and how to prevent it going forward.

-Dave
[google-appengine] Re: Huge number of datastore reads?
The ID is get-cloak-live. No remote queries; everything would have to have been generated by handling requests (some of which were our cron jobs.)

Thanks,
Dave

On Mar 6, 4:53 pm, Alfred Fuller wrote:
> What is your app id?
>
> Did you perform a lot of queries using remote api?
>
> On Tue, Mar 6, 2012 at 4:40 PM, Dave Peck wrote:
> > the dashboard claims that I performed roughly 10,000x the number of datastore read ops as requests [...]
[google-appengine] Re: Huge number of datastore reads?
To be clear: I lost real customers and real money today. The problem was compounded by apparent issues with Google Checkout on the iPhone.

I got a downtime notification and quickly determined the root cause. I was on the road, so I pulled over and immediately logged in with my iPhone to update the billing information. I more-than-doubled our daily budget and submitted it. It looked like everything worked. (That is to say: the form let me type in a new budget and submit it.) Billing was "frozen" for 30 minutes. I kept refreshing, only to discover that for whatever reason _it hadn't worked_. 30 minutes later, I tried again. This doomed us to another 30 minutes of downtime. It wasn't until I raced home to my laptop that I was able to successfully update the billing. This needs to be fixed.

The worst of it: we looked like rank amateurs in several ways, but perhaps none greater than our 500 page. App Engine decided to display a generic Google-logo'd "Over Quota" 500 page instead of our custom 500 error page. What customer in their right mind, after seeing such an embarrassment, would think that we're serious about our business?

-Dave
[google-appengine] Re: Huge number of datastore reads?
Although I was hoping for useful, non-judgmental replies too.

-Dave

On Mar 6, 10:08 pm, "Brandon Wirtz" wrote:
> > The worst of it: we looked like rank amateurs in several ways, but perhaps
>
> I'd say "no offense" but I'd be lying.
>
> "More than doubled"
>
> My daily budget sits at 50x my biggest recorded day on almost every app I have. If your model is to make money, and you are pretty sure you don't have a bug that is going to cost you millions of dollars, there is no reason not to set your budget sky high. If you have only 30 minutes to get your app back within quota, you are rank amateurs. Your scale is only limited by your budget. If you aren't going to set your budget as large as you imagine you would want to scale, you are doing it wrong.
>
> If you are going to run with the training wheels on, don't be surprised when you hit a pot hole and your back tire just spins with you going nowhere.
>
> -Brandon
[google-appengine] Re: Huge number of datastore reads?
> Should be possible: http://code.google.com/appengine/docs/java/config/appconfig.html#Cust...

Ah, nice feature. Thanks; curious when this was added.

Cheers,
Dave
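The feature referenced is the error_handlers setting in the app's configuration. A sketch of what it looks like in a Python app's app.yaml (the HTML file names are placeholders):

```yaml
error_handlers:
  # Served for generic server errors in place of the stock error page.
  - file: default_error.html
  # Served when the app is over quota, instead of Google's "Over Quota" page.
  - error_code: over_quota
    file: over_quota.html
```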
[google-appengine] Re: Huge number of datastore reads?
Hi Chris,

Thanks!

> Sorry that you had issues with Checkout. We continue to work with them on improvements that they should make to the service (and I've forwarded this thread on to them).

Good to hear. If I can provide further details, just let me know. In the final analysis, it seemed to us that the billing update issued from my iPhone worked... but it didn't. More accurate feedback would have been helpful.

> Did you figure out why the DS reads increased? Is this still an issue?

The operating theory at the moment is that a change to how our back-end services report back to App Engine triggered a latent (and really nasty) performance bug. We're still tracking down the details now; Alfred has been immensely helpful offline!

> Setting your budget higher is probably not a bad idea, but totally up to you. I think the pros and cons have been outlined (in one way or another) here :-)

Indeed they were. ;-)

Cheers,
Dave
[google-appengine] download_data performance?
I want to download the full database for one of my apps. Entities consume a relatively modest 2GB. (Indexes all told consume 19GB, but my impression is that download_data won't download these?)

appcfg.py download_data has been running now for 12+ hours on a very fast downstream connection.

Is there a faster way to download all data from a GAE app? Some settings I can tweak that might help move things along?

Thanks,
Dave
[google-appengine] Re: download_data performance?
I'm not sure how I missed that! Thanks.

-Dave

On Jun 5, 10:53 am, c h wrote:
> by default download_data is throttled. read the docs for appcfg.py to see the settings and change them.
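For concreteness, a sketch of a less-throttled invocation. The flag names follow the bulkloader options of that era, and the remote_api URL and output filename are placeholder assumptions -- check `appcfg.py help download_data` against your SDK before relying on any of them:

```
# Defaults (roughly): bandwidth_limit=250000 bytes/sec,
# rps_limit=20 records/sec, batch_size=10 entities per fetch.
appcfg.py download_data \
  --application=get-cloak-live \
  --url=http://get-cloak-live.appspot.com/_ah/remote_api \
  --filename=entities.sql3 \
  --bandwidth_limit=2500000 \
  --rps_limit=500 \
  --batch_size=100
```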
[google-appengine] Re: download_data performance?
Wait, hang on.

2GB of entities. Default bandwidth limit of 250,000 bytes/sec. So, assuming we exactly saturate that limit, it should take roughly 8,600 seconds -- about 2 hours 23 minutes. Now, we'll never saturate, so maybe estimate a 2x or even 3x multiple of that time?

That's still far less than the 14+ hours my download job has been running.

-Dave

On Jun 5, 11:03 am, Dave Peck wrote:
> I'm not sure how I missed that! Thanks.
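The back-of-the-envelope math above, spelled out (assuming 2 GiB of entities and the 250,000 bytes/sec default throttle):

```python
entity_bytes = 2 * 1024 ** 3               # ~2 GiB of stored entities
bandwidth = 250000                         # default throttle, bytes/sec
seconds = entity_bytes / float(bandwidth)  # ~8,590 s at full saturation
hours = seconds / 3600.0                   # ~2.4 hours
```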
[google-appengine] Re: download_data performance?
~970MB right now.

On Jun 5, 11:33 am, Barry Hunter wrote:
> How big is the data you have already downloaded?
>
> You should be able to see the size of the file being written to.
>
> (or find the temporary file it's been written to)