[google-appengine] Re: Channel API - Channels Created Costs
We've been using the Channel API for a while, and since the March 2014 pricing changes (http://googlecloudplatform.blogspot.fr/2014/03/google-cloud-platform-live-blending-iaas-and-paas-moores-law-for-the-cloud.html) channels created are free. They used to be priced at 1 cent per 100 created, which appstats still seems to reflect.

On Sunday, October 5, 2014 10:01:20 PM UTC+2, Mihail Russu wrote:

You can use appstats (https://cloud.google.com/appengine/docs/python/tools/appstats) to find out and calculate costs for any of your GAE handlers/functions. A quick test reveals that creating a channel costs 10,000 micropennies (1 dollar equals 100 pennies; 1 penny equals 1 million micropennies), which is expensive by comparison: 1 datastore read by key costs only 100 micropennies. So 1 US cent would let you create 100 channels.

Also, I am not sure what the 90,040 (or 95,040 in my case) limit is about. There are 1,440 minutes per day, and according to the link you provided GAE allows 60 channels created per minute, which works out to a maximum of 86,400 channels that can be created per day, no matter what the budget is (which is not that much if you think about it). It would be great if anyone could clarify this or point to any errors in my calculations. Thanks, Mihail.

On Saturday, October 4, 2014 6:33:57 PM UTC-5, cr...@portical.com.au wrote:

Hi, I have a paid app and can see the quota is 90,040 per day for channels created: https://console.developers.google.com/project/PROJECTNAME/appengine/quotadetails

I have looked at all the pricing pages from the Cloud services and developer pricing pages and cannot see any prices for the cost per channel created after the 100 free limit. https://cloud.google.com/appengine/docs/quotas#Channel only says the daily limit is "Based on your budget", but I do not see any costs anywhere on the web. Am I to assume that I get 90,040 free per day for having a paid app?
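For reference, Mihail's arithmetic checks out; here is a small sketch of it in Python, using the figures stated above (10,000 micropennies per channel from appstats, and the documented 60-channels-per-minute rate limit):

```python
# Channel API cost arithmetic from the thread (pre-March-2014 pricing).
MICROPENNIES_PER_CHANNEL = 10000   # measured with appstats, per the post
MICROPENNIES_PER_PENNY = 1000000   # 1 penny = 1,000,000 micropennies

# How many channels one US cent buys.
channels_per_cent = MICROPENNIES_PER_PENNY // MICROPENNIES_PER_CHANNEL
print(channels_per_cent)        # 100

# Rate-limit ceiling: 60 channels per minute, 1,440 minutes per day.
max_channels_per_day = 60 * 60 * 24
print(max_channels_per_day)     # 86400
```

That ceiling of 86,400 is indeed below the 90,040 daily quota shown in the console, which is the discrepancy the post asks about.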
-- You received this message because you are subscribed to the Google Groups Google App Engine group. To unsubscribe from this group and stop receiving emails from it, send an email to google-appengine+unsubscr...@googlegroups.com. To post to this group, send email to google-appengine@googlegroups.com. Visit this group at http://groups.google.com/group/google-appengine. For more options, visit https://groups.google.com/d/optout.
[google-appengine] Trouble reading a pile of stuff from the datastore
I have an app that once a day does a big data-processing task. Every now and then it would throw a datastore timeout error, but now it's throwing them constantly. I thought maybe my data had tripped over some limit on how much you can read, but I just added some instrumentation and it's only reading less than half of the entities. If I were tripping over an undocumented limit, I'd think it would read almost all of them (since only a few get added each day).

Basically, the code is simply:

for h in HitModel.all():
    # ...collect up info about h...

and there are about 85K HitModel objects in the database. It's dying after reading 35,000 of them (which takes about a minute). It's on the HR datastore, still on Python 2.5. App ID is "kaon-log". The error I'm getting is:

2014-10-07 11:16:46.925 The datastore operation timed out, or the data was temporarily unavailable.

Traceback (most recent call last):
  File "/base/data/home/runtimes/python/python_lib/versions/1/google/appengine/ext/webapp/_webapp25.py", line 714, in __call__
    handler.get(*groups)
  File "/base/data/home/apps/s~kaon-log/33.379217403803985923/main.py", line 648, in get
    for h in HitModel.all():
  File "/base/data/home/runtimes/python/python_lib/versions/1/google/appengine/ext/db/__init__.py", line 2326, in next
    return self.__model_class.from_entity(self.__iterator.next())
  File "/base/data/home/runtimes/python/python_lib/versions/1/google/appengine/datastore/datastore_query.py", line 3091, in next
    next_batch = self.__batcher.next_batch(Batcher.AT_LEAST_OFFSET)
  File "/base/data/home/runtimes/python/python_lib/versions/1/google/appengine/datastore/datastore_query.py", line 2977, in next_batch
    batch = self.__next_batch.get_result()
  File "/base/data/home/runtimes/python/python_lib/versions/1/google/appengine/api/apiproxy_stub_map.py", line 612, in get_result
    return self.__get_result_hook(self)
  File "/base/data/home/runtimes/python/python_lib/versions/1/google/appengine/datastore/datastore_query.py", line 2710, in __query_result_hook
    self._batch_shared.conn.check_rpc_success(rpc)
  File "/base/data/home/runtimes/python/python_lib/versions/1/google/appengine/datastore/datastore_rpc.py", line 1333, in check_rpc_success
    raise _ToDatastoreError(err)
Timeout: The datastore operation timed out, or the data was temporarily unavailable.

Any ideas? (Breaking this up into multiple tasks would be really hard.)

-Joshua
Re: [google-appengine] Trouble reading a pile of stuff from the datastore
I have seen this error as well and had to change my code. My situation is Python 2.7 / HRD / db.Query() in a module with manual scaling:

q = db.Query(...)
...
for ent in q.run():
    # do stuff

The iteration goes well for a large number of entities and then gives up in a similar way; something seems to time out for long-lived queries. Needless to say it works fine on the dev server, but there of course I do not have so much data. Breaking it down with cursors or equivalent works fine, and this is what I am doing as a workaround. I did not even bother to see if there is an issue for it, but I would happily star it if there is one.

PK
http://www.gae123.com
Re: [google-appengine] Trouble reading a pile of stuff from the datastore
Yup, cursors are a good workaround. Here's my fix:

hcur = None
keepGoing = True
while keepGoing:
    count = 0
    hall = HitModel.all()
    if hcur:
        hall.with_cursor(start_cursor=hcur)
    keepGoing = False
    for h in hall:
        # ...my processing stuff...
        count += 1
        if count == 2:
            hcur = hall.cursor()
            keepGoing = True
            break
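The cursor pattern in Joshua's fix can be boiled down to a generic shape: fetch a bounded batch, remember an opaque cursor, and resume from it. Here is a runnable sketch of that shape in plain Python, with a hypothetical stand-in fetch_page() replacing the App Engine query (the data, function names, and batch size are illustrative, not from the thread):

```python
# Generic cursor-style batching. The GAE-specific pieces (HitModel.all(),
# with_cursor(), cursor()) are replaced by a stand-in fetch_page() so the
# pattern itself is self-contained and runnable.
DATA = list(range(85))  # stand-in for the ~85K HitModel entities

def fetch_page(cursor, batch_size):
    """Return (batch, next_cursor); next_cursor is None when exhausted."""
    start = cursor or 0
    batch = DATA[start:start + batch_size]
    end = start + batch_size
    next_cursor = end if end < len(DATA) else None
    return batch, next_cursor

def process_all(batch_size=10):
    """Walk the whole dataset in bounded batches, resuming via cursor."""
    total = 0
    cursor = None
    while True:
        batch, cursor = fetch_page(cursor, batch_size)
        for h in batch:
            total += 1  # ...per-entity processing goes here...
        if cursor is None:
            return total

print(process_all())  # 85: every entity visited, in batches of 10
```

Because each batch is a fresh, short query, no single datastore RPC runs long enough to hit the timeout; the cursor carries the position between batches.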
[google-appengine] Re: 1.9.13 causing some requests to crash instances?
Looks like this has stopped as of around 1 PM PST today. App Engine team, if this was a 1.9.13 bug that got fixed, thank you!

On Monday, October 6, 2014 8:54:04 AM UTC-7, Ryan Barrett wrote:

Hi all! Starting around 3 PM PST yesterday, my instances have started crashing on a small fraction of requests, much more often and consistently than before (app id s~brid-gy). The log message is the usual "A problem was encountered with the process that handled this request, causing it to exit... (Error code 204)".

When this started, I hadn't deployed any new changes since 10/2, and I haven't found anything suspicious or new in my workload or usage pattern. My instances are on 1.9.13, which hasn't been announced yet, so I wonder if the 1.9.13 rollout got to my app yesterday and may have caused this. Has anyone seen a similar bump in instance crashes in their apps since yesterday?