[google-appengine] update record using ndb.expando in python
please help -- You received this message because you are subscribed to the Google Groups Google App Engine group. To unsubscribe from this group and stop receiving emails from it, send an email to google-appengine+unsubscr...@googlegroups.com. To post to this group, send email to google-appengine@googlegroups.com. Visit this group at http://groups.google.com/group/google-appengine. For more options, visit https://groups.google.com/groups/opt_out.
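The post gives no code, but the update pattern the subject line asks about can be sketched with a stand-in class. This is a hedged sketch: the stub and the helper name `update_record` are made up for illustration; on App Engine you would subclass `google.appengine.ext.ndb.Expando`, whose instances likewise accept arbitrary attribute assignments that become datastore properties, and whose `put()` actually writes the entity.

```python
# Sketch of updating a record with dynamic properties, Expando-style.
# ExpandoStub only imitates the assignment behaviour; the real class
# is google.appengine.ext.ndb.Expando (GAE-only, so not imported here).
class ExpandoStub(object):
    """Stand-in: any attribute you set becomes a 'property' of the record."""
    def put(self):
        # ndb's put() would write the entity to the Datastore;
        # the stub just returns the instance unchanged.
        return self

def update_record(entity, **props):
    # With a real ndb.Expando, assigning a brand-new attribute name
    # creates a new dynamic property on this entity only.
    for name, value in props.items():
        setattr(entity, name, value)
    entity.put()  # persist the change
    return entity

record = update_record(ExpandoStub(), species="cat", legs=4)
```

With a real Expando you would first fetch the entity (e.g. by key), set the new attributes the same way, and `put()` it back.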
Re: [google-appengine] how reduce memory usage on mapreduce controller callback
Hi Jason, Thanks for the detailed answer. I'm very surprised that no one else is talking about these issues. I'm using ndb and my appstats are off; I saw an incredible improvement in my app when I turned the stats off, so I recommend people only enable appstats for testing or debugging. As you mentioned, I'll try projection queries. Could you explain a bit more about the NDB in-memory cache? Thanks again. Saludos. Moisés Belchín.

2013/10/12 Jason Galea ja...@lecstor.com wrote:

Hi Moises, we're currently trying to deal with this issue too. Not in mapreduce, just regular handlers. Same here: Python 2.7 and F2 instances with 256MB. Early on I found that fetching 1000 entities and looping through them to update a property would blow the instance's memory. Reducing this to, say, 100 fixed the issue (then a new task is created to do the next 100).

"After handling this request": as far as I understand this isn't so bad, unless you're blowing the limit on every request and starting a new instance is detrimental. "While handling this request": this concerns us most, as the request does not complete and breaks things, and we see far too many of them. I've spent more than a little time trying to work out what is causing the blowouts, but as far as I've been able to tell, pinning down memory usage and what causes it is near impossible (or just very, very hard).

Are you using NDB? If so:
- you could try disabling the in-memory cache. As I see it, even though you only access one entity at a time, NDB's in-memory cache will store them all until the request is completed.
- you could try projection queries if you don't need the complete object (or possibly even if you do). Projection queries get their data from an index, and the entities returned cannot be put(), so I assume they are not cached at all.

We're trialling some fixes with these at the moment. ** If anyone knows any of this is incorrect, please let me know.
I'm actually surprised there is not more discussion of these issues given what we have experienced, so maybe we're doing something fundamentally wrong, but I don't believe so. Oh, is appstats turned on? I believe the most noticeable improvement we've seen was when we turned it off. Regards, Jason

On Fri, Oct 11, 2013 at 5:51 PM, Moises Belchin moisesbelc...@gmail.com wrote:

Hi Vinny, Thanks for the tips, but actually I'm not loading a file. I'm only using the mapreduce lib to read all the entities of one of my kinds, work with them (I only read some properties to compose the CSV line format), and then write to a CSV file on Cloud Storage using the mapreduce FileOutputWriter. Any idea why I'm getting these critical memory errors? Thanks all again. Saludos. Moisés Belchín.

2013/10/10 Vinny P vinny...@gmail.com wrote:

On Thu, Oct 10, 2013 at 5:47 AM, Moises Belchin moisesbelc...@gmail.com wrote: I'm getting a lot of memory limit critical errors in my app when I use the mapreduce library. I'm working with Python 2.7, App Engine 1.8.5 and F2 instances with 256MB. Has anyone got these critical errors?

Hello Moises, How large is your input file? Are you loading the entire file into memory at once? If so, try moving to a BlobstoreLineInputReader, which reads in one line at a time - it reduces the memory used during file processing.

-Vinny P
Technology Media Advisor
Chicago, IL
App Engine Code Samples: http://www.learntogoogleit.com

--
Jason Galea
lecstor.com
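Jason's fix of fetching roughly 100 entities per request and handing the remainder to a follow-up task comes down to a simple chunking step. A minimal, framework-free sketch of that batching (everything App-Engine-specific, such as enqueuing each batch with `taskqueue.add`, is left in the comments as an assumption about how it would be wired up):

```python
def chunks(seq, size=100):
    """Split a sequence of keys into fixed-size batches.

    On App Engine, each batch would be processed in its own task
    (e.g. enqueued with taskqueue.add) so that no single request
    ever holds 1000 entities in memory at once.
    """
    for start in range(0, len(seq), size):
        yield seq[start:start + size]

# 250 keys become three batches of 100, 100, and 50.
batches = list(chunks(list(range(250)), size=100))
```

The same idea works with a query cursor instead of a pre-fetched key list: process one batch, then pass the cursor to the next task.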
[google-appengine] Error : Key kind string must be a non-empty string up to 500bytes
help
[google-appengine] Re: Error : Key kind string must be a non-empty string up to 500bytes
Help with what? You need to provide a lot more information here if you want some help. On Monday, October 14, 2013 4:53:59 PM UTC+8, Vijay Kumbhani wrote: help
Re: [google-appengine] how reduce memory usage on mapreduce controller callback
Hi Moises, you can find all the details here: https://developers.google.com/appengine/docs/python/ndb/cache

"The In-Context Cache: The in-context cache persists only for the duration of a single incoming HTTP request and is visible only to the code that handles that request. It's fast; this cache lives in memory. When an NDB function writes to the Datastore, it also writes to the in-context cache. When an NDB function reads an entity, it checks the in-context cache first. If the entity is found there, no Datastore interaction takes place."

My take-away is that the in-context cache is handy when different parts of your code are calling get() on the same entities, and would certainly make things faster, but it comes with the trade-off that all entities you touch stay in memory until the request completes, even if you don't need them any more.

With queries you're going to be loading all entities regardless, so disabling the in-context cache alone likely won't help much. If you do a keys_only query, though, and get() each entity in turn, then disabling the in-context cache should reduce memory usage (assuming that some or all of the memory used by previous entities can be re-used). Once again, though, you'll likely be sacrificing speed for lower memory usage.

This is mostly based on how I believe the different parts would/should work; I have no hard evidence.

Jason
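The keys_only-then-get approach Jason describes can be sketched as a generic streaming helper. Everything below is plain Python; the comments note the real ndb calls this is assumed to map to (`Model.query().iter(keys_only=True)` plus `key.get(use_cache=False)`), rather than code from the thread:

```python
def stream_entities(keys, fetch_one):
    """Yield entities one at a time instead of materialising the whole list.

    With NDB this corresponds to iterating a keys_only query and calling
    key.get(use_cache=False) for each key, so each entity can be garbage
    collected once processed instead of being pinned in the in-context
    cache for the rest of the request.
    """
    for key in keys:
        yield fetch_one(key)

# Toy stand-in for the Datastore: a dict lookup plays the role of get().
fake_store = {1: "a", 2: "b", 3: "c"}
seen = [entity for entity in stream_entities([1, 2, 3], fake_store.get)]
```

The trade-off Jason mentions applies: one get() per key is slower than a single query that returns full entities, but peak memory stays near one entity rather than the whole result set.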
Re: [google-appengine] Re: Error : Key kind string must be a non-empty string up to 500bytes
There are two possible problems here:
1 - You are creating an entity with an empty kind or key name.
2 - Your entity key name exceeds 500 bytes.

See this: http://stackoverflow.com/questions/2557632/how-long-max-characters-can-a-datastore-entity-key-name-be-is-it-bad-to-haver

What are you trying to do, and how? If you want help you need to provide more info.

--
Alejandro González
alejandro.gonza...@intelygenz.com
http://www.intelygenz.com/
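The two failure modes Alejandro lists can be checked up front before building a key. A small sketch (the 500-byte limit matches the error message in the subject line; the helper name `check_key_string` is made up for illustration):

```python
def check_key_string(value, limit=500):
    """Reject the two cases behind 'Key kind string must be a non-empty
    string up to 500 bytes': empty strings and over-long names.
    """
    if not value:
        raise ValueError("kind/key name must be a non-empty string")
    # The limit is in bytes, not characters, so encode before measuring;
    # multi-byte UTF-8 characters count for more than one.
    if len(value.encode("utf-8")) > limit:
        raise ValueError("kind/key name exceeds %d bytes" % limit)
    return value

check_key_string("Person")   # fine
# check_key_string("")       # would raise ValueError
```

Running this on whatever string you pass as the kind or key name when constructing the key should reveal which of the two cases you are hitting.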
Re: [google-appengine] Re: Error : Key kind string must be a non-empty string up to 500bytes
Hi Vijay, Please provide full details of what kind of help you need; the information you've given is not sufficient to offer any solution.

--
Thanks,
Shilendra
Re: [google-appengine] how reduce memory usage on mapreduce controller callback
You guys should star https://code.google.com/p/googleappengine/issues/detail?id=9610

In the past, I've found a surprising amount of memory use when working with NDB, especially when using repeated properties. In my experiments, it was very easy to blow up instances with a seemingly small number of entities. I think it's wrapped up in protobuf deserialization, which is low enough down the stack that no one seems to have much appetite to touch it.

j
Re: [google-appengine] how reduce memory usage on mapreduce controller callback
I dug up some old research I had done, which found that entities that should be smaller than 10kB were lugging around almost 100kB in memory due to the in-memory protobuf representation.
Re: [google-appengine] ImagesService getServingUrl fails often
It still happens every so often. I just catch the exception and mark the particular file for retry later. With enough retries spaced out, I eventually get a permanent URL for all of them. Not the best solution, but the only one I found that worked for me.

On Fri, Oct 11, 2013 at 9:12 PM, Theodore Book theodoreb...@gmail.com wrote:

Are you still having this problem? I noticed today that *every* request that I make to get a serving URL is failing with an ImagesServiceFailureException. I'm not sure how long this has been going on - a couple of days at most, probably just hours. There was no change in my code; just the repeated failure. Is anyone else experiencing this?

--
Omnem crede diem tibi diluxisse supremum.
Re: [google-appengine] ImagesService getServingUrl fails often
Glad to see you found the problem. I think ImagesService could use some better error messages :)

On Mon, Oct 14, 2013 at 2:24 PM, Theodore Book theodoreb...@gmail.com wrote:

Thanks for the response! I realized that most of my problems were due to some empty URLs that I was passing to getServingUrl(). It turns out that I am not seeing consistent failure of the API.

--
Omnem crede diem tibi diluxisse supremum.
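The catch-and-retry workaround Wilson describes can be sketched as a generic helper. The exception type, delays, and the flaky callable below are placeholders (on App Engine the real pattern would wrap `images.get_serving_url` and catch `images.Error`, or `ImagesServiceFailureException` in Java):

```python
import time

def call_with_retries(fn, attempts=5, base_delay=0.5,
                      retryable=(RuntimeError,)):
    """Retry fn() with exponentially spaced delays, as in the
    'mark the file for retry later' approach described above."""
    for attempt in range(attempts):
        try:
            return fn()
        except retryable:
            if attempt == attempts - 1:
                raise  # out of retries; surface the failure
            time.sleep(base_delay * (2 ** attempt))

# Dummy callable that fails twice, then succeeds, to exercise the helper.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "https://example.com/serving-url"

url = call_with_retries(flaky, base_delay=0.0)
```

For retries spaced over longer periods than a single request allows, the per-file "retry later" marker Wilson mentions would live in the datastore and be picked up by a later task rather than a sleep.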
Re: [google-appengine] Error : Key kind string must be a non-empty string up to 500bytes
On Mon, Oct 14, 2013 at 5:19 AM, Alejandro González Rodrigo alejandro.gonza...@intelygenz.com wrote: You can have 2 problems here: 1 - You are creating an entity with an empty key 2 - Your entity key exceeds 500 bytes

+1

On Mon, Oct 14, 2013 at 3:53 AM, Vijay Kumbhani vnkumbh...@gmail.com wrote: help

If you're having difficulty expressing what the problem is, try posting screenshots of the error and your current source code.

-Vinny P
Technology Media Advisor
Chicago, IL
App Engine Code Samples: http://www.learntogoogleit.com
Re: [google-appengine] BlobInfo object from a BlobKey created using blobstore.create_gs_key
On Mon, Oct 14, 2013 at 12:40 AM, PK p...@gae123.com wrote: I posted this question at StackOverflow a few days ago but I have not received any response. I would appreciate any suggestions: http://stackoverflow.com/questions/19287709/blobinfo-object-from-a-blobkey-created-using-blobstore-create-gs-key

Make sure the permissions are set appropriately on the Cloud Storage bucket. Your application's gserviceaccount address (*applicationname*@appspot.gserviceaccount.com) should be listed as having write permissions on the bucket. Here's a pic of how the permissions page should look: http://imgur.com/bOXcYpP

If that doesn't work, you might want to verify that Cloud Storage is configured, the billing is set up and paid up, etc.

-Vinny P
Technology Media Advisor
Chicago, IL
App Engine Code Samples: http://www.learntogoogleit.com
Re: [google-appengine] Some tasks are never executed and dynamic backend gets stuck
On Sun, Oct 13, 2013 at 11:48 AM, Luis Pereira l.pereira.fernan...@gmail.com wrote: Haven't had the chance to download the log files yet. But in the console, there is no sign at all of these tasks in the backend log files. Only when we shut down the backend do we see the errors mentioned.

Backends flush log data on a periodic basis; see https://developers.google.com/appengine/docs/java/backends/#Java_Periodic_logging for documentation. If you're not seeing any logs, try forcibly flushing them by calling ApiProxy's flushLogs() method: https://developers.google.com/appengine/docs/java/javadoc/com/google/apphosting/api/ApiProxy

What's odd is that there should still be logs of the request, even if the application itself is not printing any log data. Hopefully the missing logs will be recorded in the downloadable logs service.

On Sun, Oct 13, 2013 at 11:48 AM, Luis Pereira l.pereira.fernan...@gmail.com wrote: We are moving to HRD in the coming weeks. Hope the issue disappears after the migration.

The HRD migration is the best chance to fix this problem - M/S has unusual issues, and it wouldn't surprise me at all if that was the problem.

-Vinny P
Technology Media Advisor
Chicago, IL
App Engine Code Samples: http://www.learntogoogleit.com
Re: [google-appengine] how reduce memory usage on mapreduce controller callback
thanks Jason, yeh I starred that one a while back. On Mon, Oct 14, 2013 at 11:58 PM, Jason Collins jason.a.coll...@gmail.comwrote: You guys should star https://code.google.com/p/googleappengine/issues/detail?id=9610 In the past, I've found surprising amount of memory use when working with NDB, especially when using repeated properties. In my experiments, it was very easy to blow up instances with a seemingly small number of entities. I think it's wrapped up in protobuf deserialization which is low enough down the stack that no one seems to have much appetite to touch. j On Monday, 14 October 2013 04:14:05 UTC-6, Jason Galea wrote: Hi Moises, you can find all the details here.. https://developers.** google.com/appengine/docs/**python/ndb/cachehttps://developers.google.com/appengine/docs/python/ndb/cache The In-Context Cache The in-context cache persists only for the duration of a single incoming HTTP request and is visible only to the code that handles that request. It's fast; this cache lives in memory. When an NDB function writes to the Datastore, it also writes to the in-context cache. When an NDB function reads an entity, it checks the in-context cache first. If the entity is found there, no Datastore interaction takes place. My take-away is that the in-context cache is handy when different parts of your code are calling get() on the same entities and would certainly make things faster, but comes with the trade-off that all entities you touch are staying in memory until the request completes, even if you don't need them any more. With queries you're going to be loading all entities regardless so disabling the in-context cache alone likely won't help much. If you do a keys_only query, though, and get() each entity in turn, then disabling the in-context cache should reduce memory usage (assuming that some/all of the memory used by previous entities is able to be re-used..). Once again, though, you'll likely be sacrificing speed for less memory usage. 
This is mostly based on how I believe the different parts would/should work; I have no hard evidence.. Jason On Mon, Oct 14, 2013 at 5:36 PM, Moises Belchin moises...@gmail.com wrote: Hi Jason, Thanks for the detailed answer. I'm very surprised that no one else is talking about these issues. I'm using ndb and my appstats are off. I saw an incredible improvement in my app when I turned off the stats, so I recommend people only use stats for testing or debugging. As you mentioned, I'll try projection queries. Could you explain something more about NDB's in-context cache? Thanks again. Saludos. Moisés Belchín. 2013/10/12 Jason Galea ja...@lecstor.com Hi Moises, we're currently trying to deal with this issue too. Not in mapreduce, just regular handlers. same here - Python 2.7 and F2 instances with 256MB. Early on I found that fetching 1000 entities and looping through them to update a property would blow the instance. Reducing this to say 100 fixed the issue (then we create another task to do the next 100). After handling this request - as far as I understand this isn't so bad unless you're blowing it on every request and starting a new instance is detrimental. While handling this request - this concerns us most as the request does not complete and breaks stuff.. and we see far too many of them. I've spent more than a little time trying to work out what is causing the blowouts, but as far as I can tell, pinning down memory usage and what causes it is near impossible (or just very, very hard). Are you using NDB? if so.. - you could try disabling the in-context cache. As I see it, even though you only access one entity at a time, NDB's in-context cache will store them all until the request is completed. - you could try projection queries if you don't need the complete object (or possibly even if you do). Projection queries get their data from an index and the entities returned cannot be put(), so I assume they are not cached at all.
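The batching pattern described above (process ~100 entities per task, then chain a follow-up task for the next slice) boils down to fixed-size chunking. A framework-free sketch of just the chunking logic; `process_batch` is an illustrative placeholder, not an App Engine API:

```python
def chunked(items, batch_size=100):
    """Yield successive fixed-size slices of items."""
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]

def process_batch(batch):
    # Placeholder for per-entity work (e.g. updating a property).
    return [item.upper() for item in batch]

# On App Engine you would enqueue a task per batch instead of looping
# inline, so each request only ever holds ~batch_size entities in memory.
results = []
for batch in chunked(["a", "b", "c", "d", "e"], batch_size=2):
    results.extend(process_batch(batch))
```

In a real handler each loop iteration would instead enqueue a task (carrying a query cursor or offset) so no single request loads the whole dataset.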
We're trialling some fixes with these atm. If anyone knows any of this is incorrect, please let me know.. I'm actually surprised there is not more discussion of these issues; from what we have experienced, maybe we're doing something fundamentally wrong, but I don't believe so. Oh, is appstats turned on? I believe the most noticeable improvement we've seen was when we turned it off.. regards, Jason On Fri, Oct 11, 2013 at 5:51 PM, Moises Belchin moises...@gmail.com wrote: Hi Vinny, Thanks for the tips, but actually I'm not loading a file. I'm only using the mapreduce lib to read all the entities for one of my kinds, work with them (I only read some properties to compose the CSV line format) and then write to a CSV file on Cloud Storage using mapreduce FileOutputWriter. Any idea why I'm getting these critical memory errors? Thanks all again. Saludos. Moisés Belchín. 2013/10/10 Vinny P
Re: [google-appengine] update record using ndb.expando in python
On Mon, Oct 14, 2013 at 2:17 AM, Vijay Kumbhani vnkumbh...@gmail.com wrote: update record using ndb.expando in python See here for a code example: https://developers.google.com/appengine/docs/python/ndb/entities#expando In short, query for the entity, retrieve it, update the property you need to, and then put the entity back in with the same key name. - -Vinny P Technology Media Advisor Chicago, IL App Engine Code Samples: http://www.learntogoogleit.com -- You received this message because you are subscribed to the Google Groups Google App Engine group. To unsubscribe from this group and stop receiving emails from it, send an email to google-appengine+unsubscr...@googlegroups.com. To post to this group, send email to google-appengine@googlegroups.com. Visit this group at http://groups.google.com/group/google-appengine. For more options, visit https://groups.google.com/groups/opt_out.
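Vinny's steps (query for the entity, update the property, put it back with the same key) can be sketched with ndb.Expando roughly like this. The `Person` kind, key name, and property are made up for illustration, and this only runs inside the App Engine Python runtime:

```python
from google.appengine.ext import ndb

class Person(ndb.Expando):  # hypothetical Expando kind for illustration
    pass

# Fetch the entity by its key, set/overwrite a dynamic property, and
# put() it back; reusing the existing key updates the record in place.
person = ndb.Key(Person, 'some-key-name').get()
if person is not None:
    person.nickname = 'updated value'  # Expando allows undeclared properties
    person.put()
```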
Re: [google-appengine] BlobInfo object from a BlobKey created using blobstore.create_gs_key
Thanks Vinny, the permissions on the bucket look good. What about the permissions on the files created by the app? How are these supposed to look? Are they inherited from the bucket, or should they be explicitly specified? Thanks, PK http://www.gae123.com On October 14, 2013 at 4:54:29 PM, Vinny P (vinny...@gmail.com) wrote: On Mon, Oct 14, 2013 at 12:40 AM, PK p...@gae123.com wrote: I posted this question at StackOverflow a few days ago but I have not received any response. I would appreciate any suggestions: http://stackoverflow.com/questions/19287709/blobinfo-object-from-a-blobkey-created-using-blobstore-create-gs-key Make sure the permissions are set appropriately on the Cloud Storage bucket. Your application's gserviceaccount address (applicationname@appspot.gserviceaccount.com) should be listed as having write permissions on the bucket. Here's a pic of how the permissions page should look: http://imgur.com/bOXcYpP If that doesn't work, you might want to verify that Cloud Storage is configured, the billing is set up and paid up, etc. -Vinny P Technology Media Advisor Chicago, IL App Engine Code Samples: http://www.learntogoogleit.com
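For reference, the blob key in the thread title is built from a Cloud Storage path roughly like this; the bucket and object names are placeholders, and whether `BlobInfo.get()` returns metadata for such GCS-backed keys is exactly what the linked StackOverflow question is asking:

```python
from google.appengine.ext import blobstore

# Build a blobstore-compatible key for a Cloud Storage object.
# The '/gs/' prefix is required by create_gs_key.
gs_key = blobstore.create_gs_key('/gs/my-bucket/my-object')
blob_info = blobstore.BlobInfo.get(blobstore.BlobKey(gs_key))
```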
Re: [google-appengine] Re: Not Equal Operator in Filter with datastore.Query in GAE with Python
FYI, I do not recommend you use that solution as it registers the kind in the kind_map for all requests served by a given instance. Instead there should be an option to decode undeclared kinds using an Expando. This feature actually does exist, though I noticed it was broken when using memcache. I recommend you file a feature request to support this use case so we can track a good solution. On Fri, Oct 11, 2013 at 3:26 AM, Mitul Golakiya mtl.golak...@gmail.com wrote: Thanks Alfred, It worked... Thank you so much for your help... On Friday, October 11, 2013 1:22:22 PM UTC+5:30, Alfred Fuller wrote: That's because you are using 'db' instead of 'ndb' On Fri, Oct 11, 2013 at 12:44 AM, Mitul Golakiya mtl.go...@gmail.com wrote: I have tried this code, but still it is creating the entity with the class name (*DynamicEntity*), not my custom entity name (*MyCustomEntity*): class DynamicEntity(db.Expando): @classmethod def _get_kind(cls): return 'MyCustomEntity' newEntity = DynamicEntity() newEntity.fname = 'Test' newEntity.put() On Friday, October 11, 2013 11:39:43 AM UTC+5:30, Alfred Fuller wrote: FYI, the following function should be sufficient to register a dynamic kind in the kind_map: def addExpandoForKind(kind): class Dummy(db.Expando): @classmethod def _get_kind(cls): return kind though it is not a good solution :-) (there are all sorts of memory, thread and request related issues) On Thu, Oct 10, 2013 at 10:54 PM, timh zute...@gmail.com wrote: You could use metaclasses to create dynamic models, however you would always have to create a matching class (which would register the class in the kind map) before you could perform any query. I think you should elaborate more on what you are trying to do; we may be able to suggest an approach which doesn't require you to go this far. Regards Tim On Wednesday, October 9, 2013 9:32:03 PM UTC+8, Mitul Golakiya wrote: Hello All, We are developing one app on GAE with python.
We are using datastore.py for querying data from the datastore, because we have to define our entities at runtime. So we cannot use db.Model to define models and retrieve data by models. We have to use the Not Equal operator to retrieve some data from the datastore. Suppose I want to retrieve all Persons whose name is not equal to Test. Here is my code: filterObj = {} filterObj['name !='] = 'Test' dbObj = datastore.Query('Person', filterObj) dbObj.Run() recordsList = dbObj.Get() With this code, we are getting an empty result. Any idea about what's wrong?? Thanks in Advance...
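Alfred's fix, using ndb rather than the low-level datastore/db API, looks roughly like this for the not-equal query above. The `Person` model is illustrative and this only runs inside the App Engine Python runtime; note that the Datastore implements != by combining two inequality filters under the hood:

```python
from google.appengine.ext import ndb

class Person(ndb.Model):  # hypothetical model for illustration
    name = ndb.StringProperty()

# ndb overloads != on properties; this is effectively
# name < 'Test' OR name > 'Test', merged into one result set.
records = Person.query(Person.name != 'Test').fetch()
```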
Re: [google-appengine] Re: Not Equal Operator in Filter with datastore.Query in GAE with Python
I will submit a feature request for this On Tuesday, October 15, 2013 7:35:27 AM UTC+5:30, Alfred Fuller wrote: FYI, I do not recommend you use that solution as it registers kind in the kind_map for all requests used by a given instance. Instead there should be an option to decode undeclared kinds using an Expando. This feature actually does exist, though I noticed it was broken when using memcache. I recommend you file a feature request to support this use case so we can track a good solution.