[google-appengine] Calculation of frontend hours seems off by almost factor 2.
There seems to be an issue with your billing in general. Have a look at your datastore costs - they're way off from what they should be (what you are charged for != daily storage costs). Marcel -- You received this message because you are subscribed to the Google Groups Google App Engine group. To view this discussion on the web visit https://groups.google.com/d/msg/google-appengine/-/-S53MUFQAF0J. To post to this group, send email to google-appengine@googlegroups.com. To unsubscribe from this group, send email to google-appengine+unsubscr...@googlegroups.com. For more options, visit this group at http://groups.google.com/group/google-appengine?hl=en.
[google-appengine] Calculation of frontend hours seems off by almost factor 2.
Discard my post, I overlooked your columns.
[google-appengine] Re: Backends: always-on vs. dynamic when it comes to scaling up/down instances
You could use named versions instead, unless you really need backend features like 10-minute requests.
[google-appengine] Calculation of frontend hours seems off by almost factor 2.
Not directly answering the question, but posing another: why the eff are you running instances in such a manner for such little QPS? Also, I can suggest that at that low a QPS, you could stand to substantially optimize the work you're doing reading data from the datastore, which will also help minimize the frontend instance workload.
[google-appengine] The Schedule Format of cron.xml
Hello All, Is it possible to execute my job as follows?

  <?xml version="1.0" encoding="UTF-8"?>
  <cronentries>
    <cron>
      <url>/mytask</url>
      <description>Execute my task, every 30 minutes of first and 15th of month</description>
      <schedule>1,15 of month every 2 hours</schedule>
      <timezone>America/New_York</timezone>
    </cron>
  </cronentries>

This syntax is wrong and fails to deploy to GAE. I tested another solution: execute mytask every 30 minutes, and execute the real task only when the date is the 1st or the 15th. I checked the dashboard, and the Frontend Instance Hours increase dramatically. Any thoughts? Larry
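As far as I know, the cron schedule grammar has no day-of-month list form, so the "fire often, guard by date" workaround Larry describes is the usual approach. The key to keeping Frontend Instance Hours down is to do the date check first and return immediately on the other days (and to fire every 2 hours rather than every 30 minutes if that granularity is enough). A minimal sketch of the guard; the function name and the fixed-offset timezone are illustrative assumptions, not App Engine APIs:

```python
from datetime import datetime, timezone, timedelta

def should_run_task(now=None):
    """Return True only on the 1st or 15th of the month.

    The cron entry fires the handler on every tick; this guard makes
    the expensive work run only on the desired days, so the request
    returns almost immediately the rest of the time and burns very
    few frontend instance hours.
    """
    # America/New_York is UTC-4/-5; a fixed offset is used here for
    # illustration only -- a real app should use a proper tz library.
    eastern = timezone(timedelta(hours=-5))
    now = now or datetime.now(tz=eastern)
    return now.day in (1, 15)
```

The cron entry itself would then just be a plain `<schedule>every 2 hours</schedule>`, with the day-of-month logic living in the handler.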
[google-appengine] lost access to appengine projects
I am currently unable to access my Appengine projects. When I go here: https://appengine.google.com/a/pennswoods.net I receive this message: The application Admin Console is requesting permission to access your Google Account. Please select an account that you would like to use. No accounts are listed. Clicking Continue provides me with that error message again. Clicking No thanks takes me to: http://www.google.com/ Any options for recovering my projects?
[google-appengine] Re: Calculation of frontend hours seems off by almost factor 2.
Hi Gregory, if I'm doing something wrong here then I'd love to learn how you do it. I tried setting the max idle instances to 1 (didn't help, except increase latency), I set the pending queue to 1s and 2s and 3s (didn't help, but increased the latency). If there is a way to limit the instances reliably, I would love to hear how you do it. We have maybe 0.5 to 1 requests per second on average (the QPS up there in the screenshot is only a snapshot for the past minute). Our average response time is around 300ms. Occasionally more, occasionally less. It's not great, but it's still a lot lower than the 1s limit for continuous usage. We do use the memcache where we can, but it's limited to some 10MB to 30MB for us at the moment, and the hit ratio is always around 65%, so we do have to access the datastore more than we want to. We're also denormalising plenty of data for faster access, but it would substantially slow us down if we had to denormalise *everything* that is displayed on a page. I don't think it should be a problem to do 2 to 3 queries in a single request, maybe 5 datastore gets plus a few memcache gets, should it? Given that we're developing a complex application (www.small-improvements.com) and not just a number crunching app, this seems not too bad. And performance-wise we're happy, it's just the cost that's prohibitive. Anyway, I would love to hear how you do it, and maybe a screenshot of your system (and the ratio of requests to instances) would be interesting. 
Cheers, Per On Tuesday, May 15, 2012 11:01:26 AM UTC+2, Gregory Nicholas wrote: Not directly answering the question, I'm posing another, why the eff are you running instances in such a manner for such little qps Also, I can also suggest that at that low of qps, you could stand to substantially optimize the work you're doing reading data from the data store, which will also help minimize the frontend instance workload
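The memcache-then-datastore access Per describes (65% hit ratio, datastore on miss) is the classic cache-aside read pattern. A minimal sketch, with a plain dict standing in for the memcache client and a placeholder datastore fetch; both names are hypothetical, not App Engine APIs:

```python
# Cache-aside read sketch. `cache` stands in for a memcache client
# (a dict here so the sketch is self-contained); fetch_from_datastore
# is a placeholder for the real datastore get/query.

cache = {}

def fetch_from_datastore(key):
    # Placeholder: pretend every entity is "value-for-<key>".
    return "value-for-" + key

def cached_get(key):
    """Check the cache first; on a miss, read from the datastore
    and populate the cache so the next read is cheap."""
    value = cache.get(key)
    if value is None:
        value = fetch_from_datastore(key)
        cache[key] = value
    return value
```

The design trade-off in the thread applies here: a higher hit ratio shifts load from datastore reads (billed per operation) to memcache, but cache capacity and eviction put a ceiling on how far that can go.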
[google-appengine] Re: Upload data to development server
Could you give me a hint on how to download all of my datastore data and load it into my localhost? I want a better test environment. I am downloading like this: appcfg.py download_data --application=s~crime-syndicate --url=remote api --filename=test But uploading like this: appcfg.py upload_data --application=crime-syndicate --filename=test --url=http://localhost:/cp_remote_api --email=email --passin fails with error: [Errno 54] Connection reset by peer. Am I using the correct data types? The file 'test' is in some kind of sqlite3 format. On Friday, August 5, 2011 12:25:25 AM UTC+2, Roch Delsalle wrote: Lol, wow I just did the same and it solved my problem.
[google-appengine] appengine admin
I successfully posted an application and it is running on appspot.com. Now when I try to submit another: I open appengine.google.com, I am redirected to the start page, and my only options are to read some info or 'Create Application'. On selecting Create Application I get prompted for a mobile number and fail with a message that the number was already used. So can anyone tell me: if you have 1 application running, must you stop or delete it before you can run another? I don't seem to have an Admin Console to administer the application, so I have no idea how to delete or stop it. Any ideas?
Re: [google-appengine] Scheduler/billing changes in 1.6.5? My daily cost went from $5 to $35.
Hi Per, Thanks for sending the details. The first thing that comes to mind is that the culprit might be instances on the version 'ah-builtin-python-bundle'. This version is currently dedicated to the 'Datastore Admin' feature. I think you are running some task like backup/copy/deletion in 'Datastore Admin', right? I hope it could explain the situation you are experiencing right now. Please let me know if that isn't the case. Thanks, -- Takashi On Tue, May 15, 2012 at 7:06 AM, Per per.fragem...@gmail.com wrote: https://lh3.googleusercontent.com/-qa-cjA-cUbM/T7GBSShwGrI/ADY/OYljQYgcczc/s1600/changed-2-weeks-ago.jpg Hi Takashi, I waited with my response because I didn't want to jump to conclusions. Yes, I have been experimenting with some settings every now and then, limiting instances, playing with the pending sliders etc. But all it did was change how quickly instances were created or collected (while always increasing latency). It didn't change how the billed instances were calculated. Above is the chart of my application's instances for the past month. (The editor didn't allow me to place it at the right location...) It's rather easy to spot when things changed. We used to get billed for roughly one instance on average. Yes, we did have more than one running, but it was (and is!) always one instance that handles 95% to 99% of the load. So it seemed only just that Google would only charge for the main instance. I never asked for an instance that just sits there, but I didn't mind it while it was free :) So now being charged for two or three instances, when only one is really doing anything, seems like a major change that should be documented. Okay, maybe it's just our application, but our pricing has increased steeply. I just posted another message about the frontend hour calculation. Both issues combined seem to have led to a price increase from $5 to $10 before, to now $30 to $50.
Our requests per second have increased moderately, we may have made some requests slower, and we might have slightly different usage patterns. But I cannot see a reason for a price increase this steep. Any help or insight would be appreciated. Our app ID is small-improvements-hrd Kind regards, Per On Friday, April 27, 2012 7:02:26 AM UTC+2, Takashi Matsuo (Google) wrote: On Fri, Apr 27, 2012 at 10:16 AM, Per per.fragem...@gmail.com wrote: Hi team, previously, the scheduler used to spin up 3 instances for our application, only used 1 out of the 3, but at least we didn't have to pay for the unused instances. There was always a big gap between total instances and billed instances. Wasn't it because you set Max Idle Instances at that time? If you set Max Idle Instances to Automatic, the total instances and billed instances should be the same. However, as of yesterday, the scheduler continues to spin up 3 instances on average, but we have to pay for all of them. We're at maybe 1 request per second on average, and it's all handled just fine by one instance. We're on F4, so having to pay for 2 mostly unused instances hurts. Since this seems to coincide with the 1.6.5 release, I'm tempted to think that something changed in the background, and it would be great to learn more about the new suggested way. When we tried to limit idle instances last time, all we got was instance churn and bad latency. So we're on automatic/automatic these days, and I'd prefer not to have to experiment again. Some advice would be great. Again, I'm wondering when you tried to limit the idle instances. Generally speaking, you cannot get everything. If you want better performance, you'll need to pay more, and vice versa. However, if you're really certain that the behavior of our scheduler has significantly changed, please let me know your app-id and a detailed explanation (it can be off-list), so I can look into it further. That aside, 1.6.5 is great of course! :) Thanks!
-- Takashi

Cheers, Per

-- Takashi Matsuo | Developer Advocate | tmat...@google.com
Re: [google-appengine] Re: full text search API is experimental, ready for a test drive
Hi All, Let me provide a bit more background on our thinking for pricing for the Search API. We use the experimental phase for many APIs to test the API's stability, get developer feedback and to fine-tune our pricing. It's important to us that we only publish pricing when we are confident that it won't change drastically in a short time frame, so that our developers will have a better sense of what to expect going forward. That being said, here's the general idea of what we're thinking pricing-wise for the Search API. We are planning to charge for the Search API based on three metrics: indexing, searches and storage. Indexing will likely be charged per GB indexed (or re-indexed) and searches per query. This will allow our developers to pay for what they use while still being able to predict their usage (we expect that it's easier to estimate the number of searches your app will make per day than how many front-end instances you might need to run a search stack). As noted in our docs (https://developers.google.com/appengine/docs/python/search/overview#Quotas), we expect our free quotas to cover about 1000 searches per day on a 250MB index once we graduate from experimental -- so that should give you an indication of whether you will be able to keep your usage under the free quota. Thanks, Christina On Sat, May 12, 2012 at 11:48 PM, Jeff Schnitzer j...@infohazard.org wrote: On Fri, May 11, 2012 at 4:17 PM, Adam Sah adam@gmail.com wrote: - this is pre-release, and if you personally can't bear that risk, then let others play beta-tester for you. - if Google waits for pricing before release then that delays the release, which nobody wants. - pricing can depend on usage patterns, which they only know once they see usage. It isn't black and white. A little more transparency might entice some of the more experienced users to try out the feature and provide feedback.
It doesn't require firm pricing, but it does require some amount of trust that the feature will be priced attainably. Unfortunately, there are several recent examples of Google announcing prices that effectively took features offline: Backends, XMPP, and Maps. I don't have high hopes that FTS will avoid the same fate. Jeff
Re: [google-appengine] Re: full text search API is experimental, ready for a test drive
I guess what I'm most interested in is knowing how it will compare, price-wise and performance-wise, to using the datastore for FTS. Obviously the datastore doesn't provide all the bells and whistles, but it's pretty easy to store all the fragments of a set of words in an indexed list property. For example, 'foo' would store 'f', 'fo', 'foo', and now I can do as-you-type queries. Would this be cheaper (both for storage and for queries) with FTS? I could imagine using this to index geocell queries too. Would this be more or less expensive than a datastore-based solution? Thanks, Jeff
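Jeff's prefix-fragment scheme ('foo' stores 'f', 'fo', 'foo' in an indexed list property, so an as-you-type query becomes a plain equality filter) can be sketched as a small helper. The function name is illustrative; the storage and index-write cost grows with the number of fragments, which is the trade-off being weighed against the Search API's pricing:

```python
def prefix_fragments(text, min_len=1):
    """Generate every prefix of every word in `text`,
    e.g. 'foo' -> ['f', 'fo', 'foo'].

    Stored in an indexed list property, these fragments let an
    as-you-type search run as a simple equality filter, at the
    cost of roughly len(word) index entries per word.
    """
    fragments = set()
    for word in text.lower().split():
        for i in range(min_len, len(word) + 1):
            fragments.add(word[:i])
    return sorted(fragments)
```

A query for the partial input "fo" would then be an equality filter on the list property, e.g. `fragments == "fo"` in datastore terms.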
[google-appengine] 1.6.6 Pre-release SDKs available
Hi, The pre-release SDKs for 1.6.6 are available. You can download them here:

Python: http://code.google.com/p/googleappengine/downloads/detail?name=google_appengine_1.6.6_prerelease.zip
Java: http://code.google.com/p/googleappengine/downloads/detail?name=appengine-java-sdk-1.6.6_prerelease.zip

Java Version 1.6.6
===
- On May 8, 2012 we released an experimental Search API. http://googleappengine.blogspot.com/2012/05/looking-for-search-find-it-on-google.html
- App creation for apps using the Master/Slave datastore is now restricted to only those users who already own a Master/Slave app.
- Apps with billing enabled are now able to configure up to 100 cron jobs.
- Admin Console can no longer be included in an iframe. To prevent clickjacking attacks on the Admin Console, we are now setting X-Frame-Options: SAMEORIGIN. To read more about clickjacking, please read: https://www.owasp.org/index.php/Clickjacking.
- The datastore now supports embedding entities as properties of other entities.
- The Admin Console will now periodically prompt administrators to take an optional App Engine satisfaction survey.
- We have released the full MapReduce framework as experimental.
- Appstats now contains information about the cost of the RPCs made during the request.
- The Search API has deprecated the AddDocumentResponse class. This may require recompilation of your application.
- Fixed an issue where large datastore backups were unable to be deleted.
- Fixed an issue where datastore backups fail due to an ASCII decoding issue.
- Fixed an issue where running a projection query on a multi-valued property with an equality filter did not return any results.
- Fixed an issue where XG transactions did not work with the Remote API. http://code.google.com/p/googleappengine/issues/detail?id=7238

Python Version 1.6.6
=
- On May 8, 2012 we released an experimental Search API. http://googleappengine.blogspot.com/2012/05/looking-for-search-find-it-on-google.html
- App creation for apps using the Master/Slave datastore is now restricted to only those users who already own a Master/Slave app.
- Apps with billing enabled are now able to configure up to 100 cron jobs.
- Admin Console can no longer be included in an iframe. To prevent clickjacking attacks on the Admin Console, we are now setting X-Frame-Options: SAMEORIGIN. To read more about clickjacking, please read: https://www.owasp.org/index.php/Clickjacking.
- The Admin Console will now periodically prompt administrators to take an optional App Engine satisfaction survey.
- You can now use the third party PyAMF library with Python 2.7. This is available as an experimental feature.
- For NDB, Rollback has been added to the default list of flow exceptions. http://code.google.com/p/appengine-ndb-experiment/issues/detail?id=179
- The Search API has deprecated the order_id attribute on the Document class. It has been replaced with rank.
- Fixed an issue where large datastore backups were unable to be deleted.
- Fixed an issue where datastore backups fail due to an ASCII decoding issue.
- Fixed an issue where the SDK did not import subpackages correctly when using import hooks.
- Fixed an issue where unicode is not consistently handled in the Python Search API.
- Fixed an issue where running a projection query on a multi-valued property with an equality filter did not return any results.
- Fixed an issue where unicode environment variables were dropped in Appstats when using Python 2.7. http://code.google.com/p/googleappengine/issues/detail?id=6448
- Fixed an issue where XG transactions did not work with the Remote API. http://code.google.com/p/googleappengine/issues/detail?id=7238

-Marzia
[google-appengine] Regarding Cursor Problem in SEARCH API
Hi, I am using the Search API to perform text search in my application developed using Google App Engine (Platform: Java). I am facing problems with the cursor. Initially I don't have the cursor string, so by using the following code I am able to get the first set of records that I need along with the cursor string.

Code:

  query = Query.newBuilder()
      .setOptions( QueryOptions.newBuilder()
          .setLimit( limit )
          .setCursor( Cursor.newBuilder().build() )
          .build() )
      .build( queryString );
  searchResults = getIndex( accountPin ).search( query );
  nextCursor = searchResults.getCursor().toWebSafeString();

Result cursor (nextCursor):

  false:CqADCuUBCswB9wUnngv/c35zdGFnaW4tY21zAP//AIuSX19mdHNfXwD//wCiYXBwZW5naW5lAP//AIyLkmluZGV4AP//AKJNUDUwVlYA//8AjIuSZG9jX2lkAP//AKI4UUk3NDQA//8AjIA4UUk3NDQA//8AAP8B//6MgYyLnpiWkdKckoz/AHRtoKCZi4ygoP8AXZ6Pj5qRmJaRmv8Ac3RtlpGbmof/AF2yr8rPqan/AHN0bZuQnKCWm/8AXceutsjLy/8Ac3/HrrbIy8v/AP/+EAohwwIDw+6dfww5APRh2PpIARINRG9jdW1lbnRJbmRleBqfAShBTkQgKElTICJjdXN0b21lcl9uYW1lIiAiYXBwZW5naW5lIikgKElTICJncm91cF9uYW1lIiAic35zdGFnaW4tY21zIikgKElTICJuYW1lc3BhY2UiICIiKSAoSVMgImluZGV4X25hbWUiICJNUDUwVlYiKSAoT1IgKFFUICJ0ZXN0IikgKElTICJfX2dhdG9tX18iICJ0ZXN0IikpKUoFCABA6Ac=

Is the result cursor shown above a websafe string? I don't think so. If it were a websafe string, I should be able to put it as a parameter in a URL and get the subsequent records. For the subsequent requests, I use the following query.

Code:

  query = Query.newBuilder()
      .setOptions( QueryOptions.newBuilder()
          .setLimit( limit )
          .setCursor( Cursor.newBuilder().build( nextCursor ) )
          .build() )
      .build( queryType + queryString );

But when I try with the cursor shown above, it gives me an exception.

Sample URL:

  http://example.appspot.com/?queryType=all&queryString=test&limit=10&cursor=false:CqADCuUBCswB9wUnngv/...

Exception:

  at com.google.appengine.api.urlfetch.URLFetchServiceImpl.convertApplicationException(URLFetchServiceImpl.java:115)
  at com.google.appengine.api.urlfetch.URLFetchServiceImpl.fetch(URLFetchServiceImpl.java:42)
  at com.google.apphosting.utils.security.urlfetch.URLFetchServiceStreamHandler$Connection.fetchResponse(URLFetchServiceStreamHandler.java:418)
  at com.google.apphosting.utils.security.urlfetch.URLFetchServiceStreamHandler$Connection.getInputStream(URLFetchServiceStreamHandler.java:297)

I have used URLEncoder and URLDecoder on the cursor string, but that does not work since decoding replaces the '+' sign with a space. Please suggest a solution to fix this problem.
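The usual fix for the "'+' turns into a space" problem is to percent-encode the cursor exactly once before placing it in the query string, so that '+', '/', and '=' are all escaped and the form-urlencoded decode on the server cannot misread '+' as a space. A minimal sketch in Python; the same idea applies to the Java URLEncoder/URLDecoder pair the poster is using, applied once on each side:

```python
from urllib.parse import quote, unquote

def encode_cursor_param(cursor):
    """Percent-encode a search cursor for use as a URL query
    parameter.  safe='' forces '+', '/', and '=' to be escaped,
    so a later decode cannot mistake '+' for a space."""
    return quote(cursor, safe="")

def decode_cursor_param(param):
    """Reverse of encode_cursor_param: plain percent-decoding,
    which leaves a literal '+' intact."""
    return unquote(param)
```

The key invariant is that the cursor survives a full round trip unchanged; double-encoding or decoding with form-urlencoded semantics ('+' means space) breaks it.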
[google-appengine] How to store images in a database
Hello, I am new to this and I would like some help on how to store images in a tinywebdatabase. Thank you in advance.
[google-appengine] Re: 1.6.6 Pre-release SDKs available
On Java it produces an error, at least on my side; removing the appstats filter resolves it:

    Error for /_ah/queue/__deferred__
    java.lang.NoSuchMethodError: com.google.appengine.tools.appstats.StatsProtos$IndividualRpcStatsProto.hasCallCostMicrodollars()Z
        at com.google.appengine.tools.appstats.MemcacheWriter.commit(MemcacheWriter.java:157)
        at com.google.appengine.tools.appstats.AppstatsFilter.doFilter(AppstatsFilter.java:151)
        at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)
        at com.google.apphosting.utils.servlet.ParseBlobUploadFilter.doFilter(ParseBlobUploadFilter.java:102)
        at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)
        at com.google.apphosting.runtime.jetty.SaveSessionFilter.doFilter(SaveSessionFilter.java:35)
        at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)
        at com.google.apphosting.utils.servlet.TransactionCleanupFilter.doFilter(TransactionCleanupFilter.java:43)
        at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)
        at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:388)
        at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
        at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
        at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:765)
        at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:418)
        at com.google.apphosting.runtime.jetty.AppVersionHandlerMap.handle(AppVersionHandlerMap.java:249)
        at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
        at org.mortbay.jetty.Server.handle(Server.java:326)
        at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
        at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:923)
        at com.google.apphosting.runtime.jetty.RpcRequestParser.parseAvailable(RpcRequestParser.java:76)
        at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
        at com.google.apphosting.runtime.jetty.JettyServletEngineAdapter.serviceRequest(JettyServletEngineAdapter.java:135)
        at com.google.apphosting.runtime.JavaRuntime$RequestRunnable.run(JavaRuntime.java:446)
        at com.google.tracing.TraceContext$TraceContextRunnable.runInContext(TraceContext.java:449)
        at com.google.tracing.TraceContext$TraceContextRunnable$1.run(TraceContext.java:455)
        at com.google.tracing.TraceContext.runInContext(TraceContext.java:695)
        at com.google.tracing.TraceContext$AbstractTraceContextCallback.runInInheritedContextNoUnref(TraceContext.java:333)
        at com.google.tracing.TraceContext$AbstractTraceContextCallback.runInInheritedContext(TraceContext.java:325)
        at com.google.tracing.TraceContext$TraceContextRunnable.run(TraceContext.java:453)
        at com.google.apphosting.runtime.ThreadGroupPool$PoolEntry.run(ThreadGroupPool.java:251)
        at java.lang.Thread.run(Thread.java:679)

On May 15, 9:09 pm, Marzia Niccolai marce+appeng...@google.com wrote: Hi, the pre-release SDKs for 1.6.6 are available. You can download them here:
Python: http://code.google.com/p/googleappengine/downloads/detail?name=google...
Java: http://code.google.com/p/googleappengine/downloads/detail?name=appeng...

Java Version 1.6.6
===
- On May 8, 2012 we released an experimental Search API. http://googleappengine.blogspot.com/2012/05/looking-for-search-find-i...
- App creation for apps using the Master/Slave datastore is now restricted to users who already own a Master/Slave app.
- Apps with billing enabled can now configure up to 100 cron jobs.
- The Admin Console can no longer be included in an iframe. To prevent clickjacking attacks on the Admin Console, we now set X-Frame-Options: SAMEORIGIN. To read more about clickjacking, see: https://www.owasp.org/index.php/Clickjacking.
- The datastore now supports embedding entities as properties of other entities.
- The Admin Console will now periodically prompt administrators to take an optional App Engine satisfaction survey.
- We have released the full MapReduce framework as experimental.
- Appstats now contains information about the cost of the RPCs made during the request.
- The Search API has deprecated the AddDocumentResponse class. This may require recompiling your application.
- Fixed an issue where large datastore backups could not be deleted.
- Fixed an issue where datastore backups failed due to an ASCII decoding issue.
- Fixed an issue where a projection query on a multi-valued property with an equality filter returned no results.
- Fixed an issue where XG transactions did not work with the Remote API.
[google-appengine] Re: Regarding Cursor Problem in SEARCH API
There is an open issue on this: http://code.google.com/p/googleappengine/issues/detail?id=7489&q=Component%3DFullTextSearch&sort=component&colspec=ID%20Type%20Component%20Status%20Stars%20Summary%20Language%20Priority%20Owner%20Log It has been acknowledged as a defect. Please star it too...

On May 15, 3:50 pm, Ananthakrishnan Venkatasubramanian ananthakrishnan.venkatasubraman...@a-cti.com wrote: Hi, I am using the Search API to perform text search in my application, developed on Google App Engine (platform: Java). I am facing problems with the cursor. Initially I won't have the cursor string, so with the following code I am able to get the first set of records that I need, along with the cursor string.

Code:

    query = Query.newBuilder()
        .setOptions(QueryOptions.newBuilder()
            .setLimit(limit)
            .setCursor(Cursor.newBuilder().build())
            .build())
        .build(queryString);
    searchResults = getIndex(accountPin).search(query);
    nextCursor = searchResults.getCursor().toWebSafeString();

Result cursor (nextCursor):

    false:CqADCuUBCswB9wUnngv/c35zdGFnaW4tY21zAP//AIuSX19mdHNfXwD//wCiYXBwZW5naW5lAP//AIyLkmluZGV4AP//AKJNUDUwVlYA//8AjIuSZG9jX2lkAP//AKI4UUk3NDQA//8AjIA4UUk3NDQA//8AAP8B//6MgYyLnpiWkdKckoz/AHRtoKCZi4ygoP8AXZ6Pj5qRmJaRmv8Ac3RtlpGbmof/AF2yr8rPqan/AHN0bZuQnKCWm/8AXceutsjLy/8Ac3/HrrbIy8v/AP/+EAohwwIDw+6dfww5APRh2PpIARINRG9jdW1lbnRJbmRleBqfAShBTkQgKElTICJjdXN0b21lcl9uYW1lIiAiYXBwZW5naW5lIikgKElTICJncm91cF9uYW1lIiAic35zdGFnaW4tY21zIikgKElTICJuYW1lc3BhY2UiICIiKSAoSVMgImluZGV4X25hbWUiICJNUDUwVlYiKSAoT1IgKFFUICJ0ZXN0IikgKElTICJfX2dhdG9tX18iICJ0ZXN0IikpKUoFCABA6Ac=

Is the result cursor shown above a websafe string? I don't think so. If it is a websafe string, I should be able to put it as a parameter in a URL and get the subsequent records. For the subsequent requests, I use the following query.

Code:

    query = Query.newBuilder()
        .setOptions(QueryOptions.newBuilder()
            .setLimit(limit)
            .setCursor(Cursor.newBuilder().build(nextCursor))
            .build())
        .build(queryType + queryString);

But when I try with the cursor shown above, it gives me an exception.

Sample URL:

    http://example.appspot.com/?queryType=all&queryString=test&limit=10&cursor=false:CqADCuUBCswB9wUnngv/c35zdGFnaW4tY21zAP//AIuSX19mdHNfXwD//wCiYXBwZW5naW5lAP//AIyLkmluZGV4AP//AKJNUDUwVlYA//8AjIuSZG9jX2lkAP//AKI4UUk3NDQA//8AjIA4UUk3NDQA//8AAP8B//6MgYyLnpiWkdKckoz/AHRtoKCZi4ygoP8AXZ6Pj5qRmJaRmv8Ac3RtlpGbmof/AF2yr8rPqan/AHN0bZuQnKCWm/8AXceutsjLy/8Ac3/HrrbIy8v/AP/ EAohwwIDw 6dfww5APRh2PpIARINRG9jdW1lbnRJbmRleBqfAShBTkQgKElTICJjdXN0b21lcl9uYW1lIiAiYXBwZW5naW5lIikgKElTICJncm91cF9uYW1lIiAic35zdGFnaW4tY21zIikgKElTICJuYW1lc3BhY2UiICIiKSAoSVMgImluZGV4X25hbWUiICJNUDUwVlYiKSAoT1IgKFFUICJ0ZXN0IikgKElTICJfX2dhdG9tX18iICJ0ZXN0IikpKUoFCABA6Ac=

Exception:

    at com.google.appengine.api.urlfetch.URLFetchServiceImpl.convertApplicationException(URLFetchServiceImpl.java:115)
    at com.google.appengine.api.urlfetch.URLFetchServiceImpl.fetch(URLFetchServiceImpl.java:42)
    at com.google.apphosting.utils.security.urlfetch.URLFetchServiceStreamHandler$Connection.fetchResponse(URLFetchServiceStreamHandler.java:418)
    at com.google.apphosting.utils.security.urlfetch.URLFetchServiceStreamHandler$Connection.getInputStream(URLFetchServiceStreamHandler.java:297)

I have used URLEncoder and URLDecoder for the cursor string, but that does not work, since decoding replaces the '+' sign with a space. Please suggest a solution to fix this problem.

-- You received this message because you are subscribed to the Google Groups Google App Engine group. To post to this group, send email to google-appengine@googlegroups.com. To unsubscribe from this group, send email to google-appengine+unsubscr...@googlegroups.com. For more options, visit this group at http://groups.google.com/group/google-appengine?hl=en.
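[Editor's note] The '+'-to-space corruption described above is standard form-decoding behavior, not a Search API quirk: any literal '+' in a query-string value is decoded as a space, so the cursor must be percent-encoded before it is embedded in a URL. A minimal Python 3 sketch of the fix (the cursor value below is a made-up stand-in, not a real Search API cursor):

```python
from urllib.parse import quote, unquote_plus

cursor = "false:CqAD+abc/def="  # hypothetical cursor containing '+' and '/'

# Percent-encode everything, including '+' and '/', before embedding in a URL.
encoded = quote(cursor, safe="")
url = "http://example.appspot.com/?queryType=all&cursor=" + encoded

# A form-style decoder turns a literal '+' into a space, but the
# percent-encoded %2B survives the round trip intact.
decoded = unquote_plus(encoded)
assert decoded == cursor

# Without encoding, the same decoder corrupts the value:
assert unquote_plus(cursor) == "false:CqAD abc/def="
```

In Java the equivalent is URLEncoder.encode on the cursor before building the URL; the server-side framework then decodes it back correctly.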
[google-appengine] Re: Login Authentication 404 Error on New App Engine Connected Android Deployment (C2DM)
Hi Joseph, if you're still experiencing this problem, please file a production ticket: http://code.google.com/p/googleappengine/issues/entry?template=Production%20issue Thank you. -iein

On Saturday, May 12, 2012 10:44:02 AM UTC-7, JLM wrote: I have a new out-of-the-box deployment of an App Engine Connected Android project. I have deployed the project successfully: http://appauthtest.appspot.com/ I am following the instructions on this page: https://developers.google.com/eclipse/docs/appeng_android_run_debug The problem comes when I try to trigger a security authentication:

1. Force sign-in to the GAE app by going to your_production_url/tasks/. This triggers the security constraint in web.xml to get logged in to GAE. (After logging in, you'll get an error page, but you can ignore that.)
2. Go to your_production_url and Say Hello to App Engine. If logged in, you should see a success message.

When I open http://appauthtest.appspot.com/tasks/ I get a 404 Not Found error without the login screen ever having been triggered. The same happens for http://appauthtest.appspot.com/Auth.html, which is the welcome page directly. Please help! What is happening? I only signed up for C2DM today and it said it would take a day to kick in, but it doesn't make sense for that to be the blocker in this authentication flow. Thanks, Joseph

To view this discussion on the web visit https://groups.google.com/d/msg/google-appengine/-/SLxRxoSBn9cJ.
Re: [google-appengine] Scheduler/billing changes in 1.6.5? My daily cost went from $5 to $35.
Oh my goodness! You're right. I never noticed the new built-in version. Plus, I never realised there were failed backup tasks retrying over and over (for 20 days! 500 retries!) to do a backup. This meant two instances were permanently doing work, and it also explains the unusually high number of datastore operations. Stupid me! But a confusing GAE user interface, too. How about adding a few words to the backup confirmation dialog, like "these tasks will run in a built-in version, so you can track progress and billing over there"? There's plenty of whitespace on that screen. Anyway, thanks Takashi for pointing me in the right direction! Per

On Tuesday, May 15, 2012 5:33:47 PM UTC+2, Takashi Matsuo (Google) wrote: Hi Per, Thanks for sending the details. The first thing that comes to mind is that the culprit might be instances of the version 'ah-builtin-python-bundle'. This version is currently dedicated to the 'Datastore Admin' feature. I think you are running some task like backup/copy/deletion in 'Datastore Admin', right? I hope that explains the situation you are experiencing right now. Please let me know if that isn't the case. Thanks, -- Takashi

On Tue, May 15, 2012 at 7:06 AM, Per per.fragem...@gmail.com wrote: https://lh3.googleusercontent.com/-qa-cjA-cUbM/T7GBSShwGrI/ADY/OYljQYgcczc/s1600/changed-2-weeks-ago.jpg Hi Takashi, I waited with my response because I didn't want to jump to conclusions. Yes, I have been experimenting with some settings every now and then: limiting instances, playing with the pending sliders, etc. But all it did was change how quickly instances were created or collected (while always increasing latency); it didn't change how the billed instances were calculated. Above is the chart of my application's instances for the past month. (The editor didn't allow me to place it at the right location...) It's rather easy to spot when things changed. We used to get billed for roughly one instance on average.

Yes, we did have more than one running, but it was (and is!) always one instance that handles 95% to 99% of the load. So it seemed only fair that Google would charge only for the main instance. I never asked for an instance that just sits there, but I didn't mind it while it was free :) So now being charged for two or three instances, when only one is really doing anything, seems like a major change that should be documented. Okay, maybe it's just our application, but our pricing has increased steeply. I just posted another message about the frontend-hour calculation. Both issues combined seem to have led to a price increase from $5-$10 before to $30-$50 now. Our requests per second have increased moderately, we may have made some requests slower, and we might have slightly different usage patterns, but I cannot see a reason for a price increase this steep. Any help or insight would be appreciated. Our app ID is small-improvements-hrd. Kind regards, Per

On Friday, April 27, 2012 7:02:26 AM UTC+2, Takashi Matsuo (Google) wrote: On Fri, Apr 27, 2012 at 10:16 AM, Per per.fragem...@gmail.com wrote: Hi team, previously the scheduler used to spin up 3 instances for our application and only used 1 of the 3, but at least we didn't have to pay for the unused instances; there was always a big gap between total instances and billed instances. Wasn't it because you set Max Idle Instances at that time? If you set Max Idle Instances to Automatic, the total instances and billed instances should be the same. However, as of yesterday, the scheduler continues to spin up 3 instances on average, but we have to pay for all of them. We're at maybe 1 request per second on average, and it's all handled just fine by one instance. We're on F4, so having to pay for 2 mostly unused instances hurts.

Since this seems to coincide with the 1.6.5 release, I'm tempted to think something changed in the background, and it would be great to learn more about the new suggested way. When we tried to limit idle instances last time, all we got was instance churn and bad latency, so we're on automatic/automatic these days and I'd prefer not to have to experiment again. Some advice would be great. Again, I'm wondering when you tried to limit the idle instances. Generally speaking, you cannot have everything: if you want better performance, you'll need to pay more, and vice versa. However, if you're really certain that the behavior of our scheduler has significantly changed, please let me know your app-id and a detailed explanation (it can be off-list), so I can look into it further. That aside, 1.6.5 is great of course! :) Thanks! -- Takashi Cheers, Per
[google-appengine] Re: Search API Question
Well, there is a way: you could permute the fields you would like to be able to partially search and add the result as a text field. I use

    def get_permutations(string):
        newfragments = []
        tokens = string.lower().split(' ')
        tokens = list(set(tokens))
        for token in tokens:
            try:
                token = unicode(token, 'utf-8')
            except TypeError:
                pass  # already unicode; the original had a commented-out logging.info(token) here
            for n in range(2, len(token) + 1):
                if token[0:n] not in newfragments:  # the post had 'if not n in newfragments', an apparent typo
                    newfragments.append(token[0:n].encode('utf-8'))
        return newfragments

This splits the string into per-word prefixes down to two letters, so for example "This is a property" becomes "th thi this is pr pro prop prope proper propert property". Seems to work great. I'm not quite sure how efficient this is, but it works.

On May 14, 11:50 pm, dennis dennis.kuznet...@gmail.com wrote: Have to admit I was very excited about this when I first saw the experimental release. Sadly, I ran into a problem right away with partial word matching. Yes, I know this is not supposed to work, and I have code that executes datastore "starts with" queries when we get 0 results for a short single-word query. The more interesting problem occurs with one fully matched word and a partial match on the second word of a query. Example: I have a document with a name field set to "foo bar" and about 999 other documents with the name field set to "foo" plus any word that doesn't contain "bar" in it. When I do a query for "foo ba" I get 0 results from the Search API, and when I do a query for "foo" I get 1000 results. Out of those 1000 results I really want "foo bar" to be at the top, but since it's only a partial match it can end up anywhere in the result set. The only way I've come up with to get around this is result merging that breaks the query into individual words and, if still not finding much, falls back to datastore "starts with" queries. If anyone has suggestions/ideas for other ways this can be implemented, I'm all ears. - Dennis.
[google-appengine] Re: Search API Question
Argh, that had some app-specific code in it. Here you go, a clean version that should work out of the box:

    def get_permutations(string):
        newfragments = []
        tokens = string.lower().split(' ')
        tokens = list(set(tokens))
        for token in tokens:
            for n in range(2, len(token) + 1):
                if token[0:n] not in newfragments:  # the post had 'if not n in newfragments', an apparent typo
                    newfragments.append(token[0:n])
        return ' '.join(newfragments)  # the post had 'return .join(...)', missing the separator string

On May 16, 1:35 am, Jakob Holmelund j...@kobstaden.dk wrote: [quoted verbatim: the permutation approach and Dennis's partial-match question from the previous message]
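[Editor's note] A self-contained Python 3 restatement of the prefix-expansion idea above, runnable as-is. The function name and the sorted token order are editorial choices (sorting makes the output deterministic, unlike the posted set()-based version):

```python
def get_prefixes(text):
    """Expand each word into its prefixes of length >= 2, for partial matching."""
    fragments = []
    seen = set()
    for token in sorted(set(text.lower().split())):
        for n in range(2, len(token) + 1):
            prefix = token[:n]
            if prefix not in seen:  # dedupe prefixes shared across words
                seen.add(prefix)
                fragments.append(prefix)
    return " ".join(fragments)

# Index-time: store this string in an extra text field of the search document.
# Query-time: a search for "prop" then matches the document with "property",
# and Dennis's "foo ba" query matches the document indexed from "foo bar".
print(get_prefixes("This is a property"))
```

Note the quadratic blow-up: a word of length k produces k-1 prefixes, so very long fields inflate the index accordingly.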
Re: [google-appengine] Large latency spike, need assistance
Hi Ronoaldo, Good question! Looking through the Python memcache API, some methods like get_multi_async() or set_multi_async() on the Client class accept an rpc object as a keyword argument, so you can pass an rpc object with a particular deadline. For more details, please refer to our doc at: https://developers.google.com/appengine/docs/python/memcache/clientclass Thanks,

On Wed, May 9, 2012 at 6:19 AM, Ronoaldo José de Lana Pereira rpere...@beneficiofacil.com.br wrote: Hello Takashi, Is there a way to set a deadline for memcache? Doesn't the pattern "get from cache; if not there, get from datastore; store in cache" degrade the app's performance rather than improve it in a scenario where memcache is down and it takes the full timeout to notice? As we can see on the status dashboard, there was a huge increase in memcache and datastore latency: https://lh6.googleusercontent.com/-GdRSS0CDCtM/T6mAOddnAHI/ADo/4AyLDR-23hQ/s1600/Captura_de_tela-1.png https://lh3.googleusercontent.com/-9WfMfqlOeGs/T6mAK0LOLjI/ADg/sb0pN-TdiuM/s1600/Captura_de_tela.png

On Thursday, April 26, 2012 at 3:08:16 PM UTC-3, Takashi Matsuo (Google) wrote: Hi Nathan, On Fri, Apr 27, 2012 at 1:17 AM, Nathan Skone nsk...@headsprout.com wrote: Takashi, The latency spike stopped around 11:30pm PST. Can you tell what caused the high latency, and whether it is likely to occur again in the future? Occasional spikes such as what happened yesterday would make Google App Engine much less useful to my company. First, are you really sure that the cause is on our side? Do you have any Appstats results which show that your RPC calls take longer than usual? If your RPCs take time, there are several things you can do to mitigate this. Do you set any deadline on your datastore calls? If not, you may want to set one appropriately; when hitting the deadline, you can return a failure to your web clients and tell them to retry.

Are you using the urlfetch service to retrieve external resources? If so, sometimes those external resources can be the culprit. If your app's performance relies entirely on the memcache service, which has no SLA, your app might see high latency when memcache is flushed. In this particular case, as far as I know, there was no significant system issue around that time, so I don't think this was a system-wide issue; in such cases, please understand that we cannot offer reports like that every time to every customer who experienced high latency (again, premier customers are different, at least to my knowledge). I understand there is no SLA support channel for non-premium accounts. Does that mean that those of us paying customers who cannot justify the extra $500 monthly fee cannot depend on any support from Google when experiencing problems? No, not at all. There are still several options. You can use a new feature for reporting production issues in your Admin Console: you should see a 'Report Production Issue' link at the top right of your Admin Console, where you can report a production issue alongside a screenshot with some highlights and privacy masks. That way, you can now report issues to us privately without revealing your app-id. Of course, you can also post here, and we offer best-effort support, like this ;) -- Takashi Thank you for your response, Nathan Skone DYMO / Mimio - A Newell Rubbermaid Company

On Wednesday, April 25, 2012 11:58:27 PM UTC-7, Takashi Matsuo (Google) wrote: Hi Nathan, I think it's OK now. Are you still seeing this? BTW, this list is not a support channel with any kind of SLA. We now offer premier support for that type of demand.

For more details about our premier support, please see: https://developers.google.com/appengine/docs/premier/ Regards, -- Takashi

On Thu, Apr 26, 2012 at 2:43 AM, Nathan Skone nsk...@headsprout.com wrote: Application: hs-hbo Datastore: High Replication Normal latencies: 50ms-200ms Today's latencies: 5000ms-1ms Idle Instances: (Automatic – Automatic) Pending Latency: (Automatic – Automatic) Dear App Engine team, This morning the latency of my application saw a sudden spike that has made it unusable for my company's purposes. How can I get assistance with this problem? This is an urgent issue that is directly affecting our customers. Thank you, Nathan Skone DYMO / Mimio - A Newell Rubbermaid Company

To view this discussion on the web visit https://groups.google.com/d/msg/google-appengine/-/O-GXusXXlzsJ.
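[Editor's note] The pattern Ronoaldo asks about — bound the cache lookup with a deadline and treat a slow cache like a miss, falling back to the datastore — can be sketched in a runtime-agnostic way. Everything here (SlowCache, load_from_datastore, the thread-pool timeout) is a made-up stand-in for illustration; on App Engine the equivalent knob is the deadline on the rpc object passed to the memcache call, as Takashi describes:

```python
import concurrent.futures
import time

class SlowCache:
    """Stand-in for a cache backend that may hang; not a real App Engine API."""
    def get(self, key):
        time.sleep(0.5)  # simulate an unresponsive memcache
        return None

def load_from_datastore(key):
    """Stand-in for the authoritative (datastore) read."""
    return "value-for-" + key

_pool = concurrent.futures.ThreadPoolExecutor(max_workers=4)

def cached_get(cache, key, deadline=0.05):
    """Read-through cache: give the cache `deadline` seconds, then fall back."""
    future = _pool.submit(cache.get, key)
    try:
        value = future.result(timeout=deadline)
    except concurrent.futures.TimeoutError:
        value = None  # a slow or down cache is treated exactly like a miss
    if value is None:
        value = load_from_datastore(key)
        # best effort: repopulate the cache here, ignoring failures
    return value

print(cached_get(SlowCache(), "answer"))  # falls back after ~50ms, not 500ms
```

The point of the sketch is that the deadline caps the worst-case latency a flaky cache can add, which is exactly what a per-RPC memcache deadline buys you.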
[google-appengine] how to download kind which contain BLOB column
I am trying to download an entity kind from the Google App Engine datastore so that I can upload it to my development server. I am using:

    bulkloader --dump file=ph.dmp --kind=image --url=http://appspot.com/remote_api
[google-appengine] import_string() failed for 'webapp2_extras.appengine.auth.models.User'.
Hi, I've been trying to use the webapp2_extras auth and sessions modules in my application. The application runs fine on my machine, but it gives an error when I run it on GAE. This is the error I get:

    import_string() failed for 'webapp2_extras.appengine.auth.models.User'. Possible reasons are: - missing __init__.py in a package; - package or module

How can I resolve this? Why does it run fine on my machine and not on GAE? Is there a package missing? Isn't the User model bundled with webapp2_extras.auth? Same question on StackOverflow: http://stackoverflow.com/questions/10606843/import-error-for-user-model Thanks

To view this discussion on the web visit https://groups.google.com/d/msg/google-appengine/-/6onMKoV5a74J.
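[Editor's note] import_string() failing on that dotted path means webapp2_extras could not import the module chain at runtime on App Engine, even though it imports locally. One hedged remedy, assuming the standard webapp2_extras.auth setup: point the 'user_model' config key at a User class whose module is definitely deployed and importable ('models.User' below is a placeholder for your own module, not a real package). This is a config fragment, not a runnable program:

```python
# Configuration dict passed to webapp2.WSGIApplication(routes, config=config).
# 'models.User' is a hypothetical dotted path; it must resolve on the deployed
# app (module uploaded, every package directory has an __init__.py).
config = {
    'webapp2_extras.auth': {
        'user_model': 'models.User',
    },
    'webapp2_extras.sessions': {
        'secret_key': 'change-me',  # placeholder; use a real secret
    },
}
```

If you intend to use the bundled default, the failure instead suggests the deployed app cannot import webapp2_extras.appengine itself, e.g. a missing library declaration or an incomplete upload.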