[google-appengine] Re: App Engine VM-based Backends - Trusted Tester Sign-up

2013-09-19 Thread Mihai
This allows us to run websockets; however, without auto-scaling it is 
worthless to us.

Thanks,
Mihai.


On Thursday, 20 June 2013 20:21:06 UTC+1, Takashi Matsuo (Google) wrote:


 Fellow App Engine Gurus,

 We're happy to announce the next generation of App Engine Managed 
 Backends. These Backends utilize the App Engine VM Runtime, allowing 
 developers to run Backends on Compute Engine VMs. By building on top of 
 Compute Engine VMs, developers can:


- take advantage of higher CPU and memory
- rely on longer-lived processes
- utilize a local filesystem
- communicate via native network stacks
- execute external processes
- access the entire JRE
- upload arbitrary Python extensions


 Given that these are App Engine Backends, you can still use all the App 
 Engine APIs to access the existing managed services (Datastore, Task 
 Queues, Memcache, etc.)

 Updating existing Backends to run on Compute Engine VMs is a simple config 
 change:

 app.yaml
 
 application: app-id
 version: v1
 runtime: python27
 vm: true

 manual_scaling:
   instances: 1


 That’s all you need to get started. We’ll pick a default VM machine type 
 and spin everything up on your behalf. Of course, there are other options you 
 can set (including the machine type); these are documented in the Getting 
 Started Guide: 
 https://docs.google.com/document/d/1VH1oVarfKILAF_TfvETtPPE3TFzIuWqsa22PtkRkgJ4/edit#
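 For example, the machine type can reportedly be overridden in app.yaml 
 alongside the settings above. The key names below are an assumption based on 
 the trusted-tester documentation of the time, not a confirmed spec; consult 
 the Getting Started Guide for the authoritative spelling:

```yaml
application: app-id
version: v1
runtime: python27
vm: true

manual_scaling:
  instances: 1

# Assumed trusted-tester key; the exact name may differ in later SDKs.
vm_settings:
  machine_type: n1-standard-2
```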

 In order to build a great product, we need quality feedback from brave 
 early adopters. If you’re interested in test driving, please sign up at the 
 link below and we’ll take care of the rest.

 App Engine VM-based VM Runtime - Trusted Tester Sign-up: 
 https://docs.google.com/forms/d/1NTPROehZLn7lzu3pcXryB8BlZN5cu0SwiIzPnl35xHs/viewform

 Also, if you have any questions, please feel free to send an email to:
 appengine-...@googlegroups.com

 Thanks!

 -- 
 Takashi Matsuo | Developer Programs Engineer | tma...@google.com
  

-- 
You received this message because you are subscribed to the Google Groups 
Google App Engine group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to google-appengine+unsubscr...@googlegroups.com.
To post to this group, send email to google-appengine@googlegroups.com.
Visit this group at http://groups.google.com/group/google-appengine.
For more options, visit https://groups.google.com/groups/opt_out.


[google-appengine] Re: Lots of warmup requests

2013-09-19 Thread Francois Masurel
Things seem to have stabilized for the moment - no more aggressive instance 
killing like yesterday.

The problem was for Java apps only.


On Wednesday, September 18, 2013 6:14:51 PM UTC+2, Francois Masurel wrote:

 GAE has been starting new instances like crazy for the past few minutes.  It's 
 probably related to this incident: 
 https://code.google.com/status/appengine/detail/serving-java/2013/09/18#ae-trust-detail-helloworld-get-java-latency

 (screenshot: https://lh5.googleusercontent.com/-8xyqNoiQlCQ/UjnQ7BXbw7I/8dQ/etaheAZtwdM/s1600/Google+App+Engine+Java+Status+-+Google+Chrome.jpg)

 Does anybody know what is going on?





[google-appengine] Re: Writes to cloud storage hanging after opening file?

2013-09-19 Thread Ben Smithers
A few more updates on this,

It looks like Vinny was correct - writes of a smaller size do not seem to 
cause a problem. Specifically, writes from backend instances to the datastore, 
blobstore and cloud storage of 32KB all display the hanging behaviour I 
have been talking about. Writes of 30KB to each of these all seem to work 
without problem (the actual size at which there is a problem seems to vary 
slightly depending on the service, presumably because of varying overhead - 
I have had full success with writes of 31.5KB to the datastore).

My guess therefore is that there is some kind of block size used somewhere 
in the underlying urlfetch/RPC implementation and whenever multiple blocks 
are required for the request, there is some probability of the request 
hanging. Why this issue affects backend instances only, I have no idea.

Using the deprecated files API for blobstore, it is possible to write large 
files simply by writing in chunks of 30KB:

def write_in_chunks(self, data):
    chunk_size = 1024 * 30
    file_name = files.blobstore.create(mime_type='text/plain')
    num_chunks = int(math.ceil(len(data) / float(chunk_size)))
    with files.open(file_name, 'a') as f:
        for i in range(0, num_chunks):
            f.write(data[i*chunk_size:(i+1)*chunk_size])
    files.finalize(file_name)
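The chunking arithmetic can be sketched as a standalone helper (plain Python, independent of the files API; the 30KB figure is just the empirical threshold reported in this thread, and the helper name is mine):

```python
import math

def split_into_chunks(data, chunk_size=30 * 1024):
    # Number of chunks, rounding up; float() guards against Python 2
    # integer division silently truncating before math.ceil runs.
    num_chunks = int(math.ceil(len(data) / float(chunk_size)))
    return [data[i * chunk_size:(i + 1) * chunk_size]
            for i in range(num_chunks)]

chunks = split_into_chunks(b"x" * 100000)  # 100000 bytes -> 4 chunks of <=30720 bytes
```

Each chunk stays under the size at which the hangs were observed, and concatenating the chunks reproduces the original data.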

Using the cloudstorage library, this isn't possible. It looks like this is 
because the library only flushes data once 256K characters have been 
written; this can be seen in the write method in storage_api.py (the 
blocksize is set to 256K on line 489): 

  def write(self, data):
    """Write some bytes."""
    self._check_open()
    assert isinstance(data, str)
    if not data:
      return
    self._buffer.append(data)
    self._buffered += len(data)
    self._offset += len(data)
    if self._buffered >= self._blocksize:
      self._flush()
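The buffering behaviour can be modelled in a few lines. This is a simplified, hypothetical sketch, not the real storage_api.py class (the real library sends the buffer to GCS on flush; here a counter just records that a flush fired):

```python
class BufferedWriter(object):
    """Toy model of the cloudstorage write path: data accumulates in a
    buffer until the blocksize threshold is crossed, then a flush fires."""

    def __init__(self, blocksize=256 * 1024):
        self._blocksize = blocksize
        self._buffer = []
        self._buffered = 0
        self.flushes = 0  # counts flushes, for illustration only

    def write(self, data):
        if not data:
            return
        self._buffer.append(data)
        self._buffered += len(data)
        if self._buffered >= self._blocksize:  # the >= test from storage_api.py
            self._flush()

    def _flush(self):
        # Simplified: the real code uploads the buffer; we just reset it.
        self._buffer = []
        self._buffered = 0
        self.flushes += 1

w = BufferedWriter()
w.write("f" * (200 * 1024))  # 200KB buffered, below 256KB: no flush yet
w.write("f" * (100 * 1024))  # 300KB total crosses the threshold: one flush
```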

I tried fiddling with this parameter, but this seems to cause too many requests 
to GCS: I started receiving 503 responses, which correspond to a request 
rate that is too high (
https://developers.google.com/storage/docs/reference-status#standardcodes).
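When 503s indicate too high a request rate, the usual remedy is exponential backoff between retries rather than lowering the flush size further. A sketch of the delay schedule (the parameters here are illustrative, not taken from the GCS docs):

```python
def backoff_delays(retries, base=0.5, cap=32.0):
    """Yield exponentially growing retry delays in seconds, capped so a
    long retry run never waits unboundedly between attempts."""
    for attempt in range(retries):
        yield min(cap, base * (2 ** attempt))

delays = list(backoff_delays(6))  # [0.5, 1.0, 2.0, 4.0, 8.0, 16.0]
```

In practice you would also add random jitter so that many instances retrying at once do not synchronize their requests.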

Hopefully this is helpful for others and for identifying and fixing the 
underlying issue.

Ben



[google-appengine] Cross-reference multiple many-to-many relationships. How well does this perform on BigTable?

2013-09-19 Thread Tony França
Sometimes you just gotta ask the specialists. Help anyone?
http://stackoverflow.com/questions/18895586/cross-reference-multiple-many-to-many-relationships-which-database-should-i-pic



Re: [google-appengine] Transfer application to other account

2013-09-19 Thread AndyD
OK, it's been 3 years.  Anybody know if this policy is still in effect? 
 I'm working with a startup and I want to get them off the ground with GAE 
by creating the application for them and then transferring it to them 
later, but I don't want to use up one of my allotted 10 applications 
permanently.

-Andy

On Tuesday, October 19, 2010 11:39:36 AM UTC-4, Nick Johnson (Google) wrote:

 Hi Nikolay,

 You are able to create 10 apps per account. What you do with those apps 
 afterwards - including transferring them to other accounts - is immaterial 
 to your app creation quota.

 -Nick Johnson

 On Sun, Oct 10, 2010 at 9:14 PM, Nikolay Tenev tenev@gmail.com wrote:

  Hi

 I created an application in App Engine and today I wanted to give 
 ownership of the application to another account. I invited the other developer, 
 he accepted the invitation and then deleted my account as application developer, 
 but ... in my App Engine account I still see "You have 7 applications 
 remaining." (I have 2 other working apps), while the other developer's 
 dashboard shows the transferred application but says "You have 10 applications 
 remaining." It is true that I created the application, but is it normal for it 
 to still count against me even though I'm not a developer/administrator on the 
 application any more?

 Regards!





 -- 
 Nick Johnson, Developer Programs Engineer, App Engine Google Ireland Ltd. 
 :: Registered in Dublin, Ireland, Registration Number: 368047
  



Re: [google-appengine] Cross-reference multiple many-to-many relationships. How well does this perform on BigTable?

2013-09-19 Thread Barry Hunter
In the App Engine DataStore (which, while based on BigTable, isn't actually
'BigTable'):

You have StringListProperties

https://developers.google.com/appengine/docs/python/datastore/typesandpropertyclasses#StringListProperty

So you can just store the tag(s) in a string list property on your CONTENT
model. No TC or TAG model needed!

You can actually filter on string lists: an equality query effectively means
'where the list contains this value' (case 2).

You can then write queries that look for entities containing this value AND
that value (case 3).

You can also combine one 'range' filter with such queries to handle your case 4.


Luckily, you can also mostly avoid a historic problem with multi-list
queries
https://developers.google.com/appengine/articles/indexselection#Exploding_Search


Basically, you 'denormalize' the database. With a relational database you are
used to normalizing - don't do that with the DataStore.


With the DataStore organized like this, you can't easily get a list of all
tags, whereas with your RDBMS you could just query the TAG table. If you need
a list of tags, store it separately.




---

Also, your TC.tag_key.IN, mentioned in the question, looks for entities
with EITHER of the tags mentioned in the query - not both.
https://developers.google.com/appengine/docs/python/ndb/queries#repeated_properties
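These query semantics (equality on a list property means "the list contains this value"; two equality filters mean it contains both; IN means either) can be illustrated without the Datastore. A plain-Python simulation of the matching rules, not the actual ndb API:

```python
def matches_all_tags(entity_tags, wanted):
    # Two equality filters on a list property: entity must contain BOTH tags.
    return all(tag in entity_tags for tag in wanted)

def matches_any_tag(entity_tags, wanted):
    # IN semantics: entity matches with EITHER tag, not necessarily both.
    return any(tag in entity_tags for tag in wanted)

entities = {
    "post1": ["python", "gae"],
    "post2": ["python"],
}
both = [k for k, tags in entities.items()
        if matches_all_tags(tags, ["python", "gae"])]
either = [k for k, tags in entities.items()
          if matches_any_tag(tags, ["python", "gae"])]
# both -> ["post1"]; either -> ["post1", "post2"]
```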



[google-appengine] Re: Writes to cloud storage hanging after opening file?

2013-09-19 Thread Doug Anderson

   
   1. Why hasn't the GCS client download been updated since early June? 
   Numerous changes have been checked in since then.  As I understand it, one 
   of them is crucial to GCS reliability.
   2. Why hasn't the GCS client been integrated into the App Engine SDK / 
   Runtime?  This seems like the best way to ensure developers are using the 
   latest-approved version.
   3. These 'hangs' are reminiscent of the unreliability of the deprecated 
   files API... my hope remains that GCS can become MUCH more reliable in the 
   very near future!
   4. On a side note... the Python SDK installation can take A LONG time... 
   I believe this is largely due to more and more Django versions getting 
   added to the SDK.  Without Django versions v0.96 through v1.5, the 
   uncompressed SDK size drops from 138MB to 24MB (from 19k+ files to just 
   1.9k files).  I'm a Django fan but I'm not using it at the moment with App 
   Engine... it would be nice if the Django installation were optional and/or 
   let you select the versions of Django to install... I doubt any developer 
   needs all versions from 0.96 to 1.5.  Just a minor annoyance really... 
   please fix the GCS issues asap!


On Tuesday, September 10, 2013 6:14:58 AM UTC-4, Ben Smithers wrote:

 Hi,

 I've been having some problems writing data to google cloud storage using 
 the python client library (
 https://developers.google.com/appengine/docs/python/googlecloudstorageclient/
 ).

 Frequently, the write to cloudstorage will 'hang' indefinitely. The call 
 to open a new file is successful, but the write (1MB in size in this test 
 case) never returns and the file never appears in the bucket. For example, 
 I launch 10 instances of a (backend) module, each of which attempts to 
 write a file. Typically, somewhere between 4 and 9 of these will succeed, 
 with the others hanging after opening. This is the code I am running:

 class StartHandler(webapp2.RequestHandler):

     GCS_BUCKET = "/"

     def debugMessage(self, msg):
         logging.debug(msg)
         logservice.flush()

     def get(self):
         suffix = str(backends.get_instance())
         filename = self.GCS_BUCKET + "/testwrite" + suffix + ".txt"
         gcs_file = cloudstorage.open(filename, 'w', content_type='text/plain')
         self.debugMessage("opened file")
         gcs_file.write("f" * 1024 * 1024 * 1 + '\n')
         self.debugMessage("data written")
         gcs_file.close()
         self.debugMessage("file closed")


 I have also attached a tarred example of the full application in case it 
 is relevant (to run, you should only need to modify the application name in 
 the two .yaml files and the bucket name in storagetest.py). A few 
 additional things:

 1.) I wondered if this was a problem with simultaneous writes so I had 
 each instance sleep for 30 seconds * its instance number; I observe the 
 same behaviour.
 2.) I have seen this behaviour on frontend instances, but far far more 
 rarely. I modified the above to run in response to a user request - once 
 out of 60 times the write hung after opening (a Deadline Exceeded Exception 
 was then thrown). 
 3.) I have experimented with the RetryParams (though according to the 
 documentation, the defaults should be sufficient) but to no avail. I also 
 find it hard to believe this is the issue - I would expect to be getting a 
 TimeoutError.

 Has anyone else observed this behaviour? Does anyone have any suggestions 
 for what I am doing wrong? Or a different approach to try?

 Very grateful for any help,
 Ben




[google-appengine] Re: Writes to cloud storage hanging after opening file?

2013-09-19 Thread Ye Yuan
Ben,

Speaking personally, thank you for your posts! They are very helpful for 
identifying this issue. I was focused on the GCS client and wasn't aware of the 
full scope. More people have been looped in, and I'll reach out to other 
relevant folks today. Very sorry for all the inconvenience!

I tried fiddling this parameter, but this seems to cause too many requests 
to GCS as I started receiving 503 responses, which corresponds to a request 
rate that is too high 
(https://developers.google.com/storage/docs/reference-status#standardcodes).
GCS has a minimum chunk size of 256KB. Flushing data of smaller size has 
undefined behavior.
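If that 256KB minimum holds, one workaround for callers is to flush only in 256KB-aligned pieces and carry any remainder until more data arrives or the file is closed. A sketch of the splitting arithmetic (a hypothetical helper, assuming the 256KB figure above):

```python
CHUNK = 256 * 1024  # assumed GCS minimum flush size, per the note above

def flushable_split(buffered):
    # The largest 256KB-aligned prefix is safe to flush now; the tail is
    # held back for the next write or the final close.
    flush_len = (len(buffered) // CHUNK) * CHUNK
    return buffered[:flush_len], buffered[flush_len:]

head, tail = flushable_split(b"x" * 600000)  # 600000 bytes buffered
```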


On Thursday, September 19, 2013 5:46:49 AM UTC-7, Ben Smithers wrote:

 A few more updates on this,

 It looks like Vinny was correct - writes of a smaller size do not seem to 
 cause a problem. Specifically, writes from backend instances to datastore, 
 blobstore and cloud storage of 32KBs all display the hanging behaviour I 
 have been talking about. Writes to each of these of 30KBs all seem to work 
 without problem (the actual size at which there is a problem seems to vary 
 slightly depending on the service, presumably because of varying overhead - 
 I have had full success with writes of 31.5KBs to datastore).

 My guess therefore is that there is some kind of block size used somewhere 
 in the underlying urlfetch/RPC implementation and whenever multiple blocks 
 are required for the request, there is some probability of the request 
 hanging. Why this issue affects backend instances only, I have no idea.

 Using the deprecated files API for blobstore, it is possible to write 
 large files simply by writing in chunks of 30KBs:

 def write_in_chunks(self, data):
     chunk_size = 1024 * 30
     file_name = files.blobstore.create(mime_type='text/plain')
     num_chunks = int(math.ceil(len(data) / float(chunk_size)))
     with files.open(file_name, 'a') as f:
         for i in range(0, num_chunks):
             f.write(data[i*chunk_size:(i+1)*chunk_size])
     files.finalize(file_name)

 Using the cloudstorage library, this isn't possible. It looks like this is 
 because the library only flushes data once 256K characters have been 
 written; this can be seen in the write method in storage_api.py (the 
 blocksize is set to 256K on line 489): 

   def write(self, data):
     """Write some bytes."""
     self._check_open()
     assert isinstance(data, str)
     if not data:
       return
     self._buffer.append(data)
     self._buffered += len(data)
     self._offset += len(data)
     if self._buffered >= self._blocksize:
       self._flush()

 I tried fiddling this parameter, but this seems to cause too many requests 
 to GCS as I started receiving 503 responses, which corresponds to a request 
 rate that is too high (
 https://developers.google.com/storage/docs/reference-status#standardcodes
 ).

 Hopefully this is helpful for others and for identifying and fixing the 
 underlying issue.

 Ben




[google-appengine] Re: Writes to cloud storage hanging after opening file?

2013-09-19 Thread Ye Yuan
Why hasn't the GCS client download been updated since early June?  Numerous 
changes have been checked in since then.  As I understand it one of them is 
crucial to GCS reliability.
The SVN is updated with fixes. Sorry about the download zips. We are in the 
process of sorting out distributions.

These 'hangs' are reminiscent of the unreliability of the deprecated files 
API... my hope remains that GCS can become MUCH more reliable in the very 
near future!
Yeah, from the evidence so far I am also hoping the hangs don't originate 
in the GCS lib.

On a side note... the python SDK installation can take A LONG time... I 
believe this is largely due to more and more Django versions getting added 
to the SDK.  Without Django versions v0.96 thru v1.5 the uncompressed SDK 
size drops from 138MB to 24MB (from 19k+ files to just 1.9k files).  I'm a 
Django fan but I'm not using it at the moment with App Engine... it would 
be nice if the Django installation was optional and/or let you select the 
versions of Django to install... I doubt any developer needs all versions 
from 0.96 to 1.5.  Just a minor annoyance really... please fix the GCS 
issues asap!
I'll send this feedback to relevant people. Thanks!

On Thursday, September 19, 2013 8:36:46 AM UTC-7, Doug Anderson wrote:


1. Why hasn't the GCS client download been updated since early June? 
 Numerous changes have been checked in since then.  As I understand it, one 
of them is crucial to GCS reliability.
2. Why hasn't the GCS client been integrated into the App Engine SDK / 
Runtime?  This seems like the best way to ensure developers are using the 
latest-approved version.
3. These 'hangs' are reminiscent of the unreliability of the 
deprecated files API... my hope remains that GCS can become MUCH more 
reliable in the very near future!
4. On a side note... the Python SDK installation can take A LONG 
time... I believe this is largely due to more and more Django versions 
getting added to the SDK.  Without Django versions v0.96 through v1.5, the 
uncompressed SDK size drops from 138MB to 24MB (from 19k+ files to just 
1.9k files).  I'm a Django fan but I'm not using it at the moment with App 
Engine... it would be nice if the Django installation were optional and/or 
let you select the versions of Django to install... I doubt any developer 
needs all versions from 0.96 to 1.5.  Just a minor annoyance really... 
please fix the GCS issues asap!


 On Tuesday, September 10, 2013 6:14:58 AM UTC-4, Ben Smithers wrote:

 Hi,

 I've been having some problems writing data to google cloud storage using 
 the python client library (
 https://developers.google.com/appengine/docs/python/googlecloudstorageclient/
 ).

 Frequently, the write to cloudstorage will 'hang' indefinitely. The call 
 to open a new file is successful, but the write (1MB in size in this test 
 case) never returns and the file never appears in the bucket. For example, 
 I launch 10 instances of a (backend) module, each of which attempts to 
 write a file. Typically, somewhere between 4 and 9 of these will succeed, 
 with the others hanging after opening. This is the code I am running:

 class StartHandler(webapp2.RequestHandler):

     GCS_BUCKET = "/"

     def debugMessage(self, msg):
         logging.debug(msg)
         logservice.flush()

     def get(self):
         suffix = str(backends.get_instance())
         filename = self.GCS_BUCKET + "/testwrite" + suffix + ".txt"
         gcs_file = cloudstorage.open(filename, 'w', content_type='text/plain')
         self.debugMessage("opened file")
         gcs_file.write("f" * 1024 * 1024 * 1 + '\n')
         self.debugMessage("data written")
         gcs_file.close()
         self.debugMessage("file closed")


 I have also attached a tarred example of the full application in case it 
 is relevant (to run, you should only need to modify the application name in 
 the two .yaml files and the bucket name in storagetest.py). A few 
 additional things:

 1.) I wondered if this was a problem with simultaneous writes so I had 
 each instance sleep for 30 seconds * its instance number; I observe the 
 same behaviour.
 2.) I have seen this behaviour on frontend instances, but far far more 
 rarely. I modified the above to run in response to a user request - once 
 out of 60 times the write hung after opening (a Deadline Exceeded Exception 
 was then thrown). 
 3.) I have experimented with the RetryParams (though according to the 
 documentation, the defaults should be sufficient) but to no avail. I also 
 find it hard to believe this is the issue - I would assume I would be 
 getting a TimeoutError.

 Has anyone else observed this behaviour? Does anyone have any suggestions 
 for what I am doing wrong? Or a different approach to try?

 Very grateful for any help,
 Ben




[google-appengine] Re: Writes to cloud storage hanging after opening file?

2013-09-19 Thread Ben Smithers
Hi Ye,

No problems. Thanks for the update, I appreciate it.

In light of Doug's comments, I have also confirmed the same behaviour 
(success on writing 30KB; some hangs on writing >=32KB) occurs with the 
latest release (r105) of the client library from SVN. I was expecting this, 
as it doesn't look like the GCS library is the problem.

Ben

On Thursday, September 19, 2013 5:47:08 PM UTC+1, Ye Yuan wrote:

 Ben,

 Speaking personally, thank you for your posts! They are very helpful 
 for identifying this issue. I was focused on the GCS client and wasn't aware 
 of the full scope. More people have been looped in, and I'll reach out to 
 other relevant folks today. Very sorry for all the inconvenience!

 I tried fiddling this parameter, but this seems to cause too many requests 
 to GCS as I started receiving 503 responses, which corresponds to a request 
 rate that is too high (
 https://developers.google.com/storage/docs/reference-status#standardcodes
 ).
 GCS has a minimum chunk size of 256KB. Flushing data of smaller size has 
 undefined behavior.


 On Thursday, September 19, 2013 5:46:49 AM UTC-7, Ben Smithers wrote:

 A few more updates on this,

 It looks like Vinny was correct - writes of a smaller size do not seem to 
 cause a problem. Specifically, writes from backend instances to datastore, 
 blobstore and cloud storage of 32KBs all display the hanging behaviour I 
 have been talking about. Writes to each of these of 30KBs all seem to work 
 without problem (the actual size at which there is a problem seems to vary 
 slightly depending on the service, presumably because of varying overhead - 
 I have had full success with writes of 31.5KBs to datastore).

 My guess therefore is that there is some kind of block size used 
 somewhere in the underlying urlfetch/RPC implementation and whenever 
 multiple blocks are required for the request, there is some probability of 
 the request hanging. Why this issue affects backend instances only, I have 
 no idea.

 Using the deprecated files API for blobstore, it is possible to write 
 large files simply by writing in chunks of 30KBs:

 def write_in_chunks(self, data):
     chunk_size = 1024 * 30
     file_name = files.blobstore.create(mime_type='text/plain')
     num_chunks = int(math.ceil(len(data) / float(chunk_size)))
     with files.open(file_name, 'a') as f:
         for i in range(0, num_chunks):
             f.write(data[i*chunk_size:(i+1)*chunk_size])
     files.finalize(file_name)

 Using the cloudstorage library, this isn't possible. It looks like this 
 is because the library only flushes data once 256K characters have been 
 written; this can be seen in the write method in storage_api.py (the 
 blocksize is set to 256K on line 489): 

   def write(self, data):
     """Write some bytes."""
     self._check_open()
     assert isinstance(data, str)
     if not data:
       return
     self._buffer.append(data)
     self._buffered += len(data)
     self._offset += len(data)
     if self._buffered >= self._blocksize:
       self._flush()

 I tried fiddling this parameter, but this seems to cause too many 
 requests to GCS as I started receiving 503 responses, which corresponds to 
 a request rate that is too high (
 https://developers.google.com/storage/docs/reference-status#standardcodes
 ).

 Hopefully this is helpful for others and for identifying and fixing the 
 underlying issue.

 Ben





Re: [google-appengine] Re: Writes to cloud storage hanging after opening file?

2013-09-19 Thread Vinny P
On Thu, Sep 19, 2013 at 7:46 AM, Ben Smithers smithers@googlemail.com
 wrote:

 It looks like Vinny was correct - writes of a smaller size do not seem to
 cause a problem. Specifically, writes from backend instances to datastore,
 blobstore and cloud storage of 32KBs all display the hanging behaviour



Your experiences parallel mine - backend instances seem to have more
difficulty communicating with other Google services compared to frontend
instances. I often speculate that there is an additional communications
layer within the backends system, or different architectural decisions.


On Thu, Sep 19, 2013 at 11:53 AM, Ben Smithers smithers@googlemail.com
 wrote:

 In light of Doug's comments, I have also confirmed the same behaviour
 (success on writing 30KB; some hangs on writing >=32KB) occurs with the
 latest release (r105) of the client library from SVN. I was expecting this,
 as it doesn't look like the GCS library is the problem.



+1. I see similar behavior when communicating directly to GCS via the JSON
API.


On Thu, Sep 19, 2013 at 11:50 AM, Ye Yuan y...@google.com wrote:

 Yeah from the evidences so far I am also hoping the hangs aren't
 originated from the GCS lib.



Ye, I want to emphasize something since you're reading this thread. I
believe there are two different issues here: One, backend instance urlfetch
to anything (not just GCS) seems to be less stable than frontend instance
urlfetch (perhaps there's something chunking requests from the backend?).
Second, GCS itself (not the library, but GCS servers) seems to have
difficulty with differing chunk sizes. You wrote that GCS has a minimum
chunk size of 256KB, but I've flushed chunks much smaller than that
reliably and without difficulty.

So the bottom line here is when you're testing, test from App Engine
frontend and backend instances, Compute Engine machines, and from an
external-to-Google-networks machine. I'd focus less on the client library.


-
-Vinny P
Technology & Media Advisor
Chicago, IL

App Engine Code Samples: http://www.learntogoogleit.com



Re: [google-appengine] Re: Writes to cloud storage hanging after opening file?

2013-09-19 Thread Doug Anderson
I realize this is likely a GCS server issue and I don't mean to distract 
from that, but I'd also like to point out that the SDK's 
google-api-python-client (which I assume GCS relies upon under the hood) is 
v1.0beta6 from Oct 28, 2011, whereas the latest release seems to be v1.2 from 
Aug 7, 2013.  I assume the runtime versions on the live servers are up to 
date :)  PKG-INFO snippets below:

--- Latest PKG-INFO snippet ---
Metadata-Version: 1.1
Name: google-api-python-client
Version: 1.2
Summary: Google API Client Library for Python
Classifier: Development Status :: 5 - Production/Stable

--- App Engine SDK 1.8.5 PKG-INFO snippet ---
Metadata-Version: 1.0
Name: google-api-python-client
Version: 1.0beta6
Summary: Google API Client Library for Python
Classifier: Development Status :: 4 - Beta


On Thursday, September 19, 2013 2:03:44 PM UTC-4, Vinny P wrote:

 On Thu, Sep 19, 2013 at 7:46 AM, Ben Smithers smithe...@googlemail.com wrote:

 It looks like Vinny was correct - writes of a smaller size do not seem to 
 cause a problem. Specifically, writes from backend instances to datastore, 
 blobstore and cloud storage of 32KBs all display the hanging behaviour



 Your experiences parallel mine - backend instances seem to have more 
 difficulty communicating with other Google services compared to frontend 
 instances. I often speculate that there is an additional communications 
 layer within the backends system, or different architectural decisions.


 On Thu, Sep 19, 2013 at 11:53 AM, Ben Smithers smithe...@googlemail.com wrote:

 In light of Doug's comments, I have also confirmed the same behaviour 
 (success on writing 30KB; some hangs on writing >=32KB) occurs with the 
 latest release (r105) of the client library from SVN. I was expecting this, 
 as it doesn't look like the GCS library is the problem.



 +1. I see similar behavior when communicating directly with GCS via the JSON 
 API. 

  
 On Thu, Sep 19, 2013 at 11:50 AM, Ye Yuan y...@google.com 
 wrote:

 Yeah, from the evidence so far I am also hoping the hangs don't 
 originate from the GCS lib.



 Ye, I want to emphasize something since you're reading this thread. I 
 believe there are two different issues here: One, backend instance urlfetch 
 to anything (not just GCS) seems to be less stable than frontend instance 
 urlfetch (perhaps there's something chunking requests from the backend?). 
 Second, GCS itself (not the library, but GCS servers) seems to have 
 difficulty with differing chunk sizes. You wrote that GCS has a minimum 
 chunk size of 256KB, but I've flushed chunks much smaller than that 
 reliably and without difficulty. 

 So the bottom line here is: when you're testing, test from App Engine 
 frontend and backend instances, from Compute Engine machines, and from a 
 machine external to Google's networks. I'd focus less on the client library. 
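To make the chunk-size variable easy to isolate while testing, writes can be buffered so that every flush to the server except the last is an exact multiple of the 256 KB figure mentioned above. A minimal plain-Python sketch (not the GCS client library itself; ChunkBuffer and its flush callback are hypothetical, for illustration):

```python
# Plain-Python sketch: buffer writes so each upstream flush except the last
# is exactly CHUNK bytes. ChunkBuffer and flush_fn are made up for testing
# the chunk-size variable in isolation; this is not the GCS client API.
CHUNK = 256 * 1024  # the 256 KB minimum chunk size discussed above

class ChunkBuffer(object):
    def __init__(self, flush_fn):
        self.flush_fn = flush_fn  # callable that sends bytes upstream
        self.buf = b""

    def write(self, data):
        self.buf += data
        # Flush only complete chunks; keep any remainder buffered.
        while len(self.buf) >= CHUNK:
            self.flush_fn(self.buf[:CHUNK])
            self.buf = self.buf[CHUNK:]

    def close(self):
        # The final partial chunk is sent as-is.
        if self.buf:
            self.flush_fn(self.buf)
            self.buf = b""
```

Pointing flush_fn at a real transport (or at a list, for offline testing) then lets you vary CHUNK and observe whether the hangs track chunk size.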
  
  
 -
 -Vinny P
 Technology & Media Advisor
 Chicago, IL

 App Engine Code Samples: http://www.learntogoogleit.com
  


-- 
You received this message because you are subscribed to the Google Groups 
Google App Engine group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to google-appengine+unsubscr...@googlegroups.com.
To post to this group, send email to google-appengine@googlegroups.com.
Visit this group at http://groups.google.com/group/google-appengine.
For more options, visit https://groups.google.com/groups/opt_out.


[google-appengine] 5-10 mins of downtime after deployment.

2013-09-19 Thread jay
Hey folks,

Over the last few months, every now and then when I deploy a new version I 
get 5-10 minutes of downtime. 

I'm using python. View the app here: http://triton.ironhelmet.com

In the log I get the usual Compilation Completed, Starting Deployment... 
then it checks whether the app version is serving for a long time after that. 

A few weeks back it actually timed out (I think after 15 minutes).

During this time, all the users see is Error: Not Found The requested URL / 
was not found on this server.

Now I'm a little afraid to deploy, because 10 minutes of downtime is a big 
deal for my users. 

Is there anything I can do to fix this? Is this an issue on my side?

Help!

Jay.



[google-appengine] Re: Lots of warmup requests

2013-09-19 Thread Cesium
Lots of warmups started for me about 5 days ago.

It settled down for a day.

Now it's gone pear-shaped again.

David



Re: [google-appengine] 5-10 mins of downtime after deployment.

2013-09-19 Thread Vinny P
On Thu, Sep 19, 2013 at 7:38 PM, jay kyburz@gmail.com wrote:

 Is this an issue on my side?



Are you doing any processing at instance startup, or any sort of
time-consuming operation?


On Thu, Sep 19, 2013 at 7:38 PM, jay kyburz@gmail.com wrote:

 Over the last few months, every now and then when I deploy I have 5 -10
 minutes of downtime after I deploy a new version.



What you can try is uploading your new version as an entirely different
version: change the version id in your app.yaml and then upload it.
Then you'll have two versions on App Engine's servers: the old
current-production version and the newly uploaded one. Send some requests
to the new version, let it spool up and work through its 5-10 minutes of
downtime. Then make it the default version on the Versions tab of the
admin console. There should be no downtime from the switch.
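Each uploaded version is reachable at its own hostname, which makes the warm-up step easy to script. A sketch with a hypothetical app id and version (the appcfg.py action shown in the comment is from the 1.8-era Python SDK and may differ in other releases):

```python
# Hypothetical app id and version, for illustration only.
APP_ID = "your-app-id"
NEW_VERSION = "v2"

# Each uploaded version gets its own hostname; "-dot-" (rather than ".")
# keeps HTTPS valid under the *.appspot.com wildcard certificate.
warmup_url = "https://%s-dot-%s.appspot.com/" % (NEW_VERSION, APP_ID)
print(warmup_url)

# After a few warm-up requests to warmup_url succeed, promote the version
# from the SDK command line (1.8-era appcfg.py action):
#   appcfg.py set_default_version -A your-app-id -V v2
```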



-
-Vinny P
Technology & Media Advisor
Chicago, IL

App Engine Code Samples: http://www.learntogoogleit.com
