RE: [google-appengine] Re: Startup time exceeded...on F4?!

2012-07-17 Thread Drake
Jeff,

Check the archives; there are several checkout-lane analogies that I have
posted.

I agree that the queue is suboptimal, but it is more suboptimal the
smaller you are.  When you get to 50 instances it is amazing how well the
load balancing works.  On the climb up to peak, new instances spin up on
requests rather than causing cascading failures or dramatic spin-ups.  And
on the way down, instances wind down and reach end of life gracefully.

Using your grocery store analogy, imagine that you are optimizing for a
guarantee that you will be checked out within 30 seconds of entering the
queue.  The ideal scenario is that when a shopper reaches the point where
you know they are 15 seconds from being checked out, and it takes 15
seconds to "open a new lane," you send them to go stand in that line while
the register opens.

Your goal is to never have to pay out on that guarantee, not to serve the
highest percentage in the least time.  When this is your target QoS, the
current load balancing does really well.  It does even better when it has
10 registers and can open 2 at a time than when it has 1 register and
needs to decide whether to double capacity.
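
A toy sketch of that rule of thumb (the numbers are invented for
illustration; this is not the actual scheduler logic):

  # Toy rule of thumb, not the real scheduler: open a new "lane"
  # (instance) once the estimated wait at the back of the queue gets
  # within one lane-opening time of the 30-second guarantee.
  GUARANTEE_S = 30.0   # promised worst-case wait
  OPEN_LANE_S = 15.0   # time to open a register / spin up an instance

  def should_open_new_lane(estimated_wait_s):
      # Start opening now if a newly opened lane could still serve the
      # shopper at the back of the queue before the guarantee expires.
      return estimated_wait_s >= GUARANTEE_S - OPEN_LANE_S

  print(should_open_new_lane(10.0))  # False: existing lanes will make it
  print(should_open_new_lane(16.0))  # True: start opening a lane now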

-Brandon





Re: [google-appengine] Re: Startup time exceeded...on F4?!

2012-07-17 Thread Jeff Schnitzer
On Tue, Jul 17, 2012 at 5:21 AM, Takashi Matsuo  wrote:
>
> On Tue, Jul 17, 2012 at 7:10 AM, Jeff Schnitzer  wrote:
>>
>> Hi Takashi.  I've read the performance settings documentation a dozen
>> times and yet the scheduler behavior still seems flawed to me.
>
> I would rather not use the word 'flawed' here, but there is probably still
> room for improvement. First of all, is there any reason why you cannot use
> the min idle instances setting? Is it just because of the cost?

My goal is to have every request serviced with the minimum latency possible.

Leaving aside implementation complexity, there doesn't seem to be any
circumstance in which it is efficient to remove a request from the pending
queue and lock it into a cold start.  There are really two cases:

 1) The request is part of a sudden burst.  The request will be
processed by an active instance before any new instances come online.
It therefore should stay in the queue.

 2) The request is part of new, sustained traffic.  Whether the
request waits in the pending queue for new instances to warm up, or
waits at a specific cold instance, the request is still going to wait.
 At least if it's still in the pending queue there's a chance it will
get routed to the first available instance... which overall is likely
going to be better than any particular (mis-)behaving start.

Imagine you're at the supermarket checkout.  The ideal situation is to
have everyone waiting in one line and then route them off to cashiers
as they become available.  If the pending queue gets too long, you
open more cashiers (possibly multiple at once) until the line shrinks.
 If every cashier has a separate queue, it's really hard to optimize
the # of cashiers since you have some with a line 5 deep and some
sitting idle.
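
To make the intuition concrete, here is a toy comparison of the two
policies (invented numbers; this is not App Engine's actual scheduler):

  # Toy comparison, not App Engine's scheduler: expected wait for a request
  # that stays in a shared pending queue vs. one locked to a cold start.
  def shared_queue_wait(busy_until, cold_start):
      # Routed to whichever instance frees up first, including the new
      # instance once its cold start finishes.
      return min(min(busy_until), cold_start)

  def locked_to_cold_wait(cold_start):
      # Pinned to the new instance, so it always waits out the full start.
      return cold_start

  busy = [2.0, 4.0, 9.0]   # three warm instances free in 2s, 4s, and 9s
  print(shared_queue_wait(busy, cold_start=20.0))  # 2.0: first free instance
  print(locked_to_cold_wait(cold_start=20.0))      # 20.0: stuck behind the start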

I'm fully prepared to believe there are implementation complexities
that make the single-pending-queue difficult, but what I'm hearing is
that Google deliberately *wants* to send requests to cold starts...
which seems "flawed" to me.  Am I missing something?

> If so, can 'introducing a somewhat lower price for resident instances' be a
> workable feature request?
>
> I have a vague feeling that what you're trying to accomplish here is to
> save money while acquiring good performance. If so, it is one of the most
> difficult things to implement. However, in my opinion, it's worth trying to
> implement, so let's continue the discussion.

Let's forget price for a moment and just try to work towards the goal
of having an efficient system.  Presumably, a more efficient system
will be more cost-effective than a less efficient system that has lots
of idle instances sitting around hogging RAM.  Good for Google, good
for us.

> If you have an app with an average 50s+ loading time, I totally understand
> that you strongly want to avoid sending requests to cold instances. On the
> other hand, there are also many well-behaved apps with <5 secs
> loading/warming time. Please understand that for those apps, it is still
> acceptable if we send requests to cold instances, so it's likely we cannot
> prioritize the feature below over other things, however...

Someone at Google must have a chart with "Time" on the X axis and
"% of Startup Requests" on the Y axis - basically a chart of what
percentage of startup requests in the Real World are satisfied at
various time boundaries.  I have a pretty good idea, I think, of what
this chart looks like.  I'm also fairly certain that the Python chart
looks nothing like the Java chart.

For one thing, the Java chart _starts_ at 5s.  The bare minimum Hello,
World that creates a PersistenceManagerFactory with one class (the
Employee in the docs) takes 5-6s to start up.  And this is when GAE is
healthy; that time can easily double on a bad day.

So if you optimize GAE for apps with a <5s startup time, you're
optimizing for apps that don't exist - at least on the JVM.  I'd be
*very* surprised if the average real-world Java instance startup time
was less than 20s.  You just don't build apps that way in Java.  Given
a sophisticated application, I'm not even sure it's possible unless
the only datatypes you allow yourself are ArrayList and HashMap.

>> The min latency setting is actually working against us here.  What I
>> really want is a high (possibly infinite) minimum latency for moving
>> items from pending queue to a cold instance, but a low minimum latency
>> for warming up new instances.  I don't want requests waiting in the
>> pending queue, but it does me no good to have them sent to cold
>> instances.  I'd rather they wait in the queue until fresh instances
>> come online.
>
> To me, it looks like a great idea. Can you file a feature request for this,
> so that we can get a rough idea of how many people want it, and start an
> internal discussion?

http://code.google.com/p/googleappengine/issues/detail?id=7865

I generalized it to "User-facing requests should never be locked to
cold instance starts".

Thanks,
Jeff


Re: [google-appengine] Can't start an instance of my application

2012-07-17 Thread Dan Holevoet
Hi,

There was an issue reported earlier today with application deployment:
http://code.google.com/p/googleappengine/issues/detail?id=7861. However, it
appears to be fixed now. If you are still having deployment issues, please
file a new issue in the tracker.

Thanks,
Dan


On Tue, Jul 17, 2012 at 2:40 PM, Alden  wrote:

> Hi,
>
> Anyone having issues deploying right now?  My app normally spins up fairly
> quickly, but today it's been noticeably slower.  And getting slower as the
> day went on.  Anecdotally, it seems like several hours ago a deploy took a
> minute or two longer, then deploy times increased throughout the day, until
> now it's timing out completely, and I get this, which according to some
> older threads seems indicative of a maintenance period.
>
> java.lang.RuntimeException: Version not ready.
> Unable to update app: Version not ready.
>
> Is there anywhere else to look for more information?  Appengine system
> status dashboard says things are normal.
>
> app id: cover5api.appspot.com
>
> Thanks!
>
>
>
>
>  --
> You received this message because you are subscribed to the Google Groups
> "Google App Engine" group.
> To view this discussion on the web visit
> https://groups.google.com/d/msg/google-appengine/-/IQJS8ETPfdYJ.
> To post to this group, send email to google-appengine@googlegroups.com.
> To unsubscribe from this group, send email to
> google-appengine+unsubscr...@googlegroups.com.
> For more options, visit this group at
> http://groups.google.com/group/google-appengine?hl=en.
>



-- 
Dan Holevoet
Google Developer Relations




[google-appengine] Can't start an instance of my application

2012-07-17 Thread Alden
Hi,

Anyone having issues deploying right now?  My app normally spins up fairly 
quickly, but today it's been noticeably slower.  And getting slower as the 
day went on.  Anecdotally, it seems like several hours ago a deploy took a 
minute or two longer, then deploy times increased throughout the day, until 
now it's timing out completely, and I get this, which according to some 
older threads seems indicative of a maintenance period.

java.lang.RuntimeException: Version not ready.
Unable to update app: Version not ready.

Is there anywhere else to look for more information?  Appengine system 
status dashboard says things are normal.

app id: cover5api.appspot.com

Thanks!







[google-appengine] Gengo releases Avalon, a Python localization/translation utility for App Engine

2012-07-17 Thread Brian McConnell
Hello everyone,

We're at OSCON this year to announce several new translation and localization 
tools for popular open source platforms and development environments, including 
Python. This week, we're releasing Avalon, a gettext-like utility that requests 
translations from machine and human translation APIs. Think of it as gettext 
for the cloud. With it, localizing your app or translating your content is as 
simple as:

  sl = 'en'
  tl = 'es'
  translation_order = ['gengo', 'google']

  self.response.out.write(_("Hello World!"))

The utility, along with a helpful how-to article that explains how to duplicate 
it in other environments, is at mygengo.github.com/avalon. It currently works 
with Google Translate and Gengo's human translation API. We'll be adding 
support for Transifex (a hosted translation and localization management 
service) in the near future as well. (If you need a good localization solution 
for your projects, definitely check out Transifex). 

The utility implements a simple _() function which you use to request a 
translation for a string, but instead of consulting a static message catalog 
(a PO file), it does the following (a rough sketch appears below the list):

* checks memcache to see if there's a cached translation
* calls out to Gengo and/or Google to request a translation via an API call
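
A rough sketch of that lookup order (illustrative only; this is not
Avalon's actual implementation, and call_translation_api is a hypothetical
stand-in for the Gengo/Google API clients):

  # Illustrative memcache-then-API translation lookup; not Avalon's code.
  from google.appengine.api import memcache

  def call_translation_api(provider, text, sl, tl):
      # Hypothetical stand-in for the Gengo / Google Translate clients;
      # a real implementation would issue the API call here.
      return None

  def translate(text, sl, tl, providers):
      key = 'xl:%s:%s:%s' % (sl, tl, text)
      cached = memcache.get(key)
      if cached is not None:
          return cached                       # cached translation
      for provider in providers:              # e.g. ['gengo', 'google']
          translated = call_translation_api(provider, text, sl, tl)
          if translated:
              memcache.set(key, translated)   # cache for next time
              return translated
      return text                             # fall back to the source string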

The how-to article explains strategies for mimicking the utility in the 
development environment or framework of your choice. So check it out, and if 
you have suggestions, feedback or would like to contribute code, drop me a line.

Thanks!

Brian McConnell
Head of API Integration
Gengo Inc




[google-appengine] Project upgraded from python 2.5 to 2.7, datastore converted to 'High Replication', all cron jobs stopped working

2012-07-17 Thread 郁夫
Hi,
My project uses Django. Using the 'Duplicate Application Settings' tool, I
upgraded from python 2.5 to 2.7 and converted the datastore from
'Master/Slave' to 'High Replication'.
The new project runs successfully, but all of the cron jobs fail.  The cron
URLs work when requested directly, but the logs show that the scheduled
cron runs fail.

Any ideas? Thanks!
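
One common gotcha with the 2.5-to-2.7 move is that script handlers switch
from CGI to WSGI, so app.yaml needs to point at a WSGI application object
(e.g. script: handlers.app) rather than a .py file; cron URLs can fail
quietly if this is missed. A minimal sketch of a 2.7-style cron handler,
with illustrative names (if the cron URLs are served through Django, the
equivalent check is that app.yaml points at the Django WSGI application):

  # handlers.py -- minimal Python 2.7 (WSGI) cron handler; the module,
  # class, and URL names are illustrative, not taken from the project above.
  import webapp2

  class DailyCronHandler(webapp2.RequestHandler):
      def get(self):
          # App Engine's cron service requests this URL with GET and sets
          # the X-AppEngine-Cron: true header.
          self.response.write('cron ok')

  # In app.yaml the 2.7 runtime expects "script: handlers.app" for this
  # handler instead of the 2.5-era "script: handlers.py".
  app = webapp2.WSGIApplication([('/cron/daily', DailyCronHandler)])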



Xi'an - Shenzhen - Shanghai





[google-appengine] Re: Channel API future improvements?

2012-07-17 Thread Kristopher Giesing
This is logged here:

http://code.google.com/p/googleappengine/issues/detail?id=7098

I patched the 1.7 channel API locally to deal with it; see the issue 
comments.

On Saturday, March 5, 2011 8:02:23 AM UTC-8, Westmark wrote:
>
> Is there any way for the client to terminate the connection while 
> running on the development server? I've tried calling close() and 
> every other function I can find on the socket and client JavaScript 
> object, but it keeps polling forever. I want to be able to kill it off 
> so I can set up a new connection with a fresh token without the browser 
> going nuts with requests. 
>
> BR // Fredrik 
>
>
> On 7 Feb, 18:15, Tim  wrote: 
> > On Monday, February 7, 2011 5:05:06 PM UTC, Peter Petrov wrote: 
> > 
> > > You can change the dev-server polling interval in your JavaScript code 
> > > quite easily. For example: 
> > 
> > > goog.appengine.Socket.POLLING_TIMEOUT_MS = 5000; // 5 seconds 
> > 
> > Ah, so you can... and here was me looking for ways to get hold of the 
> > goog.Timer objects or similar. 
> > 
> > And changing that value on the fly changes the timeout for the next call 
> (ie 
> > it's not something you can only change before creating a channel) - 
> > thanks... I feel quite foolish now :) 
> > 
> > -- 
> > T




[google-appengine] full search

2012-07-17 Thread alf
Is it possible, with the new Full Text Search feature, to get performance for
queries similar to:

select * from xx where id like '%main%'

thanks
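
For reference, the Search API matches tokenized terms in indexed documents
rather than arbitrary substrings, so there is no direct equivalent of a
LIKE '%main%' scan. A minimal sketch of the API (the index, field, and
document names are made up for illustration):

  # Minimal sketch of the (experimental) Full Text Search API.
  from google.appengine.api import search

  index = search.Index(name='items')
  index.put(search.Document(
      doc_id='xx-1',
      fields=[search.TextField(name='id_text', value='main street record')]))

  # Matches documents whose id_text field contains the token 'main'.
  # This is token matching, not a SQL-style '%main%' substring scan.
  results = index.search('id_text:main')
  for doc in results:
      print(doc.doc_id)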


P.S. I have tried to post questions on Stack Overflow, but it has been
impossible; they always come back as "does not meet quality standards".

alberto




[google-appengine] Help. My app is down

2012-07-17 Thread buzzjourney
Hi,

I am unable to access my app (buzzjourney.appspot.com).
All I get is the Server Error message.
I am unable to deploy (keeps waiting until it gives up).
Please assist.
Is anyone else experiencing this issue? The system status page does not 
show any issues.




[google-appengine] Getting python to run from the command line in Windows 7

2012-07-17 Thread Keith Elias
I am just starting out, with only a vague understanding of what's
going on.

I am trying to follow the tutorial here:
https://developers.google.com/appengine/docs/python/gettingstarted/helloworld

I've set up my test files in: "t:\website_mechanics\google_host
\helloworld"
and using the App Engine Launcher I am able to get the correct
response via http://localhost:8081/

However, I would like to be able to call the python script via the command
line.  But when I call it using lines like:

dev_appserver.py t:\website_mechanics\google_host\helloworld
or just
dev_appserver.py --help

all that happens is that the file
"c:\program files (x86)\google\google_appengine\dev_appserver.py"

is opened in my editor.

Presumably this is happening because I associated *.py files with my
editor.  What should *.py files be associated with, or is there
another way to make an explicit call via the command line?
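
One way around the association is to call the Python interpreter explicitly
and pass the script path to it, for example (assuming a default Python 2.7
install location; adjust the interpreter path to match your machine):

  C:\> "C:\Python27\python.exe" "c:\program files (x86)\google\google_appengine\dev_appserver.py" --help
  C:\> "C:\Python27\python.exe" "c:\program files (x86)\google\google_appengine\dev_appserver.py" t:\website_mechanics\google_host\helloworld

Alternatively, associating .py files with python.exe instead of the editor
makes the original commands work, though that also changes what happens
when a .py file is double-clicked.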




Re: [google-appengine] Do F1 F2 F4 Google App Engine frontend instances really cost more

2012-07-17 Thread Barry Hunter
If you use instances to full capacity - yes.  In an ideal environment,
using 2 F1s at 100% CPU is the same as 1 F2 at 100%.


But a real application will rarely be completely CPU-bound.  Calls to
external APIs won't be much different, and even 'program startup' won't
be exactly twice as fast (disk access will be similar, for example).

On the other hand, the extra memory headroom will allow an F2 to run
some things much more quickly (e.g. an algorithm could perhaps sort by
copying data rather than sorting in place, as might be required on an
F1).

So the relationship between the two isn't linear.  An F2 isn't exactly
twice an F1; everything won't be exactly 2x faster, and tasks won't
simply execute in half the time.
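
A toy back-of-the-envelope example of that point (the split between
CPU-bound time and fixed time is invented purely for illustration):

  # Toy cost comparison with invented numbers. Assume a task is partly
  # CPU-bound (halves on an F2) and partly fixed (API calls, disk, etc.).
  F1_RATE = 1.0   # relative price per unit of wall-clock time
  F2_RATE = 2.0   # an F2 bills at twice the F1 rate

  def cost(rate, cpu_seconds, fixed_seconds, speedup):
      wall_clock = cpu_seconds / speedup + fixed_seconds
      return rate * wall_clock

  cpu, fixed = 6.0, 4.0                          # 10s task: 6s CPU, 4s waiting
  print(cost(F1_RATE, cpu, fixed, speedup=1.0))  # 10.0 on an F1
  print(cost(F2_RATE, cpu, fixed, speedup=2.0))  # 14.0 on an F2: costs more
  print(cost(F2_RATE, cpu, 0.0, speedup=2.0))    # 6.0: pure CPU work breaks even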

On Tue, Jul 17, 2012 at 3:05 PM, Marc Hacker  wrote:
> If an F2 costs twice as much as an F1 per CPU hour but takes half the time
> to complete tasks, shouldn't the total cost be about the same?
>
> Thanks
>




[google-appengine] Do F1 F2 F4 Google App Engine frontend instances really cost more

2012-07-17 Thread Marc Hacker
If an F2 costs twice as much as an F1 per CPU hour but takes half the time to 
complete tasks, shouldn't the total cost be about the same?

Thanks




[google-appengine] Continuous Integration + Automated build tool

2012-07-17 Thread Prakhil Samar
Hi GAE Team,

I have been developing a project in "Python + Google App Engine" for a few 
months, managing the code via SVN and using Jira to track project-related 
activities.

Could someone point me to the *best Continuous Integration and automated 
build tools* suitable for the Python + GAE platform?







Re: [google-appengine] Re: Startup time exceeded...on F4?!

2012-07-17 Thread Takashi Matsuo
Hi Jeff,

On Tue, Jul 17, 2012 at 7:10 AM, Jeff Schnitzer  wrote:

> Hi Takashi.  I've read the performance settings documentation a dozen
> times and yet the scheduler behavior still seems flawed to me.
>

I would rather not use the word 'flawed' here, but there is probably still
room for improvement. First of all, is there any reason why you cannot use
the min idle instances setting? Is it just because of the cost?

If so, can 'introducing a somewhat lower price for resident instances' be a
workable feature request?

I have a vague feeling that what you're trying to accomplish here is to
save money while acquiring good performance. If so, it is one of the most
difficult things to implement. However, in my opinion, it's worth trying to
implement, so let's continue the discussion.

> Once a request is taken from the pending queue and sent to an instance
> (cold or otherwise), it's dedicated to execution on that instance.  In
> the queue, it can still be routed to any instance that becomes
> available.  Why would we *ever* want to send a request to a cold
> instance, which has an unknown and unpredictable response time?  If I
> were that request, I'd want to sit in the queue until a known-good
> instance becomes available.  Depending on the queue fill rate I might
> still end up waiting for an instance to come online... but there's
> also a good chance I'll get handled by an existing instance,
> especially if traffic is bursty.
>

> "the scheduler starts a new dynamic instance because it is really
> needed at that moment."  -- this is not an accurate characterization,
> because new instances don't provide immediate value.  They only
> provide value 5+ (sometimes 50+) seconds after they start.  In the
> mean time, they have captured and locked up user-facing requests which
> might have been processed by running instances much faster.
>

If you have an app with an average 50s+ loading time, I totally understand
that you strongly want to avoid sending requests to cold instances. On the
other hand, there are also many well-behaved apps with <5 secs
loading/warming time. Please understand that for those apps, it is still
acceptable if we send requests to cold instances, so it's likely we cannot
prioritize the feature below over other things, however...


> The min latency setting is actually working against us here.  What I
> really want is a high (possibly infinite) minimum latency for moving
> items from pending queue to a cold instance, but a low minimum latency
> for warming up new instances.  I don't want requests waiting in the
> pending queue, but it does me no good to have them sent to cold
> instances.  I'd rather they wait in the queue until fresh instances
> come online.


To me, it looks like a great idea. Can you file a feature request for this,
so that we can get a rough idea of how many people want it, and start an
internal discussion?

Thanks as always,

-- Takashi


>
> Jeff
>
> On Mon, Jul 16, 2012 at 1:15 PM, Takashi Matsuo 
> wrote:
> >
> > Richard,
> >
> >> But Tom seems to think that "1" is an appropriate number for his app.
> >> Why offer that option if it's automatically wrong?
> >
> > If his purpose is to reduce the number of user-facing loading requests,
> > and he still sees many user-facing loading requests, the current settings
> > are not enough.
> >
> > Jeff,
> >
> >> I vaguely expect something like this:
> >>
> >>  * All incoming requests go into a pending queue.
> >>  * Requests in this queue are handed off to warm instances only.
> >>  * Requests in the pending queue are only sent to warmed up instances.
> >>  * New instances can be started up based on (adjustable) depth of the
> >> pending queue.
> >>  * If there aren't enough instances to serve load, the pending queue
> >> will back up until more instances come online.
> >>
> >> Isn't this fairly close to the way appengine works?  What puzzles me
> >> is why requests would ever be removed from the pending queue and sent
> >> to a cold instance.  Even in Pythonland, 5-10s startup times are
> >> common.  Seems like the request is almost certainly better off waiting
> >> in the queue.
> >
> > Probably reading the following section would help in understanding the
> > scheduler:
> > https://developers.google.com/appengine/docs/adminconsole/performancesettings#scheduler
> >
> > A request comes in; if there's an available dynamic instance, he'll be
> > handled by that dynamic instance. Then, if there's an available resident
> > instance, he'll be handled by that resident instance. Otherwise he goes to
> > the pending queue. He can be sent to any available instance at any time
> > (it's fortunate for him). Then, according to the pending latency settings,
> > he will be sent to a new cold instance.
> >
> > So, if you prefer the pending queue rather than a cold instance, you can
> > set a high minimum pending latency; however, it might not be what you
> > really want, because it will cause bad performance on subsequent requests.
> >
> > Generally speaking, just looking a