[google-appengine] Re: Missing about 7 hours in Dashboard Charts

2011-09-13 Thread Steve
I also have this issue.

Cheers,
Steve

-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To view this discussion on the web visit 
https://groups.google.com/d/msg/google-appengine/-/T0kDhL4XpysJ.
To post to this group, send email to google-appengine@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en.



[google-appengine] Missing about 7 hours in Dashboard Charts

2011-09-13 Thread keakon lolicon
Hi GAE Team,

I ran into this issue several months ago. I also tried changing the timezone,
but that didn't fix it.

Thank you for looking into it.

--
keakon

My blog(Chinese): www.keakon.net
Blog source code: https://bitbucket.org/keakon/doodle/


[google-appengine] Re: New Pricing + Long running requests optimization

2011-09-13 Thread ESPR!T
1) I fetch the price for a book only once a day, when someone asks
for it (after that it is served from the database).
2) There is no such thing as "most common books" - a user comes in and asks
for anything - I can't show him blank prices and tell him to come back in an
hour.
3) 5-10% of books is more than 500,000 books * 50 suppliers =>
millions of requests just as a prefetch (I don't want to fetch all that
each day just because I suppose that someone might want to see the
prices for a particular book).

Let's say I have 2,000 common books that I prefetch on a backend
each day - that's fine, but my issue is how to serve the people who
ask for a book I do not prefetch. It's normal that I will need to process
a hundred fetches every 2 minutes - this is not going to take a few
seconds, as each request to the outside takes ~1-2 seconds on average (and
I guess I can run 200 outside fetches at one time to get it done in
10 secs).

BTW the url is http://www.librari.st/ if you want to check how it is
working now (don't laugh, it's not ideal and it's just a project for
fun :)

Cheers and thanks for your ideas - I really appreciate that.

On Sep 14, 4:20 pm, Rishi Arora  wrote:
> Just out of curiosity - do book prices really change more than once every
> hour?  Surely, you could have some kind of a hybrid approach where you
> pre-fetch the prices of 5-10% of the most common books, so you don't have to
> fetch them when a user wants to see the prices.  This will alleviate some of
> the front-end instance time by offloading it to the backend that runs every
> hour.  Also, perhaps not all suppliers change their prices every hour -
> maybe only some fraction of them do.  So, run the backend only once a day
> for slow changing prices, and once an hour for faster moving prices, and for
> the 0.001% of the books that need more dynamic pricing, run a cron job every
> 2 minutes on a front-end, assuming it will get done in just a handful of
> seconds, so that the front-end can process user-facing requests at the same
> time.
>
> I apologize if I don't fully understand your problem set, but I'm trying to
> throw out as many ideas as possible.
>
> On Tue, Sep 13, 2011 at 10:40 PM, ESPR!T  wrote:
> > With the backends I can't display data to the user when he asks for it -
> > he can't wait until the next hour to get the prices of books; he wants
> > them now. So I need to run the backend all the time and process price
> > requests as soon as possible (and as there are 5+ million books I can't
> > do a pre-fetch of some books on a backend and then shut it down).
>
> > I will try to implement that async url fetch for a batch of requests;
> > that can actually be fast. It's just that some suppliers are really slow and
> > some of them fast, so maybe I could also do two queues (one for slow
> > responses and one for fast) so the user gets info from the fast
> > suppliers quickly and doesn't need to wait. The only issue is that
> > technically I should process the pull queue from a backend, so that means
> > I would need it to run all the time ;/ (I really need to display the
> > prices as soon as possible and can't wait with processing).
>
> > I will try to lower the latency so GAE can spin up more instances and
> > will see how it affects the instance time.
>
> > Thank you guys!
>
> > On Sep 14, 1:57 pm, Rishi Arora  wrote:
> > > My app is nearly identical to yours - several concurrent URL fetches are
> > > performed to "gather" content.  And when users access my site, that content
> > > is nicely formatted for them for display.  My solution - part of it has
> > > already been suggested by Jeff - use async URL fetch.  Spinning an instance
> > > for 5 to 10 seconds waiting for your supplier's website to return book
> > > pricing data is a waste of resources.  So, whenever you need to do a URL
> > > fetch, consider deferring it by queueing it up in a pull queue.  Then once
> > > you have enough deferred (10 is a good number), call URL fetch
> > > simultaneously for all 10 requests.  Another optimization is to use
> > > the 9 free hours of backend time.  I agree you can't have the backend
> > > running all the time.  So, wake up the backend with a cron job that runs once
> > > every hour.  This will incur a minimum cost of 15 minutes per hour = 6
> > > instance hours - which is under the free quota.  Each time your backend
> > > wakes up, it looks up all the URL fetch requests deferred in the last hour,
> > > and processes them.  My app does exactly this, and it takes me about 45
> > > seconds to fetch and process all data for ~300 URL fetches every hour.
>
> > > If you are attempting to stay within the free quota, absolutely use the
> > > backend hours in any way you can.  It'll be a pity not to use those free 9
> > > instance hours.
>
> > > On Tue, Sep 13, 2011 at 8:43 PM, Tim Hoffman  wrote:
> > > > Hi
>
> > > > You could submit the request via ajax back to your appengine app
> > > > and it can then do an async request on all t

[google-appengine] Re: New Pricing + Long running requests optimization

2011-09-13 Thread ESPR!T
With the backends I can't display data to the user when he asks for it -
he can't wait until the next hour to get the prices of books; he wants
them now. So I need to run the backend all the time and process price
requests as soon as possible (and as there are 5+ million books I can't
do a pre-fetch of some books on a backend and then shut it down).

I will try to implement that async url fetch for a batch of requests;
that can actually be fast. It's just that some suppliers are really slow and
some of them fast, so maybe I could also do two queues (one for slow
responses and one for fast) so the user gets info from the fast
suppliers quickly and doesn't need to wait. The only issue is that
technically I should process the pull queue from a backend, so that means
I would need it to run all the time ;/ (I really need to display the
prices as soon as possible and can't wait with processing).

I will try to lower the latency so GAE can spin up more instances and
will see how it affects the instance time.

Thank you guys!
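The two-queue idea boils down to partitioning suppliers by their typical response time; a minimal sketch, with the threshold and the per-supplier latencies invented for illustration:

```python
def split_suppliers(avg_latency_s, threshold_s=3.0):
    """Partition supplier names into a fast and a slow fetch queue by
    their average observed response time (seconds)."""
    fast = [s for s, t in avg_latency_s.items() if t <= threshold_s]
    slow = [s for s, t in avg_latency_s.items() if t > threshold_s]
    return fast, slow

# Hypothetical per-supplier averages:
observed = {"supplier_a": 1.2, "supplier_b": 28.0, "supplier_c": 2.5}
fast, slow = split_suppliers(observed)
# Fetch `fast` first and push those prices to the page via ajax;
# `slow` fills in whenever it finishes.
print(fast, slow)  # ['supplier_a', 'supplier_c'] ['supplier_b']
```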

On Sep 14, 1:57 pm, Rishi Arora  wrote:
> My app is nearly identical to yours - several concurrent URL fetches are
> performed to "gather" content.  And when users access my site, that content
> is nicely formatted for them for display.  My solution - part of it has
> already been suggested by Jeff - use async URL fetch.  Spinning an instance
> for 5 to 10 seconds waiting for your supplier's website to return book
> pricing data is a waste of resources.  So, whenever you need to do a URL
> fetch, consider deferring it by queueing it up in a pull queue.  Then once
> you have enough deferred (10 is a good number), call URL fetch
> simultaneously for all 10 requests.  Another optimization is to use
> the 9 free hours of backend time.  I agree you can't have the backend
> running all the time.  So, wake up the backend with a cron job that runs once
> every hour.  This will incur a minimum cost of 15 minutes per hour = 6
> instance hours - which is under the free quota.  Each time your backend
> wakes up, it looks up all the URL fetch requests deferred in the last hour,
> and processes them.  My app does exactly this, and it takes me about 45
> seconds to fetch and process all data for ~300 URL fetches every hour.
>
> If you are attempting to stay within the free quota, absolutely use the
> backend hours in any way you can.  It'll be a pity not to use those free 9
> instance hours.
>
> On Tue, Sep 13, 2011 at 8:43 PM, Tim Hoffman  wrote:
> > Hi
>
> > You could submit the request via ajax back to your appengine app
> > and it can then do an async request on all the urls.
>
> > In your case you have some of the info already  and have to fetch some of
> > it.
> > So it might be two ajax calls, one to get the list of books, the result is
> > book prices for stuff you know, plus an indicator of the books that a
> > further request
> > will be required, your front end can then display the details you have,
> > submit another
> > ajax request to appengine to fetch results for the books you currently have
> > no info on.
> > Which can then  async urlfetch the rest of the details.
>
> > This way the user gets some info straight away and you get to keep your requests
> > to a minimum
> > and fill in the results later.
>
> > Just a thought ;-)
>
> > T
>




Re: [google-appengine] Re: backend vs frontend instances

2011-09-13 Thread Michael Quartly
When is the memory expected to be lowered?




[google-appengine] Re: why Datastore Key Fetch Ops is so many?

2011-09-13 Thread Gerald Tan
Use an entity to store a running count, and increment it within a
transaction.
If your application needs to write the counter at > 5/s, you will need to use
sharded counters to get around the write-rate limit on a single entity.
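Gerald's suggestion is the standard sharded-counter pattern. A minimal sketch of the idea - using a plain dict in place of the datastore entities and transactions, so the shard arithmetic is self-contained and runnable:

```python
import random

NUM_SHARDS = 20  # more shards -> higher sustained write rate

# Stand-in for NUM_SHARDS counter entities in the datastore.
shards = {i: 0 for i in range(NUM_SHARDS)}

def increment():
    # Pick a random shard so concurrent writers rarely contend on the
    # same entity; on GAE this update would run inside a transaction.
    shards[random.randrange(NUM_SHARDS)] += 1

def get_count():
    # The total is a fetch of all shards plus a sum (cacheable in memcache).
    return sum(shards.values())

for _ in range(1000):
    increment()
print(get_count())  # 1000
```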




[google-appengine] Re: New Pricing + Long running requests optimization

2011-09-13 Thread Gerald Tan
You can try setting your Max Idle Instances to 1, but leave your Min Pending 
Latency at a small number. This allows additional instances to spin up to 
handle other requests, but will not increase your Total Instance Hours 
significantly, as you will now only be charged for Active Instances (the 
orange line) instead of Total Instances (the blue line). The Scheduler will 
spin up additional instances to handle your requests, but you only pay for 
them when they are actually active.

Once the free quota for Total Instance Hours increases to 28, it shouldn't be 
too hard to remain under it.
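At the time of this thread, Max Idle Instances and Min Pending Latency were sliders in the Admin Console's Application Settings; in later SDKs the equivalent knobs became expressible in module configuration. Shown only as an illustration of the setting Gerald describes, not the 2011 UI:

```yaml
# Illustration: modules-era app.yaml equivalents of the
# Admin Console performance sliders.
automatic_scaling:
  max_idle_instances: 1      # don't keep spare instances warm
  min_pending_latency: 30ms  # but let the scheduler react quickly
```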




Re: [google-appengine] Re: New Pricing + Long running requests optimization

2011-09-13 Thread Rishi Arora
My app is nearly identical to yours - several concurrent URL fetches are
performed to "gather" content.  And when users access my site, that content
is nicely formatted for them for display.  My solution - part of it has
already been suggested by Jeff - use async URL fetch.  Spinning an instance
for 5 to 10 seconds waiting for your supplier's website to return book
pricing data is a waste of resources.  So, whenever you need to do a URL
fetch, consider deferring it by queueing it up in a pull queue.  Then once
you have enough deferred (10 is a good number), call URL fetch
simultaneously for all 10 requests.  Another optimization is to use
the 9 free hours of backend time.  I agree you can't have the backend
running all the time.  So, wake up the backend with a cron job that runs once
every hour.  This will incur a minimum cost of 15 minutes per hour = 6
instance hours - which is under the free quota.  Each time your backend
wakes up, it looks up all the URL fetch requests deferred in the last hour,
and processes them.  My app does exactly this, and it takes me about 45
seconds to fetch and process all data for ~300 URL fetches every hour.

If you are attempting to stay within the free quota, absolutely use the
backend hours in any way you can.  It'll be a pity not to use those free 9
instance hours.
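Outside the App Engine SDK, the defer-then-batch pattern Rishi describes can be sketched like this - a plain list stands in for the pull queue and a thread pool stands in for async urlfetch; `fake_fetch`, the batch size, and the URLs are all invented for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

BATCH_SIZE = 10
pending = []   # stand-in for a pull queue of deferred fetch requests
results = {}

def fake_fetch(url):
    # Placeholder for an HTTP GET to a supplier site.
    return "price-for-" + url

def defer_fetch(url):
    """Queue a fetch; once BATCH_SIZE are waiting, run them concurrently."""
    pending.append(url)
    if len(pending) >= BATCH_SIZE:
        batch = pending[:]
        del pending[:]
        # Fire the whole batch at once instead of one request at a time.
        with ThreadPoolExecutor(max_workers=BATCH_SIZE) as pool:
            for u, price in zip(batch, pool.map(fake_fetch, batch)):
                results[u] = price

for i in range(10):
    defer_fetch("book-%d" % i)
print(len(results))  # 10
```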

On Tue, Sep 13, 2011 at 8:43 PM, Tim Hoffman  wrote:

> Hi
>
> You could submit the request via ajax back to your appengine app
> and it can then do an async request on all the urls.
>
> In your case you have some of the info already  and have to fetch some of
> it.
> So it might be two ajax calls, one to get the list of books, the result is
> book prices for stuff you know, plus an indicator of the books that a
> further request
> will be required, your front end can then display the details you have,
> submit another
> ajax request to appengine to fetch results for the books you currently have
> no info on.
> Which can then  async urlfetch the rest of the details.
>
> This way the user gets some info straight away and you get to keep your requests
> to a minimum
> and fill in the results later.
>
> Just a thought ;-)
>
> T
>




[google-appengine] Re: why Datastore Key Fetch Ops is so many?

2011-09-13 Thread saintthor
Yes, I use count.

What can I use instead?

On 9月14日, 上午2时28分, JH  wrote:
> Yes, I found that counts absolutely kill your small datastore ops.  Of
> course it has never been recommended to use .count()... but if you do, you
> will not be able to stay within the free quota...
>
> On Sep 13, 12:51 pm, "Gregory D'alesandre"  wrote:
>
> > > Doing a count uses key fetch ops - is it possible you have a few counts
> > > in your code?
>
> > Greg
>
> > On Tue, Sep 13, 2011 at 10:14 AM, saintthor  wrote:
> > > now, my quota:
>
> > > > Datastore Entity Fetch Ops    0%    17,400 of Unlimited     Okay
> > > > Datastore Entity Put Ops      0%    136 of Unlimited        Okay
> > > > Datastore Entity Delete Ops   0%    0 of Unlimited          Okay
> > > > Datastore Index Write Ops     0%    240 of Unlimited        Okay
> > > > Datastore Query Ops           0%    343 of Unlimited        Okay
> > > > Datastore Key Fetch Ops       0%    208,358 of Unlimited    Okay
>
> > > Datastore Key Fetch Ops is much more than others. what may cause this?
>




Re: [google-appengine] Min Pending Latency -- does it really do anything?

2011-09-13 Thread Rishi Arora
It was announced just 4 days ago, in response to developer concerns about
small apps.
http://googleappengine.blogspot.com/2011/09/few-adjustments-to-app-engines-upcoming.html

On Tue, Sep 13, 2011 at 5:52 PM, dloomer  wrote:

> Interesting.  I didn't know about the 28 instance hours, since under the
> "Estimated Charges Under New Pricing" section of my billing history it still
> shows 24 free instance hours.  Thanks for that info.
>




[google-appengine] Re: New Pricing + Long running requests optimization

2011-09-13 Thread Tim Hoffman
Hi

You could submit the request via ajax back to your appengine app,
and it can then do an async request on all the urls.

In your case you have some of the info already and have to fetch some of 
it. So it might be two ajax calls: one to get the list of books, where the result is 
the book prices for the stuff you know, plus an indicator of the books for which a 
further request will be required. Your front end can then display the details 
you have and submit another ajax request to appengine to fetch results for the 
books you currently have no info on, which can then async urlfetch the rest of 
the details.

This way the user gets some info straight away, you get to keep your requests 
to a minimum, and you fill in the results later.

Just a thought ;-)

T
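A rough sketch of the two-call flow above, with plain functions standing in for the ajax handlers, an in-memory dict standing in for the datastore, and all book ids and prices invented:

```python
known_prices = {"book1": 9.99, "book2": 12.50}  # prices already in the db

def first_call(book_ids):
    """First ajax call: return known prices plus the ids still to fetch."""
    known = {b: known_prices[b] for b in book_ids if b in known_prices}
    missing = [b for b in book_ids if b not in known_prices]
    return {"prices": known, "missing": missing}

def fetch_from_suppliers(book_id):
    # Placeholder for the async urlfetch out to the suppliers.
    return 19.99

def second_call(missing_ids):
    """Second ajax call: fetch and return the prices we had no info on."""
    return {b: fetch_from_suppliers(b) for b in missing_ids}

resp = first_call(["book1", "book3"])
print(resp["prices"], resp["missing"])  # {'book1': 9.99} ['book3']
late = second_call(resp["missing"])     # front end fills these in later
print(late)  # {'book3': 19.99}
```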




[google-appengine] Can i use game engines like jMonkey or Panda3d?

2011-09-13 Thread Jayjay
Can I use game engines like jMonkey (Java) or Panda3D 
(Python), or any similar game engine?

I know that most of them have networking libraries that use sockets, which are 
not supported by GAE, but what if I don't use those libraries? I am going to 
use the Channel API.

Basically I want to use these engines while not calling the classes that are 
not supported by GAE. Is this possible, or will GAE produce an error just 
because they are referenced in the program?




[google-appengine] Re: New Pricing + Long running requests optimization

2011-09-13 Thread ESPR!T
Sorry, could you be more specific?

I have to display a page to the user while I am fetching the prices in the
background (sometimes I have prices for 10 suppliers already in the db and
I need to fetch just 10 more). I thought that async url fetch just means
you can call a fetch in the background and then display the prices, so in
my case if the fastest supplier answers in 1 sec and the slowest in 30 secs,
the user would have to wait 30 secs for the results - or maybe I just
didn't get what you meant.
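For what it's worth, the arithmetic behind the two approaches: with one async batch the page still waits for the slowest supplier, but the fetches overlap, so the wall time is the maximum latency rather than the sum; splitting fast from slow suppliers then lets the first results land much sooner. The latencies below are hypothetical:

```python
latencies = [1, 2, 3, 5, 8, 30]  # seconds per supplier, invented numbers

sequential = sum(latencies)  # one fetch after another
one_batch = max(latencies)   # all fetches in flight at once
print(sequential, one_batch)  # 49 30

# Splitting into fast/slow batches shows the fast results after
# max(fast) seconds instead of max(all):
fast = [t for t in latencies if t <= 10]
print(max(fast))  # 8
```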

On Sep 14, 12:09 pm, JH  wrote:
> sounds like you could use async urlfetch instead of 20 separate tasks?
>
> On Sep 13, 6:43 pm, "ESPR!T"  wrote:
>
> > Hello guys,
>
> > I would like to get some advice on how to deal with long running requests
> > when the new pricing applies.
>
> > My app searches for the cheapest book prices on the internet. So
> > when a user searches for a book and the price is not in the database, I
> > have to check 20+ sellers for the current price (some of them have an
> > API, and sometimes I get the info from their site by parsing the page). I
> > have to display these prices to the user instantly - basically I would
> > like to get to 10 seconds at max. I am using tasks, so when a user
> > displays a page, I create a task for each supplier and update the page
> > through ajax while the user is waiting for results. On the old pricing
> > scheme I had no reason for any optimization, as these requests
> > are not CPU intensive; they just take 1-10 seconds to finish depending
> > on the speed of the supplier's source data. App Engine just started
> > multiple instances when needed and I never got over 1 hour of CPU a day.
>
> > But now that they switch to instance time, my projected payments went
> > up from $0 to $2+ per day (just for instance time, plus other
> > fees for database writes/reads). I've set max idle instances to 1
> > and raised the latency to max, which got me back 15 hours of instance
> > time (so I would pay 0-5 cents per day, just for DB operations now).
> > Then I implemented memcache, and I am going to play with cache headers
> > for generated pages.
>
> > The main issue for me now is that my one instance is not able to
> > handle 20+ task requests plus users browsing, so it's slow and it
> > eventually still starts a new instance on peaks (but I was expecting
> > this). So my idea is to have one front instance for all user related
> > stuff and process tasks instantly on another instance. On App Engine
> > I could use backends, but I am not really keen to pay almost $2 per day
> > for running a minimal backend for this type of task.
>
> > So I am deciding to 'outsource' the task processing to some external
> > service that is friendlier to low-CPU/long-latency requests.
> > AFAIK there is Amazon with AWS Elastic Beanstalk (http://
> > aws.amazon.com/elasticbeanstalk/) and Heroku (which is capable of
> > running Java now) - I also still have the option to put my little worker
> > app on some very cheap VPS or my own machine. So basically my GAE
> > instance will queue up all tasks and send them to my external workers,
> > which will then call my app back with an HTTP POST with the results.
>
> > Do you think this is a good approach for my task, or can you see some
> > issues in it? Or maybe you have some other ideas for other providers
> > compatible with the Java environment?




Re: [google-appengine] Re: New Pricing + Long running requests optimization

2011-09-13 Thread Jeff Schnitzer
+1 to this.

There's currently a limit of 10 async requests pending at once, but
the worst case will still be the latency of two calls.

Jeff

On Tue, Sep 13, 2011 at 5:09 PM, JH  wrote:
> sounds like you could use async urlfetch instead of 20 separate tasks?




[google-appengine] Re: New Pricing + Long running requests optimization

2011-09-13 Thread JH
sounds like you could use async urlfetch instead of 20 separate tasks?
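
The fan-out JH suggests can be sketched in plain Python. This is a generic illustration using `concurrent.futures` as a stand-in (the classic App Engine runtime would use its own async urlfetch API instead, and `fetch` here is any callable, not a real HTTP client); the pool size caps how many requests are in flight at once.

```python
import concurrent.futures

MAX_IN_FLIGHT = 10  # mirrors the pending-async-request cap mentioned in-thread

def fetch_all(urls, fetch, limit=MAX_IN_FLIGHT):
    """Run `fetch` over every URL with at most `limit` calls in flight.

    `fetch` is any callable taking a URL and returning a result; with 20
    suppliers and a cap of 10, the worst case is roughly two rounds of
    latency rather than 20 sequential calls.
    """
    results = {}
    with concurrent.futures.ThreadPoolExecutor(max_workers=limit) as pool:
        futures = {pool.submit(fetch, url): url for url in urls}
        for done in concurrent.futures.as_completed(futures):
            results[futures[done]] = done.result()
    return results
```

Used from the page handler, this replaces 20 enqueued tasks with one request that blocks only as long as the slowest supplier batch.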





[google-appengine] Re: Does read/write an entity with arraylist cost more than an entity w/o arraylist

2011-09-13 Thread Bendanpa
My case is unindexed. I also want to know whether it matters how large the 
list is. Could anyone please help?

Thanks,
Bendanpa




[google-appengine] New Pricing + Long running requests optimization

2011-09-13 Thread ESPR!T
Hello guys,

I would like to get advice on how to deal with long-running requests
when the new pricing applies.

My app searches for the cheapest book prices on the internet. So
when a user searches for a book and the price is not in the database, I
have to check 20+ sellers for the current price (some of them have an
API; sometimes I get the info from their site by parsing the page). I
have to display these prices to the user almost instantly - basically I
would like to stay under 10 seconds at most. I am using tasks, so when a
user displays a page, I create a task for each supplier and update the
page through ajax while the user is waiting for results. Under the old
pricing scheme I had no reason for any optimization, as these requests
are not CPU intensive; they just take 1-10 seconds to finish, depending
on the speed of the supplier's source data. App Engine simply started
multiple instances when needed, and I never got over 1 hour of CPU in a day.

But now that they are switching to instance time, my projected payments
went up from $0 to $2+ per day (just for instance time, plus additional
fees for database writes/reads). I've set max idle instances to 1 and
raised the pending latency to the maximum, which got me back to 15 hours
of instance time (so I would pay 0-5 cents per day, just for DB
operations, now). Then I implemented memcache, and I am going to play
with cache headers for generated pages.
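
A minimal sketch of the "fetch once, then serve from cache" idea described above, using a plain in-process dictionary as the cache; on App Engine the memcache API would play this role, and the one-day TTL and key names here are illustrative only.

```python
import time

class TTLCache:
    """Tiny TTL cache: serve a stored price until it expires, then refetch."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (value, expiry timestamp)

    def get_or_fetch(self, key, fetch, now=None):
        now = time.time() if now is None else now
        hit = self.store.get(key)
        if hit is not None and hit[1] > now:
            return hit[0]                   # cache hit: no supplier call
        value = fetch(key)                  # miss or stale: refetch
        self.store[key] = (value, now + self.ttl)
        return value

cache = TTLCache(ttl_seconds=86400)         # cache prices for one day
price = cache.get_or_fetch("isbn-123", lambda key: 9.99)  # stubbed fetcher
```

Each book's price is fetched from the suppliers at most once per TTL window; every other lookup within that window is served from memory.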

The main issue for me now is that my one instance is not able to
handle 20+ task requests plus users browsing, so it's slow and it
eventually still starts a new instance at peaks (but I was expecting
this). So my idea is to have one front instance for all user-related
work and process tasks immediately on another instance. On App Engine
I could use backends, but I am not really keen to pay almost $2 per day
for running a minimal backend for this type of task.

So I am considering 'outsourcing' the task processing to some external
service that is friendlier to low-CPU, long-latency requests. AFAIK
there is Amazon with AWS Elastic Beanstalk (http://
aws.amazon.com/elasticbeanstalk/) and Heroku (which can run Java
now) - I also still have the option to put my little worker
app on some very cheap VPS or my own machine. So basically my GAE
instance will queue up all tasks and send them to my external workers,
which will then call my app back with an HTTP POST containing the results.

Do you think this is a good approach for my task, or can you see some
issues with it? Or maybe you have some other ideas for other providers
compatible with a Java environment?
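
The queue-and-call-back flow proposed above might look like this on the worker side - a sketch with stubbed `fetch_price` and `post_back` callables (in practice the post-back would be a real HTTP request, and `/api/price-result` is a hypothetical endpoint name, not part of the original post):

```python
import json

def process_jobs(jobs, fetch_price, post_back, callback_url):
    """External worker loop (sketch): fetch each queued price request,
    then POST the JSON result back to the App Engine app."""
    for job in jobs:
        price = fetch_price(job["supplier"], job["book_id"])
        payload = json.dumps({"book_id": job["book_id"],
                              "supplier": job["supplier"],
                              "price": price})
        post_back(callback_url, payload)  # a real HTTP POST in practice

# Stubbed wiring for illustration only.
sent = []
process_jobs(
    jobs=[{"supplier": "s1", "book_id": "b1"}],
    fetch_price=lambda supplier, book: 12.5,
    post_back=lambda url, body: sent.append((url, body)),
    callback_url="/api/price-result",  # hypothetical callback endpoint
)
```

The GAE side only enqueues job descriptions and handles the callback POSTs, so the long-latency supplier calls never consume frontend instance hours.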




[google-appengine] Re: Min Pending Latency -- does it really do anything?

2011-09-13 Thread dloomer
Serving the images by itself isn't much of an issue.  But every request to 
my page is still going to need a call to a regular webapp handler to serve 
up the HTML surrounding the images.  And if I use blobstore for the uploads, 
that still will require HTTP requests.  So I think I'd end up using the same 
number of HTTP requests for both the web pages and the uploader as I'm 
currently using (and actually the uploader might require an extra round trip 
to get the blobstore upload URL).




Re: [google-appengine] Min Pending Latency -- does it really do anything?

2011-09-13 Thread dloomer
Interesting.  I didn't know about the 28 instance hours, since under the 
"Estimated Charges Under New Pricing" section of my billing history it still 
shows 24 free instance hours.  Thanks for that info.




[google-appengine] Re: Min Pending Latency -- does it really do anything?

2011-09-13 Thread prgmratlarge
Have you looked into using the Blobstore? You can serve the images
without using instances (or cpu).





Re: [google-appengine] Min Pending Latency -- does it really do anything?

2011-09-13 Thread Rishi Arora
It really doesn't matter if a second instance kicks in to process your
user-facing requests.  If your max_idle_instances is set to 1, then you're
only paying for one idle instance at any given time.  Remember that
"max_idle_instances=1" doesn't mean "max_instances=1".  I do share your
concerns, and I'm curious to know too why the scheduler is starting a
second instance even though min pending latency is set to 15 seconds.
 Nevertheless, you should still be able to stay under the free quota of 28
instance hours.  The periodic image uploads will take up 24 instance hours,
and any extra processing time that those uploads require, and any
user-facing requests require will most likely fit into the remaining 4
instance hours.  Any idle time for the second (or third, or fourth) instance
will not be billed to you (only the processing time of the extra instances
will be billed).  This was in fact the motivation behind increasing the free
instance hours from 24 to 28.





[google-appengine] Re: Happy Programmer Day

2011-09-13 Thread Rohan Chandiramani
Happy Programmer Day!

Let's see... my app is pretty much done, so the only issue I'm tackling is:

 * Getting AdSense approved - this is my worst fear of all, since I don't 
have any 'original content'. :(





[google-appengine] Min Pending Latency -- does it really do anything?

2011-09-13 Thread dloomer
I have a simple webcam app used by a maximum of maybe 3 people at any time, 
which also handles requests from a batch process initiated from my house 
which uploads a new image to my app every 15 seconds via HTTP call to my 
app's frontend.  My goal is to get the app running on just a single frontend 
instance at all times, making this as close to a free app as possible, but 
this is proving much more difficult than I thought it would.

When no one is connected to my app, all the uploads go to a single instance. 
 All the requests complete with around 200ms latency.

However, as soon as one user accesses the main page of my app, a new 
instance spins up.  *This in spite of the fact that a browser request to my 
app typically completes in well under a second, and Min Pending Latency is 
set to 15 seconds.*

What is it that would make the scheduler think that one instance won't 
handle both sets of requests, when the Min Pending Latency is set so high 
and none of the requests come anywhere near this threshold? One theory: I 
remember reading on these forums a while back, under a topic regarding 
keeping an instance "always on", that the scheduler has strategies to 
prevent you from keeping an instance "always on" by constantly pinging it. 
My 15-second periodic upload is similar to a ping in this sense.  I don't 
intend anything nefarious, but the scheduler wouldn't know this, and maybe 
is just trying to close a loophole that someone else could exploit.

I'd use a backend to handle the image uploads, but I need it running 24 
hours and a 24-hour backend isn't free.

I don't think this would work as a "free" app by Google's billing terms 
(which I believe would restrict me to a single frontend by default), as it's 
likely I'll go over quota on datastore operations fairly regularly.

Any ideas on how I can keep my app as cheap as possible without sacrificing 
functionality?




[google-appengine] SSL communication with external app not working.

2011-09-13 Thread Hynek
Hi,

at first let me say that I am a newbie at developing apps for "Google 
App Engine".
What I am trying to achieve is to port our existing application (Java) to 
run on "Google App Engine".

However, the application will need to transfer data via SSL with an external 
site.
Currently I am using classes from "javax.net.ssl.*" to achieve this.
But when I try to use them I get the error "javax.net.ssl.SSLSocketFactory 
is not supported by Google App Engine's Java runtime environment".
(I did find the whitelist of 
supported classes, and "javax.net.*" is not listed.)

Is there a workaround so that I could use these classes, or is this 
prohibited?

Moreover, my current app needs additional root certificates added 
to the JRE's cert store. (Currently I do this with "keytool.exe".)
Would it even be possible to add these in a "Google App 
Engine" application?

Thank you in advance

Hynek






Re: [google-appengine] Sqlite3 for backends

2011-09-13 Thread Andrin von Rechenberg
The idea would be to use sqlite in memory ( sqlite3.connect(":memory:") ) so
it would be ultra fast.
That's why I was talking about using it in backends... :)

-Andrin
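
For reference, this is the plain-Python usage in question - `sqlite3` ships with CPython, but was not on the App Engine whitelist at the time; the table and data here are illustrative.

```python
import sqlite3

# An in-memory SQLite database lives entirely in the process's RAM,
# so reads and writes avoid disk I/O completely.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE prices (book TEXT, supplier TEXT, price REAL)")
conn.executemany("INSERT INTO prices VALUES (?, ?, ?)",
                 [("b1", "s1", 9.99), ("b1", "s2", 8.49)])
cheapest = conn.execute(
    "SELECT MIN(price) FROM prices WHERE book = ?", ("b1",)).fetchone()[0]
print(cheapest)  # 8.49
```

A resident backend could keep such a database in memory across requests, giving SQL-style queries without datastore round trips.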

On Mon, Sep 12, 2011 at 4:18 PM, Bart Thate  wrote:

> Oi !
>
> Agree completely, in fact really forgot about it ;]
> I thought it was on the roadmap, not sure though .. maybe 2.7 supports
> sqlite as the python module is already builtin ?
> Hmm just checked the roadmap and only 2.7 support is on it, lets hope they
> dont exclude "import sqlite3" from the deal.
>
> Bart
>
> programming schizofrenic -  http://tinyurl.com/bart-thate
>
>
>
>
> On Mon, Sep 12, 2011 at 4:07 PM, Andrin von Rechenberg  > wrote:
>
>> Hi there
>>
>> Is there any plan to support the sqlite3 module in python in GAE?
>> It would be very useful if someone would want to build an SQL like
>> backend.
>>
>> -Andrin
>>
>




[google-appengine] Error "javax.net.ssl.SSLSocketFactory not supported by Google App Engine's Java runtime environment

2011-09-13 Thread Hynek
Hi all,

I am a newbie to developing applications for "Google App Engine".
What I am trying to achieve is to port one of our apps so that it can
run on "Google App Engine".

My existing code needs to transfer data via SSL with another
application.
For this I am using classes from "javax.net.ssl.*".
However, I immediately get the error that this class is not supported
by Google App Engine's JRE.

In your documentation I found this white-list of supported classes:
http://code.google.com/appengine/docs/java/jrewhitelist.html
(javax.net.ssl.* is not listed.)

Is there a workaround for using these unlisted classes?
Is there any other way to transfer data via SSL with another application?

Thank you in advance.

Hynek




Re: [google-appengine] Re: backend vs frontend instances

2011-09-13 Thread Michael Quartly
my main app's id is xanthus-ms and my test app is xq-remake




[google-appengine] Re: Request queue for dynamic backends - how does it function specifically?

2011-09-13 Thread Jason Collins
That's not what I was asking, though thanks for the information.

It appears that my task queues that feed the backends are backing up
(because there are not enough instances, by configuration), so I'm
guessing that's my answer.

j

On Sep 13, 1:38 pm, Rishi Arora  wrote:
> Check this out:
> http://code.google.com/appengine/docs/python/taskqueue/overview-push
>
> This suggests that tasks pushed onto taskqueues must get executed within 10
> minutes, in the case of front-end instances.  But backends are exempt from
> this limit.  This implies that when you call
> taskqueue.add(taskqueue.Task(url=[xyz], target=[my backend name])), then you
> have more than 10 minutes to execute the task.  I don't know the exact
> deadline, but it seems safe to assume it is a lot longer than 10 minutes.
>
> Also, one alternative for driving your backends is through cron jobs and
> pull queues.  This is how we recently re-designed our app to try to lower
> our billable instance hours.  We have one backend configured, with num
> instances set to 1.  This means we will never have more than 1 backend
> instances.  Any tasks that we want to have this backend process, we enqueue
> them to a "pull queue". This enqueue operation does not trigger the backend.
>  We trigger the backend once every hour using a cron job.  When the backend
> wakes up from this trigger, it starts calling the "lease_tasks" method on the
> pull queue to start dequeueing pending requests one at a time.  When all
> requests are processed serially, the backend goes idle.  The GAE scheduler
> will wait around 5 minutes to stop this backend.  But you'll be billed for
> 15 minutes after the backend goes idle.  Of course this design assumes that
> the kind of requests we enqueue to the pull queue don't require immediate
> processing.  In our case, we can tolerate an hour of delay before these
> requests are processed.  But these are the kinds of things backends are most
> suitable for anyways.
>
> Hope this helps.
>
> Rishi
>
> On Tue, Sep 13, 2011 at 12:28 PM, Jason Collins
> wrote:
>
>
>
>
>
>
>
> > We are moving much of our taskqueue work to dynamic backends.
>
> > One obvious question we're faced with is "how many (max) instances do
> > we need for our background work?"
>
> > If we are feeding all of our work to our dynamic backends via
> > taskqueue, will we see the queues get backed up if the backend
> > instances cannot keep up?
>
> > Or, alternatively, do the queued tasks pop off at their configured
> > rate and drop into a different request queue for the backend pool?
>
> > If the latter, how long will the requests stay on this other "backend
> > pool request queue" (e.g., they will stay for 10s on the front-end
> > instance request queue, or have in the past; is there an equivalent
> > timeout for backends)? Is there any way to get visibility into this
> > other queue, if it exists?
>
> > Thanks for any info,
> > j
>
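
The lease-and-process loop Rishi describes can be sketched generically. This stand-in uses a stdlib `queue.Queue` in place of an App Engine pull queue, and `handle` is a stubbed task processor; on GAE the real calls would be the pull queue's `lease_tasks` and `delete_tasks`.

```python
import queue

def drain(pull_queue, handle, batch_size=10):
    """Serially drain pending work, mirroring the lease-and-delete loop:
    take up to batch_size items, process each, repeat until empty."""
    processed = 0
    while True:
        batch = []
        while len(batch) < batch_size:
            try:
                batch.append(pull_queue.get_nowait())
            except queue.Empty:
                break
        if not batch:
            return processed            # queue empty: backend can go idle
        for task in batch:
            handle(task)                # on GAE: then delete the leased task
            processed += 1

q = queue.Queue()
for i in range(25):
    q.put(i)
assert drain(q, handle=lambda task: None) == 25
```

Because a single backend instance drains everything serially, enqueue operations never wake extra instances; only the hourly cron trigger does.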




[google-appengine] Re: Cron job every 30 minutes only on work days

2011-09-13 Thread Ian Marshall
The GAE notes are confusing to me for your scenario. Have you tried:

1.  Setting up 19 cron jobs of the form
      every mon,tue,wed,thu,fri HH:MM
    where HH:MM is: 09:00, 09:30, 10:00, ..., 15:00

2.  Setting up 1 cron job of the form
      every mon,tue,wed,thu,fri 09:00
    and using this cron job to do stuff and then set a deferred task
    for 30 minutes' time (if earlier than 15:00).
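
Option 1 above could be sketched as cron.xml entries. This assumes the Java runtime's cron.xml format; the URL and descriptions are illustrative, and the remaining half-hour slots follow the same pattern.

```xml
<cronentries>
  <cron>
    <url>/cron/odgodjeni</url>
    <description>Weekday run, 09:00</description>
    <schedule>every mon,tue,wed,thu,fri 09:00</schedule>
    <timezone>Europe/Zagreb</timezone>
  </cron>
  <cron>
    <url>/cron/odgodjeni</url>
    <description>Weekday run, 09:30</description>
    <schedule>every mon,tue,wed,thu,fri 09:30</schedule>
    <timezone>Europe/Zagreb</timezone>
  </cron>
  <!-- ...one entry per half hour, up to 15:00 -->
</cronentries>
```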


On Sep 13, 2:52 pm, Matija  wrote:
> Hi,
> is there a way to define something like this ?
>
> <cron>
>         <url>/cron/odgodjeni</url>
>         <description>Jada jada</description>
>         <schedule>every 30 minutes every mon,tue,wed,thu,fri from 06:00 to 15:00</schedule>
>         <timezone>Europe/Zagreb</timezone>
> </cron>
>
> I don't want my cron job to start on Saturdays and Sundays, especially with
> the new 15-minute idle instance billing window.
>
> Tnx.




[google-appengine] Re: Happy Programmer Day

2011-09-13 Thread NG
Same to you :)

Even though the additional work-load is not quite appreciated at the moment, 
I think if someone can deal with it... ;)




Re: [google-appengine] DeadlineExceededError: The API call mail.Send() took too long to respond and was cancelled

2011-09-13 Thread Rishi Arora
Cool.  Thanks.  Then I can safely set my max_idle_instances back to 1.  I
also noticed high email latencies on the receive side.  I have a user facing
email address on a personal domain hosted on Google, whose sole purpose is
to automatically forward emails to my app at appspotmail.com (using a Gmail
filter).  I noticed that an email showed up in my app's logs nearly 18 hours
after it was sent.  I can imagine that there are several components at play,
many outside of GAE (such as Gmail, etc), but it seems more than a
coincidence that I'm seeing delays and timeouts on the mail.send() side at
the same time.


On Tue, Sep 13, 2011 at 2:57 PM, Joshua Smith wrote:

> Coincidence.  I've noticed a surge in mail timeouts over the past few days.
>  As I said on a different thread, this is really stupid - google should be
> able to send mail without EVER having a timeout. For now, you need to always
> send mail from a task, because of these ridiculous exceptions.
>
> On Sep 13, 2011, at 3:44 PM, Rishi Arora wrote:
>
> > I received at least 10 instances of these errors in one day yesterday,
> out of around ~50 emails that were sent through the day.  In the last few
> months that my app has been executing, I have never seen this.  Searching on
> Google revealed that the best cure is to send emails in the context of a
> task-queue request instead of a user facing request.  That makes sense.
>  However, yesterday was also the day I rolled out instance-hour-saving
> optimizations, and reduced my max_idle_instances to 1, while setting
> min_pending_latency to 200ms.  This scheduler parameter change has not had
> any effect on overall performance of my app, in terms of average latencies,
> etc.  I'm wondering if it is just a coincidence that my mail.send() call
> timedout, or was it somehow related to my scheduler parameter changes.  Any
> thoughts?
> >
> > Thanks in advance.
> > Rishi
> >




Re: [google-appengine] DeadlineExceededError: The API call mail.Send() took too long to respond and was cancelled

2011-09-13 Thread Joshua Smith
Coincidence.  I've noticed a surge in mail timeouts over the past few days.  As 
I said on a different thread, this is really stupid - google should be able to 
send mail without EVER having a timeout. For now, you need to always send mail 
from a task, because of these ridiculous exceptions.
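The task-queue workaround works because a task that dies with a DeadlineExceededError is simply re-run. On App Engine the usual one-liner is roughly `deferred.defer(mail.send_mail, sender=..., to=..., subject=..., body=...)` from `google.appengine.ext.deferred`. The retry behaviour that buys you can be illustrated with a self-contained sketch; the names here are illustrative, not part of any GAE API:

```python
import time

class TransientSendError(Exception):
    """Stands in for the DeadlineExceededError raised by mail.Send()."""

def send_with_retries(send_fn, attempts=3, backoff=0.0):
    """Call send_fn, retrying on transient failures - roughly the behaviour
    the task queue gives you for free when a failed task is re-run."""
    for attempt in range(1, attempts + 1):
        try:
            return send_fn()
        except TransientSendError:
            if attempt == attempts:
                raise                      # out of retries, surface the error
            time.sleep(backoff * attempt)  # simple linear backoff between tries
```

With a real task queue you also get this retry logic outside the user-facing request, so the user never waits on a slow mail API call.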

On Sep 13, 2011, at 3:44 PM, Rishi Arora wrote:

> I received at least 10 instances of these errors in one day yesterday, out of 
> around ~50 emails that were sent through the day.  In the last few months 
> that my app has been executing, I have never seen this.  Searching on Google 
> revealed that the best cure is to send emails in the context of a task-queue 
> request instead of a user facing request.  That makes sense.  However, 
> yesterday was also the day I rolled out instance-hour-saving optimizations, 
> and reduced my max_idle_instances to 1, while setting min_pending_latency to 
> 200ms.  This scheduler parameter change has not had any effect on overall 
> performance of my app, in terms of average latencies, etc.  I'm wondering if 
> it is just a coincidence that my mail.send() call timedout, or was it somehow 
> related to my scheduler parameter changes.  Any thoughts?
> 
> Thanks in advance.
> Rishi
> 




[google-appengine] Re: Unexpected Entity Group transaction contention

2011-09-13 Thread Mike Wesner
Any time you write or update an entity in a group (i.e. one that shares a
common ancestor) you lock the entire entity group, all the way up to the
topmost parent.  All branches under that parent... the entire tree.

Multi-entity-group transactions will allow you to write to multiple groups,
but they will not reduce contention in this scenario at all.

Mike

On Sep 13, 2:23 pm, Brian Olson  wrote:
> Re-reading the documentation, this kinda makes sense, but it bit me recently
> so I want to tell the story and see what others think.
>
> I make an entity Parent(). Some time later I make an entity
> Child(parent=some_parent) and I do this in a transaction. I do this a bunch,
> concurrently from task-queue entries.
>
> I was surprised to learn that simply creating a Child in a transaction,
> without otherwise doing anything to the parent, neither .get() nor .put(),
> locks the parent and all its children.
>
> def txn_make_child(some_parent):
>   foo = Child(parent=some_parent)
>   foo.put()
>   # also transactionally enqueue a task to operate on the Child instance foo
>
> Code very much like that was failing out due to too many transaction
> retries. I didn't expect *any* transaction contention, because I thought I
> was just creating an object and enqueueing a task, and those were the only
> two things in the transaction in my head. But it turns out the above code
> locks some_parent and all its children. Boo.
>
> I think I was expecting things like this to lock parent and all its
> children:
> def txn_p_c_example(parent_key, child_key):
>   parent = db.get(parent_key)
>   child = db.get(child_key)
>   # now they're clearly both involved, and involving the parent winds up
> locking all the children. I can accept that.
>   parent.put()
>   child.put()
>
> I was able to re-code it to make Child have no ancestor, but there are still
> times when I would much rather still commit parent and child at exactly the
> same time.




[google-appengine] DeadlineExceededError: The API call mail.Send() took too long to respond and was cancelled

2011-09-13 Thread Rishi Arora
I received at least 10 instances of these errors in one day yesterday, out
of around ~50 emails that were sent through the day.  In the last few months
that my app has been executing, I have never seen this.  Searching on Google
revealed that the best cure is to send emails in the context of a task-queue
request instead of a user facing request.  That makes sense.  However,
yesterday was also the day I rolled out instance-hour-saving optimizations,
and reduced my max_idle_instances to 1, while setting min_pending_latency to
200ms.  This scheduler parameter change has not had any effect on overall
performance of my app, in terms of average latencies, etc.  I'm wondering if
it is just a coincidence that my mail.send() call timed out, or whether it was
somehow related to my scheduler parameter changes.  Any thoughts?

Thanks in advance.
Rishi




Re: [google-appengine] Request queue for dynamic backends - how does it function specifically?

2011-09-13 Thread Rishi Arora
Check this out:
http://code.google.com/appengine/docs/python/taskqueue/overview-push.html#Push_Queues_and_Backends

This suggests that tasks pushed onto taskqueues must get executed within 10
minutes, in the case of front-end instances.  But backends are exempt from
this limit.  This implies that when you call
taskqueue.add(taskqueue.Task(url=[xyz], target=[my backend name])), then you
have more than 10 minutes to execute the task.  I don't know the exact
deadline, but it seems safe to assume it is a lot longer than 10 minutes.

Also, one alternative for driving your backends is through cron jobs and
pull queues.  This is how we recently re-designed our app to try to lower
our billable instance hours.  We have one backend configured, with num
instances set to 1, which means we will never have more than one backend
instance.  Any tasks that we want this backend to process, we enqueue
to a "pull queue".  This enqueue operation does not trigger the backend.
We trigger the backend once every hour using a cron job.  When the backend
wakes up from this trigger, it starts calling the "lease_tasks" method on the
pull queue to dequeue pending requests one at a time.  When all
requests have been processed serially, the backend goes idle.  The GAE
scheduler will wait around 5 minutes to stop this backend, but you'll be
billed for 15 minutes after the backend goes idle.  Of course, this design
assumes that the kind of requests we enqueue to the pull queue don't require
immediate processing.  In our case, we can tolerate an hour of delay before
these requests are processed.  But these are the kinds of things backends are
most suitable for anyway.
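The lease/process/delete loop described above looks roughly like the sketch below. `FakeQueue` is a stand-in for `taskqueue.Queue` so the example is self-contained; `lease_tasks(lease_seconds, max_tasks)` and `delete_tasks(tasks)` are the pull-queue API method names, but treat the exact signatures as an assumption to check against the docs.

```python
def drain(queue, handle, batch_size=10, lease_seconds=60):
    """The hourly cron handler's loop: lease pending tasks from the pull
    queue, process them serially, delete them, and stop when it is empty."""
    processed = 0
    while True:
        tasks = queue.lease_tasks(lease_seconds, batch_size)
        if not tasks:
            break                      # queue drained; the backend goes idle
        for task in tasks:
            handle(task.payload)       # do the actual work for one request
            processed += 1
        queue.delete_tasks(tasks)      # delete only after successful handling
    return processed

class FakeTask:
    def __init__(self, payload):
        self.payload = payload

class FakeQueue:
    """In-memory stand-in for taskqueue.Queue, ignoring lease expiry."""
    def __init__(self, payloads):
        self._tasks = [FakeTask(p) for p in payloads]
    def lease_tasks(self, lease_seconds, max_tasks):
        return self._tasks[:max_tasks]
    def delete_tasks(self, tasks):
        self._tasks = self._tasks[len(tasks):]
```

Deleting tasks only after they are handled means a crash mid-batch just lets the leases expire and the tasks get re-leased on the next cron trigger.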

Hope this helps.

Rishi


On Tue, Sep 13, 2011 at 12:28 PM, Jason Collins
wrote:

> We are moving much of our taskqueue work to dynamic backends.
>
> One obvious question we're faced with is "how many (max) instances do
> we need for our background work?"
>
> If we are feeding all of our work to our dynamic backends via
> taskqueue, will we see the queues get backed up if the backend
> instances cannot keep up?
>
> Or, alternatively, do the queued tasks pop off at their configured
> rate and drop into a different request queue for the backend pool?
>
> If the latter, how long will the requests stay on this other "backend
> pool request queue" (e.g., they will stay for 10s on the front-end
> instance request queue, or have in the past; is there an equivalent
> timeout for backends)? Is there any way to get visibility into this
> other queue, if it exists?
>
> Thanks for any info,
> j
>




RE: [google-appengine] Re: Something to pass along to the google search team

2011-09-13 Thread Brandon Wirtz
That's all good info, but it doesn't apply if you are on GAE: there you can't
specify your crawl rate; it is assigned a special crawl rate.

 

From: google-appengine@googlegroups.com
[mailto:google-appengine@googlegroups.com] On Behalf Of Tim
Sent: Tuesday, September 13, 2011 7:55 AM
To: google-appengine@googlegroups.com
Subject: [google-appengine] Re: Something to pass along to the google search
team

 

 

Google webmaster tools 

 

  https://www.google.com/webmasters/tools/home

 

lets you (amongst other things) submit sitemaps and see the crawl rate for
your site (for the previous 90 days). There's also a form to report problems
with how googlebot is accessing your site

 

  https://www.google.com/webmasters/tools/googlebot-report

 

The crawl rate is modified to try to avoid overloading your site, but given
that GAE will just fire up more instances, I guess googlebot thinks
your site is built for such traffic and just keeps upping the crawl rate.
You could try to mimic a site being killed by the crawler: keep basic
stats in memcache every time you get hit by googlebot (as identified by
request headers) and, if the requests come too thick and fast, delay the
responses, or simply return a 408, or maybe a 503 or 509, response; my
guess is you'll see the crawl rate back off pretty quickly.

 

  http://en.wikipedia.org/wiki/List_of_HTTP_status_codes

 

Would be nice if robots.txt or sitemap files let you specify a maximum crawl
rate (cf RSS files), or perhaps people agreed on an HTTP status code for
"we're close, but not THAT close..." response to tell crawlers to back off
(418 perhaps:) but I don't expect those standards have moved very much
recently...

 

--

T

 




Re: [google-appengine] Unexpected Entity Group transaction contention

2011-09-13 Thread Steve Sherrie
Testing is presently happening for multi-entity-group transactions, I
believe. I saw a thread about it a week ago.


Steve



On 11-09-13 03:23 PM, Brian Olson wrote:
Re-reading the documentation, this kinda makes sense, but it bit me 
recently so I want to tell the story and see what others think.


I make an entity Parent(). Some time later I make an entity 
Child(parent=some_parent) and I do this in a transaction. I do this a 
bunch, concurrently from task-queue entries.


I was surprised to learn that simply creating a Child in a 
transaction, without otherwise doing anything to the parent, neither 
.get() nor .put(), locks the parent and all its children.


def txn_make_child(some_parent):
  foo = Child(parent=some_parent)
  foo.put()
  # also transactionally enqueue a task to operate on the Child 
instance foo


Code very much like that was failing out due to too many transaction 
retries. I didn't expect /any/ transaction contention, because I 
thought I was just creating an object and enqueueing a task, and those 
were the only two things in the transaction in my head. But it turns 
out the above code locks some_parent and all its children. Boo.


I think I was expecting things like this to lock parent and all its 
children:

def txn_p_c_example(parent_key, child_key):
  parent = db.get(parent_key)
  child = db.get(child_key)
  # now they're clearly both involved, and involving the parent winds 
up locking all the children. I can accept that.

  parent.put()
  child.put()

I was able to re-code it to make Child have no ancestor, but there are 
still times when I would much rather still commit parent and child at 
exactly the same time.




[google-appengine] Unexpected Entity Group transaction contention

2011-09-13 Thread Brian Olson
Re-reading the documentation, this kinda makes sense, but it bit me recently 
so I want to tell the story and see what others think.

I make an entity Parent(). Some time later I make an entity 
Child(parent=some_parent) and I do this in a transaction. I do this a bunch, 
concurrently from task-queue entries.

I was surprised to learn that simply creating a Child in a transaction, 
without otherwise doing anything to the parent, neither .get() nor .put(), 
locks the parent and all its children.

def txn_make_child(some_parent):
  foo = Child(parent=some_parent)
  foo.put()
  # also transactionally enqueue a task to operate on the Child instance foo

Code very much like that was failing out due to too many transaction 
retries. I didn't expect *any* transaction contention, because I thought I 
was just creating an object and enqueueing a task, and those were the only 
two things in the transaction in my head. But it turns out the above code 
locks some_parent and all its children. Boo.

I think I was expecting things like this to lock parent and all its 
children:
def txn_p_c_example(parent_key, child_key):
  parent = db.get(parent_key)
  child = db.get(child_key)
  # now they're clearly both involved, and involving the parent winds up 
locking all the children. I can accept that.
  parent.put()
  child.put()

I was able to re-code it to make Child have no ancestor, but there are still 
times when I would much rather still commit parent and child at exactly the 
same time.
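The locking behaviour that surprised me can be modelled in a few lines: a transaction takes a lock keyed on the *root* of each entity group it touches, so creating a Child under some_parent contends with every other transaction anywhere in that parent's tree, while an ancestor-less entity is its own one-entity group. This is a toy model of the semantics, not datastore code:

```python
class GroupLocks:
    """Toy model of entity-group locking: a transaction locks the root
    of every group it touches, not the individual entities."""

    def __init__(self):
        self.held = set()

    @staticmethod
    def root(key):
        # A key is a path tuple like ('Parent', 1, 'Child', 7);
        # the group root is the first (kind, id) pair.
        return key[:2]

    def begin(self, *keys):
        """Try to start a transaction over the given keys."""
        roots = {self.root(k) for k in keys}
        if roots & self.held:
            return False          # contention: the datastore would retry here
        self.held |= roots
        return True

    def commit(self, *keys):
        self.held -= {self.root(k) for k in keys}
```

Creating Child(parent=some_parent) corresponds to begin(('Parent', 1, 'Child', 7)): it locks ('Parent', 1), so concurrent sibling creations retry against each other, which is exactly the contention described above. Making Child a root entity gives each create its own group.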




[google-appengine] Happy Programmer Day

2011-09-13 Thread Steve
So Slashdot is telling me today is Programmer Day.

Yay for us!  After heavily optimizing my App Engine app, I was lucky enough
to not need to update it for about a year.  Now, with pricing changes coming
down the line, I'm doing massive refactoring to bring it in line with the new
regime.  My head is throbbing from all the changes I'm tackling:

   - GAE 1.3.8 -> 1.5.4
   - M/S -> HR
   - Kay / werkzeug -> webapp2
   - memcache sync calls -> memcache async
   - db.sync -> ndb.async
   - py25 -> py27
   - single-threaded -> multi-threaded
   - min(CPU) -> min(idle time)
   - min(entity size) -> min(separate entities)
   - mapreduce w fanout -> taskqueues (single instance chain)

Happy Programmer Day!




Re: [google-appengine] Re: backend vs frontend instances

2011-09-13 Thread Gregory D'alesandre
The memory allocation will be set to 128M, but our analysis is that it should
impact very few apps.  What is your appid?  I'll see if it is on the list of
apps we were going to contact because they would be affected.

Greg

On Tue, Sep 13, 2011 at 11:25 AM, Michael Quartly <
pleasedontdisablemyacco...@gmail.com> wrote:

> Will the memory be decreased? Because my app is using JRuby, which is very
> memory intensive and I sit around 200MB memory utilisation per app.
>




[google-appengine] Re: why Datastore Key Fetch Ops is so many?

2011-09-13 Thread JH
Yes, I found counts absolutely kill your small datastore ops.  Of
course it has never been recommended to call .count()... but if you do, you
will not be able to stay within the free quota...
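One standard way out is to stop calling .count() and instead maintain the count yourself as entities are written - the classic sharded counter. Here is a dict-backed sketch; the shard count and names are illustrative, and on App Engine each shard would be a small entity updated in its own transaction:

```python
import random

NUM_SHARDS = 20  # assumption: more shards = less write contention

shards = [0] * NUM_SHARDS   # stands in for NUM_SHARDS small counter entities

def increment():
    """Called wherever you put() a counted entity: bump one random shard
    (on App Engine, a transactional update of one small entity)."""
    shards[random.randrange(NUM_SHARDS)] += 1

def get_count():
    """Sum the shards - a fixed handful of fetches, instead of one key
    fetch per entity as with Query.count()."""
    return sum(shards)
```

Sharding matters only because a single counter entity caps your write rate; if the count changes rarely, one counter entity (or even a memcache-backed value) is enough.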

On Sep 13, 12:51 pm, "Gregory D'alesandre"  wrote:
> Doing a count uses key fetch ops; is it possible you have a few counts in
> your code?
>
> Greg
>
>
> On Tue, Sep 13, 2011 at 10:14 AM, saintthor  wrote:
> > now, my quota:
> >
> > Datastore Entity Fetch Ops      0%      17,400 of Unlimited     Okay
> > Datastore Entity Put Ops        0%      136 of Unlimited        Okay
> > Datastore Entity Delete Ops     0%      0 of Unlimited          Okay
> > Datastore Index Write Ops       0%      240 of Unlimited        Okay
> > Datastore Query Ops             0%      343 of Unlimited        Okay
> > Datastore Key Fetch Ops         0%      208,358 of Unlimited    Okay
> >
> > Datastore Key Fetch Ops is much more than the others. what may cause this?
>




Re: [google-appengine] Re: backend vs frontend instances

2011-09-13 Thread Michael Quartly
Will the memory be decreased? Because my app is using JRuby, which is very 
memory intensive and I sit around 200MB memory utilisation per app.




Re: [google-appengine] Re: backend vs frontend instances

2011-09-13 Thread Gregory D'alesandre
All frontends already only have 600MHz allocated and there is no plan to
change this.  So, no, frontends won't be slower after the new billing is
enabled.

Greg

On Tue, Sep 13, 2011 at 11:12 AM, Timofey Koolin
wrote:

> Now frontend have 2.4GHz CPU - I can use 2CPU second per clock second in
> cpu-usage task.
>
> Will frontend slower than now after new billing enable?
>
> 2011/9/13 Timofey Koolin 
>
>> Now frontend have 2.4GHz CPU - I can use 2CPU second per clock second in
>> cpu-usage task.
>>
>> Will frontend slower than now after new billing enable?
>>
>>
>> 2011/9/12 Gregory D'alesandre 
>>
>>> Frontend specs are the same as a B1 backend as defined here:
>>> http://code.google.com/appengine/docs/python/config/backends.html#Instance_Classes
>>>
>>> This is 128M memory limit and 600MHz CPU limit
>>>
>>> Greg
>>>
>>>
>>> On Fri, Sep 2, 2011 at 2:28 PM, GR  wrote:
>>>
 Google, where can we get the frontend instance specs?

>>
>>
>>
>> --
>> С уважением,
>> Кулин Тимофей.
>>
>>
>> ICQ: 114902104
>> email: timo...@koolin.ru
>> Blog: http://timofey.koolin.ru
>>
>>
>
>
> --
> С уважением,
> Кулин Тимофей.
>
> Телефон: +7 (4852) 974793
> ICQ: 114902104
> email: timo...@koolin.ru
> Blog: http://timofey.koolin.ru
>




Re: [google-appengine] Re: backend vs frontend instances

2011-09-13 Thread Timofey Koolin
Right now a frontend has a 2.4GHz CPU - I can use 2 CPU-seconds per clock
second in a CPU-intensive task.

Will frontends be slower than they are now after the new billing is enabled?

2011/9/13 Timofey Koolin 

> Now frontend have 2.4GHz CPU - I can use 2CPU second per clock second in
> cpu-usage task.
>
> Will frontend slower than now after new billing enable?
>
>
> 2011/9/12 Gregory D'alesandre 
>
>> Frontend specs are the same as a B1 backend as defined here:
>> http://code.google.com/appengine/docs/python/config/backends.html#Instance_Classes
>>
>> This is 128M memory limit and 600MHz CPU limit
>>
>> Greg
>>
>>
>> On Fri, Sep 2, 2011 at 2:28 PM, GR  wrote:
>>
>>> Google, where can we get the frontend instance specs?
>>>
>
>
>
> --
> С уважением,
> Кулин Тимофей.
>
>
> ICQ: 114902104
> email: timo...@koolin.ru
> Blog: http://timofey.koolin.ru
>
>


-- 
С уважением,
Кулин Тимофей.

Телефон: +7 (4852) 974793
ICQ: 114902104
email: timo...@koolin.ru
Blog: http://timofey.koolin.ru




Re: [google-appengine] why Datastore Key Fetch Ops is so many?

2011-09-13 Thread Gregory D'alesandre
Doing a count uses key fetch ops; is it possible you have a few counts in
your code?

Greg

On Tue, Sep 13, 2011 at 10:14 AM, saintthor  wrote:

> now, my quota:
>
> Datastore Entity Fetch Ops      0%      17,400 of Unlimited     Okay
> Datastore Entity Put Ops        0%      136 of Unlimited        Okay
> Datastore Entity Delete Ops     0%      0 of Unlimited          Okay
> Datastore Index Write Ops       0%      240 of Unlimited        Okay
> Datastore Query Ops             0%      343 of Unlimited        Okay
> Datastore Key Fetch Ops         0%      208,358 of Unlimited    Okay
>
> Datastore Key Fetch Ops is much more than the others. what may cause this?
>




[google-appengine] Re: Min idle instances setting not working at all?

2011-09-13 Thread Pol
I typed too fast, I meant *max* idle instances in the Dashboard. Sorry
about the confusion.

That said, the description of the slider is not very clear: it's
called "max" which is correct from a billing perspective, but from a
functionality perspective, it really looks like you're setting the
"default" almost "min" number of idle instances, especially
considering that "Always On" disappears in the new system, right?

On Sep 13, 12:48 am, Daniel Florey  wrote:
> Where did you find the min idle instances setting?




Re: [google-appengine] Something to pass along to the google search team

2011-09-13 Thread Tim


On Tuesday, 13 September 2011 16:04:41 UTC+1, Joshua Smith wrote:
>
> Sure, but if they just went breadth-first (putting pages to crawl into the 
> tail of a work queue that spans hundreds of sites), then there wouldn't be a 
> spike at all.
>
>
I expect there's something about wanting to pull back a series of pages from 
a single site together to get a consistent series of pages (especially with 
session cookies and sessions encoded in URLs and the like) not to mention 
little things like HTTP pipelining requests and the internal management of 
assigning machines (including timeouts, failovers and retries), updating 
databases with results and meta-results and 101 other things that I can't 
even start to think about - not to say it can't be done, but I think it'd 
have a lot of hidden implications.

Still, you did say "dunno if it's practical" - I was just wondering about 
other ways to make googlebot more compatible with GAE and GAE like systems.

--
T




[google-appengine] Datastore async ops, query vs get, and batches

2011-09-13 Thread Tim

I'm trying to get rid of queries as much as I can (anticipating the move to 
HR datastore), and using pre-computed keys and lists of keys much more so I 
can simply db.get() items I need without needing to worry about eventual 
consistency of queries (or put my entire datastore into one huge entity 
group).

But one nice thing about using a db.Query as an iterable (this is python 
terminology) is that the query pre-fetches a few results and then lets you 
process them while more results are fetched, which sounds nice and 
efficient, whereas db.get() blocks until it returns a complete list and 
db.get_async() returns a future of a similarly completely fulfilled list.

Short of breaking up my list of keys and doing my own chunking (feels ugly - 
and how do I know if the chunksize I use is counter-productive), the other 
option would be to do a query with "KEY IN" but I note that use of "IN" is 
subject to a maximum of 30 items.
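The chunking option can at least be factored so the ugliness lives in one place — a sketch in plain Python, where `fetch_fn` stands in for `db.get()` (on App Engine it would issue the datastore RPC per chunk):

```python
# Break a key list into fixed-size chunks and yield entities as each
# chunk resolves, instead of blocking on one monolithic get. Whether a
# given chunk_size is counter-productive still has to be measured.
def iter_in_chunks(keys, fetch_fn, chunk_size=20):
    for i in range(0, len(keys), chunk_size):
        for entity in fetch_fn(keys[i:i + chunk_size]):
            yield entity
```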

So what I'm really after is something like db.get_iterable() (returns 
immediately; the iterator blocks until each value is available; I'd suggest 
this isn't obliged to return results in the same order as the request list) 
or an async API that lets me get started with partial results without having 
to wait for everything ("fetch()"?).

Is there a better way for me to do this now?
Am I plain wrong to even worry about this?
Or is this maybe a "yeah, we know... coming soon" type question?

Cheers

--
Tim




[google-appengine] Re: Long running tasks impact on scheduler ?

2011-09-13 Thread stevep
Thanks for the explanation Jon (sorry I had used John before).

Hopefully you all can continue to explore how TQ tasks can be managed
separately by The Scheduler** for better instance optimization.

cheers,
stevep

**Caps pun intended: http://www.imdb.com/title/tt0113762/
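For reference, the queue-level throttling Jon describes in the quoted reply below is set in `queue.yaml`; a sketch with a made-up queue name and rates:

```yaml
# Limit how fast tasks are dispatched, which in turn limits how many
# instances the queue can spin up. Name and numbers are illustrative.
queue:
- name: background-work
  rate: 5/s
  bucket_size: 10
```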

On Sep 12, 10:32 am, Jon McAlister  wrote:
> Backends are one good way to do this. You can direct tasks at a
> backend, and then control the number of instances for that backend
> directly: http://code.google.com/appengine/docs/python/backends/overview.html
>
> Once the billing rollout is complete, the treatment of tasks and
> non-task requests, regardless of their latency, will become pretty
> much the same. The scheduler will try to find an instance for them.
> The only difference for tasks is that by default the scheduler will
> accept a bit more pending latency (proportional to the average request
> latency for that queue) than it would for non-task requests. The "1s
> rule" (although in reality, it was much more nuanced) will be removed,
> the app, regardless of request latencies, will be able to get as many
> instances as it wants (and can pay for). If you want to limit the
> throughput of a task (to limit the number of instances it turns up),
> use the queue configuration to do so:
> http://code.google.com/appengine/docs/python/config/queue.html#Queue_...
>
> On Sat, Sep 10, 2011 at 10:41 AM, Robert Kluin  wrote:
> > I'd very much like to know how long-running (over 1000ms) requests are
> > treated by the new scheduler as well.  Previously I believe they were
> > basically ignored, and hence would not cause new instances to be spun
> > up.
>
> > And, yes I would very much like to have control over how task queues
> > are treated with regards to the scheduler.  We've currently got the
> > fail-fast header (X-AppEngine-FailFast), which helps quite a bit.
> > But, I'd really love to let my queues spin up new instances once the
> > latency hits a certain point while always serving user requests with
> > high priority.
>
> > Robert
>
> > On Sat, Sep 10, 2011 at 12:04, stevep  wrote:
> >> +1 However please include sub-second tasks
>
> >> Just today I was looking at my logs/appstats. A client "new record"
> >> write function I have that consists of three separate new kinds being
> >> put. It seems to run consistently at 250-300ms per HR put(). These
> >> occur serially: first one in my on-line handler, second in a high-rate/
> >> high-token task queue, third in a low-rate/low-token queue. It is fine
> >> if the second and third puts occur minutes after the first. Seems much
> >> better than a 750 ms on-line handler function.
>
> >> Looking at my logs, nearly every write I do for indexed kinds is in
> >> this ballpark for latency. Only one on-line handler task is up around
> >> 500 ms because I have to do two puts in it. Everything else is 300 ms
> >> or less. So I am very happy with this setup. The recent thread where
> >> Brandon/John analyzed high instance rates shows what might happen if
> >> average latency viewed by the scheduler is skewed by a few very high
> >> latency functions. (Fortunately for my read/query/write client needs,
> >> I can avoid big OLH functions, but it is a serious design challenge.)
> >> However, the downside right now is that I do not know how the Task
> >> Queue scheduler interacts with the Instance Scheduler.
>
> >> My imagined ideal would be for developers to eventually be able to
> >> specify separate TQ instances (I believe Robert K. asked for this when
> >> he suggested TQ calls could be made to a separate version.) The
> >> Scheduler for these separate TQ instances would need to analyze
> >> cumulative pending queue tasks (I think the current TQ Scheduler does
> >> some of this), and only spawns new instances when the cumulative total
> >> exceeded a developer set value -- which would allow minute values
> >> rather than seconds.
>
> >> thanks,
> >> stevep
>
> >> On Sep 10, 6:03 am, John  wrote:
> >>> I'd like to know what is the impact of tasks on the scheduler.
>
> >>> Obviously tasks have very high latency (up to 10 minutes, but not using 
> >>> much
> >>> cpu - mostly I/O). What is their impact on the scheduler if any ?
> >>> Would be nice to have some sample use cases on how the scheduler is 
> >>> supposed
> >>> to react. For example if I have 1 task which takes 1 minute, spawn every 
> >>> 1s,
> >>> vs every 10s, vs 1 min ?
>
> >>> Since the tasks use very low cpu, technically an instance could easily run
> >>> 60 of them concurrently so 1 qps with 1-min tasks could take only one
> >>> instance. But I doubt the scheduler would spawn only one instance.
>
> >>> App Engine team, any insights ?
>
> >>> Thanks
>

[google-appengine] Request queue for dynamic backends - how does it function specifically?

2011-09-13 Thread Jason Collins
We are moving much of our taskqueue work to dynamic backends.

One obvious question we're faced with is "how many (max) instances do
we need for our background work?"

If we are feeding all of our work to our dynamic backends via
taskqueue, will we see the queues get backed up if the backend
instances cannot keep up?

Or, alternatively, do the queued tasks pop off at their configured
rate and drop into a different request queue for the backend pool?

If the latter, how long will the requests stay on this other "backend
pool request queue" (e.g., they will stay for 10s on the front-end
instance request queue, or have in the past; is there an equivalent
timeout for backends)? Is there any way to get visibility into this
other queue, if it exists?

Thanks for any info,
j




[google-appengine] why Datastore Key Fetch Ops is so many?

2011-09-13 Thread saintthor
now, my quota:

Datastore Entity Fetch Ops    0%   17,400 of Unlimited    Okay
Datastore Entity Put Ops      0%   136 of Unlimited       Okay
Datastore Entity Delete Ops   0%   0 of Unlimited         Okay
Datastore Index Write Ops     0%   240 of Unlimited       Okay
Datastore Query Ops           0%   343 of Unlimited       Okay
Datastore Key Fetch Ops       0%   208,358 of Unlimited   Okay


Datastore Key Fetch Ops is much more than others. what may cause this?




Re: [google-appengine] Re: Min idle instances setting not working at all?

2011-09-13 Thread Jon McAlister
You are conflating min-idle-instances and max-idle-instances, which
are different concepts. My statement was only with respect to
min-idle-instances, and is still correct. min-idle-instances was
removed on 2011/08/31-19:49:59 (by you disabling Always-On) and then
set to 3 on 2011/09/11-09:47:38 (by you re-enabling Always-On).

Your previously posted graph covered the time period before this last
change, which is why it shows 0 instances at times. Your instance
graph now (or otherwise after your latest change) looks correct, with
the number of instances hovering between 3 and 6.

On Mon, Sep 12, 2011 at 11:56 PM, Pol  wrote:
> Hi Jon,
>
> I don't know where you got that information, the admin logs clearly
> reflect our actions and the feedback at the time in the Dashboard:
>
> Originally set to 5:
> 2011-08-31 23:15:59     ad...@everpix.net       Changed Performance Settings
> max_idle_clones=5, min_pending_latency=0.10
>
> Then looking at estimated new-pricing bills, I figured that's looks
> very expensive, let's reduce this as much as possible for now:
> 2011-09-01 00:08:03     ad...@everpix.net       Changed Performance Settings
> max_idle_clones=3, min_pending_latency=0.10
> 2011-09-01 18:44:01     ad...@everpix.net       Changed Performance Settings
> max_idle_clones=2, min_pending_latency=0.10
> 2011-09-03 12:07:44     ad...@everpix.net       Changed Performance Settings
> max_idle_clones=1, min_pending_latency=0.10
> 2011-09-03 12:08:15     ad...@everpix.net       Changed Performance Settings
> max_idle_clones=2, min_pending_latency=0.10
>
> Then 2 days before our public launch at TechCrunch Disrupt, change of
> mind: now we need a minimum of 5 instances all the time, just in case:
> 2011-09-10 10:16:23     ad...@everpix.net       Changed Performance Settings
> max_idle_clones=5, min_pending_latency=0.10
>
> The next morning, I look at the graph (attached to previous post),
> realize it's obviously not working and decide to re-enable "always-on"
> in billing so we at least get 3.
> 2011-09-11 09:47:15     ad...@everpix.net       Changed Performance Settings
> max_idle_clones=5, min_pending_latency=0.10
> 2011-09-11 09:47:03     ad...@everpix.net       Changed Performance Settings
> max_idle_clones=5, min_pending_latency=0.10
>
> And I post to the mailing list a few minutes later at 9:53AM
>
> I guarantee you min-instances was set to 5 in the dashboard for all
> the duration of the previously attached graph :)
>
> On Sep 12, 10:36 am, Jon McAlister  wrote:
>> Nevermind, I remember, it's everpix-alpha.
>>
>> I see that min-idle-instances was removed on 2011/08/31-19:49:59 and
>> then set to 3 on 2011/09/11-09:47:38. So, it wasn't on when you emailed but
>> is on now. Your instances graph now looks correct.
>>
>> On Mon, Sep 12, 2011 at 10:24 AM, Jon McAlister  wrote:
>> > What's the app-id?
>>
>> > On Sun, Sep 11, 2011 at 9:53 AM, Pol-Online  wrote:
>>
>> >> Hi,
>>
>> >> Because we are about to launch our app very soon, I increased the number
>> >> of idle instances from 1 to 5 yesterday and things looked correct in the
>> >> Dashboard.
>>
>> >> Then this morning, in the Dashboard I see 2 idle instances and this:
>>
>> >> The spikes correspond to cron tasks running hourly I assume.
>>
>> >> Anyway, it looks like either the graph is wrong or the setting doesn't
>> >> work, or the definition of what the setting does is wrong, because clearly
>> >> it not stable at 5.
>>
>> >> I just activated Always On to work around the issue, so now it's back to
>> >> 3+1.
>>
>> >> -Pol
>>
>> >> 
>> >> Pol-Online
>> >> i...@pol-online.net
>>
>>
>>
>>
>>  PastedGraphic-11.png (37K attachment)
>
>
>




[google-appengine] Re: High Replication -- Writes Way Higher Than Reads

2011-09-13 Thread objectuser
That's very useful ... and, yes, surprising to me for sure!

Thanks for pointing that out, Francois.




Re: [google-appengine] Multiple Instances of the Same App

2011-09-13 Thread Rajkumar Radhakrishnan
Hi,

Quote (Eric Kolotyluk  wrote) :

We have an app we want to develop for our customers, but we essentially want
> each customer to have their own instance of the app for quotas and billing
> purposes. Basically, if our customers want the service, they would pay
> Google directly, rather than us figuring out who uses what and billing our
> customers. It would also make it easy for our software to automatically
> create the customer's app on app engine and keep it up-to-date.



Quote (Gregory D'alesandre  wrote) :

What you are doing is not being done to avoid incurring fees so it does not
> violate our terms.  We also need to build better support for this sort of
> thing in the future as a number of people have asked for it.



Quote (Gary Frederick  wrote) :

yep (how we say +1 in Texas)



Is there an issue we can star to show we are interested?



Yes, there is now a feature request which you can star to show you are
interested. Check out: Marketplace for Google App Engine apps.

Thanks & Regards,
Raj



On Sat, Jul 9, 2011 at 12:05 PM, Gregory D'alesandre wrote:

> Hey Erik, I think you are referring to this section:
> "4.4. You may not develop multiple Applications to simulate or act as a
> single Application or otherwise access the Service in a manner intended to
> avoid incurring fees."
>
> What you are doing is not being done to avoid incurring fees so it does not
> violate our terms.  We also need to build better support for this sort of
> thing in the future as a number of people have asked for it.
>
> Hope that helps!
>
> Greg D'Alesandre
> Senior Product Manager, Google App Engine
>
>
> On Mon, Jul 4, 2011 at 10:00 AM, Brandon Wirtz wrote:
>
>> I'm in the same boat.  Google has let me get away with running the same
>> app
>> customized for the user.  All of my apps are paid apps running on
>> different
>> domains.
>>
>> I'm all for lobbying to get app reseller accounts where we can markup our
>> services on the billing page.  If you come up with a good way to get the
>> billing information by API let me know because I'd like to have a better
>> way
>> to bill clients and generate usage reports.
>>
>>
>> -Original Message-
>> From: google-appengine@googlegroups.com
>> [mailto:google-appengine@googlegroups.com] On Behalf Of Eric Kolotyluk
>> Sent: Monday, July 04, 2011 9:56 AM
>> To: Google App Engine
>> Subject: [google-appengine] Multiple Instances of the Same App
>>
>> I remember reading some policy that Google prohibits people people from
>> basically running the same app under different registration. I gather one
>> reason for this is so that people don't exploit the free nature of apps,
>> or
>> so that Google is not replicating essentially the same app everywhere.
>> What
>> ever the reason I don't want to violate Google's policies.
>>
>> We have an app we want to develop for our customers, but we essentially
>> want
>> each customer to have their own instance of the app for quotas and billing
>> purposes. Basically, if our customers want the service, they would pay
>> Google directly, rather than us figuring out who uses what and billing our
>> customers. It would also make it easy for our software to automatically
>> create the customer's app on app engine and keep it up-to-date.
>>
>> An alternative design would be to have some way to invoke a central app,
>> but
>> for service operations and quota have some way to bill things to a
>> specific
>> account.
>>
>> Does Google have any way to do this that does not violate the policies?
>>
>> The alternative for us is setting up a separate account for each customer
>> on
>> either Amazon, Microsoft, or some other cloud, and essentially giving each
>> customer their own VM instance. There are pros and cons to this, as there
>> are with using the Google PAAS, and I am trying to figure out what our
>> best
>> options are.
>>
>> Cheers, Eric
>>
>>
>>
>>
>>

[google-appengine] Re: High Replication -- Writes Way Higher Than Reads

2011-09-13 Thread Francois Masurel
Hi James,

The new GAE 1.5.4 SDK shows in the local dev DatastoreViewer how many write 
ops were needed to create each entity (check screenshot attached).

You will probably be surprised.

François



[google-appengine] Re: High Replication -- Writes Way Higher Than Reads

2011-09-13 Thread Simon Knott
Have you read this thread - 
https://groups.google.com/d/msg/google-appengine/mjnSqQWOfqU/cgPVeHbrR8oJ?

It explains what happens at the datastore when an entity is put, and how 
this converts into datastore writes.
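The arithmetic from that thread can be sketched as follows (the formula and constants are as described for that era's billing model — treat them as an assumption and verify against the official documentation):

```python
# A new-entity put costs 2 writes for the entity itself, plus 2 writes
# per indexed property value (one ascending and one descending index
# row), plus 1 write per composite-index entry. So writes dwarfing reads
# is expected for index-heavy entities.
def estimated_write_ops(indexed_property_values, composite_index_entries=0):
    return 2 + 2 * indexed_property_values + composite_index_entries

# e.g. an entity with 5 indexed property values and no composite indexes
# costs an estimated 12 write ops, not 1.
```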




[google-appengine] Re: how longs are application logs kept

2011-09-13 Thread Rishi Arora
Found this:
http://code.google.com/appengine/docs/python/tools/uploadinganapp.html#Downloading_Logs

Which explains why my info logs are more short-lived than my error logs.

However, when I used the request_logs command, I only see the logs "header",
but no associated data that I passed on to the logging.info() method.  Is
there an option for the request_logs command that will allow more verbose
logs to be downloaded?
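For what it's worth, `request_logs` only includes application-level log lines when the `--severity` flag is passed; a sketch of the invocation (the app directory and output path are made up):

```shell
# Download request logs plus app-level log lines at severity DEBUG (0)
# and above; without --severity only the request "headers" are returned.
appcfg.py request_logs --severity=0 myapp/ logs.txt
```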


On Tue, Sep 13, 2011 at 9:41 AM, Rishi Arora wrote:

> I noticed that while logs with error severity can be accessed several
> hours, even days after the event, logs with info severity start
> disappearing within just a couple of hours.  Is this documented?  I saw a
> spike in traffic at 12:28:57 UTC in my app, and I'm trying to investigate
> this.  I can see error logs but no info logs at that time.  And it's been
> just a little over 2 hours since that event.  Is there another way to access
> these logs (appcfg.py) than using the admin console?
>
>




Re: [google-appengine] Something to pass along to the google search team

2011-09-13 Thread Joshua Smith
Sure, but if they just went breadth-first (putting pages to crawl into the tail 
of a work queue that spans hundreds of sites), then there wouldn't be a spike 
at all.

On Sep 13, 2011, at 10:55 AM, Tim wrote:

> 
> Google webmaster tools 
> 
>   https://www.google.com/webmasters/tools/home
> 
> lets you (amongst other things) submit sitemaps and see the crawl rate for 
> your site (for the previous 90 days). There's also a form to report problems 
> with how googlebot is accessing your site
> 
>   https://www.google.com/webmasters/tools/googlebot-report
> 
> The crawl rate is modified to try to avoid overloading your site, but given 
> that GAE will just fire up more instances, then I guess googlebot thinks your 
> site is built for such traffic and just keeps upping the crawl rate. You 
> could try and mimic a site being killed by the crawler: keep basic stats 
> in memcache every time you get hit by googlebot (as identified by request 
> headers) and if the requests come too thick and fast, delay the responses, or 
> simply return a 408 or maybe a 503 or 509 response, and my guess is you'll 
> see the crawl rate back off pretty quickly.
> 
>   http://en.wikipedia.org/wiki/List_of_HTTP_status_codes
> 
> Would be nice if robots.txt or sitemap files let you specify a maximum crawl 
> rate (cf RSS files), or perhaps people agreed on an HTTP status code for 
> "we're close, but not THAT close..." response to tell crawlers to back off 
> (418 perhaps:) but I don't expect those standards have moved very much 
> recently...
> 
> --
> T
> 
> 




[google-appengine] Re: Something to pass along to the google search team

2011-09-13 Thread Tim

Google webmaster tools 

  https://www.google.com/webmasters/tools/home

lets you (amongst other things) submit sitemaps and see the crawl rate for 
your site (for the previous 90 days). There's also a form to report problems 
with how googlebot is accessing your site

  https://www.google.com/webmasters/tools/googlebot-report

The crawl rate is modified to try to avoid overloading your site, but given 
that GAE will just fire up more instances, then I guess googlebot thinks 
your site is built for such traffic and just keeps upping the crawl rate. 
You could try and mimic a site being killed by the crawler: keep basic 
stats in memcache every time you get hit by googlebot (as identified by 
request headers) and if the requests come too thick and fast, delay the 
responses, or simply return a 408 or maybe a 503 or 509 response, and my 
guess is you'll see the crawl rate back off pretty quickly.

  http://en.wikipedia.org/wiki/List_of_HTTP_status_codes
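That back-off idea might look like this in plain Python (a dict stands in for memcache, and the window/limit values are made up):

```python
import time

# Count hits per client in fixed time windows; once the count in the
# current window exceeds the limit, the handler would answer 503 (or
# 408/509) instead of rendering the page, nudging the crawler to back
# off. On App Engine this counter would live in memcache.
_hits = {}

def should_throttle(client, limit=30, window=60, now=None):
    now = time.time() if now is None else now
    key = (client, int(now // window))
    _hits[key] = _hits.get(key, 0) + 1
    return _hits[key] > limit
```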

Would be nice if robots.txt or sitemap files let you specify a maximum crawl 
rate (cf RSS files), or perhaps people agreed on an HTTP status code for 
"we're close, but not THAT close..." response to tell crawlers to back off 
(418 perhaps:) but I don't expect those standards have moved very much 
recently...

--
T




[google-appengine] High Replication -- Writes Way Higher Than Reads

2011-09-13 Thread James Gilliam
I frequently have way more writes (billing history) than reads on the
datastore and this seems very strange to me.

Typically, .3 million writes (300,000) and .06 million reads (60,000).

And I don't think I am writing to the datastore nearly the amount I am
reading from it -- I would think they would be the other way around.

This is like I read an entity and then write it 5 times. Am I being
charged for writing a record to multiple datacenters?  Gives an
entirely new meaning to high replication.

What am I missing?

Plus, it would be great if we had more insights into these I/O
operations -- are they to service indexes?

Thanks




[google-appengine] how longs are application logs kept

2011-09-13 Thread Rishi Arora
I noticed that while logs with error severity can be accessed several hours,
even days after the event, logs with info severity start disappearing
within just a couple of hours.  Is this documented?  I saw a spike in
traffic at 12:28:57 UTC in my app, and I'm trying to investigate this.  I
can see error logs but no info logs at that time.  And it's been just a
little over 2 hours since that event.  Is there another way to access these
logs (appcfg.py) than using the admin console?




[google-appengine] Cron job every 30 minutes only on work days

2011-09-13 Thread Matija
Hi,
is there a way to define something like this ?


<cron>
  <url>/cron/odgodjeni</url>
  <description>Jada jada</description>
  <schedule>every 30 minutes every mon,tue,wed,thu,fri from 06:00 to 15:00</schedule>
  <timezone>Europe/Zagreb</timezone>
</cron>


I don't want my cron job to start on Saturdays and Sundays, especially with the
new 15-minute idle instance billing window.

Tnx.
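As far as I know, the cron schedule syntax can't combine an interval ("every 30 minutes") with a day-of-week restriction, so a common workaround is to let the job fire every day and return early outside working days. A minimal sketch in Python (using UTC for simplicity; a real handler for Europe/Zagreb would need to convert the timezone first):

```python
from datetime import datetime

WORKDAYS = {0, 1, 2, 3, 4}  # Monday..Friday, per datetime.weekday()

def should_run(now=None):
    """Return True only on work days; call this first in the cron handler."""
    now = now or datetime.utcnow()
    return now.weekday() in WORKDAYS

print(should_run(datetime(2011, 9, 13)))  # True  (a Tuesday)
print(should_run(datetime(2011, 9, 10)))  # False (a Saturday)
```

The instance still spins up briefly on weekends, but the handler exits immediately, which keeps the billing impact small.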




[google-appengine] Something to pass along to the google search team

2011-09-13 Thread Joshua Smith
In 
http://highscalability.com/blog/2011/9/7/what-google-app-engine-price-changes-say-about-the-future-of.html
he wrote:

> With each crawl costing money, the whole idea of crawling the Internet will 
> have to change.

which led me to a thought: since Googlebot is crawling zillions of web sites, 
a change from depth-first crawling to breadth-first crawling would make a huge 
difference here.  Dunno if that's practical, but it would be a nice thing for 
the Google search guys to look into, to make GAE and Googlebot more 
compatible.  Because right now, there's a lot of evidence that they are 
accidentally conspiring to be evil.

-Joshua




[google-appengine] Cross Namespace Queries?

2011-09-13 Thread objectuser
I'm looking into using the namespace API in my application.  I usually like 
leveraging platform support for things like this to remove a category of 
defects from my application.

But after doing some searching, it appears that the namespace support is 
currently rather limited.  There is only primitive support for querying 
across namespaces (basically, "find all namespaces").  This implies to me 
that any sort of cross-namespace work is much more complicated 
and inefficient.

Am I understanding this correctly?  Does anyone have advice for working with 
namespaces?

Thanks.
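That matches my understanding: namespaces are hard partitions, and the metadata queries only enumerate them, so "query across all tenants" degenerates into a loop. A plain-Python illustration of the shape of that work (this is not the GAE API; the store layout and names are made up for the example):

```python
# Each namespace is an isolated partition; querying "everything"
# means one query per namespace.
store = {
    "tenant-a": [{"kind": "Task", "done": False}],
    "tenant-b": [{"kind": "Task", "done": True}],
    "tenant-c": [],
}

def query_all_namespaces(store, predicate):
    results = []
    for namespace in sorted(store):        # like the "find all namespaces" metadata query
        for entity in store[namespace]:    # like one per-namespace datastore query
            if predicate(entity):
                results.append((namespace, entity))
    return results

print(len(query_all_namespaces(store, lambda e: e["kind"] == "Task")))  # 2
```

The cost is one query per namespace, which is why cross-namespace fan-out is more complicated and less efficient than a single query within one namespace.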





[google-appengine] App Engine Weekly Community Update #8

2011-09-13 Thread Johan Euphrosine
Dear App Engine Community,

Each week we bring you some news and metrics about the App Engine community.

*Highlights*

   App Engine 1.5.4 SDK Release
   http://googleappengine.blogspot.com/2011/09/app-engine-154-sdk-release.html

   A few adjustments to App Engine’s upcoming pricing changes
   
http://googleappengine.blogspot.com/2011/09/few-adjustments-to-app-engines-upcoming.html

   "Schlemiel, you're fired!": The last article in the series about
App Engine tuning by +Emlyn O'Regan
   
http://point7.wordpress.com/2011/09/10/appengine-tuning-schlemiel-youre-fired/

   What Google App Engine Price Changes Say About The Future Of Web Architecture
   
http://highscalability.com/blog/2011/9/7/what-google-app-engine-price-changes-say-about-the-future-of.html

*Google+*

   1:1 with a member of the App Engine community +Tijmen Roberti
   https://plus.google.com/u/1/111042085517496880918/posts/Lag9aMtjeM7

   Some thoughts on the new pricing by +Peter Magnusson
   https://plus.google.com/110401818717224273095/posts/AA3sBWG92gu

   New plusfeed version optimized by +Siegfried Hirsch for the new pricing model
   https://plus.google.com/u/1/102235836543922327908/posts/WJdaAthmw43

*Stack Overflow*

   84 Questions asked this week
   http://stackoverflow.com/tags/google-app-engine/topusers

*Issue tracking*

   Weekly triaging:

   123 new issues were reported in the public issue tracker:
   - 39 Production issues (8 Fixed, 4 Escalated, 9 Triaged, 15 Rejected, 3 New)
   - 58 Defects (1 Fixed, 4 Escalated, 23 Triaged, 9 Rejected, 21 New)
   - 26 Feature requests (0 Fixed, 1 Escalated, 14 Triaged, 2 Rejected, 9 New)

   Overall stats:

   5871 issues (5338 commented by Googlers)
   2621 open issues (431 New, 1863 Triaged, 327 Escalated)
   3250 closed issues (1233 Fixed, 2017 Rejected)

*Groups*

   Weekly stats:

   [appengine]
   1471 messages in 242 threads
   61% of threads replied within 2 days
   Top posters:
   robertklui... 65
   zutesmog... 61
   JoshuaE... 60

   [appengine-python]
   163 messages in 55 threads
   40% of threads replied within 2 days
   Top posters:
   zutesmog... 14
   miloir@g... 13
   robertklui... 11

   [appengine-java]
   194 messages in 53 threads
   57% of threads replied within 2 days
   Top posters:
   nischalsh... 9
   mliberato... 7
   knott.sim... 6

   [appengine-go]
   37 messages in 14 threads
   77% of threads replied within 2 days
   Top posters:
   dsymond... 11
   wmacgyv... 7
   calvin.pre... 4

*Feedback on this format is welcome -- let me know what you would like
to see in next week's App Engine Community Update.*
-- 
Johan Euphrosine (proppy)
Developer Programs Engineer
Google Developer Relations




[google-appengine] Re: Communication between AppEngine and PhoneGap/JQM

2011-09-13 Thread David D.
I found the beginning of an answer by myself.
For people who would like to make a browser-based application
communicate with a GAE app,
Restlet seems to be a good framework because it provides a GAE-specific
implementation for the server side, and a JavaScript implementation
for the client side (which will work with PhoneGap and jQuery Mobile).

I'm still trying to find a full code example.

On 12 sep, 10:45, "David D."  wrote:
> Hi,
>
> I've already got a GAE application, and I'm creating a mobile
> application using Phonegap + JqueryMobile (HTML, CSS, JS).
> Of course, both applications must communicate (a lot) together.
>
> I won't have trouble creating a PhoneGap app, but the problem
> concerns the communication between these AppEngine and PhoneGap apps.
> The mobile PhoneGap app will have to read and write data in the
> Google Bigtable database (I'm using the Objectify framework (not JDO
> or JPA), but I guess this changes nothing).
>
> I found part of answers here:
> - Using JSON but with php and 
> mysql:http://samcroft.co.uk/2011/updated-loading-data-in-phonegap-using-jqu...
> - Restlet API for 
> appEngine:http://wiki.restlet.org/docs_2.0/13-restlet/275-restlet/252-restlet.html
>
> but I'm still confused (due to very little experience in this area; I'm
> not a developer, be kind ^^).
> Would you know of an example/tutorial that shows a full solution using
> AppEngine (or at least Java...) and PhoneGap/jQuery?
> Otherwise, just some guidance? Where should I take a look? In
> my case, performance is important, but elegance and simplicity matter
> more.
>
> Thanks a lot!




[google-appengine] Re: using app engine for extremely demanding multiplayer browser game

2011-09-13 Thread Karel Crombecq
Hey Jay,

I actually registered yesterday on your game to get an idea of a game hosted 
on GAE. I'm enjoying it!

But the new pricing greatly disturbs me. I'm not sure if running this game 
on GAE is actually viable at all in terms of costs. I did some research on 
the new pricing (for example 
http://code.google.com/appengine/kb/postpreviewpricing.html#operations_charged_for)
 
and as far as I can see, datastore reads and writes both have a similar 
cost. And they don't charge per query, but they charge per object (row) 
fetched.

I did some calculations on my current database data, and CQ2 generates about 
1M database writes for something like 650 daily users. That's about 3 times 
as much as your game does, which would also triple the bill. That's a lot, 
but something I can handle. Since most of the writes are one-record only, 
the total cost would be about $1.50 per day for 1000 users.

However, the datastore reads are the real issue here. I have about 4M SELECT 
queries for 650 users. Considering that many of these return more than one 
row, I can easily reach 10M datastore reads each day, for an additional cost 
of $2.80 each day.

This works out to a total of about €159 per month for 1000 users. My 
estimate for the Amazon cloud was a cost of $65 per 1000 users each month 
(based on our current system and their instances), which would make GAE 3 
times more expensive. That's quite worrisome, even though these statistics 
were generated from relational database writes as opposed to datastore 
writes. It's hard to predict whether I will need fewer or more datastore 
operations to achieve the same result. I'm actually thinking fewer, because 
I can cache a lot of static data in memory.
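When sanity-checking numbers like these, it helps to parameterize the rate card rather than hard-code it, since the per-operation prices were still moving at the time. A small sketch; the rates passed in below are illustrative placeholders, not the official prices:

```python
def daily_datastore_cost(reads, writes,
                         read_price_per_100k, write_price_per_100k):
    """Dollar cost per day for a given number of read/write operations."""
    return (reads / 100000.0) * read_price_per_100k + \
           (writes / 100000.0) * write_price_per_100k

def monthly_cost_per_1000_users(daily_cost, daily_users):
    # Scale the observed daily cost to 1000 users over a 30-day month.
    return daily_cost * (1000.0 / daily_users) * 30

# Placeholder rates -- substitute the real rate card before deciding:
cost = daily_datastore_cost(10_000_000, 1_000_000, 0.07, 0.10)
print(cost)  # 8.0 per day at these illustrative rates
```

Keeping the rates as inputs makes it easy to re-run the comparison against Amazon (or a revised GAE rate card) without redoing the arithmetic by hand.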





[google-appengine] Re: App Engine SDK 1.5.4 released!

2011-09-13 Thread Ice13ill
I've been expecting the new datastore query improvements...
I have a question related to that feature: I deployed a new
version with Java SDK 1.5.4 to test queries that previously failed,
but it seems they still fail immediately, also requesting an index
that is already built. These queries do work in the Datastore Viewer.
Am I missing something? Do I have to delete the index and rebuild it?


On Sep 13, 3:43 am, pdknsk  wrote:
> When will the docs for async memcache be added? I've got one
> particular question.
>
> In transactions, when you use db.put_async(), the transaction
> guarantees the put even if get_result() is not called. As I understand
> it, the transaction automatically calls get_result() for the put.
>
> How is this handled for async memcache? Unlike db put/get, memcache
> set/get does not raise an exception, but returns True/False.




[google-appengine] Re: google app engine for extremely demanding multiplayer browser game

2011-09-13 Thread Jean-Marc Truillet
Hi Karel,

Another advantage of GAE for a multiplayer game like yours is the
Channel API. It is a Comet-like push system that saves you from polling
the server (and the datastore) to retrieve the actions of the other
players.
Also have a look at www.ovh.com. They offer (virtual) dedicated
servers and PaaS solutions. Their datacenter is located in Lille
(Rijsel), in your area.

JM


On 12 sep, 16:00, Karel Crombecq  wrote:
> Hey guys,
>
> I am currently investigating possibilities for writing a sequel to the
> popular text-based browser game that I released in 2001 called Castle Quest
> 2 (http://www.castlequest.be). One of the options I am considering is
> developing the game in GWT, and running it on app engine. But I am not sure
> whether app engine will be able to scale to the degree needed for my game.
>
> CQ2 at its peak generated easily 3 million page views per day (90 million
> per month!), with bandwidth usage of 2.5GB each month. The database grew to
> a size of about 1GB. There were at least 250 sql queries each second. And it
> is expected that CQ3, with the advent of social networks and social gaming,
> will reach multiples of these numbers.
>
> Now I don't really know how big the sites are that are hosted by Google app
> engine, as information is rather scarce on that part. So my question to the
> Google team is: do you think (know?) if app engine can handle this kind of
> pressure from one app? Will the data store hold up, and will it scale well?
>
> It is extremely important that I have trustworthy information about this. If
> I decide to go with app engine and the system doesn't hold up, a massive
> money and time investment will be lost.
>
> Thanks in advance,
> Kind regards,
> Karel Crombecq




[google-appengine] Re: google app engine for extremely demanding multiplayer browser game

2011-09-13 Thread Karel Crombecq
Thank you. I already made several apps and games using GWT and a java 
backend as prototypes and I am very satisfied with the result. I have also 
watched the relevant Google IO talks on scaling, proper app design for 
scalability, and so on, and I believe I have enough information to invest 
time into building a prototype based on GWT/app engine (with possibly PlayN 
on top).

Apologies for the double post, by the way. My first post didn't show up for 
half a day, so I assumed I had forgotten to press "post" and made a new one. 
Now I figure it was being held for the dev team (who were sleeping :)). If 
possible, these threads can be merged. Thanks!




[google-appengine] Does RPC object block response until async operations finish?

2011-09-13 Thread keakon lolicon
Hi Google guys,

I just did a test of async db and memcache operations.
I put or deleted 100 entities and returned immediately.
The async operation call took only 0.01s and the sync one took 0.2s, but in
both cases the total response time in the backend log was over 200ms, and I
could also feel that the latency was longer than for a no-op request.

So I think that before the server sends the response to the browser, the RPC
object waits for its async operations to finish or fail.
Does that mean that if I don't care whether the operation succeeded (like
when updating a counter), I can make an async call without waiting on it
myself?
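On the general fire-and-forget question: my understanding is that the runtime will not send the response while RPCs are still outstanding, so an async call only saves wall-clock time if you overlap it with other work before that implicit wait. This plain-Python analogy (threads standing in for async RPCs, not the App Engine API) shows the effect:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def slow_rpc():
    time.sleep(0.2)   # stands in for a 0.2s datastore put
    return "done"

with ThreadPoolExecutor(max_workers=1) as executor:
    start = time.time()
    future = executor.submit(slow_rpc)  # like db.put_async(): returns immediately
    time.sleep(0.2)                     # other request work overlaps the RPC
    result = future.result()            # like the implicit wait before responding
    elapsed = time.time() - start

print(result, round(elapsed, 1))  # the two 0.2s waits overlap: ~0.2s, not 0.4s
```

So skipping get_result() doesn't make the cost disappear; it just moves the wait to the end of the request.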

--
keakon

My blog(Chinese): www.keakon.net
Blog source code: https://bitbucket.org/keakon/doodle/




[google-appengine] Re: Min idle instances setting not working at all?

2011-09-13 Thread Daniel Florey
Where did you find the min idle instances setting?




[google-appengine] IOException while updating cells using batch process using 2 legged OAuth using google spreadsheet api

2011-09-13 Thread Vishesh khandelwal
Hi

I am new to OAuth. I am using the 2-legged process, and I am getting
the following exception. This error has wasted two weeks of my time.
Please help me out.

I am using 2-legged OAuth to authenticate the DocsService and
SpreadsheetService. My code snippet is:

DocsService client = new DocsService("yourCo-yourAppName-v1");
SpreadsheetService service = new SpreadsheetService("yourCo-yourAppName-v1");

String CONSUMER_KEY = "THE_CONSUMER_KEY";
String CONSUMER_SECRET = "THE_SECRET_KEY";

GoogleOAuthParameters oauthParameters = new 
GoogleOAuthParameters();
oauthParameters.setOAuthConsumerKey(CONSUMER_KEY);
oauthParameters.setOAuthConsumerSecret(CONSUMER_SECRET);


oauthParameters.setOAuthType(OAuthParameters.OAuthType.TWO_LEGGED_OAUTH);
try {
client.setOAuthCredentials(oauthParameters, new
OAuthHmacSha1Signer());
service.setOAuthCredentials(oauthParameters, new
OAuthHmacSha1Signer());
} catch (OAuthException e1) {
// TODO Auto-generated catch block
e1.printStackTrace();
}

Now I try to update cells using a batch process with the code below.
Here CellInfo is my class which holds the cell's row and column info
and the cell address, such as R1C1, and userEmail is something like
exam...@example.com.

private void insertCellBatchProcess(SpreadsheetEntry sp,
List<CellInfo> cellInfos, String userEmail) throws
AppException {


String key = sp.getKey();
FeedURLFactory urlFactory = FeedURLFactory.getDefault();
URL cellFeedUrl = null;
try {
cellFeedUrl = urlFactory.getCellFeedUrl(key, "od6", 
"private",
"full");
} catch (MalformedURLException e3) {
e3.printStackTrace();
}

CellFeed batchRequest = new CellFeed();
for (CellInfo cellId : cellInfos) {
CellEntry batchEntry = new CellEntry(new 
Cell(cellId.getRow(),
cellId.getCol(), 
cellId.getCellValue()));
batchEntry.setId(String.format("%s/%s", 
cellFeedUrl.toString(),
cellId.getIdString()));
BatchUtils.setBatchId(batchEntry, cellId.getIdString());
BatchUtils.setBatchOperationType(batchEntry,
BatchOperationType.QUERY);
batchRequest.getEntries().add(batchEntry);
}

CellFeed cellFeed = null;
try {
cellFeed = service.getFeed(

getContentQueryWithQueryParameter(cellFeedUrl,
"xoauth_requestor_id", 
userEmail), CellFeed.class);
} catch (IOException e) {
String msg = "Error getting feed";
throw new AppException(msg, e);
} catch (ServiceException e) {
String msg = "Error getting feed";
throw new AppException(msg, e);
}
CellFeed queryBatchResponse = null;
try {
queryBatchResponse = service.batch(
new 
URL(cellFeed.getLink(Link.Rel.FEED_BATCH,

Link.Type.ATOM).getHref()), batchRequest);
} catch (BatchInterruptedException e) {
String msg = "Batch process interrupted.";
throw new AppException(msg, e);
} catch (MalformedURLException e) {
String msg = "Batch process interrupted.";
throw new AppException(msg, e);
} catch (IOException e) {
String msg = "Batch process interrupted.";
throw new AppException(msg, e);
} catch (ServiceException e) {
String msg = "Batch process interrupted.";
throw new AppException(msg, e);
}

// NOTE: here we are assuming that the size of the cellInfos list is
// equal to the size of the list obtained from
// queryBatchResponse.getEntries(). This is done to insert
// the value in the CellEntry.
CellFeed batchRequest2 = new CellFeed();
for (int i = 0; i < queryBatchResponse.getEntries().size(); 
i++) {
CellEntry batchEntry = 
queryBatchResponse.getEntries().get(i);