[google-appengine] Re: This request used a high amount of CPU and may soon exceed its quota - 2010/2011

2010-12-18 Thread Matija
The sub-400 ms figure is the optimal request latency, and I agree with it. You
will probably get new instances if your average latency is under 1000 ms,
but if it is under 400 ms you would get new instances at a faster rate. Maybe.

I don't have a problem with request latency. Actually, I don't have a problem
with GAE at all.

I would like to know a little bit more about CPU usage and scaling up new
instances. What is the meaning of '...may also incur some additional latency in
order to efficiently share resources...'? How did they implement this
additional latency? By not adding new instances after some limit, by some
penalty latency (throttle_code?), or something else?




[google-appengine] Re: my app has been "disabled"

2010-12-18 Thread Norlesh
That would be very disconcerting (what area doesn't Google have a
competing interest in?).
 I hope there is a better explanation.
 Shane

On Dec 18, 6:56 pm, Nickolas Daskalou  wrote:
> It may have been disabled because it competes with Google's URL  
> shortening offering (http://goo.gl/).
>
> Nick
>
> On 18/12/2010, at 1:21 PM, vrypan  wrote:
>
> > Hi. I've been running my URL shortener, urlborg.com, on appengine  
> > since 2008. Suddenly, yesterday, the app was marked as DISABLED by  
> > google, and no one can access it. How do I get in contact with  
> > someone that can explain what's going on? I feel really bad for the  
> > users that relied on my service.
>
> > I really, really like AppEngine, I mean, if you are offering a  
> > hosting environment, and people are paying for it, you can't just  
> > switch off an account, without a warning, or a notice or something...
>
> > Panayotis




Re: [google-appengine] my app has been "disabled"

2010-12-18 Thread Matt H
That would be absolutely disgraceful. It makes me wary of putting my
business on GAE in case Google suddenly decides to compete with it and
shuts it down.

I most certainly hope that this isn't the case.




[google-appengine] Re: my app has been "disabled"

2010-12-18 Thread ajaxer
badly concerned




Re: [google-appengine] Re: my app has been "disabled"

2010-12-18 Thread supercobra
I would be surprised if Google shut you down because of a competing
product. Let's see what they say.

Just curious, did you enable billing on your app?

-- superco...@gmail.com
http://supercobrablogger.blogspot.com/


On Sat, Dec 18, 2010 at 7:09 AM, ajaxer  wrote:
> badly concerned




[google-appengine] Using datastore for up to 4.5 billion keys

2010-12-18 Thread Donovan Hide
Hi,
I have a custom index over a large amount of content that works by creating a 
32-bit hash for sections of text. Each document id is stored against this 
hash, and lookups involve hashing the input and retrieving the matching ids. 
Currently I use node.js to serve the index and Hadoop to generate it. 
However, this is an expensive operation in terms of processing and requires 
an SSD drive for decent serving performance. The scale of the index is as 
follows:

Up to 4.5 billion keys
An average of 8 document ids per key, delta-encoded and then
variable-integer encoded.
Lookups on average involve retrieving values for 3500 keys

Having read the datastore docs it seems like this could be a possible 
schema:

from google.appengine.ext import db

class Index(db.Model):
    hash = db.IntegerProperty(required=True)
    values = db.BlobProperty(required=True)

I would be grateful if anyone could give me some advice or tips on how this 
might perform on App Engine in terms of query performance, cost and 
minimizing metadata/index overhead. It sounds like 4.5 billion × per-entity 
metadata could be the killer.
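
A back-of-envelope sketch of why I'm worried (the per-entity overhead figure
here is just a guess):

KEYS = 4.5e9
VALUE_BYTES = 8 * 5         # ~8 doc ids per key, varint-encoded, ~5 bytes each
OVERHEAD_BYTES = 100        # guessed key + metadata + built-in index overhead
total_gb = KEYS * (VALUE_BYTES + OVERHEAD_BYTES) / 1e9
print '%.0f GB' % total_gb  # ~630 GB, dominated by the guessed overhead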

Cheers,
Donovan





Re: [google-appengine] Re: my app has been "disabled"

2010-12-18 Thread Jeff Schwartz
+1

On Sat, Dec 18, 2010 at 8:16 AM, supercobra  wrote:

> I would be surprised if Google shut you down because of a competing
> product. Let's see what they say.
>
> Just curious, did you enable billing on your app?
>
> -- superco...@gmail.com
> http://supercobrablogger.blogspot.com/

-- 
*Jeff Schwartz*




[google-appengine] Re: my app has been "disabled"

2010-12-18 Thread vrypan
I'm sure this has nothing to do with Google having a competing
service. (I wish it did! :-)

Yes, billing was enabled, and I've been paying for more than a year
(or maybe two).

P.

On Dec 18, 3:16 pm, supercobra  wrote:
> I would be surprised if Google shut you down because of a competing
> product. Let's see what they say.
>
> Just curious, did you enable billing on your app?




Re: [google-appengine] Re: my app has been "disabled"

2010-12-18 Thread Jeff Schwartz
I'm not saying it did, but could your app be violating Google's TOS?

On Sat, Dec 18, 2010 at 8:30 AM, vrypan  wrote:

> I'm sure this has nothing to do with Google having a competing
> service. (I wish it did! :-)
>
> Yes, billing was enabled, and I've been paying for more than a year
> (or maybe two).
>
> P.


-- 
*Jeff Schwartz*




Re: [google-appengine] Re: my app has been "disabled"

2010-12-18 Thread Matt H
Why no warning though? :(




Re: [google-appengine] Using datastore for up to 4.5 billion keys

2010-12-18 Thread supercobra
Just curious, what is this huge database about? Can we check it online?

-- superco...@gmail.com
http://supercobrablogger.blogspot.com/







Re: [google-appengine] Using datastore for up to 4.5 billion keys

2010-12-18 Thread Donovan Hide
Hi,

It's not public yet, but the purpose of the application is the exact
comparison of large corpora.

Cheers,
Donovan.

On 18 December 2010 13:56, supercobra  wrote:
> Just curious, what is this huge database about? Can we check it online?
>
> -- superco...@gmail.com
> http://supercobrablogger.blogspot.com/




[google-appengine] Re: my app has been "disabled"

2010-12-18 Thread vrypan
Jeff, I don't think so. But in any case, a warning that something is
wrong, would be nice.

I'm a legit user, my app was not intended to do anything that violates
the TOS. It has been working for more than 2 years.

To be honest, it's a project I've almost abandoned. I just feel bad
for the users that depended on it before it suddenly disappeared.

Plus, I'm really skeptical about developing for App Engine in the
future. I wouldn't like to base a future project on a platform that
can shut you down without any warning that something is wrong.

Unfortunately, I've spent the last couple of months developing
something new based on App Engine... :-( Now I'm thinking of picking a
different platform...

P.


On Dec 18, 3:35 pm, Jeff Schwartz  wrote:
> I'm not saying it did, but could your app be violating Google's TOS?




Re: [google-appengine] Re: Why not SELECT single_property FROM model?

2010-12-18 Thread 风笑雪
I think lists are not a problem; we can always get the full entity with
"SELECT *".

When you query on a list property like:

SELECT a_list FROM model WHERE a_list > 'a' AND a_list < 'b'

I think it's more reasonable to expect each returned a_list value to be
greater than 'a' and less than 'b', but "SELECT *" doesn't work that way.
This way we could get (a_list, key) tuples and fetch all the matched
entities by key later.
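
For comparison, a keys-only query plus a batch get is the closest thing
available today (a sketch; the model and property names are placeholders):

from google.appengine.ext import db

# Keys-only query, then a batch get -- roughly what the proposed
# single-property SELECT would replace.
keys = db.GqlQuery("SELECT __key__ FROM Model "
                   "WHERE a_list > 'a' AND a_list < 'b'").fetch(100)
entities = db.get(keys)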

--
keakon

My blog(Chinese): www.keakon.net
Blog source code: https://bitbucket.org/keakon/doodle/




Re: [google-appengine] Static files cached even after update with changes

2010-12-18 Thread 风笑雪
My app's cache is refreshed after a deploy, so I'm not sure if it's a
bug that affects only some apps.
Maybe it's just cached by the browser.

--
keakon

My blog(Chinese): www.keakon.net
Blog source code: https://bitbucket.org/keakon/doodle/



On Sat, Dec 18, 2010 at 2:38 AM, Robert Kluin wrote:

> Hi Noel,
>  People often want static content cached -- it reduces the load on
> your app.  Also, there might be other intermediate caches to worry
> about too, so it may not be only Google caching your content.
>
>  Yes, you should use some type of cache busting strategy.  It depends
> on how your iPhone app is implemented, but perhaps you can use a
> version number on your assets so they can be changed between versions?
>
> Robert
>
> On Fri, Dec 17, 2010 at 12:57, Noel  wrote:
> > I just pushed an update to my app in which a bunch of the static files
> > changed and were important for the app behavior.
> >
> > What I'm seeing is that any requests for those static files return an
> > old version of the file! I did some digging around and it seems that
> > other people are seeing this and GAE caches things very aggressively.
> >
> > Two things:
> > - Unless I'm missing something GAE shouldn't do that. When I submit
> > changes with appcfg.py update, it knows exactly what has changed, so
> > there's no reason it can't invalidate the caches for the static files
> > that have changed.
> > - Is there a way I can force it from here to start serving the latest
> > version of a static file? I can't change the app because it's an
> > iPhone app.
> >
> > Is it true that in the future, the best way to avoid this might be
> > something like adding ?timestamp=[hour] to the URL request? That way
> > files won't be cached for more than an hour? Is there a better method?
> >
> > Thanks.
> >
> >
> > --Noel
> >




[google-appengine] GAE + Google Books "X-Forwarded-For" header doesn't work

2010-12-18 Thread @bh!jiT
Hi All,

I have successfully integrated my GAE app with Google Books, but my big
problem is that Google Books has territorial restrictions on which books may
be shown to a user.
So after Google Books returns me a list of results, I simply can't view them
because I'm in a different country where the book is not allowed to be
viewed.

The Google Book Search Data API states that setting "X-Forwarded-For" to
the user's IP will return results specific to the user's location, but even
after trying hard to set the header and querying GBS, it still returns
results based on the IP of the Google App Engine server where my app is
deployed, not the user IP set in the X-Forwarded-For header.

I know GAE doesn't allow setting that header on incoming app engine
requests, but that restriction shouldn't apply to requests going out from
the app server to the Google Books server.

Please help or guide me to resolve this problem.
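
For reference, a minimal urlfetch sketch of the kind of call I mean (the
feed URL is the GData volumes feed; user_ip is a placeholder for the end
user's address):

from google.appengine.api import urlfetch

user_ip = '203.0.113.5'  # placeholder: the end user's IP address

# Outbound request to the Book Search Data API with the header set.
# App Engine may strip or override some outbound headers, which could
# be why the API still sees the datacenter IP.
result = urlfetch.fetch(
    url='http://books.google.com/books/feeds/volumes?q=python',
    headers={'X-Forwarded-For': user_ip})
print result.status_code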




Re: [google-appengine] How can i ask to add a new MIME types for file attachments to an email message?

2010-12-18 Thread 风笑雪
You can try adding this to your app.yaml:

handlers:

- url: /static/(.*\.mobi)
  static_files: static/\1
  upload: static/(.*\.mobi)
  mime_type: application/x-mobipocket-ebook

Then put all .mobi files in the "static" folder.

--
keakon

My blog(Chinese): www.keakon.net
Blog source code: https://bitbucket.org/keakon/doodle/




Re: [google-appengine] App Engine Limit on Index Entries per Entity

2010-12-18 Thread 风笑雪
I think each item in the list has two index entries (asc and desc), and the
key has an asc index entry, so the answer is 401.
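
Spelled out for foo3 (assuming the two-entries-per-item model above is
right):

items = 100 + 100        # len(x) + len(y)
entries = items * 2 + 1  # = 401, well under the 5000-entry limit
print entries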

--
keakon

My blog(Chinese): www.keakon.net
Blog source code: https://bitbucket.org/keakon/doodle/



On Fri, Dec 17, 2010 at 10:09 PM, Ryan  wrote:

> I know that there is a limit of 5000 index entries per entity, which
> means I can't do things like this:
>
> class Foo(db.Model):
>     x = db.ListProperty(int)
>     y = db.ListProperty(int)
>
> foo1 = Foo(x = range(5001))
> foo1.put()
>
> Furthermore, if I have the index in index.yaml
>
> - kind: Foo
>   properties:
>   - name: x
>   - name: y
>
> then I also see from this thread:
>
>
> http://groups.google.com/group/google-appengine/browse_thread/thread/d5f4dcb7d00ed4c6
>
> that I can't do this:
>
> foo2 = Foo(x = range(100), y=range(100))
> foo2.put()
>
> because that would give me 10,000 index entries.
>
> However, my question is: if I DON'T have any entries in index.yaml for
> Foo and try:
> foo3 = Foo(x = range(100), y=range(100))
> foo3.put()
>
> will that still raise the "BadRequestError: Too many indexed properties
> for entity" exception? From my tests, it looks like it won't cause any
> errors. Is this correct? How many index entries would foo3 have in
> this case? Is it 200 (the sum of the lengths of each list)? Or
> something else?
>




Re: [google-appengine] Using datastore for up to 4.5 billion keys

2010-12-18 Thread Robert Kluin
Hi Donovan,
  It sounds to me like the hash should be used as the key.  I would
probably use a structure something like this:

class Document(db.Model):
    data = db.BlobProperty()  # Original 'raw' document data.
    hashes = db.StringListProperty(indexed=False)  # Hash values for this document.

class AddToIndex(db.Model):
    """Set key_name to document_id."""
    index = db.StringProperty(required=True)  # hash key

class Index(db.Model):
    """Set key_name to your hash value."""
    index = db.StringListProperty(required=True)


I am assuming the documents are small enough to fit into a
BlobProperty; if not, you can use the blobstore to achieve a similar
result.  You also do not tell us how many hashes there will typically
be per document; the number of hashes per document is important.  Are
you likely to get multiple documents at one time that have overlapping
hashes?  That might impact the design too.  Also, you don't really tell
us how you get these documents, or exactly what you will be doing with
them (as in how the data is presented).

So I'll outline one possible general process -- I'm sure it could be
significantly improved for your use case.

1) A document *somehow* appears for the system to process.  Compute
the hash values of the content, and store the doc entity.

# If computing the hashes is a slow process, split it into more steps.
hashes = compute_hashes(doc_data)
def txn():
    doc_key = str(Document(data=doc_data, hashes=hashes).put())
    db.put([AddToIndex(key_name=doc_key, index=hash) for hash in hashes])
db.run_in_transaction(txn)

2) In a processing task, find documents that need to be added to indexes.

items_to_add = AddToIndex.all().fetch(50)
seen_indexes = set()
for item in items_to_add:
    if item.index not in seen_indexes:
        insert_index_builder_tasks(item.index)
        seen_indexes.add(item.index)

   Your index_builder_task should insert named tasks to prevent fork
bombs.  So you might use some modulus of the current time in
conjunction with a (memcache-based) count for the hash; that way, if
you do hit the same hash five minutes apart it can still be processed,
but if the same initial processing task runs three times it won't fork.
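
Something like this, for the naming scheme (the five-minute window and the
task URL/params are my guesses here, not a fixed recipe):

import time
from google.appengine.api import taskqueue

def insert_index_builder_tasks(index_hash):
    # Name the task after the hash plus a 5-minute window, so duplicate
    # inserts within one window collapse into a single task.
    window = int(time.time()) // 300
    try:
        taskqueue.add(url='/tasks/build_index',
                      params={'index_hash': index_hash},
                      name='build-%s-%d' % (index_hash, window))
    except (taskqueue.TaskAlreadyExistsError,
            taskqueue.TombstonedTaskError):
        pass  # already scheduled (or already ran) for this window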

3) The index builder only needs the hash to do its thing:

index_hash = self.request.get('index_hash')
work = AddToIndex.all(keys_only=True).filter('index', index_hash).fetch(200)
def txn():
    docs = [str(doc) for doc in work]  # New docs to add to the index
    index = Index.get_by_key_name(index_hash)
    if index is None:  # First time this hash has been seen.
        index = Index(key_name=index_hash, index=docs)
    else:
        index.index.extend(docs)
    index.index = list(set(index.index))  # Remove any dupes
    index.put()
db.run_in_transaction(txn)
db.delete(work)

4) Given a document, finding other documents with similar content is
pretty easy, and only needs one _fetch_!

   document = Document.get(document_key)
   indexes = Index.get_by_key_name(document.hashes)

   Now you can do whatever you need with the list of matching
documents in the index lists!

I implemented something very similar to this process a couple weeks
ago; so far it seems to be working quite well for me.  It uses a lot
of tasks, but they are small and very fast.



Robert






[google-appengine] Re: Appengine's Turkey problem

2010-12-18 Thread Kaan Soral
Actually, Turkey doesn't block Google; they block YouTube IPs, so the
problem is probably caused by shared IPs.

So in my opinion the problem can be solved if Google uses separate
IPs for its components.

As for government blocking, they normally block domain names rather
than IPs, but for some reason they only block YouTube IPs, because
people just reach the site using a separate DNS, etc.

On Dec 17, 2:29 am, Tim Hoffman  wrote:
> Hi
>
> I don't really believe it is solvable by Google.
>
> If they add a new pool of addresses for appengine, and some apps turn up on
> appengine that
> any particular government doesn't like, they will block access to that range
> and you are
> back in the same situation.
>
> All anyone can do is educate and lobby their government.
>
> Rgds
>
> Tim Hoffman




[google-appengine] Re: Using datastore for up to 4.5 billion keys

2010-12-18 Thread Donovan
Hi Robert,

thank you very much for the well considered and thorough reply! Much
appreciated!

The likely number of hashes per document is roughly the length of the
document, minus duplicate hashes and minus a fixed number (typically
15), so roughly 3500. There will very likely be a combination of the
same hashes in a batch import of documents, and there will also be
probabilistic hash collisions due to the nature of the hash algorithm.
These are fine, however.

The documents will arrive in batches of 1000. Your tips on task queues
describe exactly what I'm working on this minute!!! Your example code
is very useful.

I think the two most useful tips I can glean from your proposed
solution are to use the hash as the key name and the use of
Model.get_by_key_name. I did not know that this could accept a list of
keys, thus only using one query. This is fantastic! Do you know if
there are any limits on the length of the list? I thought perhaps the
limit of 30 would cause problems when using an IN query, but maybe
this does not apply to key names.

I don't need to preserve the document hashes, as all the queries will
come from separate corpora, and they will be hashed at query time.

One question I still have is how to choose a suitable hash width. 4.5
billion hashes presumes a hash width of 32 bits, but the overhead of
the metadata and indexes for that many entities could be unhelpful. I
can reduce the hash width to anything between 24 and 32 bits to
minimize the overhead. Do you have any thoughts on this?

Thanks again for your great answer!

Cheers,
Donovan.



Re: [google-appengine] Re: Appengine's Turkey problem

2010-12-18 Thread Robert Kluin
So what happens when they find an app hosted on App Engine they don't
like and block that?









Re: [google-appengine] Re: Documentation on AppEngine's caching reverse proxy

2010-12-18 Thread Stephen
> ... your content, but the cache is keyed off the URL ...

This is a bug. You should be keying off the URL and the Vary header, or not 
caching responses with Vary headers:

  http://code.google.com/p/googleappengine/issues/detail?id=4277

This was reported two months ago :-(
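
To illustrate the failure mode, a minimal handler sketch (hypothetical; any
response that sets Vary is affected):

from google.appengine.ext import webapp

class CachedPage(webapp.RequestHandler):
    def get(self):
        # Publicly cacheable for an hour.  If the edge cache keys only
        # on URL and ignores Vary, clients with different
        # Accept-Encoding values can be served each other's bodies.
        self.response.headers['Cache-Control'] = 'public, max-age=3600'
        self.response.headers['Vary'] = 'Accept-Encoding'
        self.response.out.write('hello')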




[google-appengine] Re: Documentation on AppEngine's caching reverse proxy

2010-12-18 Thread Stephen
If you haven't already, star this:

  http://code.google.com/p/googleappengine/issues/detail?id=2258


If folks discover new attributes of the cache, by experiment, perhaps they 
could be added there to collect them in one place.




Re: [google-appengine] Re: Using datastore for up to 4.5 billion keys

2010-12-18 Thread Robert Kluin
Hey Donovan,
   Some inline comments follow.


On Sat, Dec 18, 2010 at 15:06, Donovan  wrote:
> Hi Robert,
>
> thank you very much for the well considered and thorough reply! Much
> appreciated!
>
> The likely number of hashes per document is roughly the length of the
> document, minus duplicate hashes and minus a fixed number (typically
> 15), so roughly 3500. There will very likely be a combination of the
> same hashes in a batch import of documents and there will also be
> probabilistic hash collisions due to the nature of the hash algorithm.
> These are fine, however.

You'll probably need to add an additional step or two to handle
processing 3,500 hashes per document.  You might need to just try a few
different solutions to figure out what is best for your use-case.
Perhaps you can take advantage of the new long-running (10 minute)
tasks and just loop a few times, inserting batches of a couple hundred
AddToIndex entities for each document.  Another option is to simply
use a per-document hash-list as a queue, and keep processing it until
the 'queue' is empty.  You could probably run creation tasks in
parallel with some sharding and speed up document processing a bit.
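
To make the batching concrete, a sketch (this assumes the Document and
AddToIndex models from earlier; note the combined key_name, a deviation from
the earlier per-document key_name so that each (document, hash) pair gets
its own entity):

from google.appengine.ext import db

BATCH = 200  # assumed batch size; tune against datastore put limits

def enqueue_hashes(doc_id, hashes):
    # Inside a long-running task, write AddToIndex entities a couple
    # hundred at a time.
    for i in range(0, len(hashes), BATCH):
        db.put([AddToIndex(key_name='%s:%s' % (doc_id, h), index=h)
                for h in hashes[i:i + BATCH]])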


>
> The documents will arrive in batches of 1000. Your tips on task queues
> describe exactly what I'm working on this minute!!! Your example code
> is very useful.
>
The solution I suggested will handle multiple documents with the same
hash at once well.  In fact, that would (potentially) make it more
efficient since it can add multiple documents to an index at once.


> I think the two most useful tips I can glean from your proposed
> solution are to use the hash as the key name and the use of
> Model.get_by_key_name. I did not know that this could accept a list of
> keys, thus only using one query. This is fantastic! Do you know if
> there are any limits on the length of the list? I thought perhaps the
> limit of 30 would cause problems when using an IN query, but maybe
> this does not apply to key names.

I do not remember there being an explicit limit on the number you can
fetch by name.  I've been able to fetch a few hundred without any
major issues.  You'll probably want to explore ways of breaking the
fetch up into smaller batches of a few hundred, using async datastore
calls, though.  Just do some tests and benchmarks to find out what will
be best.  You're not going to want to use IN for this; use keys
anywhere possible -- it will be significantly faster.  Also, make sure
you use Appstats to help you minimize the number of (serial) RPC calls
you are making.
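
Something along these lines for the batching part (the batch size is a
guess, and this uses plain synchronous batch gets rather than the async
API):

def get_indexes(hashes, batch=250):
    # Fetch Index entities a few hundred key names at a time;
    # get_by_key_name returns None for missing keys, so drop those.
    results = []
    for i in range(0, len(hashes), batch):
        results.extend(Index.get_by_key_name(hashes[i:i + batch]))
    return [r for r in results if r is not None]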


>
> I don't need to preserve the document hashes as all the queries will
> come from a separate corpora, and they will be hashed at query time.
>
> One question I still have is to choose a suitable hash width. 4.5
> billion hashes presumes a hash width of 32 bits, but the overhead of
> the metadata and indexes for that many entities could be unhelpful. I
> can reduce the hash width to anything between 24 and 32 bits to
> minimize the overhead. Do you have any thoughts on this?

To reduce the metadata size, mark as many things as indexed=False as
possible, particularly on ListProperties.  Also, use short names for
everything -- entities and properties -- names like 'D' instead of
'Document'.  Also, in some of the examples I simply used the encoded
key; instead you should use the key name / id, which will be shorter
and save you some more space.

As far as the hash size goes, I guess it depends on what you're doing
and the relative cost of a false match.  Will the increase in false
positives outweigh the cost of an extra couple gigs of data per month?


In my implementation, I used the Aggregator from my slagg
(https://bitbucket.org/thebobert/slagg/src) lib; I'll probably upload
an example of a similar use-case sometime this weekend.  For what
you're doing you would probably want to customize the 'first steps' in
the processes, but I suspect it would work for you with relatively few
modifications.

You might also want to checkout the new Pipeline API, it looks like a
robust tool for building workflows.  I am not sure about the overhead
/ setup time as I have not played with it yet.


Robert



Re: [google-appengine] Re: Using datastore for up to 4.5 billion keys

2010-12-18 Thread Donovan Hide
Hi Robert,

wow! Thanks for the great advice! I'll digest all of this over the
coming few days and will read through your slagg library. Fast
numerical aggregation is another problem that I know some projects
would benefit from solving. I used to do a lot of MDX work, so it will
be interesting to see how it compares.

The tip about keeping names short and using ids rather than keys where
possible definitely seems wise:

from google.appengine.ext import db

print db.Key.from_path('Article',12)
print db.Key.from_path('Article','12')
print db.Key.from_path('Article',4294967296)
print db.Key.from_path('Article','4294967296')
print db.Key.from_path('I',1)
print db.Key.from_path('I',1<<32)

agZjaHVybnJyDQsSB0FydGljbGUYDAw
agZjaHVybnJyDwsSB0FydGljbGUiAjEyDA
agZjaHVybnJyEQsSB0FydGljbGUYgICAgBAM
agZjaHVybnJyFwsSB0FydGljbGUiCjQyOTQ5NjcyOTYM
agZjaHVybnJyBwsSAUkYAQw
agZjaHVybnJyCwsSAUkYgICAgBAM


proves the point! It's a shame there is a six-character minimum limit
on the app name, as I'm sure that would shave a few gigabytes off the
index size. I wonder if anyone knows how to minimise these keys any
further?

If I go with this schema:

class I(db.Model):
    v = db.UIntListProperty(required=True)

making use of this recipe:

http://appengine-cookbook.appspot.com/attachment/?id=ahJhcHBlbmdpbmUtY29va2Jvb2tyEQsSCkF0dGFjaG1lbnQY2x0M

It's quite possible that the key size will be the biggest user of space!

Well, I'll give it a go, have a look at Appstats, and report back with
an answer...

Cheers,
Donovan


[google-appengine] Re: App Gallery no longer available

2010-12-18 Thread Jay
Ikai, I think I may be able to spend some time on a user-space version
of the App Engine gallery over the coming months. I think the Prediction
API is a great idea to use as a tool to help fight the spam. We shall
see.

Thanks for your assistance.

-- Jay

On Nov 8, 1:04 pm, "Ikai Lan (Google)" 
wrote:
> Hey guys,
>
> Apologies for the delayed responses on this thread. The App Gallery was
> something that was really awesome in App Engine's infancy, but App Engine
> has since matured a bit and it's really not the best way to showcase App
> Engine's capabilities. We've decommissioned it. If you're interested in App
> Engine happenings, you'll definitely want to regularly check out our Reddit
> page:
>
> http://www.reddit.com/r/appengine
>
> We should have better communicated our intents with this page. We're
> unlikely to bring the App Gallery back. I'd personally love to see an App
> Engine application that serves the function the App Gallery served, maybe
> one that even uses the Prediction API (http://code.google.com/apis/predict/)
> to sort out the spam and do rankings. If this is a project community members
> are interested in taking on, let me know and I'll get you access to the
> prediction API.
>
> On a related note, due to the changes coming in Google Groups, we will be
> porting the Open Source projects page as well as the "Will it Play" Java
> frameworks page to our Project Hosting site:
>
> http://code.google.com/p/googleappengine/w/list
>
> --
> Ikai Lan
> Developer Programs Engineer, Google App Engine
> Blogger: http://googleappengine.blogspot.com
> Reddit: http://www.reddit.com/r/appengine
> Twitter: http://twitter.com/app_engine
>
> On Mon, Nov 8, 2010 at 1:03 AM, Tonny <12br...@gmail.com> wrote:
> > If you want to emphasize the importance of this thread, please mark it
> > as a favorite - I think Google engineers are monitoring those for
> > importance.
>
> > On Nov 8, 8:53 am, Julian Namaro  wrote:
> > > I think a specific "App Engine" tag in a general Web App Gallery would
> > > make more sense than what we had before.
> > > After all, what you can do with App Engine is pretty much the same
> > > than what you can do with other technology stacks, and much of what
> > > the user experiences is the common client-side part anyway.
>
> > > On Nov 7, 6:21 am, nickmilon  wrote:
>
> > > > Also, the gallery was a kind of museum where you could see how GAE
> > > > apps have evolved over time, from the early days circa April 2008
> > > > till now, always advancing as new features of the platform arrived
> > > > or as the features were understood by the developers.
> > > > Now even if they do substitute this with a new site, I doubt early
> > > > developers will go and resubmit their work.
> > > > So I only hope Google reconsiders this, fixes the problems related to
> > > > spam and classification of apps, and restores the site.
>
> > > > On Nov 6, 5:17 am, Julian Namaro  wrote:
>
> > > > > An App Gallery is a great idea but Google's one looked like a weekend
> > > > > project thrown out in the wild and was pretty useless.
> > > > > There was a lot of spam, unhelpful reviews, and occasionally you
> > could
> > > > > see desktop software (running only on Windows, not even cross-
> > > > > platform) with a whole bunch of 5-star ratings in this "App Engine"
> > > > > gallery.
>
> > > > > My guess is that they're deprecating the gallery in favor of the
> > > > > Chrome Web Store. Hope this will be a more serious attempt :]
>
> > > > > On Nov 5, 6:14 am, MLTrim  wrote:
>
> > > > > > Do you know why the App Gallery is no longer available?
>
> > > > > > http://appgallery.appspot.com
>
> > > > > > thanks
> > > > > > Michele
>




Re: [google-appengine] Re: Documentation on AppEngine's caching reverse proxy

2010-12-18 Thread saidimu apale
>
> If folks discover new attributes of the cache, by experiment, perhaps they
> could be added there to collect them in one place.


Official docs would go a long way in calming the unease of "what other
critical attributes haven't been discovered experimentally?" It appears
these docs aren't on the agenda at all, so I guess it's all experimental for
now.

+1 for your idea of collecting them in one place.
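
For what it's worth, a minimal sketch of the kind of probe people run
(handler and URL are invented for illustration; none of this is documented
behavior): serve a body that changes on every hit, with explicit caching
headers, then fetch it repeatedly from outside and see what the edge cache
actually honors.

import datetime

from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app


class CacheProbe(webapp.RequestHandler):
    def get(self):
        # Try different values here and re-test to map the cache's behavior.
        self.response.headers['Cache-Control'] = 'public, max-age=60'
        self.response.headers['Content-Type'] = 'text/plain'
        # The body changes on every hit, so any repeated identical response
        # you observe came from a cache, not from this handler.
        self.response.out.write(datetime.datetime.utcnow().isoformat())


application = webapp.WSGIApplication([('/cache-probe', CacheProbe)])

if __name__ == '__main__':
    run_wsgi_app(application)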

saidimu

On Sat, Dec 18, 2010 at 5:38 PM, Stephen  wrote:

> If you haven't already, star this:
>
>   http://code.google.com/p/googleappengine/issues/detail?id=2258
>
>
> If folks discover new attributes of the cache, by experiment, perhaps they
> could be added there to collect them in one place.
>




[google-appengine] Re: Static files cached even after update with changes

2010-12-18 Thread Ben
I have recently implemented a cache-busting strategy myself. My build
script changes the version number in the static asset URI, but the
app.yaml config routes all requests to the correct path using a simple
regex. It works well and means I can be very aggressive in setting
expiry directives. Because the regular expression is simple enough, the
replacement token in my source code works on dev_appserver.py too,
which is obviously important.

I'd be curious to hear how others do their cache busting (I found
mangling the actual filename to be too inconvenient).
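
To make that concrete, here is a minimal sketch of the shape of it - the
paths, expiry value, and helper name are invented for illustration, not my
actual config. app.yaml strips the version segment with a regex, so the
version exists only in URLs, never in filenames:

# app.yaml (sketch):
#
# handlers:
# - url: /static/v\d+/(.*)
#   static_files: static/\1
#   upload: static/(.*)
#   expiration: 365d

# Python helper used by templates; a build script rewrites ASSET_VERSION
# on each release, which is what busts the cache.
ASSET_VERSION = 42


def static_url(path):
    """Return a versioned URL for a static asset, e.g. 'css/site.css'."""
    return '/static/v%d/%s' % (ASSET_VERSION, path)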


On Dec 18, 4:38 am, Robert Kluin  wrote:
> Hi Noel,
>   People often want static content cached -- it reduces the load on
> your app.  Also, there might be other intermediate caches to worry
> about too, so it may not be only Google caching your content.
>
>   Yes, you should use some type of cache busting strategy.  It depends
> on how your iPhone app is implemented, but perhaps you can use a
> version number on your assets so they can be changed between versions?
>
> Robert
>
> On Fri, Dec 17, 2010 at 12:57, Noel  wrote:
> > I just pushed an update to my app in which a bunch of the static files
> > changed and were important for the app behavior.
>
> > What I'm seeing is that any requests for those static files return an
> > old version of the file! I did some digging around and it seems that
> > other people are seeing this and GAE caches things very aggressively.
>
> > Two things:
> > - Unless I'm missing something, GAE shouldn't do that. When I submit
> > changes with appcfg.py update, it knows exactly what has changed, so
> > there's no reason it can't invalidate the caches for the static files
> > that have changed.
> > - Is there a way I can force it from here to start serving the latest
> > version of a static file? I can't change the app because it's an
> > iPhone app.
>
> > Is it true that in the future, the best way to avoid this might be
> > something like adding ?timestamp=[hour] to the URL request? That way
> > files won't be cached for more than an hour? Is there a better method?
>
> > Thanks.
>
> > --Noel
>




Re: [google-appengine] Re: Static files cached even after update with changes

2010-12-18 Thread Peter Petrov
It's not necessary to have a build script that explicitly changes the
version tag. I, for example, calculate a hash from
the CURRENT_VERSION_ID environment variable at runtime, and use its prefix
as a version tag. Since CURRENT_VERSION_ID is guaranteed to change on every
deployment, the version tag changes as well, and everything works fine.
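
A minimal sketch of that trick (the helper name is made up; everything else
is the standard library plus one App Engine environment variable):

import hashlib
import os

# CURRENT_VERSION_ID looks like 'version-name.1234567890123456789' and
# changes on every deployment, so the tag below changes automatically.
_VERSION_TAG = hashlib.md5(os.environ['CURRENT_VERSION_ID']).hexdigest()[:8]


def static_url(path):
    """Return a URL whose cache-busting tag changes on each deploy."""
    return '/static/%s?v=%s' % (path, _VERSION_TAG)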

On Sun, Dec 19, 2010 at 5:28 AM, Ben  wrote:

> I have recently implemented a cache-busting strategy myself. My build
> script changes the version number in the static asset URI, but the
> app.yaml config routes all requests to the correct path using a simple
> regex. It works well and means I can be very aggressive in setting
> expiry directives. Because the regular expression is simple enough, the
> replacement token in my source code works on dev_appserver.py too,
> which is obviously important.
>
> I'd be curious to hear how others do their cache busting (I found
> mangling the actual filename to be too inconvenient).
>
>
> On Dec 18, 4:38 am, Robert Kluin  wrote:
> > Hi Noel,
> >   People often want static content cached -- it reduces the load on
> > your app.  Also, there might be other intermediate caches to worry
> > about too, so it may not be only Google caching your content.
> >
> >   Yes, you should use some type of cache busting strategy.  It depends
> > on how your iPhone app is implemented, but perhaps you can use a
> > version number on your assets so they can be changed between versions?
> >
> > Robert
> >
> > On Fri, Dec 17, 2010 at 12:57, Noel  wrote:
> > > I just pushed an update to my app in which a bunch of the static files
> > > changed and were important for the app behavior.
> >
> > > What I'm seeing is that any requests for those static files return an
> > > old version of the file! I did some digging around and it seems that
> > > other people are seeing this and GAE caches things very aggressively.
> >
> > > Two things:
> > > - Unless I'm missing something, GAE shouldn't do that. When I submit
> > > changes with appcfg.py update, it knows exactly what has changed, so
> > > there's no reason it can't invalidate the caches for the static files
> > > that have changed.
> > > - Is there a way I can force it from here to start serving the latest
> > > version of a static file? I can't change the app because it's an
> > > iPhone app.
> >
> > > Is it true that in the future, the best way to avoid this might be
> > > something like adding ?timestamp=[hour] to the URL request? That way
> > > files won't be cached for more than an hour? Is there a better method?
> >
> > > Thanks.
> >
> > > --Noel
> >




[google-appengine] Re: Problem deleting an application

2010-12-18 Thread joshuacronemeyer
Does anyone know if there is a way to contact Google customer service
directly? It seems crazy that they offer App Engine as a paid service
and there is no obvious way to contact them if there are problems with
their service.


On Dec 17, 4:10 pm, joshuacronemeyer 
wrote:
> I have an application that I no longer want to pay for.  When I go to
> the disabling administration page it tells me that I can't request
> permanent deletion because my billing status is "enabled".  When I try
> to disable my billing, the status changes to "changing daily budget"
> for about 10 minutes and then it goes directly back to "enabled".  In
> short there is no way for me to delete this application.  Anyone have
> any ideas?




Re: [google-appengine] Re: Problem deleting an application

2010-12-18 Thread 风笑雪
You can fill in this form:
http://code.google.com/support/bin/request.py?contact_type=AppEngineBillingSupport

Or file a production issue:
http://code.google.com/p/googleappengine/issues/entry?template=Production%20issue

--
keakon

My blog (Chinese): www.keakon.net
Blog source code: https://bitbucket.org/keakon/doodle/



On Sun, Dec 19, 2010 at 12:50 PM, joshuacronemeyer <
joshuacroneme...@gmail.com> wrote:

> Does anyone know if there is a way to contact google customer service
> directly?  It seems crazy that they offer app engine as a paid service
> and there is no obvious way to contact them if there are problems with
> their service.
>
>
> On Dec 17, 4:10 pm, joshuacronemeyer 
> wrote:
> > I have an application that I no longer want to pay for.  When I go to
> > the disabling administration page it tells me that I can't request
> > permanent deletion because my billing status is "enabled".  When I try
> > to disable my billing, the status changes to "changing daily budget"
> > for about 10 minutes and then it goes directly back to "enabled".  In
> > short there is no way for me to delete this application.  Anyone have
> > any ideas?
>




Re: [google-appengine] Re: Why not SELECT single_property FROM model?

2010-12-18 Thread Eli Jones
I'm still waiting for GAE to let me select individual properties.

A query like this:

Select value, datevalue From MyModel Where datevalue > date1 AND datevalue <
date2 Order By datevalue

could be served very quickly by an index on (datevalue, value).

Being able to hit a covering index to serve your queries is a great
benefit, and it's one of the oldest tricks in the database book.

And, if you have a model with several properties, it can save a lot of
network bandwidth to return only the specific properties that you want.

Naturally, you can create more complex models to work around this, but that
is less than ideal.

I figure there is some structural limitation in the datastore that makes
this difficult for them to offer. I'd be very interested to know what that
limitation is. (The naive assumption would be that this is trivial;
clearly it is not.)
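
For reference, the kind of "more complex model" workaround I mean looks
roughly like this (model and property names are invented): duplicate the two
hot properties into a slim entity, so the range query never has to drag the
large entity over the wire.

from google.appengine.ext import db


class MyModel(db.Model):
    value = db.FloatProperty()
    datevalue = db.DateTimeProperty()
    payload = db.BlobProperty()  # the bulk you don't want on every query


class MyModelSlim(db.Model):
    """Holds only (value, datevalue); written alongside each MyModel."""
    value = db.FloatProperty()
    datevalue = db.DateTimeProperty()


def query_range(date1, date2):
    # Serves the range scan cheaply, at the cost of double writes.
    q = (MyModelSlim.all()
         .filter('datevalue >', date1)
         .filter('datevalue <', date2)
         .order('datevalue'))
    return q.fetch(1000)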

On Sat, Dec 18, 2010 at 1:05 AM, Robert Kluin wrote:

> I figured it might be of value on things like range scans.  Think
> about uses like this:
>
>SELECT key, first_name FROM People WHERE first_name >= 'alb' AND
> first_name < 'ablz';
>
> You could have the entity id _and_ the name from the index.  Aside
> from this type of use case, you are probably right -- not much value
> for equality-only queries.
>
> I'm not sure I see an issue with list properties, wouldn't the
> expectation be to only return the matching element?
>
>
>
>
>
>
>
> On Fri, Dec 17, 2010 at 23:43, Tim Hoffman  wrote:
> > One major problem I see will be with list properties,
> > I assume each value in the list when indexed will have its own position
> in
> > the index along with the Key,
> > So if what you describe was implemented you find you could only get a
> single
> > value from the index that matched.
> > In fact in all cases you could probably only get properties for matches
> from
> > the index(es) unless you retrieve the whole entity.
> > A lazily loaded entity, ie match the entity, get the key, create a lazy
> > entity, will still have to fetch the full entity to get any  property
> > other than those used to find the entity in the indexes.
> > I think ;-)
> > So I am not sure there is a lot to be gained.
> > T
> >




[google-appengine] re: from google.appengine.ext.webapp import template

2010-12-18 Thread Martin Webb
Hi all,
Since we moved to 1.4 (I've not updated my local dev PC yet), when I deploy
my app it doesn't work - the imports fail in all my modules for:

from google.appengine.ext.webapp import template

I have tracked it down to the webapp Django template helper. Please don't
tell me this has been removed in 1.4 - can I still use Django, or do I need
to redo all my templates using a new templating engine?
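
I wonder if pinning the bundled Django version with use_library, before
anything imports the template module, would fix it - a sketch (the version
string is whatever my templates were actually written against):

from google.appengine.dist import use_library
use_library('django', '0.96')

from google.appengine.ext.webapp import template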

 
Regards
 
 
Martin Webb

T. 01233 660879 
F. 01233 640949
M. 07798946071
 
