[google-appengine] Java getServletContext scalability

2010-02-09 Thread Danny Yoo
How does the use of the global context provided by getServletContext
affect scalability? I assume there must be some effect; otherwise
JCache would be redundant, no?


--
You received this message because you are subscribed to the Google Groups "Google 
App Engine" group.
To post to this group, send email to google-appeng...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en.



[google-appengine] Can't access admin console on production

2010-02-09 Thread Takashi Matsuo
I can't access the App Engine admin console in production.
It gives me the following error:

Error: Server Error

The server encountered an error and could not complete your request.
If the problem persists, please report your problem and mention this
error message and the query that caused it.

Is it only me, or is anyone else seeing this?

-- 
Takashi Matsuo
Kay's daddy




[google-appengine] a bug in memcache

2010-02-09 Thread saintthor
quote:

By default, values stored in memcache are retained as long as
possible. Values may be evicted from the cache when a new value is
added to the cache if the cache is low on memory. When values are
evicted due to memory pressure, the least recently used values are
evicted first.

With an old value in the cache, if updating an existing value (rather
than adding a new one) causes the total size to grow past 1 MB, the old
value still exists. I think it should be deleted.




[google-appengine] Re: can I set the max retry times for a task in the queue?

2010-02-09 Thread saintthor
Thank you all.

I know about eta and X-AppEngine-TaskRetryCount. I think setting a
parameter would be better.
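For reference, a minimal sketch of the header-based workaround mentioned in this thread. The X-AppEngine-TaskRetryCount header name comes from the replies below; the handler shape and the 5-retry limit are my illustrative assumptions, not an App Engine API:

```python
# Sketch: cap task retries yourself by reading the retry-count header
# App Engine adds to each task request. MAX_RETRIES and the function
# shape are illustrative assumptions.
MAX_RETRIES = 5

def should_run(headers, max_retries=MAX_RETRIES):
    """Return False once the task has been retried max_retries times.

    When this returns False, the handler would respond with HTTP 200
    without doing any work, so the queue stops retrying.
    """
    retry_count = int(headers.get('X-AppEngine-TaskRetryCount', 0))
    return retry_count < max_retries
```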

On Feb 7, 2:10 pm, kang wrote:
> So we can get the number of retries already made and limit the retry
> count ourselves. Thanks.
>
> On Sun, Feb 7, 2010 at 12:30 AM, Prashant Gupta wrote:
>
>
>
> > you can check for X-AppEngine-TaskRetryCount in your code.
>
> >http://code.google.com/appengine/docs/java/taskqueue/overview.html#Ta...
>
> > On 6 February 2010 21:24, Eli Jones  wrote:
>
> >> When you add the task.. just use eta argument to set the time you want it
> >> done by.. and then pass that time to the task in the params as 'eta'.
>
> >> Then, when the task starts running.. have a check the compares the current
> >> datetime to the 'eta' param.. if the 'eta' is greater than the current
> >> datetime.. then have the Task "succeed" by doing nothing.
>
> >> On Sat, Feb 6, 2010 at 3:02 AM, saintthor  wrote:
>
> >>> if not, I suggest a param to set it.
>
> >>> In the document: App Engine will attempt to retry until it succeeds. I do
> >>> not think it is necessary.
>
>
> --
> Stay hungry,Stay foolish.




Re: [google-appengine] Keeping original datastore after re-deploying

2010-02-09 Thread Robert Kluin
Adding an additional property to a kind is fine. Depending on how you
are using it, you may want to make sure it has a default value
defined.
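Robert's point about defaults can be sketched in plain Python (the class below stands in for an entity; there is no App Engine SDK here). Entities saved before the new property existed come back without it, so a fallback default keeps old rows usable:

```python
# Illustrative only: OldEntity stands in for a datastore entity that
# was written before a `rating` property was added to the model.
class OldEntity(object):
    """An entity from before the schema change; it has no `rating`."""
    name = 'widget'

def rating_of(entity, default=0):
    # getattr with a default keeps pre-overhaul rows readable.
    return getattr(entity, 'rating', default)
```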


Robert





On Tue, Feb 9, 2010 at 7:53 PM, 风笑雪  wrote:
> Deploying won't affect your datastore, except for indexes.
>
> 2010/2/10 jsnschneck :
>> Hey Guys,
>>
>> I am working on an app that has just had a major overhaul. I want to
>> re-deploy this app, but the datastore backend contains information that is
>> very important. Is there a way that I can re-deploy the app and hook
>> it up to the original datastore? Also, is it possible to extend one of
>> the tables with an extra field without changing the data already
>> contained?
>>
>> Cheers,
>> Jason
>>




Re: [google-appengine] Re: When is the Timeout bug going to get fixed?

2010-02-09 Thread Eli Jones
Well, you can always wrap puts in a try/except like this (if you want it
to just keep retrying):


from time import sleep

wait = .1
while True:
    try:
        db.put(myEntity)
        break  # put succeeded
    except db.Timeout:
        sleep(wait)
        wait *= 2  # double the wait before the next retry




On Tue, Feb 9, 2010 at 5:54 PM, phtq  wrote:

> The recipe does cut down the Timeouts dramatically, but there are
> still a large number which seem to bypass this fix completely. A
> sample error log entry follows:
>
> Exception in request:
> Traceback (most recent call last):
>  File "/base/python_lib/versions/third_party/django-0.96/django/core/
> handlers/base.py", line 77, in get_response
>response = callback(request, *callback_args, **callback_kwargs)
>  File "/base/data/home/apps/kbdlessons/1-01.339729324125102596/
> views.py", line 725, in newlesson
>productentity = Products.gql("where Name = :1", ProductID).get()
>  File "/base/python_lib/versions/1/google/appengine/ext/db/
> __init__.py", line 1564, in get
>results = self.fetch(1, rpc=rpc)
>  File "/base/python_lib/versions/1/google/appengine/ext/db/
> __init__.py", line 1616, in fetch
>raw = raw_query.Get(limit, offset, rpc=rpc)
>  File "/base/python_lib/versions/1/google/appengine/api/
> datastore.py", line 1183, in Get
>limit=limit, offset=offset, prefetch_count=limit,
> **kwargs)._Get(limit)
>  File "/base/python_lib/versions/1/google/appengine/api/
> datastore.py", line 1113, in _Run
>raise _ToDatastoreError(err)
> Timeout
>
> Any ideas on how to deal with this class of Timeouts?
>
>
>
> On Jan 28, 9:48 am, phtq  wrote:
> > Thanks for mentioning this recipe, it worked well in testing and we
> > will try it on the user population tomorrow.
> >
> > On Jan 27, 9:48 am, djidjadji  wrote:
> >
> >
> >
> > > There is an article series about the datastore. It explains that the
> > > Timeouts are inevitable. It gives the reason for the timeouts. They
> > > will always be part of Bigtable and the Datastore of GAE.
> >
> > > The only solution is a retry on EVERY read. The get by id/key and the
> queries.
> > > If you do that then very few reads will result in a Timeout.
> > > I wait first 3 and then 6 secs between each request. I log each Timeout.
> > > If still Timeout after 3 read tries I raise the exception.
> >
> > > The result is very few final read Timeouts. The log shows frequent
> > > requests that need a retry, but most of them will succeed with the
> > > first.
> >
> > > For speed, fetch the Static content object by key_name, and key_name
> > > is the file path.
> >
> > > 2010/1/26 phtq :
> >
> > > > Our application error log for the 26th showed around 160 failed http
> > > > requests due to timeouts. That's 160 users being forced to hit the
> > > > refresh button on their browser to get a normal response. A more
> > > > typical day has 20 to 60 timeouts. We have been waiting over a year
> > > > for this bug to get fixed with no progress at all. It's beginning to
> > > > look like it's unfixable so perhaps Google could provide some
> > > > workaround. In our case, the issue arises because of the 1,000 file
> > > > limit. We are forced to hold all our .js, .css, .png. mp3, etc. files
> > > > in the database and serve them from there. The application is quite
> > > > large and there are well over 10,000 files. The Python code serving
> up
> > > > the files does just one DB fetch and has about 9 lines of code so
> > > > there is no way it can be magically restructured to make the Timeout
> > > > go away. However, putting all the files on the app engine as real
> > > > files would avoid the DB access and make the problem go away. Could
> > > > Google work towards removing that file limit?
> >




Re: [google-appengine] 1.3.1 SDK Prerelease - help us verify

2010-02-09 Thread Piotr Jaroszyński
Hello,

Not sure whether it was present in 1.3.0, but there is an unpleasant
bug in 1.3.1 where blobstore request mangling breaks data encoding
[1].

[1] - http://code.google.com/p/googleappengine/issues/detail?id=2749

-- 
Best Regards
Piotr Jaroszyński




Re: [google-appengine] GAE/J recommended best practices.

2010-02-09 Thread Patrick Twohig
Thanks Ikai, I found out I had forgotten to turn on memory caching for
some things in my app and the sluggishness went away. I still get the
loading request every once in a while, but once it goes fully live (out
of beta) that probably won't be an issue, as we're expecting fairly
regular traffic.




[google-appengine] Re: memcache set succeeds but immediate get fails. Pls help

2010-02-09 Thread Andy Freeman
> > memcache.set() does not set if id already present.

Huh?  I don't see that in the documentation.  Why do you think that it
is true?

memcache.set is described as "Sets a key's value, regardless of
previous contents in cache."
memcache.add is described as "Sets a key's value, if and only if the
item is not already in memcache."

http://code.google.com/appengine/docs/python/memcache/functions.html
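The documented contrast between set() and add() can be illustrated with a dict-backed stand-in (FakeCache is an illustration only, not the App Engine memcache API):

```python
class FakeCache(object):
    """Dict-backed stand-in showing the documented set/add contrast."""

    def __init__(self):
        self._data = {}

    def set(self, key, value):
        # Sets a key's value, regardless of previous contents.
        self._data[key] = value
        return True

    def add(self, key, value):
        # Sets a key's value, if and only if the key is absent.
        if key in self._data:
            return False
        self._data[key] = value
        return True

    def get(self, key):
        return self._data.get(key)
```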

On Feb 7, 1:32 pm, observer247  wrote:
> Thanks Eli ! The cache time was the issue.
>
> memcache.set() does not set if the id is already present. So I am using
> delete and add.
> I cannot be sure the id is present; memcache entries could be deleted
> because of memory pressure from App Engine, right?
>
> On Feb 7, 10:18 am, Eli Jones  wrote:
>
>
>
> > One minor thing I noticed.. why not use memcache.set() instead of
> > memcache.delete(), memcache.add()?
>
> > On Sun, Feb 7, 2010 at 6:22 AM, observer247  wrote:
> > > This is my code:
>
> > >                ret = memcache.add(key=mykey, value=qList, time=
> > > 60*60*24*30)
> > >                logging.critical("Created cache batch %s Passed %s" %
> > > (mykey, str(ret)))
>
> > >                qList = memcache.get(mykey)
>
> > > For some reason, qList is None ! I have logged all values and qList is
> > > a non empty list. Check code below where I print a lot of info in the
> > > logs.
>
> > > Detailed code here:
>
> > > def MY_QC_MAX(): return 3
> > > def MY_QC_SIZE(): return 200
>
> > > def createBatchMyModels():
> > >        import random
> > >        for n in range(MY_QC_MAX()):
> > >                bnum = n + 1
> > >                mykey = "qkey_batch_"+str(bnum)
> > >                qQ = MyModel.all(keys_only=True).filter('approved',
> > > True)
> > >                if bnum > 1:
> > >                        qQ = qQ.filter('__key__ >', last_key)
> > >                rows = qQ.fetch(MY_QC_SIZE())
> > >                tot = len(rows)
> > >                if tot < MY_QC_SIZE():
> > >                        logging.critical("Not enough MyModels for
> > > batch %u, got %u" % (bnum, tot))
> > >                        if tot == 0:
> > >                                return
> > >                last_key = rows[tot - 1]
> > >                # create the qList
> > >                qList = list()
> > >                logging.critical("Added %u rows into key %s" % (tot,
> > > mykey))
> > >                tmpc = 0
> > >                for r in rows:
> > >                        if tmpc == 0:
> > >                                logging.critical("elem %u into key %s"
> > > % (r.id(), mykey))
> > >                                tmpc = tmpc + 1
> > >                        qList.append(r.id())
>
> > >                for elem in qList:
> > >                        logging.info("key %s elem is %u" % (mykey,
> > > elem))
> > >                memcache.delete(mykey)
> > >                ret = memcache.add(key=mykey, value=qList, time=
> > > 60*60*24*30)
> > >                logging.critical("Created cache batch %s Passed %s" %
> > > (mykey, str(ret)))
>
> > >                qList = memcache.get(mykey)
> > >                if qList is None:
> > >                        logging.critical(".. getNextMyModel: Did not
> > > find key %s" % mykey)
> > >                else:
> > >                        logging.critical(".. LEN : %u" % len(qList))
>
> > > Sample log:
> > > .
> > > 02-07 03:15AM 05.240 key qkey_batch_1 elem is 13108
> > > C 02-07 03:15AM 05.250 Created cache batch qkey_batch_1 Passed True
> > > C 02-07 03:15AM 05.253 .. getNextQuestion: Did not find key
> > > qkey_batch_1
> > > C 02-07 03:15AM 05.339 Added 200 rows into key qkey_batch_2
> > > ...
>
> > > Can anyone pls help !
>
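For what it's worth, one plausible reading of "the cache time was the issue" mentioned above: in memcached convention (which I believe App Engine's memcache follows, though the thread doesn't confirm it), an expiration of at most 30 days is treated as relative seconds, and anything larger as an absolute unix timestamp, so values near that boundary are easy to get wrong. A sketch of that rule, as an assumption:

```python
# Assumption (memcached convention; believed to match App Engine's
# memcache): expirations <= 30 days are relative seconds, and larger
# values are read as absolute unix timestamps.
THIRTY_DAYS = 60 * 60 * 24 * 30  # 2592000 seconds

def absolute_expiry(time_arg, now):
    """Return the absolute expiry timestamp for a given `time` argument."""
    if time_arg <= THIRTY_DAYS:
        return now + time_arg  # relative: seconds from now
    return time_arg  # already an absolute timestamp
```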




Re: [google-appengine] Keeping original datastore after re-deploying

2010-02-09 Thread 风笑雪
Deploying won't affect your datastore, except for indexes.

2010/2/10 jsnschneck :
> Hey Guys,
>
> I am working on an app that has just had a major overhaul. I want to
> re-deploy this app, but the datastore backend contains information that is
> very important. Is there a way that I can re-deploy the app and hook
> it up to the original datastore? Also, is it possible to extend one of
> the tables with an extra field without changing the data already
> contained?
>
> Cheers,
> Jason
>




[google-appengine] Keeping original datastore after re-deploying

2010-02-09 Thread jsnschneck
Hey Guys,

I am working on an app that has just had a major overhaul. I want to
re-deploy this app, but the datastore backend contains information that
is very important. Is there a way that I can re-deploy the app and hook
it up to the original datastore? Also, is it possible to extend one of
the tables with an extra field without changing the data already
contained?

Cheers,
Jason




[google-appengine] Re: When is the Timeout bug going to get fixed?

2010-02-09 Thread phtq
The recipe does cut down the Timeouts dramatically, but there are
still a large number which seem to bypass this fix completely. A
sample error log entry follows:

Exception in request:
Traceback (most recent call last):
  File "/base/python_lib/versions/third_party/django-0.96/django/core/
handlers/base.py", line 77, in get_response
response = callback(request, *callback_args, **callback_kwargs)
  File "/base/data/home/apps/kbdlessons/1-01.339729324125102596/
views.py", line 725, in newlesson
productentity = Products.gql("where Name = :1", ProductID).get()
  File "/base/python_lib/versions/1/google/appengine/ext/db/
__init__.py", line 1564, in get
results = self.fetch(1, rpc=rpc)
  File "/base/python_lib/versions/1/google/appengine/ext/db/
__init__.py", line 1616, in fetch
raw = raw_query.Get(limit, offset, rpc=rpc)
  File "/base/python_lib/versions/1/google/appengine/api/
datastore.py", line 1183, in Get
limit=limit, offset=offset, prefetch_count=limit,
**kwargs)._Get(limit)
  File "/base/python_lib/versions/1/google/appengine/api/
datastore.py", line 1113, in _Run
raise _ToDatastoreError(err)
Timeout

Any ideas on how to deal with this class of Timeouts?
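The retry recipe can be generalized to wrap reads as well as puts. A sketch with the datastore call injected so the same wrapper covers gets and queries; the doubling-wait scheme is my choice (djidjadji's 3-then-6-second scheme quoted later in this message is one instance of it), and Timeout here is a local stand-in for google.appengine.ext.db.Timeout:

```python
import time

class Timeout(Exception):
    """Stand-in for google.appengine.ext.db.Timeout."""

def with_retries(op, retries=3, first_wait=3.0):
    """Run op() (any datastore read or write), retrying on Timeout.

    Waits first_wait seconds before the second attempt, doubling each
    time; re-raises the Timeout after `retries` failed attempts.
    """
    wait = first_wait
    for attempt in range(retries):
        try:
            return op()
        except Timeout:
            if attempt == retries - 1:
                raise  # out of attempts: surface the Timeout
            time.sleep(wait)
            wait *= 2
```

Usage would look like `entity = with_retries(lambda: SomeModel.gql("where Name = :1", name).get())`, where SomeModel and the query are placeholders.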



On Jan 28, 9:48 am, phtq  wrote:
> Thanks for mentioning this recipe, it worked well in testing and we
> will try it on the user population tomorrow.
>
> On Jan 27, 9:48 am, djidjadji  wrote:
>
>
>
> > There is an article series about the datastore. It explains that the
> > Timeouts are inevitable. It gives the reason for the timeouts. They
> > will always be part of Bigtable and the Datastore of GAE.
>
> > The only solution is a retry on EVERY read. The get by id/key and the 
> > queries.
> > If you do that then very few reads will result in a Timeout.
> > I wait first 3 and then 6 secs between each request. I log each Timeout.
> > If still Timeout after 3 read tries I raise the exception.
>
> > The result is very few final read Timeouts. The log shows frequent
> > requests that need a retry, but most of them will succeed with the
> > first.
>
> > For speed, fetch the Static content object by key_name, and key_name
> > is the file path.
>
> > 2010/1/26 phtq :
>
> > > Our application error log for the 26th showed around 160 failed http
> > > requests due to timeouts. That's 160 users being forced to hit the
> > > refresh button on their browser to get a normal response. A more
> > > typical day has 20 to 60 timeouts. We have been waiting over a year
> > > for this bug to get fixed with no progress at all. It's beginning to
> > > look like it's unfixable so perhaps Google could provide some
> > > workaround. In our case, the issue arises because of the 1,000 file
> > > limit. We are forced to hold all our .js, .css, .png. mp3, etc. files
> > > in the database and serve them from there. The application is quite
> > > large and there are well over 10,000 files. The Python code serving up
> > > the files does just one DB fetch and has about 9 lines of code so
> > > there is no way it can be magically restructured to make the Timeout
> > > go away. However, putting all the files on the app engine as real
> > > files would avoid the DB access and make the problem go away. Could
> > > Google work towards removing that file limit?
>




Re: [google-appengine] Pervasive datastore timeouts

2010-02-09 Thread Ikai L (Google)
Adding an index shouldn't cause reads to take longer, though it will add
time to the writes. It's more likely that the timing of your index
addition was simply a coincidence. Are you still seeing these datastore
issues?

On Sat, Feb 6, 2010 at 5:11 AM, Jason Smith wrote:

> We have a cron job which fetches up to 1,000 entities based on a
> timestamp--no transactions, no writes. It has been working for several
> weeks.
>
> I recently added an index (the timestamp, plus a boolean).
> Subsequently, the datastore is almost completely unresponsive with
> these entities. The vast majority of the queries raise a Timeout
> exception, always after exactly 4 seconds. It seems that I can hardly
> query these entities at all, using any filter.
>
> I was under the impression that off-and-on timeouts are transient. But
> this has been happening for nearly a day. Also I expected that once an
> index is serving, no legal query could possibly time out always.
>
> I am concerned that I have an index or datastore issue. Does anybody
> have any insights about this?
>


-- 
Ikai Lan
Developer Programs Engineer, Google App Engine
http://googleappengine.blogspot.com | http://twitter.com/app_engine




Re: [google-appengine] Data design choice: which works better?

2010-02-09 Thread Ikai L (Google)
Hi,

I know this might just be me being crazy, but I'm having a lot of
trouble following your description. It's, unfortunately, the limitations
of my human mind at work. Do you mind describing your question in terms
of your problem domain? It'd be easier to wrap my brain around more
concrete terms (Person, Group, Order, etc.) rather than abstract letters.
I suspect that's the reason no one has answered your question: it's too
hard to read.

On Thu, Feb 4, 2010 at 3:35 AM, markvgti  wrote:

> Each user U, has 0 or more children data entities corresponding to
> days D.
>
> Each day D has 0 or more children data entities of types X, Y and Z.
>
> Is it better to associate items X, Y and Z with an entity of type D
> by:
>
> 1. creating a collection each of X, Y and Z data entities inside
> entity D (e.g., private List childrenOfTypeY;), or by
>
> 2. storing X, Y and Z by generating their Key values such that it
> contains the parent entity D's key (e.g., Key k = new
> KeyFactory.Builder(D.class.getSimpleName(),
> "BlahBlahBlah").addChild(X.class.getSimpleName(),
> "YadaYadaYada").getKey();).
>
> My primary concern is which type of data design will be better
> resource/time consumption-wise (i.e., which way is "lighter"). AFAIK,
> method 1 gives me automatic consistency, method 2 gives me
> flexibility.
>
> In method 1, when I fetch an entity of type D, are all children
> entities of types X, Y & Z fetched in one go, or lazily (w/o
> programmatic intervention) on an as-required basis? In method 2
> fetching the children is of course up to me.
>
> Other characteristics of my application:
>
> * As far as writing is concerned, it is more likely that individual
> children of an entity of type D will be written to.
>
> * As far as reading from the datastore is concerned, it is more likely
> that all children of 1 or more entities of type D will need to be
> fetched to satisfy a single request.
>
> * Once an entity of type D is created, it and its children are
> unlikely to be deleted, though entities of type X, Y and Z (for a
> given entity D) may be individually modified.
>
> * Data consistency is obviously of concern, but since data deletion is
> infrequent, app-managed consistency shouldn't be too hard. Plus, a
> cleanup Task can delete orphan X, Y, Z objects (should that happen).
>
> If it matters, I am using Java.
>
> If the above explanation is too opaque :-) or I have posted to the
> wrong group, please let me know.
>
> Thanks!
>


-- 
Ikai Lan
Developer Programs Engineer, Google App Engine
http://googleappengine.blogspot.com | http://twitter.com/app_engine




Re: [google-appengine] Select columns

2010-02-09 Thread Ikai L (Google)
Have you read our documentation on KeyFactory?

http://code.google.com/appengine/docs/java/datastore/relationships.html

I'd try to understand what's going on there. It sounds like you're doing
it the right way, but it's up to you to benchmark and find the best
approach for what works for you. The usage characteristics of your
application should determine the way you store your data.
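As an aside, the "encode the parent key in the child key" idea discussed in this thread can be illustrated with a toy sketch. The string keys, the dict datastore, and the ':details' suffix convention below are all made up; in App Engine proper this would go through KeyFactory and key names:

```python
# Toy illustration: derive the child entity's key from the parent's
# key by convention, so fetching the child needs no query. The
# ':details' suffix is an arbitrary made-up convention.
def child_key(parent_key, suffix='details'):
    return '%s:%s' % (parent_key, suffix)

store = {}  # stands in for the datastore, keyed by string keys

def get_child(parent_key):
    # Key construction is cheap: no lookup of the parent is needed
    # before fetching its child entity.
    return store.get(child_key(parent_key))
```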

On Wed, Feb 3, 2010 at 3:42 AM, Manny S  wrote:

> Ikai,
> Based on your inputs I created two data classes that have a unidirectional
> one-to-one relationship
> Now, I have two data classes simpledata and detailscol.
> simpledata contains fields A, B, C (and a Key field)
> detailscol just contains field D.
>
> simpledata imports detailscol that contains field D (and a Key field). It
> also contains an accessor for the detailscol.
> Code:
> simpledata sdata = new simpledata(A,B,C);
> sdata.setKey(null);
> detailscol obj = new detailscol(D);
> sdata.setD(obj);
>
> The keys are generated by the application and then I make the data
> persistent.
>
> Now, I display just the data in simpledata and if the user clicks on a
> details link I get the data stored in detailscol
> To get to that data I just do
>
> detailscol d = sdata.getDetails();
>
> Two questions:
>
> 1) Is this the right approach?
>
> 2) If I want to get the child data using just the parent key, how do I
> go about it?
>
> E.g, user clicks details and I use some AJAX to redirect to a different
> servlet with just parent key as a parameter (since I don't access the child
> object yet). I get the parent key using
> KeyFactory.keyToString(sdata.getKey());
>
> Now, that I have the parent's key should I do a getObjectbyID on the
> parent data again using this and then get the child using the accessor
> method or is there a direct way to construct the child key and get to the
> child data.
>
> Due to the nature of my application I would like to have the key generated
> automatically (using setKey(null)).
>
> Apologies for the confusion in advance :)
>
> Manny
>
>
>
>
>
>
> On Sat, Jan 30, 2010 at 12:16 AM, Ikai L (Google) wrote:
>
>> Hi Manny,
>>
>> A few things to first remember - App Engine's datastore is not a database,
>> but a distributed key value store with additional features. Thus, we should
>> be careful not to frame our thinking in terms of RDBMS schemas. For this
>> reason, I like to avoid using database terminology that can confound the
>> design process like "table" or "column". App Engine stores objects
>> serialized ("entities") and indexes on the values. It'd be similar to an
>> approach of creating a MySQL table with a String ID and a blob value,
>> storing serialized Objects in the blob column, or using Memcache and storing
>> JSON values.
>>
>> When you retrieve a single value from the key value store, we have to
>> retrieve everything at once. In most scenarios, unlike SQL databases you may
>> be used to, retrieving large binary or text data does not add serious
>> overhead. Of course, this changes if you start storing data on the scale of
>> 1mb and are retrieving it unnecessarily. How large is the data you are
>> retrieving?
>>
>> Here's the way I would model your scenario if I was positive the
>> text/binary field had a 1:1 relationship with the parent class:
>>
>> * on your main entity, define the properties.
>> * define a new entity with a text/binary field, and encode the parent key
>> information in this key such that generating the key for this child field is
>> very cheap. KeyFactory.stringToKey and KeyFactory.keyToString are crucial
>> here. Read more about them here:
>> http://code.google.com/appengine/docs/java/javadoc/com/google/appengine/api/datastore/KeyFactory.html.
>> You can call your child property "parent_id:additional_info" or whatever
>> makes sense to you.
>>
>> Robert's solution of using a child key is basically just a variation on
>> this, as parent key information is encoded in a child key.
>>
>> A lot of this stuff can be a bit different to get used to. I suggest
>> becoming familiar with keys and how they are used in App Engine:
>>
>> Basic documentation about relationships:
>> http://code.google.com/appengine/docs/java/datastore/relationships.html
>> A more advanced article:
>> http://code.google.com/appengine/articles/storage_breakdown.html
>>
>>   On Thu, Jan 28, 2010 at 10:28 PM, Manny S  wrote:
>>
>>>   Hi All,
>>>
>>> First off, thanks for your time. A quick noob question on the right way
>>> to model data.
>>>
>>> I have a table with four columns A,B,C, D.  D - the fourth is of type
>>> text (contains quite a bit of data).
>>>
>>> I wanted to ensure that the contents of the details column 'D' is not
>>> fetched during a query. A sample scenario
>>> User does a search. Sees Columns A,B,C. If they need more details for
>>> that particular record Click on a link that fetches D for that particular
>>> record.
>>>
>>> So I tried to do something like - S
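A minimal sketch of the key-encoding approach Ikai describes, in plain Python with strings standing in for real datastore keys (on App Engine Java the same idea uses KeyFactory.keyToString / KeyFactory.stringToKey; the entity and field names here are invented for illustration):

```python
# Hypothetical sketch: strings stand in for KeyFactory-encoded keys,
# and a dict plays the role of the key-value datastore.

def child_key_for(parent_key):
    """Derive the details-entity key from the parent's key string,
    so the child can be fetched without loading the parent first."""
    return "%s:details" % parent_key

def fetch_details(store, parent_key):
    # One direct lookup -- no query, no parent entity load.
    return store.get(child_key_for(parent_key))

store = {}
parent_key = "simpledata-42"
store[parent_key] = {"A": 1, "B": 2, "C": 3}
store[child_key_for(parent_key)] = {"D": "large detail blob"}

print(fetch_details(store, parent_key))  # {'D': 'large detail blob'}
```

Because the child key is derived purely from the parent key, the AJAX handler that receives only the parent key can construct the child key directly, answering question 2 above.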

Re: [google-appengine] Creating Index programmatically

2010-02-09 Thread Sandeep Sathaye
We have built a relational database on top of Google Bigtable which supports
ANSI SQL-92/ANSI SQL-99 & JDBC 3.0. Please check www.cloud2db.com. We want
to build indexes using SQL DDL commands. We also support creating multiple
servers and databases within a Cloud2db instance, which is why we can't
effectively use the approach of one index definition file per appspot. We
also want to avoid re-deploying the complete application just to add new
indexes.
On Tue, Feb 9, 2010 at 1:40 PM, Robert Kluin  wrote:

> No.
>
> But if you explain the problem you are trying to solve someone might
> be able to suggest an alternative solution.
>
> Robert
>
>
>
>
> On Tue, Feb 9, 2010 at 1:34 PM, Sandeep  wrote:
> > Is there any way to create/build an Index programmatically in GAE?
> >
> > --
> > You received this message because you are subscribed to the Google Groups
> "Google App Engine" group.
> > To post to this group, send email to google-appeng...@googlegroups.com.
> > To unsubscribe from this group, send email to
> google-appengine+unsubscr...@googlegroups.com
> .
> > For more options, visit this group at
> http://groups.google.com/group/google-appengine?hl=en.
> >
> >
>
> --
> You received this message because you are subscribed to the Google Groups
> "Google App Engine" group.
> To post to this group, send email to google-appeng...@googlegroups.com.
> To unsubscribe from this group, send email to
> google-appengine+unsubscr...@googlegroups.com
> .
> For more options, visit this group at
> http://groups.google.com/group/google-appengine?hl=en.
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To post to this group, send email to google-appeng...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en.



Re: [google-appengine] Re: app engine, shared IP and twitter api

2010-02-09 Thread enes akar
Thanks Ryan, yes I have a unique User Agent.
Also people from twitter answered my question. As I guessed, all app engine
applications share the same IP, so we are blocked easily.

They are offering to migrate my app to another platform. I wanted them to
propose a special solution for app engine's users. It is very discouraging
after all the app engine specific work.

On Tue, Feb 9, 2010 at 10:35 PM, Ryan  wrote:

>
> Make sure you set your User Agent string to something unique as well.
> You'll still get Rate Limited, but it should be a slightly higher
> limit.
>
> To answer your question, yes, you are being rate limited because of
> other App Engine Twitter search API users.  I wouldn't suggest using
> App Engine and Twitter Search for a production project because of
> this.  Twitter does not have authenticated search API, and only
> whitelists search users by IP address, and explicitly says in their
> docs that they can't/won't whitelist App Engine apps:
>
> http://apiwiki.twitter.com/Rate-limiting
>
> "The Search API is only able to whitelist IP addresses, not user
> accounts. This works in most situations but for cloud platforms like
> Google App Engine, applications without a static IP addresses cannot
> receive Search whitelisting."
>
> Ryan
>
> On Feb 9, 8:05 am, enes akar  wrote:
> > Thanks Nick I will try to find authenticated version of search api.
> >
> > By the way I am really thankful about the immediate responses of this
> group.
> >
> > On Tue, Feb 9, 2010 at 4:50 PM, Nick Johnson (Google) <
> >
> >
> >
> >
> >
> > nick.john...@google.com> wrote:
> > > Hi,
> >
> > > App Engine uses a shared pool of IPs for outgoing urlfetch requests.
> > > Unfortunately, as you observe, some services such as Twitter enforce
> per-ip
> > > ratelimiting.
> >
> > > In the case of Twitter, most of their APIs that support anonymous
> access
> > > also support authenticated access. You can submit authenticated
> requests
> > > instead, which are limited by your account, rather than by your IP.
> >
> > > -Nick Johnson
> >
> > > On Tue, Feb 9, 2010 at 2:33 PM, enes akar  wrote:
> >
> > >> Hi;
> >
> > >> I have just deployed an application to app engine which use twitter
> search
> > >> api.
> >
> > >> But there is a problem. Twitter blocks some of  my requests saying
> "You
> > >> have been rate limited. Enhance your calm."
> >
> > >> Of course I have asked about this to twitter men, waiting for their
> reply.
> >
> > >> But I want to ask you, whether the following scenario is possible:
> > >> May app engine give the same IP to different applications?
> > >> If so another application which we share the same IP, may be spamming
> > >> twitter api; and because of this spammer application I am blocked too.
> >
> > >> Is this possible?
> > >> Have you seen similar problem, and is there a solution?
> >
> > >> Note: It is not possible to exceed the rate limits of twitter, because
> > >> there is no traffic in my site.
> >
> > >> Thanks in advance.
> >
> > >> --
> > >> Enes Akar
> > >>http://www.linkedin.com/pub/enes-akar/7/835/3aa
> >
> > >> --
> > >> You received this message because you are subscribed to the Google
> Groups
> > >> "Google App Engine" group.
> > >> To post to this group, send email to
> google-appeng...@googlegroups.com.
> > >> To unsubscribe from this group, send email to
> > >> google-appengine+unsubscr...@googlegroups.com e...@googlegroups.com>
> > >> .
> > >> For more options, visit this group at
> > >>http://groups.google.com/group/google-appengine?hl=en.
> >
> > > --
> > > Nick Johnson, Developer Programs Engineer, App Engine
> > > Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration
> Number:
> > > 368047
> >
> > > --
> > > You received this message because you are subscribed to the Google
> Groups
> > > "Google App Engine" group.
> > > To post to this group, send email to google-appengine@googlegroups.com
> .
> > > To unsubscribe from this group, send email to
> > > google-appengine+unsubscr...@googlegroups.com e...@googlegroups.com>
> > > .
> > > For more options, visit this group at
> > >http://groups.google.com/group/google-appengine?hl=en.
> >
> > --
> > Enes Akar http://www.linkedin.com/pub/enes-akar/7/835/3aa
>
> --
> You received this message because you are subscribed to the Google Groups
> "Google App Engine" group.
> To post to this group, send email to google-appeng...@googlegroups.com.
> To unsubscribe from this group, send email to
> google-appengine+unsubscr...@googlegroups.com
> .
> For more options, visit this group at
> http://groups.google.com/group/google-appengine?hl=en.
>
>


-- 
Enes Akar
http://www.linkedin.com/pub/enes-akar/7/835/3aa


[google-appengine] DeadlineExceededError / 500 Server Error Occurring everywhere

2010-02-09 Thread Leon
My site is suddenly acting up today. Every page is throwing a 500
Server Error ("The server encountered an error and could not complete
your request.") and the logs all cite the "DeadlineExceededError". I
haven't made any code changes in a month. I noticed it happens
immediately after I try to edit content on my site (a couple of writes
to the datastore and memcache), but any further activity yields the
500 / deadline exceed error on every page on the site and this problem
lasts for around 5 minutes, then suddenly the site works again, then
more 500's will occur. The content isn't any different from what I've
worked on in the past, and in the past I was able to do what I'm doing
now without these problems.

Can someone at google see what's up? My app name is
"newfangledfunnies"

Thanks,
Leon




[google-appengine] Re: app engine, shared IP and twitter api

2010-02-09 Thread Ryan

Make sure you set your User Agent string to something unique as well.
You'll still get Rate Limited, but it should be a slightly higher
limit.

To answer your question, yes, you are being rate limited because of
other App Engine Twitter search API users.  I wouldn't suggest using
App Engine and Twitter Search for a production project because of
this.  Twitter does not have authenticated search API, and only
whitelists search users by IP address, and explicitly says in their
docs that they can't/won't whitelist App Engine apps:

http://apiwiki.twitter.com/Rate-limiting

"The Search API is only able to whitelist IP addresses, not user
accounts. This works in most situations but for cloud platforms like
Google App Engine, applications without a static IP addresses cannot
receive Search whitelisting."

Ryan

On Feb 9, 8:05 am, enes akar  wrote:
> Thanks Nick I will try to find authenticated version of search api.
>
> By the way I am really thankful about the immediate responses of this group.
>
> On Tue, Feb 9, 2010 at 4:50 PM, Nick Johnson (Google) <
>
>
>
>
>
> nick.john...@google.com> wrote:
> > Hi,
>
> > App Engine uses a shared pool of IPs for outgoing urlfetch requests.
> > Unfortunately, as you observe, some services such as Twitter enforce per-ip
> > ratelimiting.
>
> > In the case of Twitter, most of their APIs that support anonymous access
> > also support authenticated access. You can submit authenticated requests
> > instead, which are limited by your account, rather than by your IP.
>
> > -Nick Johnson
>
> > On Tue, Feb 9, 2010 at 2:33 PM, enes akar  wrote:
>
> >> Hi;
>
> >> I have just deployed an application to app engine which use twitter search
> >> api.
>
> >> But there is a problem. Twitter blocks some of  my requests saying "You
> >> have been rate limited. Enhance your calm."
>
> >> Of course I have asked about this to twitter men, waiting for their reply.
>
> >> But I want to ask you, whether the following scenario is possible:
> >> May app engine give the same IP to different applications?
> >> If so another application which we share the same IP, may be spamming
> >> twitter api; and because of this spammer application I am blocked too.
>
> >> Is this possible?
> >> Have you seen similar problem, and is there a solution?
>
> >> Note: It is not possible to exceed the rate limits of twitter, because
> >> there is no traffic in my site.
>
> >> Thanks in advance.
>
> >> --
> >> Enes Akar
> >>http://www.linkedin.com/pub/enes-akar/7/835/3aa
>
> >> --
> >> You received this message because you are subscribed to the Google Groups
> >> "Google App Engine" group.
> >> To post to this group, send email to google-appeng...@googlegroups.com.
> >> To unsubscribe from this group, send email to
> >> google-appengine+unsubscr...@googlegroups.com >>  e...@googlegroups.com>
> >> .
> >> For more options, visit this group at
> >>http://groups.google.com/group/google-appengine?hl=en.
>
> > --
> > Nick Johnson, Developer Programs Engineer, App Engine
> > Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
> > 368047
>
> > --
> > You received this message because you are subscribed to the Google Groups
> > "Google App Engine" group.
> > To post to this group, send email to google-appeng...@googlegroups.com.
> > To unsubscribe from this group, send email to
> > google-appengine+unsubscr...@googlegroups.com > e...@googlegroups.com>
> > .
> > For more options, visit this group at
> >http://groups.google.com/group/google-appengine?hl=en.
>
> --
> Enes Akar http://www.linkedin.com/pub/enes-akar/7/835/3aa




[google-appengine] Re: Android app to monitor appengine quotas

2010-02-09 Thread Hugo Visser
Hi,

I've updated Engine Watch for Android. New:
- Now uses the build-in system Google Accounts on Android 2.0 and up
(no more entering passwords)
- Add shortcuts to a specific app on the home screen
- Display of billing stats

Full details on my blog at http://code.neenbedankt.com

Hugo

On Feb 1, 5:05 pm, Hugo Visser  wrote:
> Hi,
>
> I've released a little android app to monitor your app engine quotas.
> It's called Engine Watch and is available from the android market. See
> my blog at http://code.neenbedankt.com for more details. I hope it's
> useful to some of you too :)
>
> Hugo




[google-appengine] Re: Audio

2010-02-09 Thread Hazzadous
It would be great to have an audio API like you say, for transcoding,
voice recognition, and general manipulation.  Currently I'm having to
upload 4 formats for the same file.  There is this in issues:

http://code.google.com/p/googleappengine/issues/detail?id=1947

On Feb 8, 6:40 pm, sampablokuper  wrote:
> On Feb 6, 8:43 pm, ProfessorMD  wrote:
>
> > Does Google App Engine have support for the Java Sound API?
>
> Come to think of it, does Google App Engine support any audio
> processing libraries? Can an App Engine app, for instance, transcode
> an uploaded WAV file to a FLAC or MP3 file?




[google-appengine] Uncatchable severe error "Operation commit failed on resource" logged

2010-02-09 Thread Marc Provost
Hi everybody!

I am using the java implementation and seeing the following error
logged sporadically, both in the development server and live. Note
that it does not seem to be a "real" error, as the commit always goes
through and my data looks perfectly fine. Could it be a low-level
exception that is not converted back to a JDOException? Is it a real
error?

org.datanucleus.transaction.Transaction commit: Operation commit
failed on resource:
org.datanucleus.store.appengine.datastorexaresou...@608d41, error code
UNKNOWN and transaction: [DataNucleus Transaction, ID=Xid=

I think the root cause of this problem is that I'm reading entities
from one entity group, cache them and then I open a transaction on
another entity group. In short, I spawn tasks that first read 30 or so
entities and copy a subset of their content into memory. Then, I open
a transaction and cache this content to a "global" entity (just a
wrapper around a Blob) for later use.  My goal is to go over all the
entities of a kind (1000s) and cache a subset of their data that I
often need. The cache and the other entities are not in the same
entity group. If I only perform the transaction, without reading the
other entities first the error does not occur.

Note that I'm doing this very carefully -- I perform all my data store
operations inside of a try/catch && for loop to retry if necessary.

Thanks for any help!
Marc
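The try/catch-in-a-loop retry pattern Marc mentions can be sketched generically like this (a hypothetical helper; on App Engine you would catch the concrete datastore exception, or the JDOException in Java, rather than the broad Exception used here for illustration):

```python
def commit_with_retry(txn, retries=3):
    """Run `txn` (a callable doing the transactional work), retrying
    on failure and re-raising the last error if all attempts fail."""
    last_error = None
    for attempt in range(retries):
        try:
            return txn()
        except Exception as e:  # narrow this to the real exception type
            last_error = e
    raise last_error

# Usage: a transaction that fails twice with a transient error, then
# commits on the third try.
attempts = {"n": 0}
def flaky_commit():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("transient commit failure")
    return "committed"

print(commit_with_retry(flaky_commit))  # committed
```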




[google-appengine] Re: "The import com.google.apphosting.api.ApiProxy cannot be resolved"

2010-02-09 Thread Guser
This error started in Eclipse for me as well, after I downloaded and
installed the latest GWT 2.0.1.

No resolution yet, and it didn't happen after updating on another
workstation, so I'm not sure what the issue is.

Here's the error,

The type com.google.apphosting.api.ApiProxy$Delegate cannot be
resolved. It is indirectly referenced from required .class files

and

The project was not built since its build path is incomplete. Cannot
find the class file for com.google.apphosting.api.ApiProxy$Delegate.
Fix the build path then try building this project





On Feb 3, 1:13 am, ivanceras  wrote:
> I also have the same issue,
> import com.google.appengine.tools.development.ApiProxyLocalImpl;
> is an error in my eclipse IDE.
>
> On Dec 30 2009, 3:32 am, "Ikai L (Google)"  wrote:
>
>
>
> > ApiProxy should just be part of the standard SDK JAR. Is this being included
> > correctly?
>
> > On Sun, Dec 27, 2009 at 7:01 PM, bill  wrote:
> > > I followed the instructions here:
>
> > >http://code.google.com/appengine/docs/java/howto/unittesting.html
>
> > > and added appengine-api-stubs.jar and appengine-local-runtime.jar to
> > > my classpath.  Eclipse was able to resolve these imports fine:
>
> > > import com.google.appengine.api.datastore.dev.LocalDatastoreService;
> > > import com.google.appengine.tools.development.ApiProxyLocalImpl;
>
> > > It is not able to resolve this import:
>
> > > import com.google.apphosting.api.ApiProxy;
>
> > > Is there another JAR file I need to add to my classpath?
>
> > > - Bill
>
> > > --
>
> > > You received this message because you are subscribed to the Google Groups
> > > "Google App Engine" group.
> > > To post to this group, send email to google-appeng...@googlegroups.com.
> > > To unsubscribe from this group, send email to
> > > google-appengine+unsubscr...@googlegroups.com > >  e...@googlegroups.com>
> > > .
> > > For more options, visit this group at
> > >http://groups.google.com/group/google-appengine?hl=en.
>
> > --
> > Ikai Lan
> > Developer Programs Engineer, Google App Engine




Re: [google-appengine] I need a working InboundMail example, pls

2010-02-09 Thread Ikai L (Google)
Matthew, there's an example of this here in our docs:
http://code.google.com/appengine/docs/python/mail/receivingmail.html

On Mon, Feb 8, 2010 at 1:13 PM, Matthew  wrote:

> Hello, all.
>
> If you have a working example of using inbound email,
> I would be grateful for a peek at it.
>
> Oops, I should mention that I need the Python version.
>
> Also, if you do have an example and you are nice enough
> to share it with me, can I pls beg your indulgence and have
> you show me what you put into the from/to fields in the
> Development Console when you are testing?  Thanks!
>
> Thank you one and all for being here for me,
>
> Matthew
>
> ps. this is my first posting on this group - and as a way
> of introducing myself I'll just say that I am overjoyed
> to see Python being used in such a way - it was clear
> to me when Python 2 arrived that that it (or something
> like it) was going to be the way to software gets created
> - long live Python and GAE
>
> --
> You received this message because you are subscribed to the Google Groups
> "Google App Engine" group.
> To post to this group, send email to google-appeng...@googlegroups.com.
> To unsubscribe from this group, send email to
> google-appengine+unsubscr...@googlegroups.com
> .
> For more options, visit this group at
> http://groups.google.com/group/google-appengine?hl=en.
>
>


-- 
Ikai Lan
Developer Programs Engineer, Google App Engine
http://googleappengine.blogspot.com | http://twitter.com/app_engine




[google-appengine] Re: cron job and secured application

2010-02-09 Thread lent
In terms of https enforcement, it is done through the standard web
application descriptor in Java (web.xml).  App Engine automatically
redirects from http to https when a secured URL is accessed through
http.  Since a cron job URL is a relative URL and the cron job winds
up being called through http, the cron job request gets redirected to
https.  The app is not the one sending the 302; I believe App Engine
doesn't allow a job to follow a redirect and reports the 302 as an error.
That's why I think it would be valuable to allow cron jobs to be
configured to use https rather than http.

Len
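The exemption Nick suggests can be sketched as a small request guard, assuming the standard X-AppEngine-Cron header (set by the scheduler on cron requests and stripped from external traffic) and a WSGI-style headers mapping; the function name is hypothetical:

```python
def needs_https_redirect(headers, scheme):
    """Decide whether a request should be 302-redirected to https.

    Cron requests arrive over plain http but carry the
    X-AppEngine-Cron header, so they can safely be let through
    without a redirect."""
    if headers.get("X-AppEngine-Cron") == "true":
        return False  # let the cron job through over http
    return scheme != "https"

# Usage
print(needs_https_redirect({}, "http"))                            # True
print(needs_https_redirect({"X-AppEngine-Cron": "true"}, "http"))  # False
print(needs_https_redirect({}, "https"))                           # False
```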

On Feb 8, 5:04 am, "Nick Johnson (Google)" 
wrote:
> Hi,
>
> I presume from your description you're enforcing https-only in your code?
> Cronjobs don't actually use HTTP, so they simply fill in the protocol value
> with a default of 'http'. You need to exempt cron requests in your app from
> sending 302s.
>
> -Nick Johnson
>
>
>
> On Sat, Feb 6, 2010 at 1:15 AM, lent  wrote:
> > Hi,
>
> > I have an (java) application that's secured so that only https access
> > is allowed.  I'm running into a problem with cron jobs in that they
> > are sent using http and when scheduled request hits my app gets
> > redirected to https and and winds up getting "too many continues" and
> > 302 status.  I didn't find anything in the docs about configuring to
> > have the scheduled requests sent via https.  Is this in the plans?
> > Any suggestions for how I can get around this problem other than to
> > have the scheduled request urls be allowed to come in through http.
>
> > Thanks,
> > Len
>
> > --
> > You received this message because you are subscribed to the Google Groups
> > "Google App Engine" group.
> > To post to this group, send email to google-appeng...@googlegroups.com.
> > To unsubscribe from this group, send email to
> > google-appengine+unsubscr...@googlegroups.com
> > .
> > For more options, visit this group at
> >http://groups.google.com/group/google-appengine?hl=en.
>
> --
> Nick Johnson, Developer Programs Engineer, App Engine
> Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
> 368047




Re: [google-appengine] Re: Stored data voiume is totally different on Datastore Statistics and Dashboard

2010-02-09 Thread Robert Kluin
I think it is a great idea.  It would certainly help us better understand
how we are using space.

Robert






On Tue, Feb 9, 2010 at 4:12 AM, Pavel Kaplin  wrote:
> I like this suggestion too. Moreover, I'd prefer a detailed description of the
> number called Used Space on Dashboard/Quota Details pages. E.g. indexes =
> 250 Mb , Entities = 500 Mb, sessions = 50 Mb and so on, plus detailed info
> about indexes and sessions. What do you think?
>
> On Mon, Feb 8, 2010 at 9:57 PM, Robert Kluin  wrote:
>>
>> I like Philip's suggestion a lot.  It will help us all identify space
>> intensive indexes, and _hopefully_ reduce the number of posts about
>> this.
>>
>> I submitted a feature request for this issue:
>> http://code.google.com/p/googleappengine/issues/detail?id=2740
>>
>> Robert
>>
>>
>>
>>
>> On Mon, Feb 8, 2010 at 2:27 PM, WeatherPhilip
>>  wrote:
>> > This issue comes up about once per week. Google -- you need to address
>> > this. The simplest way of addressing it would be to put the size of
>> > *every* index on the statistics page, and show that the total
>> > corresponds (roughly) to the quota number.
>> >
>> > This would enable people to see which index (or indexes) was consuming
>> > a lot of space, and take steps to optimize -- maybe by not indexing
>> > that field.
>> >
>> > Philip
>> >
>> > On Feb 8, 1:48 pm, "Ikai L (Google)"  wrote:
>> >> Are you storing information in sessions? Session information can also
>> >> take
>> >> up space.
>> >>
>> >>
>> >>
>> >>
>> >>
>> >> On Mon, Feb 8, 2010 at 5:32 AM, Pavel Kaplin 
>> >> wrote:
>> >> > Here's the detailed description of mentioned indexes:
>> >> >  1) address (string, < 100 bytes), tradePoint(String, < 100 bytes),
>> >> > user (Key, generated by GAE), timestamp
>> >> >  2) tradePoint, user, timestamp
>> >> >  3) user, timestamp
>> >>
>> >> > Entities count is about 15k. I don't understand how it might
>> >> > happened.
>> >>
>> >> > On Feb 8, 3:25 pm, "Nick Johnson (Google)" 
>> >> > wrote:
>> >> > > Hi Pavel,
>> >>
>> >> > > That depends on the nature of your indexes, and the entities being
>> >> > indexed.
>> >> > > It's certainly possible for indexes to reach this magnitude -
>> >> > particularly
>> >> > > if you're indexing list properties.
>> >>
>> >> > > -Nick Johnson
>> >>
>> >> > > On Mon, Feb 8, 2010 at 1:12 PM, Pavel Kaplin
>> >> > > 
>> >> > wrote:
>> >> > > > It's hard to believe that 3 indexes (for 2, 3 and 4 fields) could
>> >> > > > eat
>> >> > > > 9x more space than data itself.
>> >>
>> >> > > > On Feb 8, 2:45 pm, "Nick Johnson (Google)"
>> >> > > > 
>> >> > > > wrote:
>> >> > > > > Hi Pavel,
>> >>
>> >> > > > > The datastore stats include only the raw size of the entities.
>> >> > > > > The
>> >> > total
>> >> > > > > space consumed is the space consumed by the entities, plus the
>> >> > > > > space
>> >> > > > > consumed by all your indexes.
>> >>
>> >> > > > > -Nick Johnson
>> >>
>> >> > > > > On Mon, Feb 8, 2010 at 12:35 PM, Pavel Kaplin <
>> >> > pavel.kap...@gmail.com
>> >> > > > >wrote:
>> >>
>> >> > > > > > Hi there!
>> >>
>> >> > > > > > My datastore stats says me "Size of all entities = 51
>> >> > > > > > MBytes", but
>> >> > > > > > dashboard shows 0.54 Gb as Total Stored Data.
>> >>
>> >> > > > > > As you can see, these values differ from each other for more
>> >> > > > > > than
>> >> > ten
>> >> > > > > > times. Why?
>> >>
>> >> > > > > > Application id is bayadera-tracker
>> >>
>> >> > > > > > --
>> >> > > > > > You received this message because you are subscribed to the
>> >> > > > > > Google
>> >> > > > Groups
>> >> > > > > > "Google App Engine" group.
>> >> > > > > > To post to this group, send email to
>> >> > google-appengine@googlegroups.com
>> >> > > > .
>> >> > > > > > To unsubscribe from this group, send email to
>> >> > > > > >
>> >> > > > > > google-appengine+unsubscr...@googlegroups.com> >> > > > > > e...@googlegroups.com>> >> > e...@googlegroups.com>> >> > > > e...@googlegroups.com>
>> >> > > > > > .
>> >> > > > > > For more options, visit this group at
>> >> > > > > >http://groups.google.com/group/google-appengine?hl=en.
>> >>
>> >> > > > > --
>> >> > > > > Nick Johnson, Developer Programs Engineer, App Engine
>> >> > > > > Google Ireland Ltd. :: Registered in Dublin, Ireland,
>> >> > > > > Registration
>> >> > > > Number:
>> >> > > > > 368047
>> >>
>> >> > > > --
>> >> > > > You received this message because you are subscribed to the
>> >> > > > Google
>> >> > Groups
>> >> > > > "Google App Engine" group.
>> >> > > > To post to this group, send email to
>> >> > > > google-appengine@googlegroups.com
>> >> > .
>> >> > > > To unsubscribe from this group, send email to
>> >> > > >
>> >> > > > google-appengine+unsubscr...@googlegroups.com> >> > > > e...@googlegroups.com>> >> > e...@googlegroups.com>
>> >> > > > .
>> >> > > > For more options, visit this group at
>> >> > > >http://groups.google.com/group/google-appengine?hl=en.
>> >>
>> >> > > --
>> >> > > Nick Johnson, Developer Programs Engineer, App Engine

[google-appengine] Re: Help us bring Google Development to Saint Louis

2010-02-09 Thread STL Innovation Camp
Thank you all for your RTs and messages.
Because of the strong development community at Google, we have already
had a member of Google's outreach team contact us to speak at the STL
Innovation Camp.

Again thank you, without your desire to help the community this
wouldn't have happened.

On Jan 30, 9:05 pm, STL Innovation Camp  wrote:
> PLEASE READ BEFORE MARKING AS SPAM:
> The Saint Louis Dev Community, like most, is in desperate need of an
> employment boost. To help our unemployed brethren, a number of us
> decided to host a boot camp that teaches techies how to build
> innovative products and run their own company.
>
> Since our goal is economic growth and not vendor sales, this camp is
> entirely vendor neutral. We currently have speakers & sponsors from
> the Java, Microsoft, and IBM communities. Unfortunately, we haven't
> been able to get Google's attention.
>
> Please help us attract Google as a sponsor/speaker so Saint Louis
> developers can learn about Google App Engine in this exciting camp.
> Please forward this message or join us in a tweet off to get Googles
> attention:
>
> Twitter Message:
> RT @STLInnovation: Pls RT and get @google to sponsor STL Innovation
> camp. #InnovationCamp http://bit.ly/aZxtrE
>
> Please tweet quickly, our camp is on Feb. 26th, so we need to get
> Google's attention quickly. http://www.stlinnovationcamp.com | @stlinnovation

-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To post to this group, send email to google-appeng...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en.



Re: [google-appengine] Creating Index programmatically

2010-02-09 Thread Robert Kluin
No.

But if you explain the problem you are trying to solve someone might
be able to suggest an alternative solution.

Robert




On Tue, Feb 9, 2010 at 1:34 PM, Sandeep  wrote:
> Is there any way to create/build an Index programmatically in GAE?
>
> --
> You received this message because you are subscribed to the Google Groups 
> "Google App Engine" group.
> To post to this group, send email to google-appeng...@googlegroups.com.
> To unsubscribe from this group, send email to 
> google-appengine+unsubscr...@googlegroups.com.
> For more options, visit this group at 
> http://groups.google.com/group/google-appengine?hl=en.
>
>




Re: [google-appengine] A "Which is Better" question.

2010-02-09 Thread Robert Kluin
We see very different performance depending on the exact structure of
our models and indexes.  I have found that it really helps to just
setup test cases and try both when you have different implementation
ideas.  And it may be the case that you need to use some combination
of the two approaches.

Robert







On Tue, Feb 9, 2010 at 12:50 PM, kang  wrote:
> According to Google engineer, fetch entry by key is very fast. So you should
> use fetch by key rather than manipulating the list. I think the B approach
> will cost more CPU also.
>
> On Wed, Feb 10, 2010 at 1:38 AM, johnP  wrote:
>>
>> I'm trying to get my head around where to use the datastore for
>> business logic, and where to use python code.  So just wanted to ask
>> two "which approach is better" questions:
>>
>> Scenario 1.  For example, let's say you have a list of items, and you
>> need to return both the selected item as well as the list of items.
>> For example - a person has a bunch of cars:  [Honda, Ford, Porsche,
>> Yugo, Trabaunt, Dusenberg].  You have stored the Yugo key as the
>> person's 'active_car'.
>>
>>   You need to return:                 car, cars = Yugo, [Honda, Ford,
>> Porsche, Yugo, Trabaunt, Dusenberg]
>>
>> Which approach is better to return the values...
>>
>> Approach A:     cars=person.car_set.fetch(1000)
>>                       car = db.get(stored_car_key)
>>                       return car, cars
>>
>>
>> Approach B:     cars = person.car_set.fetch(1000)
>>                       car = [i for i in cars if i.key() ==
>> stored_car_key]
>>                       return car, cars
>>
>> In other words - what's cheaper - the list comprehension, or the
>> db.get().  If the person has 1000 cars, does the answer change?
>>
>>
>> Scenario 2.  I have a list of 300 people (In my case, there will never
>> be more than 1000) that I need to slice and dice in different ways.
>> a.  Need them all, by last name, from California.  b.  Need people
>> between the ages of 25 and 35 in California.  c.  Need people over 300
>> lbs in California.  Which approach should I use:
>>
>> Approach A:  Create multiple queries:
>>                     a.  people =
>> state.people_set.order('last_name').fetch(1000)
>>                     b.  people =
>> state.people_set.order('age').filter('age >', 35).etc.
>>                     c.  people =
>> state.people_set.order(weight').filter('weight >', 300).etc.
>>
>> Approach B:  Memcache the entire list of people, and
>> list_comprehension them into submission.  For example:
>>
>>                   def return_people_by_last_name():
>>                         people = get_all_people_from_memcache()
>>                         sort_list = [(i.last_name, i) for i in people
>> if i.state == state]
>>                         sort_list.sort()
>>                         return [i[1] for i in sort_list]
>>
>>                   def sort_people_by_weight():
>>                          similar to above...
>>
>> In approach A, you'll be bearing the cost of additional indexes, as
>> well as bearing the cost that most of your returns will be hits to the
>> database.  In approach B, you might be pulling 300 People from
>> memcache in order to return a single 300 pounder.
>>
>> Answers to these two questions might give me a better sense of when to
>> hit the datastore for business logic, and when to process using python
>> code.
>>
>> Thanks!
>>
>> johnP
>>
>>
>>
>>
>>
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "Google App Engine" group.
>> To post to this group, send email to google-appeng...@googlegroups.com.
>> To unsubscribe from this group, send email to
>> google-appengine+unsubscr...@googlegroups.com.
>> For more options, visit this group at
>> http://groups.google.com/group/google-appengine?hl=en.
>>
>
>
>
> --
> Stay hungry,Stay foolish.
>
> --
> You received this message because you are subscribed to the Google Groups
> "Google App Engine" group.
> To post to this group, send email to google-appeng...@googlegroups.com.
> To unsubscribe from this group, send email to
> google-appengine+unsubscr...@googlegroups.com.
> For more options, visit this group at
> http://groups.google.com/group/google-appengine?hl=en.
>




[google-appengine] Creating Index programmatically

2010-02-09 Thread Sandeep
Is there any way to create/build an Index programmatically in GAE?




[google-appengine] already covered, but mentioning it again

2010-02-09 Thread Matthew

Inbound email was not working.

Downgraded Python to 2.5.

Now email works on Ubuntu 10.

Here's what I did:



In a terminal install Python 2.5

They will coexist on the system.

sudo apt-get install python2.5

Edit dev_appserver.py in your google_appengine directory

Change the first line in dev_appserver.py from:

#!/usr/bin/env python

to:

#!/usr/bin/env python2.5
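The same edit can also be scripted. Here is a minimal sketch (demonstrated on a throwaway stand-in file rather than the real google_appengine/dev_appserver.py, whose location varies by installation):

```python
# Sketch: repoint a script's shebang line to python2.5.
import os
import tempfile

def repoint_shebang(path, interpreter="python2.5"):
    """Rewrite the first line of `path` if it is a shebang line."""
    with open(path) as f:
        lines = f.readlines()
    if lines and lines[0].startswith("#!"):
        lines[0] = "#!/usr/bin/env %s\n" % interpreter
    with open(path, "w") as f:
        f.writelines(lines)

# Demo on a temporary stand-in file.
tmp = tempfile.NamedTemporaryFile("w", suffix=".py", delete=False)
tmp.write("#!/usr/bin/env python\nprint('hello')\n")
tmp.close()
repoint_shebang(tmp.name)
with open(tmp.name) as f:
    first_line = f.readline().strip()
os.unlink(tmp.name)
print(first_line)  # -> #!/usr/bin/env python2.5
```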




[google-appengine] Server error message when signing up to the google app engine

2010-02-09 Thread Brice Tchiebeb
Hi everyone. I'm not sure whether my question has been asked before,
but I'm going to ask it anyway.

When choosing to sign up for GAE, after typing my email
address and my password, I got this message:

___
Error: Server Error

The server encountered an error and could not complete your request.
If the problem persists, please report your problem and mention this
error message and the query that caused it.
___

Does anyone else get this error message?




[google-appengine] Re: 500 status on apps, api changes?

2010-02-09 Thread dan
Note: I had this problem as well.

I was using the caching strategy outlined in
http://henritersteeg.wordpress.com/2009/03/30/generic-db-caching-in-google-app-engine/,
which relies on the signature of db.get().

Dan


On Feb 9, 3:08 am, dobee  wrote:
> ok, we now fixed the compatibility issues on our apps, but the
> development sdk still does not match the api on appengine.
>
> it would be nice to get information about such internal changes up-
> front the next time. it is always hard to explain our customers why
> the site was offline for some technical reason we cannot foresee.
>
> thx, bernd




[google-appengine] I need a working InboundMail example, pls

2010-02-09 Thread Matthew
Hello, all.

If you have a working example of using inbound email,
I would be grateful for a peek at it.

Oops, I should mention that I need the Python version.

Also, if you do have an example and you are nice enough
to share it with me, can I pls beg your indulgence and have
you show me what you put into the from/to fields in the
Development Console when you are testing?  Thanks!

Thank you one and all for being here for me,

Matthew

ps. this is my first posting on this group - and as a way
of introducing myself I'll just say that I am overjoyed
to see Python being used in such a way - it was clear
to me when Python 2 arrived that it (or something
like it) was going to be the way software gets created
- long live Python and GAE




[google-appengine] Re: 500 status on apps, api changes?

2010-02-09 Thread dan
What was the change, and what was the fix?

The stack trace above shows:

  File "/base/data/home/apps/mk-a-z/3.339622665704014614/mkapp/business.py", line 149, in get_by_ident
    business = Business.get_by_key_name(key)
  File "/base/python_lib/versions/1/google/appengine/ext/db/__init__.py", line 991, in get_by_key_name
    return get(keys[0], rpc=rpc)
TypeError: get_cached() got an unexpected keyword argument 'rpc'

If db.get changed, you would have to change google/appengine/ext/db/
__init__.py, but I presume that is replaced when you upload anyway.

Did Model.get_by_key_name change?

Dan

On Feb 9, 3:08 am, dobee  wrote:
> ok, we now fixed the compatibility issues on our apps, but the
> development sdk still does not match the api on appengine.
>
> it would be nice to get information about such internal changes up-
> front the next time. it is always hard to explain our customers why
> the site was offline for some technical reason we cannot foresee.
>
> thx, bernd




Re: [google-appengine] A "Which is Better" question.

2010-02-09 Thread kang
According to a Google engineer, fetching an entity by key is very fast, so you
should fetch by key rather than manipulating the list. I think approach B
will also cost more CPU.

On Wed, Feb 10, 2010 at 1:38 AM, johnP  wrote:

> I'm trying to get my head around where to use the datastore for
> business logic, and where to use python code.  So just wanted to ask
> two "which approach is better" questions:
>
> Scenario 1.  For example, let's say you have a list of items, and you
> need to return both the selected item as well as the list of items.
> For example - a person has a bunch of cars:  [Honda, Ford, Porsche,
> Yugo, Trabaunt, Dusenberg].  You have stored the Yugo key as the
> person's 'active_car'.
>
>   You need to return: car, cars = Yugo, [Honda, Ford,
> Porsche, Yugo, Trabaunt, Dusenberg]
>
> Which approach is better to return the values...
>
> Approach A: cars=person.car_set.fetch(1000)
>   car = db.get(stored_car_key)
>   return car, cars
>
>
> Approach B: cars = person.car_set.fetch(1000)
>   car = [i for i in cars if i.key() ==
> stored_car_key]
>   return car, cars
>
> In other words - what's cheaper - the list comprehension, or the
> db.get().  If the person has 1000 cars, does the answer change?
>
>
> Scenario 2.  I have a list of 300 people (In my case, there will never
> be more than 1000) that I need to slice and dice in different ways.
> a.  Need them all, by last name, from California.  b.  Need people
> between the ages of 25 and 35 in California.  c.  Need people over 300
> lbs in California.  Which approach should I use:
>
> Approach A:  Create multiple queries:
> a.  people =
> state.people_set.order('last_name').fetch(1000)
> b.  people =
> state.people_set.order('age').filter('age >', 35).etc.
> c.  people =
> state.people_set.order(weight').filter('weight >', 300).etc.
>
> Approach B:  Memcache the entire list of people, and
> list_comprehension them into submission.  For example:
>
>   def return_people_by_last_name():
> people = get_all_people_from_memcache()
> sort_list = [(i.last_name, i) for i in people
> if i.state == state]
> sort_list.sort()
> return [i[1] for i in sort_list]
>
>   def sort_people_by_weight():
>  similar to above...
>
> In approach A, you'll be bearing the cost of additional indexes, as
> well as bearing the cost that most of your returns will be hits to the
> database.  In approach B, you might be pulling 300 People from
> memcache in order to return a single 300 pounder.
>
> Answers to these two questions might give me a better sense of when to
> hit the datastore for business logic, and when to process using python
> code.
>
> Thanks!
>
> johnP
>
>
>
>
>
>
> --
> You received this message because you are subscribed to the Google Groups
> "Google App Engine" group.
> To post to this group, send email to google-appeng...@googlegroups.com.
> To unsubscribe from this group, send email to
> google-appengine+unsubscr...@googlegroups.com
> .
> For more options, visit this group at
> http://groups.google.com/group/google-appengine?hl=en.
>
>


-- 
Stay hungry,Stay foolish.




[google-appengine] A "Which is Better" question.

2010-02-09 Thread johnP
I'm trying to get my head around where to use the datastore for
business logic, and where to use python code.  So just wanted to ask
two "which approach is better" questions:

Scenario 1.  For example, let's say you have a list of items, and you
need to return both the selected item as well as the list of items.
For example - a person has a bunch of cars:  [Honda, Ford, Porsche,
Yugo, Trabaunt, Dusenberg].  You have stored the Yugo key as the
person's 'active_car'.

   You need to return:   car, cars = Yugo, [Honda, Ford,
                         Porsche, Yugo, Trabaunt, Dusenberg]

Which approach is better to return the values...

Approach A:  cars = person.car_set.fetch(1000)
             car = db.get(stored_car_key)
             return car, cars


Approach B:  cars = person.car_set.fetch(1000)
             car = [i for i in cars if i.key() == stored_car_key]
             return car, cars

In other words - what's cheaper - the list comprehension, or the
db.get().  If the person has 1000 cars, does the answer change?
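As a rough illustration of what Approach B costs, here is a self-contained sketch. Plain Python objects stand in for datastore entities: Car and its key() method are made-up stand-ins, not the App Engine API.

```python
# Toy model of Approach B: one already-fetched list, then an in-memory
# scan instead of a second datastore round-trip.
class Car:
    def __init__(self, key, name):
        self._key, self.name = key, name

    def key(self):
        return self._key

names = ["Honda", "Ford", "Porsche", "Yugo", "Trabaunt", "Dusenberg"]
cars = [Car(i, n) for i, n in enumerate(names)]
stored_car_key = 3  # the Yugo's key

# Approach B: O(len(cars)) comparisons, zero extra RPCs.
matches = [c for c in cars if c.key() == stored_car_key]
car = matches[0] if matches else None
print(car.name)  # -> Yugo
```

For a handful of cars the scan is essentially free; the interesting cost in Approach A is the extra datastore round-trip, not the Python work.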


Scenario 2.  I have a list of 300 people (In my case, there will never
be more than 1000) that I need to slice and dice in different ways.
a.  Need them all, by last name, from California.  b.  Need people
between the ages of 25 and 35 in California.  c.  Need people over 300
lbs in California.  Which approach should I use:

Approach A:  Create multiple queries:
             a.  people = state.people_set.order('last_name').fetch(1000)
             b.  people = state.people_set.order('age').filter('age >', 35).etc.
             c.  people = state.people_set.order('weight').filter('weight >', 300).etc.

Approach B:  Memcache the entire list of people, and
list_comprehension them into submission.  For example:

   def return_people_by_last_name():
       people = get_all_people_from_memcache()
       sort_list = [(i.last_name, i) for i in people if i.state == state]
       sort_list.sort()
       return [i[1] for i in sort_list]

   def sort_people_by_weight():
       # similar to above...

In approach A, you'll be bearing the cost of additional indexes, as
well as bearing the cost that most of your returns will be hits to the
database.  In approach B, you might be pulling 300 People from
memcache in order to return a single 300 pounder.

Answers to these two questions might give me a better sense of when to
hit the datastore for business logic, and when to process using python
code.

Thanks!

johnP









[google-appengine] Re: Unable to access Admin Console

2010-02-09 Thread Wesley Chun (Google)
greetings!

are you still having this issue? if so, can you send us a more
accurate URL than "https:appengine.google.com/a//"?
also, what is your application ID? we can help you more with this
additional information.

thanks!
-- wesley
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
"Core Python Programming", Prentice Hall, (c)2007,2001
"Python Fundamentals", Prentice Hall, (c)2009
   http://corepython.com

wesley.j.chun :: wesc+...@google.com
developer relations :: google app engine




Re: [google-appengine] 100 seeks/sec equals how many writes/sec?????

2010-02-09 Thread Wesley C (Google)
marc,

the language is a little tricky here. 风笑雪's response is closer to
reality. brett's argument was for pure disk operations at the lowest
level, e.g., he was speaking solely of max possible write operations
to disk (and not *entity* writes to disk). it's not even possible to
write 100 small entities to disk in a sec because of the overhead of
journaling, indexing, and verification.

i also received some clarification from brett to confirm:

"[My] point in saying that was to illustrate that *base-case* with a
10ms seek time you could do 100 writes/sec, and that doesn't even
include the data transfer time. With data larger than one disk block
and operating system overhead the potential write throughput for a
single entity is way less."

if you wanted to do a real measurement, you could use time.time() in 2
places to measure write throughput for your app and get a rough idea.
keep in mind that your app runs on different machines and different
disks so an average number is the best rough estimate.
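Wesley's suggestion can be sketched as a tiny timing harness. This is illustrative, not App Engine code: save_entity is a hypothetical stand-in for your db.put() call, and the sleep simulates a ~10ms seek-dominated write.

```python
# Minimal timing sketch: bracket a write with time.time() calls to get a
# rough per-write latency.
import time

def save_entity():
    # Stand-in for db.put(); pretend the write is dominated by a 10ms seek.
    time.sleep(0.01)

start = time.time()
save_entity()
elapsed_ms = (time.time() - start) * 1000.0
print("one write took about %.0f ms" % elapsed_ms)
```

Averaging over many writes (and many requests, since your app lands on different machines and disks) gives a steadier number than any single measurement.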

hope this helps!
-- wesley
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
"Core Python Programming", Prentice Hall, (c)2007,2001
"Python Fundamentals", Prentice Hall, (c)2009
   http://corepython.com

wesley.j.chun :: wesc+...@google.com
developer relations :: google app engine




Re: [google-appengine] Re: 1.3.1 SDK Prerelease - help us verify

2010-02-09 Thread Jeff Schnitzer
In case it wasn't completely clear - 1234 in this example is the
object's id, not an offset.

Jeff

On Tue, Feb 9, 2010 at 9:02 AM, Jeff Schnitzer  wrote:
> Still, a slightly modified version of the original request does not
> seem unreasonable.  He would have to formulate his URLs something like
> this:
>
> myblog.com/comments/?q=the&first=1234
>
> or maybe:
>
> myblog.com/comments/?q=the&after=1234
>
> I could see this being really useful, since encrypting (or worse,
> storing on the server) the cursor is pretty painful.  Furthermore, it
> seems highly probable that as things are, many people will obliviously
> write public webapps that take a raw cursor as a parameter.  This
> could be the new SQL injection attack.
>
> Jeff
>
> 2010/2/9 Alkis Evlogimenos ('Αλκης Ευλογημένος) :
>> If the cursor had to skip entries by using an offset, its performance would
>> depend on the size of the offset. This is what the current Query.fetch() api
>> is doing when you give it an offset. A cursor is a pointer to the entry from
>> which the next query will start. It has no notion of offset.
>> On Tue, Feb 9, 2010 at 4:07 PM, Nickolas Daskalou  wrote:
>>>
>>> Does the production cursor string contain information about the app id,
>>> kind, any filter()s or order()s, and (more importantly) some sort of
>>> numerical value that indicates how many records the next query should
>>> "skip"? If so, and if we could extract this information (and then use it
>>> again to the reconstruct the cursor), that would make for much cleaner,
>>> safer and intuitive URLs than including the entire cursor string (or some
>>> sort of encrypted/encoded cursor string replacement).
>>>
>>>
>>> 2010/2/10 Nick Johnson (Google) 

 Hi Nickolas,

 2010/2/9 Nickolas Daskalou 
>
> I'd want to do this so that I could include parts of the cursor (such as
> the offset) into a URL without including other parts (eg. the model kind 
> and
> filters). I could then reconstruct the cursor on the server side based on
> what was passed into the URL.

 The offset argument you're talking about is specific to the
 dev_appserver's implementation of cursors. In production, offsets are not
 used, so this won't work.
 -Nick Johnson

>
> For example, if I was searching for blog comments that contained the
> word "the" (with the default order being the creation time, descending), 
> the
> URL might look like this:
>
> myblog.com/comments/?q=the
>
> With model:
>
> class Comment(db.Model):
>   
>   created_at = db.DateTimeProperty(auto_now_add=True)
>   words = db.StringListProperty() # A list of all the words in a comment
> (forget about exploding indexes for now)
>   ...
>
> The query object for this URL might look something like:
>
> 
> q =
> Comment.all().filter('words',self.request.get('q')).order('-created_at')
> 
>
> To get to the 1001st comment, it'd be good if the URL looked something
> like this:
>
> myblog.com/comments/?q=the&skip=1000
>
> instead of:
>
> myblog.com/comments/?q=the&cursor=[something ugly]
>
> so that when the request comes in, I can do this:
>
> 
> q =
> Comment.all().filter('words',self.request.get('q')).order('-created_at')
> cursor_template = q.cursor_template()
> cursor =
> db.Cursor.from_template(cursor_template,offset=int(self.request.get('skip')))
> 
> (or something along these lines)
>
> Does that make sense?
>
>
> On 10 February 2010 01:03, Nick Johnson (Google)
>  wrote:
>>
>> Hi Nickolas,
>>
>> 2010/2/9 Nickolas Daskalou 
>>>
>>> Will we be able to construct our own cursors much the same way that we
>>> are able to construct our own Datastore keys (Key.from_path())?
>>
>> No, not practically speaking.
>>
>>>
>>> Also along the same lines, will we be able to "deconstruct" a cursor
>>> to get its components (offset, start_inclusive etc.), as we can now do 
>>> with
>>> keys (key.name(), key.id(), key.kind() etc.)?
>>
>> While you could do this, there's no guarantees that it'll work (or
>> continue to work), as you'd be digging into internal implementation 
>> details.
>> Why do you want to do this?
>> -Nick Johnson
>>>
>>>
>>> 2010/2/9 Nick Johnson (Google) 

 2010/2/9 Stephen 
>
> I'm asking if it's wise to store it as a query parameter embedded in
> a
> web page.

 You're right that it's unwise. Depending on how you construct your
 query, a user could potentially modify the cursor they send to you to 
 return
 results from any query your datastore is capable of performing, which 
 could
 result in you revealing information to the user that they shouldn't 

Re: [google-appengine] Re: 1.3.1 SDK Prerelease - help us verify

2010-02-09 Thread Jeff Schnitzer
Still, a slightly modified version of the original request does not
seem unreasonable.  He would have to formulate his URLs something like
this:

myblog.com/comments/?q=the&first=1234

or maybe:

myblog.com/comments/?q=the&after=1234

I could see this being really useful, since encrypting (or worse,
storing on the server) the cursor is pretty painful.  Furthermore, it
seems highly probable that as things are, many people will obliviously
write public webapps that take a raw cursor as a parameter.  This
could be the new SQL injection attack.
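One way to make a raw cursor safe in a URL, as a hedged sketch: attach an HMAC so a tampered cursor is rejected before it ever reaches the datastore. SECRET and the cursor string below are made up; App Engine's actual cursor format is opaque.

```python
# Hypothetical sketch of signing a cursor before exposing it in a URL.
# A tampered token fails verification, so the datastore never sees a
# user-modified cursor. SECRET would come from app configuration.
import base64
import hashlib
import hmac

SECRET = b"keep-this-in-app-config"

def sign_cursor(cursor):
    data = base64.urlsafe_b64encode(cursor.encode()).decode()
    mac = hmac.new(SECRET, cursor.encode(), hashlib.sha256).hexdigest()
    return data + "." + mac

def verify_cursor(token):
    data, _, mac = token.rpartition(".")
    cursor = base64.urlsafe_b64decode(data.encode()).decode()
    expected = hmac.new(SECRET, cursor.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(mac, expected):
        raise ValueError("cursor token failed verification")
    return cursor

token = sign_cursor("opaque-cursor-bytes")
print(verify_cursor(token))  # -> opaque-cursor-bytes
```

Signing (rather than encrypting) keeps the server stateless while still blocking the tampering attack; encrypting additionally hides the cursor's contents.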

Jeff

2010/2/9 Alkis Evlogimenos ('Αλκης Ευλογημένος) :
> If the cursor had to skip entries by using an offset, its performance would
> depend on the size of the offset. This is what the current Query.fetch() api
> is doing when you give it an offset. A cursor is a pointer to the entry from
> which the next query will start. It has no notion of offset.
> On Tue, Feb 9, 2010 at 4:07 PM, Nickolas Daskalou  wrote:
>>
>> Does the production cursor string contain information about the app id,
>> kind, any filter()s or order()s, and (more importantly) some sort of
>> numerical value that indicates how many records the next query should
>> "skip"? If so, and if we could extract this information (and then use it
>> again to the reconstruct the cursor), that would make for much cleaner,
>> safer and intuitive URLs than including the entire cursor string (or some
>> sort of encrypted/encoded cursor string replacement).
>>
>>
>> 2010/2/10 Nick Johnson (Google) 
>>>
>>> Hi Nickolas,
>>>
>>> 2010/2/9 Nickolas Daskalou 

 I'd want to do this so that I could include parts of the cursor (such as
 the offset) into a URL without including other parts (eg. the model kind 
 and
 filters). I could then reconstruct the cursor on the server side based on
 what was passed into the URL.
>>>
>>> The offset argument you're talking about is specific to the
>>> dev_appserver's implementation of cursors. In production, offsets are not
>>> used, so this won't work.
>>> -Nick Johnson
>>>

 For example, if I was searching for blog comments that contained the
 word "the" (with the default order being the creation time, descending), 
 the
 URL might look like this:

 myblog.com/comments/?q=the

 With model:

 class Comment(db.Model):
   
   created_at = db.DateTimeProperty(auto_now_add=True)
   words = db.StringListProperty() # A list of all the words in a comment
 (forget about exploding indexes for now)
   ...

 The query object for this URL might look something like:

 
 q =
 Comment.all().filter('words',self.request.get('q')).order('-created_at')
 

 To get to the 1001st comment, it'd be good if the URL looked something
 like this:

 myblog.com/comments/?q=the&skip=1000

 instead of:

 myblog.com/comments/?q=the&cursor=[something ugly]

 so that when the request comes in, I can do this:

 
 q =
 Comment.all().filter('words',self.request.get('q')).order('-created_at')
 cursor_template = q.cursor_template()
 cursor =
 db.Cursor.from_template(cursor_template,offset=int(self.request.get('skip')))
 
 (or something along these lines)

 Does that make sense?


 On 10 February 2010 01:03, Nick Johnson (Google)
  wrote:
>
> Hi Nickolas,
>
> 2010/2/9 Nickolas Daskalou 
>>
>> Will we be able to construct our own cursors much the same way that we
>> are able to construct our own Datastore keys (Key.from_path())?
>
> No, not practically speaking.
>
>>
>> Also along the same lines, will we be able to "deconstruct" a cursor
>> to get its components (offset, start_inclusive etc.), as we can now do 
>> with
>> keys (key.name(), key.id(), key.kind() etc.)?
>
> While you could do this, there's no guarantees that it'll work (or
> continue to work), as you'd be digging into internal implementation 
> details.
> Why do you want to do this?
> -Nick Johnson
>>
>>
>> 2010/2/9 Nick Johnson (Google) 
>>>
>>> 2010/2/9 Stephen 

 I'm asking if it's wise to store it as a query parameter embedded in
 a
 web page.
>>>
>>> You're right that it's unwise. Depending on how you construct your
>>> query, a user could potentially modify the cursor they send to you to 
>>> return
>>> results from any query your datastore is capable of performing, which 
>>> could
>>> result in you revealing information to the user that they shouldn't 
>>> know.
>>> You should either store the cursor on the server-side, or encrypt it 
>>> before
>>> sending it to the client.
>>> I was going to mention something about this in my post, but it
>>> slipped my mind.
>>> -Nick Johnson

 On Feb 9, 12:26 am, "Ikai L (Google)

Re: [google-appengine] getting back images from data store

2010-02-09 Thread kang
Have you seen the "Dynamically serving images" part on that page?
http://code.google.com/intl/fr/appengine/docs/python/images/usingimages.html

On Mon, Feb 8, 2010 at 1:56 AM, kais louetri  wrote:

> HI
> i am using the exemple of the guestbook (http://code.google.com/intl/
> fr/appengine/docs/python/images/usingimages.html), and i would like to
> get back images that i stored in it to use them in other pages, can
> any one give me a help with that ?
> thank you
>
>
>


-- 
Stay hungry,Stay foolish.

-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To post to this group, send email to google-appeng...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en.



Re: [google-appengine] Re: 1.3.1 SDK Prerelease - help us verify

2010-02-09 Thread 'Αλκης Ευλογημένος
If the cursor had to skip entries by using an offset, its performance would
depend on the size of the offset. This is what the current Query.fetch() api
is doing when you give it an offset. A cursor is a pointer to the entry from
which the next query will start. It has no notion of offset.
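
[Editorial sketch] The distinction above (an offset is a linear skip; a cursor is a pointer into the index) can be illustrated with a toy sorted index. This is only a sketch under the assumption that an index behaves like a sorted list of (value, key) pairs; it is not the datastore's actual implementation:

```python
import bisect

# Toy stand-in for a datastore index: sorted (value, key) pairs.
index = sorted((i * 7 % 100, "key%d" % i) for i in range(100))

def fetch_page(start_after=None, limit=10):
    """Return one page plus a 'cursor' (the last entry seen).

    Resuming is a search for the cursor's position, not a linear
    skip, so the cost does not grow with how deep into the result
    set the cursor points.
    """
    pos = 0 if start_after is None else bisect.bisect_right(index, start_after)
    page = index[pos:pos + limit]
    cursor = page[-1] if page else None
    return page, cursor

page1, cur = fetch_page()               # first 10 entries
page2, _ = fetch_page(start_after=cur)  # next 10, no skipping
```

Resuming at the 10th or the 1,000,000th entry is the same kind of lookup, which matches the claim that a cursor has no notion of offset.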

On Tue, Feb 9, 2010 at 4:07 PM, Nickolas Daskalou  wrote:

> Does the production cursor string contain information about the app id,
> kind, any filter()s or order()s, and (more importantly) some sort of
> numerical value that indicates how many records the next query should
> "skip"? If so, and if we could extract this information (and then use it
> again to the reconstruct the cursor), that would make for much cleaner,
> safer and intuitive URLs than including the entire cursor string (or some
> sort of encrypted/encoded cursor string replacement).
>
>
> 2010/2/10 Nick Johnson (Google) 
>
> Hi Nickolas,
>>
>> 2010/2/9 Nickolas Daskalou 
>>
>>> I'd want to do this so that I could include parts of the cursor (such as
>>> the offset) into a URL without including other parts (eg. the model kind and
>>> filters). I could then reconstruct the cursor on the server side based on
>>> what was passed into the URL.
>>>
>>
>> The offset argument you're talking about is specific to the
>> dev_appserver's implementation of cursors. In production, offsets are not
>> used, so this won't work.
>>
>>  -Nick Johnson
>>
>>
>>>
>>> For example, if I was searching for blog comments that contained the word
>>> "the" (with the default order being the creation time, descending), the URL
>>> might look like this:
>>>
>>> myblog.com/comments/?q=the
>>>
>>> With model:
>>>
>>> class Comment(db.Model):
>>>   
>>>   created_at = db.DateTimeProperty(auto_now_add=True)
>>>   words = db.StringListProperty() # A list of all the words in a comment
>>> (forget about exploding indexes for now)
>>>   ...
>>>
>>> The query object for this URL might look something like:
>>>
>>> 
>>> q =
>>> Comment.all().filter('words',self.request.get('q')).order('-created_at')
>>> 
>>>
>>> To get to the 1001st comment, it'd be good if the URL looked something
>>> like this:
>>>
>>> myblog.com/comments/?q=the&skip=1000
>>>
>>> instead of:
>>>
>>> myblog.com/comments/?q=the&cursor=[something ugly]
>>>
>>> so that when the request comes in, I can do this:
>>>
>>> 
>>> q =
>>> Comment.all().filter('words',self.request.get('q')).order('-created_at')
>>> cursor_template = q.cursor_template()
>>> cursor =
>>> db.Cursor.from_template(cursor_template,offset=int(self.request.get('skip')))
>>> 
>>> (or something along these lines)
>>>
>>> Does that make sense?
>>>
>>>
>>>
>>> On 10 February 2010 01:03, Nick Johnson (Google) <
>>> nick.john...@google.com> wrote:
>>>
 Hi Nickolas,

 2010/2/9 Nickolas Daskalou 

 Will we be able to construct our own cursors much the same way that we
> are able to construct our own Datastore keys (Key.from_path())?
>

 No, not practically speaking.


>
> Also along the same lines, will we be able to "deconstruct" a cursor to
> get its components (offset, start_inclusive etc.), as we can now do with
> keys (key.name(), key.id(), key.kind() etc.)?
>

 While you could do this, there's no guarantees that it'll work (or
 continue to work), as you'd be digging into internal implementation 
 details.
 Why do you want to do this?

 -Nick Johnson


>
> 2010/2/9 Nick Johnson (Google) 
>
>>  2010/2/9 Stephen 
>>
>>
>>> I'm asking if it's wise to store it as a query parameter embedded in
>>> a
>>> web page.
>>>
>>
>> You're right that it's unwise. Depending on how you construct your
>> query, a user could potentially modify the cursor they send to you to 
>> return
>> results from any query your datastore is capable of performing, which 
>> could
>> result in you revealing information to the user that they shouldn't know.
>> You should either store the cursor on the server-side, or encrypt it 
>> before
>> sending it to the client.
>>
>> I was going to mention something about this in my post, but it slipped
>> my mind.
>>
>> -Nick Johnson
>>
>>>
>>>
>>> On Feb 9, 12:26 am, "Ikai L (Google)"  wrote:
>>> > A cursor serializes to a Base64 encoded String, so you can store it
>>> anywhere
>>> > you want to store strings: Memcached, Datastore, etc. You can even
>>> pass it
>>> > as an URL parameter to task queues.
>>> >
>>> > 2010/2/8 Stephen 
>>> >
>>> >
>>> >
>>> >
>>> >
>>> > > Ah right, Nick's blog does say start_key and not offset. My bad.
>>> >
>>> > > Maybe there will be warnings in the upcoming documentation, but
>>> my
>>> > > first instinct was to embed the serialised cursor in the HTML as
>>> the
>>> > > 'next' link. But that doesn't look like a good ide
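
[Editorial sketch] The concern in the thread above is that a cursor embedded in a page can be altered by the client. One way to follow Nick's advice without server-side storage is to sign the cursor before sending it out. The sketch below uses a plain HMAC; this is not an App Engine API, and unlike encryption it only prevents tampering, it does not hide the cursor's contents. The secret key and cursor string are made-up examples:

```python
import hashlib
import hmac

SECRET = b"keep-me-server-side"  # hypothetical key; load from config in practice

def sign_cursor(cursor: str) -> str:
    """Append an HMAC so the client cannot alter the cursor undetected."""
    mac = hmac.new(SECRET, cursor.encode(), hashlib.sha256).hexdigest()
    return "%s.%s" % (cursor, mac)

def verify_cursor(token: str) -> str:
    """Return the cursor if the signature checks out, else raise."""
    cursor, _, mac = token.rpartition(".")
    expected = hmac.new(SECRET, cursor.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(mac, expected):
        raise ValueError("cursor was tampered with")
    return cursor

token = sign_cursor("b2Zmc2V0")  # stand-in for an opaque cursor string
```

Base64 alphabets do not include '.', so rpartition recovers the two parts unambiguously.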

Re: [google-appengine] Re: 1.3.1 SDK Prerelease - help us verify

2010-02-09 Thread Nickolas Daskalou
Does the production cursor string contain information about the app id,
kind, any filter()s or order()s, and (more importantly) some sort of
numerical value that indicates how many records the next query should
"skip"? If so, and if we could extract this information (and then use it
again to reconstruct the cursor), that would make for much cleaner,
safer and intuitive URLs than including the entire cursor string (or some
sort of encrypted/encoded cursor string replacement).


2010/2/10 Nick Johnson (Google) 

> Hi Nickolas,
>
> 2010/2/9 Nickolas Daskalou 
>
>> I'd want to do this so that I could include parts of the cursor (such as
>> the offset) into a URL without including other parts (eg. the model kind and
>> filters). I could then reconstruct the cursor on the server side based on
>> what was passed into the URL.
>>
>
> The offset argument you're talking about is specific to the dev_appserver's
> implementation of cursors. In production, offsets are not used, so this
> won't work.
>
>  -Nick Johnson
>
>
>>
>> For example, if I was searching for blog comments that contained the word
>> "the" (with the default order being the creation time, descending), the URL
>> might look like this:
>>
>> myblog.com/comments/?q=the
>>
>> With model:
>>
>> class Comment(db.Model):
>>   
>>   created_at = db.DateTimeProperty(auto_now_add=True)
>>   words = db.StringListProperty() # A list of all the words in a comment
>> (forget about exploding indexes for now)
>>   ...
>>
>> The query object for this URL might look something like:
>>
>> 
>> q =
>> Comment.all().filter('words',self.request.get('q')).order('-created_at')
>> 
>>
>> To get to the 1001st comment, it'd be good if the URL looked something
>> like this:
>>
>> myblog.com/comments/?q=the&skip=1000
>>
>> instead of:
>>
>> myblog.com/comments/?q=the&cursor=[something ugly]
>>
>> so that when the request comes in, I can do this:
>>
>> 
>> q =
>> Comment.all().filter('words',self.request.get('q')).order('-created_at')
>> cursor_template = q.cursor_template()
>> cursor =
>> db.Cursor.from_template(cursor_template,offset=int(self.request.get('skip')))
>> 
>> (or something along these lines)
>>
>> Does that make sense?
>>
>>
>>
>> On 10 February 2010 01:03, Nick Johnson (Google) > > wrote:
>>
>>> Hi Nickolas,
>>>
>>> 2010/2/9 Nickolas Daskalou 
>>>
>>> Will we be able to construct our own cursors much the same way that we
 are able to construct our own Datastore keys (Key.from_path())?

>>>
>>> No, not practically speaking.
>>>
>>>

 Also along the same lines, will we be able to "deconstruct" a cursor to
 get its components (offset, start_inclusive etc.), as we can now do with
 keys (key.name(), key.id(), key.kind() etc.)?

>>>
>>> While you could do this, there's no guarantees that it'll work (or
>>> continue to work), as you'd be digging into internal implementation details.
>>> Why do you want to do this?
>>>
>>> -Nick Johnson
>>>
>>>

 2010/2/9 Nick Johnson (Google) 

>  2010/2/9 Stephen 
>
>
>> I'm asking if it's wise to store it as a query parameter embedded in a
>> web page.
>>
>
> You're right that it's unwise. Depending on how you construct your
> query, a user could potentially modify the cursor they send to you to 
> return
> results from any query your datastore is capable of performing, which 
> could
> result in you revealing information to the user that they shouldn't know.
> You should either store the cursor on the server-side, or encrypt it 
> before
> sending it to the client.
>
> I was going to mention something about this in my post, but it slipped
> my mind.
>
> -Nick Johnson
>
>>
>>
>> On Feb 9, 12:26 am, "Ikai L (Google)"  wrote:
>> > A cursor serializes to a Base64 encoded String, so you can store it
>> anywhere
>> > you want to store strings: Memcached, Datastore, etc. You can even
>> pass it
>> > as an URL parameter to task queues.
>> >
>> > 2010/2/8 Stephen 
>> >
>> >
>> >
>> >
>> >
>> > > Ah right, Nick's blog does say start_key and not offset. My bad.
>> >
>> > > Maybe there will be warnings in the upcoming documentation, but my
>> > > first instinct was to embed the serialised cursor in the HTML as
>> the
>> > > 'next' link. But that doesn't look like a good idea as Nick's
>> decoded
>> > > query shows what's embedded:
>> >
>> > > PrimaryScan {
>> > >  start_key: "shell\000TestModel\000foo\000\232bar\000\200"
>> > >  start_inclusive: true
>> > > }
>> > > keys_only: false
>> >
>> > > First, you may or may not want to leak this info. Second, could
>> this
>> > > be altered on the client to change the query in any way that's
>> > > undesirable?
>> >
>> > > Once you have a cursor, where do you store it so you can use it
>> again?
>> >
>> 
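
[Editorial sketch] Ikai notes above that a cursor serializes to a Base64 string you can keep in memcache or the datastore. An alternative to shipping the cursor to the client at all is to keep it server-side and hand out only a short random token. A minimal sketch, with a plain dict standing in for memcache (on App Engine you would use the memcache service with an expiry instead):

```python
import uuid

_cursor_store = {}  # stand-in for memcache/datastore

def stash_cursor(cursor: str) -> str:
    """Keep the opaque cursor server-side; give the client a short token."""
    token = uuid.uuid4().hex
    _cursor_store[token] = cursor
    return token

def lookup_cursor(token: str):
    """Return the stored cursor, or None for unknown/expired tokens."""
    return _cursor_store.get(token)

token = stash_cursor("some-opaque-cursor-string")
```

The URL then carries only the short token, which leaks nothing about the underlying query.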

Re: [google-appengine] app engine, shared IP and twitter api

2010-02-09 Thread enes akar
Thanks Nick, I will try to find the authenticated version of the search API.

By the way, I am really thankful for the immediate responses from this group.

On Tue, Feb 9, 2010 at 4:50 PM, Nick Johnson (Google) <
nick.john...@google.com> wrote:

> Hi,
>
> App Engine uses a shared pool of IPs for outgoing urlfetch requests.
> Unfortunately, as you observe, some services such as Twitter enforce per-ip
> ratelimiting.
>
> In the case of Twitter, most of their APIs that support anonymous access
> also support authenticated access. You can submit authenticated requests
> instead, which are limited by your account, rather than by your IP.
>
> -Nick Johnson
>
> On Tue, Feb 9, 2010 at 2:33 PM, enes akar  wrote:
>
>> Hi;
>>
>> I have just deployed an application to app engine which use twitter search
>> api.
>>
>> But there is a problem. Twitter blocks some of  my requests saying "You
>> have been rate limited. Enhance your calm."
>>
>> Of course I have asked about this to twitter men, waiting for their reply.
>>
>> But I want to ask you, whether following scenerio is possible:
>> May app engine give the same IP to different applications?
>> If so another application which we share the same IP, may be spamming
>> twitter api; and because of this spammer application I am blocked too.
>>
>> Is this possible?
>> Have you seen similar problem, and is there a solution?
>>
>> Note: It is not possible to exceed the rate limits of twitter, because
>> there is no traffic in my site.
>>
>>
>> Thanks in advance.
>>
>>
>>
>> --
>> Enes Akar
>> http://www.linkedin.com/pub/enes-akar/7/835/3aa
>>
>>
>
>
>
> --
> Nick Johnson, Developer Programs Engineer, App Engine
> Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
> 368047
>
>



-- 
Enes Akar
http://www.linkedin.com/pub/enes-akar/7/835/3aa




Re: [google-appengine] Re: 1.3.1 SDK Prerelease - help us verify

2010-02-09 Thread Nick Johnson (Google)
Hi Nickolas,

2010/2/9 Nickolas Daskalou 

> I'd want to do this so that I could include parts of the cursor (such as
> the offset) into a URL without including other parts (eg. the model kind and
> filters). I could then reconstruct the cursor on the server side based on
> what was passed into the URL.
>

The offset argument you're talking about is specific to the dev_appserver's
implementation of cursors. In production, offsets are not used, so this
won't work.

-Nick Johnson


>
> For example, if I was searching for blog comments that contained the word
> "the" (with the default order being the creation time, descending), the URL
> might look like this:
>
> myblog.com/comments/?q=the
>
> With model:
>
> class Comment(db.Model):
>   
>   created_at = db.DateTimeProperty(auto_now_add=True)
>   words = db.StringListProperty() # A list of all the words in a comment
> (forget about exploding indexes for now)
>   ...
>
> The query object for this URL might look something like:
>
> 
> q =
> Comment.all().filter('words',self.request.get('q')).order('-created_at')
> 
>
> To get to the 1001st comment, it'd be good if the URL looked something like
> this:
>
> myblog.com/comments/?q=the&skip=1000
>
> instead of:
>
> myblog.com/comments/?q=the&cursor=[something ugly]
>
> so that when the request comes in, I can do this:
>
> 
> q =
> Comment.all().filter('words',self.request.get('q')).order('-created_at')
> cursor_template = q.cursor_template()
> cursor =
> db.Cursor.from_template(cursor_template,offset=int(self.request.get('skip')))
> 
> (or something along these lines)
>
> Does that make sense?
>
>
>
> On 10 February 2010 01:03, Nick Johnson (Google) 
> wrote:
>
>> Hi Nickolas,
>>
>> 2010/2/9 Nickolas Daskalou 
>>
>> Will we be able to construct our own cursors much the same way that we are
>>> able to construct our own Datastore keys (Key.from_path())?
>>>
>>
>> No, not practically speaking.
>>
>>
>>>
>>> Also along the same lines, will we be able to "deconstruct" a cursor to
>>> get its components (offset, start_inclusive etc.), as we can now do with
>>> keys (key.name(), key.id(), key.kind() etc.)?
>>>
>>
>> While you could do this, there's no guarantees that it'll work (or
>> continue to work), as you'd be digging into internal implementation details.
>> Why do you want to do this?
>>
>> -Nick Johnson
>>
>>
>>>
>>> 2010/2/9 Nick Johnson (Google) 
>>>
  2010/2/9 Stephen 


> I'm asking if it's wise to store it as a query parameter embedded in a
> web page.
>

 You're right that it's unwise. Depending on how you construct your
 query, a user could potentially modify the cursor they send to you to 
 return
 results from any query your datastore is capable of performing, which could
 result in you revealing information to the user that they shouldn't know.
 You should either store the cursor on the server-side, or encrypt it before
 sending it to the client.

 I was going to mention something about this in my post, but it slipped
 my mind.

 -Nick Johnson

>
>
> On Feb 9, 12:26 am, "Ikai L (Google)"  wrote:
> > A cursor serializes to a Base64 encoded String, so you can store it
> anywhere
> > you want to store strings: Memcached, Datastore, etc. You can even
> pass it
> > as an URL parameter to task queues.
> >
> > 2010/2/8 Stephen 
> >
> >
> >
> >
> >
> > > Ah right, Nick's blog does say start_key and not offset. My bad.
> >
> > > Maybe there will be warnings in the upcoming documentation, but my
> > > first instinct was to embed the serialised cursor in the HTML as
> the
> > > 'next' link. But that doesn't look like a good idea as Nick's
> decoded
> > > query shows what's embedded:
> >
> > > PrimaryScan {
> > >  start_key: "shell\000TestModel\000foo\000\232bar\000\200"
> > >  start_inclusive: true
> > > }
> > > keys_only: false
> >
> > > First, you may or may not want to leak this info. Second, could
> this
> > > be altered on the client to change the query in any way that's
> > > undesirable?
> >
> > > Once you have a cursor, where do you store it so you can use it
> again?
> >
> > > On Feb 8, 10:17 pm, "Ikai L (Google)"  wrote:
> > > > I got beaten to this answer. No, there is no traversal to get to
> the
> > > offset.
> >
> > > > BigTable has an underlying mechanism for range queries on keys.
> Indexes
> > > are
> > > > essentially a key comprised of a concatenation of application ID,
> entity
> > > > type, column, value. When a filter operation is performed, the
> datastore
> > > > looks for a range matching this criteria, returning the set of
> keys. A
> > > > cursor also adds the datastore key of the entity so it is
> possible to
> > > > serialize where to begin the query. This is actually a b

Re: [google-appengine] app engine, shared IP and twitter api

2010-02-09 Thread Nick Johnson (Google)
Hi,

App Engine uses a shared pool of IPs for outgoing urlfetch requests.
Unfortunately, as you observe, some services such as Twitter enforce per-IP
rate limiting.

In the case of Twitter, most of their APIs that support anonymous access
also support authenticated access. You can submit authenticated requests
instead, which are limited by your account, rather than by your IP.

-Nick Johnson
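
[Editorial sketch] As a concrete note on the suggestion above: at the time, Twitter's REST API still accepted HTTP Basic authentication (later replaced by OAuth), and the Authorization header is easy to build by hand and pass to urlfetch.fetch() via its headers argument. The credentials below are placeholders:

```python
import base64

def basic_auth_header(username, password):
    """Build an HTTP Basic 'Authorization' header for an authenticated request."""
    creds = base64.b64encode(("%s:%s" % (username, password)).encode()).decode()
    return {"Authorization": "Basic " + creds}

# e.g. urlfetch.fetch(url, headers=basic_auth_header("user", "pass"))
hdr = basic_auth_header("user", "pass")
```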

On Tue, Feb 9, 2010 at 2:33 PM, enes akar  wrote:

> Hi;
>
> I have just deployed an application to app engine which use twitter search
> api.
>
> But there is a problem. Twitter blocks some of  my requests saying "You
> have been rate limited. Enhance your calm."
>
> Of course I have asked about this to twitter men, waiting for their reply.
>
> But I want to ask you, whether following scenerio is possible:
> May app engine give the same IP to different applications?
> If so another application which we share the same IP, may be spamming
> twitter api; and because of this spammer application I am blocked too.
>
> Is this possible?
> Have you seen similar problem, and is there a solution?
>
> Note: It is not possible to exceed the rate limits of twitter, because
> there is no traffic in my site.
>
>
> Thanks in advance.
>
>
>
> --
> Enes Akar
> http://www.linkedin.com/pub/enes-akar/7/835/3aa
>
>



-- 
Nick Johnson, Developer Programs Engineer, App Engine
Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
368047




[google-appengine] Re: how to retrieve images from the data store

2010-02-09 Thread kais louetri
Thanks a lot for the link. There is something else I would like to ask
about: to test my image-handling code locally, I use the PIL (Python
Imaging Library). Once I am finished and want to upload my code so it can
be used on the web, what should I do? Do I have to install this library
on the server where my web application is hosted?




Re: [google-appengine] Re: 1.3.1 SDK Prerelease - help us verify

2010-02-09 Thread Nickolas Daskalou
I'd want to do this so that I could include parts of the cursor (such as the
offset) in a URL without including other parts (e.g. the model kind and
filters). I could then reconstruct the cursor on the server side based on
what was passed into the URL.

For example, if I was searching for blog comments that contained the word
"the" (with the default order being the creation time, descending), the URL
might look like this:

myblog.com/comments/?q=the

With model:

class Comment(db.Model):
  
  created_at = db.DateTimeProperty(auto_now_add=True)
  words = db.StringListProperty() # A list of all the words in a comment
(forget about exploding indexes for now)
  ...

The query object for this URL might look something like:


q = Comment.all().filter('words',self.request.get('q')).order('-created_at')


To get to the 1001st comment, it'd be good if the URL looked something like
this:

myblog.com/comments/?q=the&skip=1000

instead of:

myblog.com/comments/?q=the&cursor=[something ugly]

so that when the request comes in, I can do this:


q = Comment.all().filter('words',self.request.get('q')).order('-created_at')
cursor_template = q.cursor_template()
cursor =
db.Cursor.from_template(cursor_template,offset=int(self.request.get('skip')))

(or something along these lines)

Does that make sense?


On 10 February 2010 01:03, Nick Johnson (Google) wrote:

> Hi Nickolas,
>
> 2010/2/9 Nickolas Daskalou 
>
> Will we be able to construct our own cursors much the same way that we are
>> able to construct our own Datastore keys (Key.from_path())?
>>
>
> No, not practically speaking.
>
>
>>
>> Also along the same lines, will we be able to "deconstruct" a cursor to
>> get its components (offset, start_inclusive etc.), as we can now do with
>> keys (key.name(), key.id(), key.kind() etc.)?
>>
>
> While you could do this, there's no guarantees that it'll work (or continue
> to work), as you'd be digging into internal implementation details. Why do
> you want to do this?
>
> -Nick Johnson
>
>
>>
>> 2010/2/9 Nick Johnson (Google) 
>>
>>>  2010/2/9 Stephen 
>>>
>>>
 I'm asking if it's wise to store it as a query parameter embedded in a
 web page.

>>>
>>> You're right that it's unwise. Depending on how you construct your query,
>>> a user could potentially modify the cursor they send to you to return
>>> results from any query your datastore is capable of performing, which could
>>> result in you revealing information to the user that they shouldn't know.
>>> You should either store the cursor on the server-side, or encrypt it before
>>> sending it to the client.
>>>
>>> I was going to mention something about this in my post, but it slipped my
>>> mind.
>>>
>>> -Nick Johnson
>>>


 On Feb 9, 12:26 am, "Ikai L (Google)"  wrote:
 > A cursor serializes to a Base64 encoded String, so you can store it
 anywhere
 > you want to store strings: Memcached, Datastore, etc. You can even
 pass it
 > as an URL parameter to task queues.
 >
 > 2010/2/8 Stephen 
 >
 >
 >
 >
 >
 > > Ah right, Nick's blog does say start_key and not offset. My bad.
 >
 > > Maybe there will be warnings in the upcoming documentation, but my
 > > first instinct was to embed the serialised cursor in the HTML as the
 > > 'next' link. But that doesn't look like a good idea as Nick's
 decoded
 > > query shows what's embedded:
 >
 > > PrimaryScan {
 > >  start_key: "shell\000TestModel\000foo\000\232bar\000\200"
 > >  start_inclusive: true
 > > }
 > > keys_only: false
 >
 > > First, you may or may not want to leak this info. Second, could this
 > > be altered on the client to change the query in any way that's
 > > undesirable?
 >
 > > Once you have a cursor, where do you store it so you can use it
 again?
 >
 > > On Feb 8, 10:17 pm, "Ikai L (Google)"  wrote:
 > > > I got beaten to this answer. No, there is no traversal to get to
 the
 > > offset.
 >
 > > > BigTable has an underlying mechanism for range queries on keys.
 Indexes
 > > are
 > > > essentially a key comprised of a concatenation of application ID,
 entity
 > > > type, column, value. When a filter operation is performed, the
 datastore
 > > > looks for a range matching this criteria, returning the set of
 keys. A
 > > > cursor also adds the datastore key of the entity so it is possible
 to
 > > > serialize where to begin the query. This is actually a bit awkward
 to
 > > > explain without visuals. You can watch Ryan Barrett's talk here:
 >
 > > >http://www.youtube.com/watch?v=tx5gdoNpcZM
 >
 > > > Hopefully, we'll be able to post an article at some point in the
 future
 > > > explaining how cursors work.
 >
 > > > 2010/2/8 Alkis Evlogimenos ('Αλκης Ευλογημένος) <
 evlogime...@gmail.com>
 >
 > > > > There is no offset. The pro

[google-appengine] app engine, shared IP and twitter api

2010-02-09 Thread enes akar
Hi;

I have just deployed an application to App Engine which uses the Twitter
search API.

But there is a problem: Twitter blocks some of my requests, saying "You
have been rate limited. Enhance your calm."

Of course I have asked Twitter about this and am waiting for their reply.

But I want to ask you whether the following scenario is possible:
Could App Engine assign the same IP to different applications?
If so, another application sharing the same IP may be spamming the
Twitter API, and because of that spammer application I am blocked too.

Is this possible?
Have you seen a similar problem, and is there a solution?

Note: I cannot be exceeding Twitter's rate limits myself, because there
is no traffic on my site.


Thanks in advance.



-- 
Enes Akar
http://www.linkedin.com/pub/enes-akar/7/835/3aa




Re: [google-appengine] Re: 1.3.1 SDK Prerelease - help us verify

2010-02-09 Thread Nick Johnson (Google)
Hi Nickolas,

2010/2/9 Nickolas Daskalou 

> Will we be able to construct our own cursors much the same way that we are
> able to construct our own Datastore keys (Key.from_path())?
>

No, not practically speaking.


>
> Also along the same lines, will we be able to "deconstruct" a cursor to get
> its components (offset, start_inclusive etc.), as we can now do with keys (
> key.name(), key.id(), key.kind() etc.)?
>

While you could do this, there are no guarantees that it'll work (or
continue to work), as you'd be digging into internal implementation details.
Why do you want to do this?

-Nick Johnson


>
> 2010/2/9 Nick Johnson (Google) 
>
>>  2010/2/9 Stephen 
>>
>>
>>> I'm asking if it's wise to store it as a query parameter embedded in a
>>> web page.
>>>
>>
>> You're right that it's unwise. Depending on how you construct your query,
>> a user could potentially modify the cursor they send to you to return
>> results from any query your datastore is capable of performing, which could
>> result in you revealing information to the user that they shouldn't know.
>> You should either store the cursor on the server-side, or encrypt it before
>> sending it to the client.
>>
>> I was going to mention something about this in my post, but it slipped my
>> mind.
>>
>> -Nick Johnson
>>
>>>
>>>
>>> On Feb 9, 12:26 am, "Ikai L (Google)"  wrote:
>>> > A cursor serializes to a Base64 encoded String, so you can store it
>>> anywhere
>>> > you want to store strings: Memcached, Datastore, etc. You can even pass
>>> it
>>> > as an URL parameter to task queues.
>>> >
>>> > 2010/2/8 Stephen 
>>> >
>>> >
>>> >
>>> >
>>> >
>>> > > Ah right, Nick's blog does say start_key and not offset. My bad.
>>> >
>>> > > Maybe there will be warnings in the upcoming documentation, but my
>>> > > first instinct was to embed the serialised cursor in the HTML as the
>>> > > 'next' link. But that doesn't look like a good idea as Nick's decoded
>>> > > query shows what's embedded:
>>> >
>>> > > PrimaryScan {
>>> > >  start_key: "shell\000TestModel\000foo\000\232bar\000\200"
>>> > >  start_inclusive: true
>>> > > }
>>> > > keys_only: false
>>> >
>>> > > First, you may or may not want to leak this info. Second, could this
>>> > > be altered on the client to change the query in any way that's
>>> > > undesirable?
>>> >
>>> > > Once you have a cursor, where do you store it so you can use it
>>> again?
>>> >
>>> > > On Feb 8, 10:17 pm, "Ikai L (Google)"  wrote:
>>> > > > I got beaten to this answer. No, there is no traversal to get to
>>> the
>>> > > offset.
>>> >
>>> > > > BigTable has an underlying mechanism for range queries on keys.
>>> Indexes
>>> > > are
>>> > > > essentially a key comprised of a concatenation of application ID,
>>> entity
>>> > > > type, column, value. When a filter operation is performed, the
>>> datastore
>>> > > > looks for a range matching this criteria, returning the set of
>>> keys. A
>>> > > > cursor also adds the datastore key of the entity so it is possible
>>> to
>>> > > > serialize where to begin the query. This is actually a bit awkward
>>> to
>>> > > > explain without visuals. You can watch Ryan Barrett's talk here:
>>> >
>>> > > >http://www.youtube.com/watch?v=tx5gdoNpcZM
>>> >
>>> > > > Hopefully, we'll be able to post an article at some point in the
>>> future
>>> > > > explaining how cursors work.
>>> >
>>> > > > 2010/2/8 Alkis Evlogimenos ('Αλκης Ευλογημένος) <
>>> evlogime...@gmail.com>
>>> >
>>> > > > > There is no offset. The protocol buffer stores a start_key and a
>>> > > boolean
>>> > > > > denoting if this start key is inclusive or not. The performance
>>> of
>>> > > > > continuing the fetch from a cursor should be the same as the
>>> > > performance of
>>> > > > > the first entities you got from a query.
>>> >
>>> > > > > On Mon, Feb 8, 2010 at 4:33 PM, Stephen 
>>> wrote:
>>> >
>>> > > > >> On Feb 8, 7:06 pm, "Ikai L (Google)"  wrote:
>>> > > > >> > The official docs are pending, but here's Nick Johnson to the
>>> > > rescue:
>>> >
>>> > >
>>> http://blog.notdot.net/2010/02/New-features-in-1-3-1-prerelease-Cursors
>>> >
>>> > > > >> What are the performance characteristics of cursors?
>>> >
>>> > > > >> The serialised cursor shows that it stores an offset. Does that
>>> mean
>>> > > > >> that if the offset is one million, one million rows will have to
>>> be
>>> > > > >> skipped before the next 10 are returned? This will be faster
>>> than
>>> > > > >> doing it in your app, but not as quick as the existing bookmark
>>> > > > >> techniques which use the primary key index.
>>> >
>>> > > > >> Or is the server-side stateful, like a typical SQL
>>> implementation of
>>> > > > >> cursors? In which case, are there any limits to the number of
>>> active
>>> > > > >> cursors? Or what if a cursor is resumed some time in the future;
>>> will
>>> > > > >> it work at all, or work slower?
>>> >
>>> > > > >> --
>>> > > > >> You received this message because you are subscribed to the
>>> Google
>>> > > Groups
>>> > >

Re: [google-appengine] Re: 1.3.1 SDK Prerelease - help us verify

2010-02-09 Thread Nickolas Daskalou
Will we be able to construct our own cursors much the same way that we are
able to construct our own Datastore keys (Key.from_path())?

Also along the same lines, will we be able to "deconstruct" a cursor to get
its components (offset, start_inclusive etc.), as we can now do with keys (
key.name(), key.id(), key.kind() etc.)?


2010/2/9 Nick Johnson (Google) 

> 2010/2/9 Stephen 
>
>
>> I'm asking if it's wise to store it as a query parameter embedded in a
>> web page.
>>
>
> You're right that it's unwise. Depending on how you construct your query, a
> user could potentially modify the cursor they send to you to return results
> from any query your datastore is capable of performing, which could result
> in you revealing information to the user that they shouldn't know. You
> should either store the cursor on the server-side, or encrypt it before
> sending it to the client.
>
> I was going to mention something about this in my post, but it slipped my
> mind.
>
> -Nick Johnson
>
>>
>>
>> On Feb 9, 12:26 am, "Ikai L (Google)"  wrote:
>> > A cursor serializes to a Base64 encoded String, so you can store it
>> anywhere
>> > you want to store strings: Memcached, Datastore, etc. You can even pass
>> it
>> > as an URL parameter to task queues.
>> >
>> > 2010/2/8 Stephen 
>> >
>> >
>> >
>> >
>> >
>> > > Ah right, Nick's blog does say start_key and not offset. My bad.
>> >
>> > > Maybe there will be warnings in the upcoming documentation, but my
>> > > first instinct was to embed the serialised cursor in the HTML as the
>> > > 'next' link. But that doesn't look like a good idea as Nick's decoded
>> > > query shows what's embedded:
>> >
>> > > PrimaryScan {
>> > >  start_key: "shell\000TestModel\000foo\000\232bar\000\200"
>> > >  start_inclusive: true
>> > > }
>> > > keys_only: false
>> >
>> > > First, you may or may not want to leak this info. Second, could this
>> > > be altered on the client to change the query in any way that's
>> > > undesirable?
>> >
>> > > Once you have a cursor, where do you store it so you can use it again?
>> >
>> > > On Feb 8, 10:17 pm, "Ikai L (Google)"  wrote:
>> > > > I got beaten to this answer. No, there is no traversal to get to the
>> > > offset.
>> >
>> > > > BigTable has an underlying mechanism for range queries on keys.
>> Indexes
>> > > are
>> > > > essentially a key comprised of a concatenation of application ID,
>> entity
>> > > > type, column, value. When a filter operation is performed, the
>> datastore
>> > > > looks for a range matching this criteria, returning the set of keys.
>> A
>> > > > cursor also adds the datastore key of the entity so it is possible
>> to
>> > > > serialize where to begin the query. This is actually a bit awkward
>> to
>> > > > explain without visuals. You can watch Ryan Barrett's talk here:
>> >
>> > > >http://www.youtube.com/watch?v=tx5gdoNpcZM
>> >
>> > > > Hopefully, we'll be able to post an article at some point in the
>> future
>> > > > explaining how cursors work.
>> >
>> > > > 2010/2/8 Alkis Evlogimenos ('Αλκης Ευλογημένος) <
>> evlogime...@gmail.com>
>> >
>> > > > > There is no offset. The protocol buffer stores a start_key and a
>> > > boolean
>> > > > > denoting if this start key is inclusive or not. The performance of
>> > > > > continuing the fetch from a cursor should be the same as the
>> > > performance of
>> > > > > the first entities you got from a query.
>> >
>> > > > > On Mon, Feb 8, 2010 at 4:33 PM, Stephen 
>> wrote:
>> >
>> > > > >> On Feb 8, 7:06 pm, "Ikai L (Google)"  wrote:
>> > > > >> > The official docs are pending, but here's Nick Johnson to the
>> > > rescue:
>> >
>> > >
>> http://blog.notdot.net/2010/02/New-features-in-1-3-1-prerelease-Cursors
>> >
>> > > > >> What are the performance characteristics of cursors?
>> >
>> > > > >> The serialised cursor shows that it stores an offset. Does that
>> mean
>> > > > >> that if the offset is one million, one million rows will have to
>> be
>> > > > >> skipped before the next 10 are returned? This will be faster than
>> > > > >> doing it in your app, but not as quick as the existing bookmark
>> > > > >> techniques which use the primary key index.
>> >
>> > > > >> Or is the server-side stateful, like a typical SQL implementation
>> of
>> > > > >> cursors? In which case, are there any limits to the number of
>> active
>> > > > >> cursors? Or what if a cursor is resumed some time in the future;
>> will
>> > > > >> it work at all, or work slower?
>> >
>> > > > >> --
>> > > > >> You received this message because you are subscribed to the
>> Google
>> > > Groups
>> > > > >> "Google App Engine" group.
>> > > > >> To post to this group, send email to
>> > > google-appeng...@googlegroups.com.
>> > > > >> To unsubscribe from this group, send email to
>> > > > >> google-appengine+unsubscr...@googlegroups.com
>> 
>> >
>> > > 
>> 
>> >
>> >
>> > > > >> .
>> > > > >> For more options, visit this group at
>> > > > >>http://groups.google.com/group/google-appengine?hl=en.
>> >
>> > > > > --
>>
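On the "deconstruct" question: a cursor serializes to URL-safe base64 over an
internal protocol buffer, so the outer layer at least can be peeled off with
the standard library. A minimal sketch (plain Python, no App Engine imports;
the inner protobuf format is internal and version-dependent, so it is left as
opaque bytes, and the sample bytes below are a stand-in, not a real cursor):

```python
import base64

def websafe_decode(token):
    """Decode a URL-safe base64 token (padding may be stripped),
    returning the raw bytes of the underlying protocol buffer."""
    padding = '=' * (-len(token) % 4)  # restore any stripped '=' padding
    return base64.urlsafe_b64decode(token + padding)

# Round-trip demonstration with stand-in bytes (a real cursor's bytes
# are an opaque, version-dependent protocol buffer):
raw = b'shell\x00TestModel\x00foo'
token = base64.urlsafe_b64encode(raw).decode('ascii').rstrip('=')
assert websafe_decode(token) == raw
```

Parsing the decoded bytes back into fields like start_key and start_inclusive
would require the datastore's internal protobuf definitions, which are not
part of the public API.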

Re: [google-appengine] App Engine URI Quota

2010-02-09 Thread Nick Johnson (Google)
Hi Tim,

This text is a holdover from when we had a high CPU quota. You can safely
ignore it; it is nothing more than a warning that you should consider
optimising your handler.

-Nick Johnson

On Mon, Feb 8, 2010 at 5:45 AM, Timwillhack  wrote:

> I've been writing a python backend on app engine for use with a
> facebook game.
>
> One of my pages is currently hitting 2459 CPU in the admin console per
> request (avg).  It's a register page and does a lot of database work
> across different classes.
>
> I've noticed an exclamation next to it that says 'This URI uses a high
> amount of CPU and may soon exceed its quota'
>
> keyword here being 'its' as in an individual page's quota.
>
> I've searched on this forum and found an answer that it isn't specific
> to a particular URI anymore but I wanted to verify this as if it is I
> can see running into a lot of problems shortly after release.
>
> I apologize for re-asking but I think being 100% sure would make me
> feel a lot better about a major release using app engine,python and
> facebook together in a winter wonderland.
>
> --
> You received this message because you are subscribed to the Google Groups
> "Google App Engine" group.
> To post to this group, send email to google-appeng...@googlegroups.com.
> To unsubscribe from this group, send email to
> google-appengine+unsubscr...@googlegroups.com
> .
> For more options, visit this group at
> http://groups.google.com/group/google-appengine?hl=en.
>
>


-- 
Nick Johnson, Developer Programs Engineer, App Engine
Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
368047

-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To post to this group, send email to google-appeng...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en.



[google-appengine] Unable to appcfg.py update my app

2010-02-09 Thread Tom Wu
Application: myapp; version: 1.
Server: appengine.google.com.
Scanning files on local disk.
Scanned 500 files.
Scanned 1000 files.
Scanned 1500 files.
Scanned 2000 files.
Initiating update.
Error 500: --- begin server output ---

Server Error (500)
A server error has occurred.
--- end server output ---


Best Regards
Tom Wu




Re: [google-appengine] Re: 1.3.1 SDK Prerelease - help us verify

2010-02-09 Thread Nick Johnson (Google)
2010/2/9 Stephen 

>
> I'm asking if it's wise to store it as a query parameter embedded in a
> web page.
>

You're right that it's unwise. Depending on how you construct your query, a
user could potentially modify the cursor they send to you to return results
from any query your datastore is capable of performing, which could result
in you revealing information to the user that they shouldn't know. You
should either store the cursor on the server-side, or encrypt it before
sending it to the client.

I was going to mention something about this in my post, but it slipped my
mind.

-Nick Johnson

>
>
> On Feb 9, 12:26 am, "Ikai L (Google)"  wrote:
> > A cursor serializes to a Base64 encoded String, so you can store it
> anywhere
> > you want to store strings: Memcached, Datastore, etc. You can even pass
> it
> > as an URL parameter to task queues.
> >
> > 2010/2/8 Stephen 
> >
> >
> >
> >
> >
> > > Ah right, Nick's blog does say start_key and not offset. My bad.
> >
> > > Maybe there will be warnings in the upcoming documentation, but my
> > > first instinct was to embed the serialised cursor in the HTML as the
> > > 'next' link. But that doesn't look like a good idea as Nick's decoded
> > > query shows what's embedded:
> >
> > > PrimaryScan {
> > >  start_key: "shell\000TestModel\000foo\000\232bar\000\200"
> > >  start_inclusive: true
> > > }
> > > keys_only: false
> >
> > > First, you may or may not want to leak this info. Second, could this
> > > be altered on the client to change the query in any way that's
> > > undesirable?
> >
> > > Once you have a cursor, where do you store it so you can use it again?
> >
> > > On Feb 8, 10:17 pm, "Ikai L (Google)"  wrote:
> > > > I got beaten to this answer. No, there is no traversal to get to the
> > > offset.
> >
> > > > BigTable has an underlying mechanism for range queries on keys.
> Indexes
> > > are
> > > > essentially a key comprised of a concatenation of application ID,
> entity
> > > > type, column, value. When a filter operation is performed, the
> datastore
> > > > looks for a range matching this criteria, returning the set of keys.
> A
> > > > cursor also adds the datastore key of the entity so it is possible to
> > > > serialize where to begin the query. This is actually a bit awkward to
> > > > explain without visuals. You can watch Ryan Barrett's talk here:
> >
> > > >http://www.youtube.com/watch?v=tx5gdoNpcZM
> >
> > > > Hopefully, we'll be able to post an article at some point in the
> future
> > > > explaining how cursors work.
> >
> > > > 2010/2/8 Alkis Evlogimenos ('Αλκης Ευλογημένος) <
> evlogime...@gmail.com>
> >
> > > > > There is no offset. The protocol buffer stores a start_key and a
> > > boolean
> > > > > denoting if this start key is inclusive or not. The performance of
> > > > > continuing the fetch from a cursor should be the same as the
> > > performance of
> > > > > the first entities you got from a query.
> >
> > > > > On Mon, Feb 8, 2010 at 4:33 PM, Stephen  wrote:
> >
> > > > >> On Feb 8, 7:06 pm, "Ikai L (Google)"  wrote:
> > > > >> > The official docs are pending, but here's Nick Johnson to the
> > > rescue:
> >
> > >http://blog.notdot.net/2010/02/New-features-in-1-3-1-prerelease-Cursors
> >
> > > > >> What are the performance characteristics of cursors?
> >
> > > > >> The serialised cursor shows that it stores an offset. Does that
> mean
> > > > >> that if the offset is one million, one million rows will have to
> be
> > > > >> skipped before the next 10 are returned? This will be faster than
> > > > >> doing it in your app, but not as quick as the existing bookmark
> > > > >> techniques which use the primary key index.
> >
> > > > >> Or is the server-side stateful, like a typical SQL implementation
> of
> > > > >> cursors? In which case, are there any limits to the number of
> active
> > > > >> cursors? Or what if a cursor is resumed some time in the future;
> will
> > > > >> it work at all, or work slower?
> >
> > > > >> --
> > > > >> You received this message because you are subscribed to the Google
> > > Groups
> > > > >> "Google App Engine" group.
> > > > >> To post to this group, send email to
> > > google-appeng...@googlegroups.com.
> > > > >> To unsubscribe from this group, send email to
> > > > >> google-appengine+unsubscr...@googlegroups.com
> 
> >
> > > 
> 
> >
> >
> > > > >> .
> > > > >> For more options, visit this group at
> > > > >>http://groups.google.com/group/google-appengine?hl=en.
> >
> > > > > --
> >
> > > > > Alkis
> >
> > > > > --
> > > > > You received this message because you are subscribed to the Google
> > > Groups
> > > > > "Google App Engine" group.
> > > > > To post to this group, send email to
> google-appengine@googlegroups.com
> > > .
> > > > > To unsubscribe from this group, send email to
> > > > > google-appengine+unsubscr...@googlegroups.com
> 
> >
> > > 
> 
> >
> >
> > > > > .
> > > > > For more options, visit this group at
> > > > >http://groups.google.com/group/google-appengine?hl=en.
> >
>
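One way to realize the "encrypt it or store it server-side" advice above,
sketched with only the Python standard library: sign the cursor with an HMAC
so a client cannot tamper with it undetected. Signing keeps the cursor
contents visible; if they must also stay secret, real encryption is needed on
top. SECRET, sign_cursor and verify_cursor are made-up names for this sketch,
and the key handling is a placeholder, not a key-management scheme:

```python
import hashlib
import hmac

SECRET = b'replace-with-a-real-per-app-secret'

def sign_cursor(cursor):
    """Append an HMAC-SHA256 signature to a cursor string."""
    mac = hmac.new(SECRET, cursor.encode('utf-8'), hashlib.sha256)
    return cursor + '.' + mac.hexdigest()

def verify_cursor(token):
    """Return the cursor if the signature checks out, else None."""
    cursor, sep, digest = token.rpartition('.')
    if not sep:
        return None  # no signature present
    expected = hmac.new(SECRET, cursor.encode('utf-8'), hashlib.sha256)
    if hmac.compare_digest(expected.hexdigest(), digest):
        return cursor
    return None
```

The signed token could then be embedded in a "next" link; on the following
request, verify_cursor() either recovers the original cursor string or
rejects the tampered token.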

Re: [google-appengine] Re: Can't access Datastore Viewer

2010-02-09 Thread 风笑雪
YourModel.all().filter('email =', None).fetch(100)

You can update or delete them.
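The one-off fix above can be turned into a batched clean-up pass. Below is a
plain-Python sketch of the control flow only, with dicts standing in for
entities; on App Engine the fetch would be the query above and the save would
be a db.put() of each batch (clean_null_emails and the placeholder default
are made-up names for this sketch):

```python
def clean_null_emails(entities, batch_size=100, placeholder=''):
    """Walk entities in batches, replacing None email values.

    Stand-in for an App Engine clean-up loop: there, each batch
    would come from a fetch() and be saved back with db.put().
    Returns the number of entities fixed.
    """
    fixed = 0
    for start in range(0, len(entities), batch_size):
        batch = entities[start:start + batch_size]
        for entity in batch:
            if entity.get('email') is None:
                entity['email'] = placeholder  # or delete the entity instead
                fixed += 1
        # db.put(batch) would go here on App Engine
    return fixed
```

Once no None values remain, the Datastore Viewer's type validation should no
longer trip over those properties.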

2010/2/9 Shai :
> HI,
> Yes, I am using Email fields and URL fields.
> I am not sure if there are currently null values there but my code
> doesn't validate that option so it's a possibility.
>
> Either way, I can change the model to strings instead of URL/Email
> pretty quick but how can I remove the data already there ?
>
> The project is still in testing phase so I can remove all data, is
> there a quick "reset" option ?
> (I'm using the Java + eclipse plug in)
>
>
> P.S.
> Can't email/url fields be null ? is it a problem  ? is this documented
> and I missed it ?
>
>
>
> On Feb 9, 12:34 am, "Ikai L (Google)"  wrote:
>> Datastore viewer issues are likely related to type validation issues. Are
>> you using specialized types such as PhoneNumber, Email Address or URL fields
>> in your model? Are these always being set, or are null or invalid values
>> being set?
>>
>>
>>
>>
>>
>> On Sat, Feb 6, 2010 at 6:12 PM, Shai  wrote:
>> > I hardly have any records, its a new application
>> > I tried other browsers and other computers over a few days
>>
>> > Maybe something in my data is causing the problem ? I have no clue how
>> > to analyze this without a data viewer. my application runs smoothly
>>
>> > On Feb 7, 4:07 am, Eli Jones  wrote:
>> > > I had a few problems with the Datastore Viewer this evening.. but I
>> > assumed
>> > > it was because I was deleting thousands of entities.
>>
>> > > Try the usual tricks.. log out of your google accounts.. clear cache and
>> > > cookies.. and try relogging in to your Dashboard.
>>
>> > > On Sat, Feb 6, 2010 at 8:58 PM, Shai  wrote:
>> > > > Any one ?
>> > > > It really is blocking me
>>
>> > > > Does anyone know if/how I can or should contact google about this ?
>>
>> > > > On Feb 4, 7:49 pm, Shai  wrote:
>> > > > > HI,
>> > > > > Every time I click on Datastore Viewer in my application page I get a
>> > > > > "Server Error (500)"  , "A server error has occurred."
>>
>> > > > > app id is - SwimmingSession
>>
>> > > > > Can anyone advice ? (did't find any related problem in the logs)
>>
>> > > > --
>> > > > You received this message because you are subscribed to the Google
>> > Groups
>> > > > "Google App Engine" group.
>> > > > To post to this group, send email to google-appengine@googlegroups.com
>> > .
>> > > > To unsubscribe from this group, send email to
>> > > > google-appengine+unsubscr...@googlegroups.com> > > >  e...@googlegroups.com>> > e...@googlegroups.com>
>> > > > .
>> > > > For more options, visit this group at
>> > > >http://groups.google.com/group/google-appengine?hl=en.
>>
>> > --
>> > You received this message because you are subscribed to the Google Groups
>> > "Google App Engine" group.
>> > To post to this group, send email to google-appeng...@googlegroups.com.
>> > To unsubscribe from this group, send email to
>> > google-appengine+unsubscr...@googlegroups.com> >  e...@googlegroups.com>
>> > .
>> > For more options, visit this group at
>> >http://groups.google.com/group/google-appengine?hl=en.
>>
>> --
>> Ikai Lan
>> Developer Programs Engineer, Google App 
>> Enginehttp://googleappengine.blogspot.com|http://twitter.com/app_engine
>
> --
> You received this message because you are subscribed to the Google Groups 
> "Google App Engine" group.
> To post to this group, send email to google-appeng...@googlegroups.com.
> To unsubscribe from this group, send email to 
> google-appengine+unsubscr...@googlegroups.com.
> For more options, visit this group at 
> http://groups.google.com/group/google-appengine?hl=en.
>
>




Re: [google-appengine] Re: Stored data volume is totally different on Datastore Statistics and Dashboard

2010-02-09 Thread Pavel Kaplin
I like this suggestion too. Moreover, I'd prefer a detailed breakdown of the
number called Used Space on the Dashboard/Quota Details pages, e.g. indexes =
250 MB, entities = 500 MB, sessions = 50 MB and so on, plus detailed info
about indexes and sessions. What do you think?
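For readers puzzled by the gap between raw entity size and Total Stored Data:
each composite index stores one entry per matching entity (more, for list
properties), and every entry repeats the entity key plus the indexed property
values. A back-of-the-envelope sketch in plain Python, with entirely
hypothetical per-entry sizes (the function name and numbers are made up for
illustration):

```python
def estimate_index_bytes(num_entities, per_entry_bytes):
    """Rough index-storage estimate.

    per_entry_bytes: assumed size of one index entry for each
    composite index (entity key + the indexed property values).
    """
    return num_entities * sum(per_entry_bytes)

# Hypothetical numbers only: 15k entities, three composite indexes
# whose entries carry two ~100-byte strings plus key and timestamp.
print(estimate_index_bytes(15000, [350, 250, 150]))  # -> 11250000
```

List properties multiply the entry count further (one entry per combination
of list values), which is how indexes can end up several times larger than
the entities themselves.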

On Mon, Feb 8, 2010 at 9:57 PM, Robert Kluin  wrote:

> I like Philip's suggestion a lot.  It will help us all identify space
> intensive indexes, and _hopefully_ reduce the number of posts about
> this.
>
> I submitted a feature request for this issue:
> http://code.google.com/p/googleappengine/issues/detail?id=2740
>
> Robert
>
>
>
>
> On Mon, Feb 8, 2010 at 2:27 PM, WeatherPhilip
>  wrote:
> > This issue comes up about once per week. Google -- you need to address
> > this. The simplest way of addressing it would be to put the size of
> > *every* index on the statistics page, and show that the total
> > corresponds (roughly) to the quota number.
> >
> > This would enable people to see which index (or indexes) was consuming
> > a lot of space, and take steps to optimize -- maybe by not indexing
> > that field.
> >
> > Philip
> >
> > On Feb 8, 1:48 pm, "Ikai L (Google)"  wrote:
> >> Are you storing information in sessions? Session information can also
> take
> >> up space.
> >>
> >>
> >>
> >>
> >>
> >> On Mon, Feb 8, 2010 at 5:32 AM, Pavel Kaplin 
> wrote:
> >> > Here's the detailed description of mentioned indexes:
> >> >  1) address (string, < 100 bytes), tradePoint(String, < 100 bytes),
> >> > user (Key, generated by GAE), timestamp
> >> >  2) tradePoint, user, timestamp
> >> >  3) user, timestamp
> >>
> >> > Entity count is about 15k. I don't understand how it might have happened.
> >>
> >> > On Feb 8, 3:25 pm, "Nick Johnson (Google)" 
> >> > wrote:
> >> > > Hi Pavel,
> >>
> >> > > That depends on the nature of your indexes, and the entities being
> >> > indexed.
> >> > > It's certainly possible for indexes to reach this magnitude -
> >> > particularly
> >> > > if you're indexing list properties.
> >>
> >> > > -Nick Johnson
> >>
> >> > > On Mon, Feb 8, 2010 at 1:12 PM, Pavel Kaplin <
> pavel.kap...@gmail.com>
> >> > wrote:
> >> > > > It's hard to believe that 3 indexes (for 2, 3 and 4 fields) could
> eat
> >> > > > 9x more space than data itself.
> >>
> >> > > > On Feb 8, 2:45 pm, "Nick Johnson (Google)" <
> nick.john...@google.com>
> >> > > > wrote:
> >> > > > > Hi Pavel,
> >>
> >> > > > > The datastore stats include only the raw size of the entities.
> The
> >> > total
> >> > > > > space consumed is the space consumed by the entities, plus the
> space
> >> > > > > consumed by all your indexes.
> >>
> >> > > > > -Nick Johnson
> >>
> >> > > > > On Mon, Feb 8, 2010 at 12:35 PM, Pavel Kaplin <
> >> > pavel.kap...@gmail.com
> >> > > > >wrote:
> >>
> >> > > > > > Hi there!
> >>
> >> > > > > > My datastore stats says me "Size of all entities = 51 MBytes",
> but
> >> > > > > > dashboard shows 0.54 Gb as Total Stored Data.
> >>
> >> > > > > > As you can see, these values differ from each other for more
> than
> >> > ten
> >> > > > > > times. Why?
> >>
> >> > > > > > Application id is bayadera-tracker
> >>
> >> > > > > > --
> >> > > > > > You received this message because you are subscribed to the
> Google
> >> > > > Groups
> >> > > > > > "Google App Engine" group.
> >> > > > > > To post to this group, send email to
> >> > google-appengine@googlegroups.com
> >> > > > .
> >> > > > > > To unsubscribe from this group, send email to
> >> > > > > > google-appengine+unsubscr...@googlegroups.com e...@googlegroups.com> >> > e...@googlegroups.com> >> > > > e...@googlegroups.com>
> >> > > > > > .
> >> > > > > > For more options, visit this group at
> >> > > > > >http://groups.google.com/group/google-appengine?hl=en.
> >>
> >> > > > > --
> >> > > > > Nick Johnson, Developer Programs Engineer, App Engine
> >> > > > > Google Ireland Ltd. :: Registered in Dublin, Ireland,
> Registration
> >> > > > Number:
> >> > > > > 368047
> >>
> >> > > > --
> >> > > > You received this message because you are subscribed to the Google
> >> > Groups
> >> > > > "Google App Engine" group.
> >> > > > To post to this group, send email to
> google-appengine@googlegroups.com
> >> > .
> >> > > > To unsubscribe from this group, send email to
> >> > > > google-appengine+unsubscr...@googlegroups.com e...@googlegroups.com> >> > e...@googlegroups.com>
> >> > > > .
> >> > > > For more options, visit this group at
> >> > > >http://groups.google.com/group/google-appengine?hl=en.
> >>
> >> > > --
> >> > > Nick Johnson, Developer Programs Engineer, App Engine
> >> > > Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration
> >> > Number:
> >> > > 368047
> >>
> >> > --
> >> > You received this message because you are subscribed to the Google
> Groups
> >> > "Google App Engine" group.
> >> > To post to this group, send email to
> google-appeng...@googlegroups.com.
> >> > To unsubscribe from this group, send email to
> >> > google-appengine+unsubscr...@googlegroups.com e...@googlegr

Re: [google-appengine] Re: Stored data volume is totally different on Datastore Statistics and Dashboard

2010-02-09 Thread Pavel Kaplin
No, I don't store any info in sessions, except the logged-in user id via the
Google Users API.

On Mon, Feb 8, 2010 at 8:48 PM, Ikai L (Google)  wrote:

> Are you storing information in sessions? Session information can also take
> up space.
>
>
> On Mon, Feb 8, 2010 at 5:32 AM, Pavel Kaplin wrote:
>
>> Here's the detailed description of mentioned indexes:
>>  1) address (string, < 100 bytes), tradePoint(String, < 100 bytes),
>> user (Key, generated by GAE), timestamp
>>  2) tradePoint, user, timestamp
>>  3) user, timestamp
>>
>> Entity count is about 15k. I don't understand how it might have happened.
>>
>> On Feb 8, 3:25 pm, "Nick Johnson (Google)" 
>> wrote:
>> > Hi Pavel,
>> >
>> > That depends on the nature of your indexes, and the entities being
>> indexed.
>> > It's certainly possible for indexes to reach this magnitude -
>> particularly
>> > if you're indexing list properties.
>> >
>> > -Nick Johnson
>> >
>> >
>> >
>> >
>> >
>> > On Mon, Feb 8, 2010 at 1:12 PM, Pavel Kaplin 
>> wrote:
>> > > It's hard to believe that 3 indexes (for 2, 3 and 4 fields) could eat
>> > > 9x more space than data itself.
>> >
>> > > On Feb 8, 2:45 pm, "Nick Johnson (Google)" 
>> > > wrote:
>> > > > Hi Pavel,
>> >
>> > > > The datastore stats include only the raw size of the entities. The
>> total
>> > > > space consumed is the space consumed by the entities, plus the space
>> > > > consumed by all your indexes.
>> >
>> > > > -Nick Johnson
>> >
>> > > > On Mon, Feb 8, 2010 at 12:35 PM, Pavel Kaplin <
>> pavel.kap...@gmail.com
>> > > >wrote:
>> >
>> > > > > Hi there!
>> >
>> > > > > My datastore stats says me "Size of all entities = 51 MBytes", but
>> > > > > dashboard shows 0.54 Gb as Total Stored Data.
>> >
>> > > > > As you can see, these values differ from each other for more than
>> ten
>> > > > > times. Why?
>> >
>> > > > > Application id is bayadera-tracker
>> >
>> > > > > --
>> > > > > You received this message because you are subscribed to the Google
>> > > Groups
>> > > > > "Google App Engine" group.
>> > > > > To post to this group, send email to
>> google-appengine@googlegroups.com
>> > > .
>> > > > > To unsubscribe from this group, send email to
>> > > > > google-appengine+unsubscr...@googlegroups.com> e...@googlegroups.com>> > > e...@googlegroups.com>
>> > > > > .
>> > > > > For more options, visit this group at
>> > > > >http://groups.google.com/group/google-appengine?hl=en.
>> >
>> > > > --
>> > > > Nick Johnson, Developer Programs Engineer, App Engine
>> > > > Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration
>> > > Number:
>> > > > 368047
>> >
>> > > --
>> > > You received this message because you are subscribed to the Google
>> Groups
>> > > "Google App Engine" group.
>> > > To post to this group, send email to
>> google-appeng...@googlegroups.com.
>> > > To unsubscribe from this group, send email to
>> > > google-appengine+unsubscr...@googlegroups.com> e...@googlegroups.com>
>> > > .
>> > > For more options, visit this group at
>> > >http://groups.google.com/group/google-appengine?hl=en.
>> >
>> > --
>> > Nick Johnson, Developer Programs Engineer, App Engine
>> > Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration
>> Number:
>> > 368047
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "Google App Engine" group.
>> To post to this group, send email to google-appeng...@googlegroups.com.
>> To unsubscribe from this group, send email to
>> google-appengine+unsubscr...@googlegroups.com
>> .
>> For more options, visit this group at
>> http://groups.google.com/group/google-appengine?hl=en.
>>
>>
>
>
> --
> Ikai Lan
> Developer Programs Engineer, Google App Engine
> http://googleappengine.blogspot.com | http://twitter.com/app_engine
>
>  --
> You received this message because you are subscribed to the Google Groups
> "Google App Engine" group.
> To post to this group, send email to google-appeng...@googlegroups.com.
> To unsubscribe from this group, send email to
> google-appengine+unsubscr...@googlegroups.com
> .
> For more options, visit this group at
> http://groups.google.com/group/google-appengine?hl=en.
>



-- 
Павел Каплин




[google-appengine] Re: 500 status on apps, api changes?

2010-02-09 Thread dobee
ok, we have now fixed the compatibility issues in our apps, but the
development sdk still does not match the api on appengine.

it would be nice to get information about such internal changes up-front
next time. it is always hard to explain to our customers why
the site was offline for some technical reason we could not foresee.

thx, bernd




[google-appengine] Re: 500 status on apps, api changes?

2010-02-09 Thread dobee
seems that i have found the problem: the signature of db.get has
changed, but only on appengine. we have a wrapper around it, and the
wrapper does not work anymore now.

02-09 12:35AM 40.319
UNHANDLED_EXCEPTION:
Traceback (most recent call last):
  File "/base/data/home/apps/mk-a-z/3.339622665704014614/packages/
django.egg/django/core/handlers/base.py", line 86, in get_response
response = callback(request, *callback_args, **callback_kwargs)
  File "/base/data/home/apps/mk-a-z/3.339622665704014614/packages/
django.egg/django/views/decorators/cache.py", line 30, in
_cache_controlled
response = viewfunc(request, *args, **kw)
  File "/base/data/home/apps/mk-a-z/3.339622665704014614/mkapp/
decorators.py", line 58, in wrapper
data = fxn(*args, **kwargs)
  File "/base/data/home/apps/mk-a-z/3.339622665704014614/mkapp/df/
views.py", line 642, in business_view
business = Business.get_by_ident(id)
  File "/base/data/home/apps/mk-a-z/3.339622665704014614/mkapp/
business.py", line 149, in get_by_ident
business = Business.get_by_key_name(key)
  File "/base/python_lib/versions/1/google/appengine/ext/db/
__init__.py", line 991, in get_by_key_name
return get(keys[0], rpc=rpc)
TypeError: get_cached() got an unexpected keyword argument 'rpc'
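For what it's worth, the traceback pattern suggests the wrapper pinned
db.get's old signature, so the new rpc keyword had nowhere to go. A defensive
sketch in plain Python (_real_get stands in for db.get, get_cached for the
app's wrapper; all names here are made up) is to accept and forward arbitrary
arguments, so keywords added upstream pass through untouched:

```python
def _real_get(keys, rpc=None):
    """Stand-in for google.appengine.ext.db.get, which grew an
    rpc keyword argument in this release."""
    return ['entity-for-%s' % k for k in keys]

# Fragile wrapper: pins the old signature, so any new keyword
# argument fails with "got an unexpected keyword argument".
def get_cached_fragile(keys):
    return _real_get(keys)

# Forward-compatible wrapper: unknown positional and keyword
# arguments flow through to the wrapped function unchanged.
def get_cached(keys, *args, **kwargs):
    # ...cache lookup would go here...
    return _real_get(keys, *args, **kwargs)
```

With the *args/**kwargs form, get_by_key_name's internal call
get(keys[0], rpc=rpc) would have reached the real db.get without the
wrapper needing to know about the new parameter.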




[google-appengine] 500 status on apps, api changes?

2010-02-09 Thread dobee
hello

we have been seeing failures on some apps for about 10 hours. we can find
no log entries for those errors. were there some api changes?

we know of at least one change in how db.Model gets constructed: instances
now get the "key" keyword argument, which is new and broke another of our
apps, which we were able to fix.

but for this app it must be another problem, are there any other
changes to the internals? we have 2 apps with the same symptoms.

curl -i http://marktplatz.a-z.ch

HTTP/1.1 500 Internal Server Error
Date: Tue, 09 Feb 2010 08:26:19 GMT
Content-Type: text/html; charset=UTF-8
Server: Google Frontend
Content-Length: 466
X-XSS-Protection: 0




500 Server Error


Error: Server Error
The server encountered an error and could not complete your
request. If the problem persists, please report your problem and
mention this error message and the query that caused it.



thx in advance, bernd
