[google-appengine] Re: Gae is down ?

2009-06-27 Thread Devel63

Still problematic here; haven't been able to upload a new version for a
couple of hours.

On Jun 26, 9:58 pm, cz czer...@gmail.com wrote:
 Can't upload a new version either: 500 internal server error
 The dashboard is inaccessible as well.
 App is slow but works.

 On Jun 26, 9:55 pm, gg bradjyo...@gmail.com wrote:

  Seems to be just the admin...

  On Jun 26, 9:52 pm, Tom Wu service.g2...@gmail.com wrote:

    Server Error

   A server error has occurred.

   Return to Applications screen » http://appengine.google.com/
--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
Google App Engine group.
To post to this group, send email to google-appengine@googlegroups.com
To unsubscribe from this group, send email to 
google-appengine+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en
-~--~~~~--~~--~--~---



[google-appengine] Re: Gae is down ?

2009-06-27 Thread John
It's been down for the last 45 minutes! :)

On Sat, Jun 27, 2009 at 2:12 PM, Devel63 danstic...@gmail.com wrote:


 Still problematic here; haven't been able to upload a new version for a
 couple of hours.

 On Jun 26, 9:58 pm, cz czer...@gmail.com wrote:
  Can't upload a new version either: 500 internal server error
  The dashboard is inaccessible as well.
  App is slow but works.
 
  On Jun 26, 9:55 pm, gg bradjyo...@gmail.com wrote:
 
   Seems to be just the admin...
 
   On Jun 26, 9:52 pm, Tom Wu service.g2...@gmail.com wrote:
 
 Server Error
 
A server error has occurred.
 
Return to Applications screen » http://appengine.google.com/
 



-- 
Cheers,

John
Creative Director / Programmer
Digital Eternal (http://www.digitaleternal.com)
Branding, Web Design/Development, Internet Advertising, SEO




[google-appengine] Re: Random Datastore Timeouts?

2009-06-27 Thread Brandon Thomson

Timeouts are normal, you have to program your app to deal with them...

The deployment problem right now is separate.
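In case it helps anyone hitting this: one way to "deal with them" is a small retry wrapper. This is only a sketch with a pure-Python stand-in exception; in a real handler you would catch the SDK's db.Timeout around entity.put():

```python
# Hedged sketch: retrying a datastore put on transient timeouts.
# `Timeout` is a stand-in for google.appengine.ext.db.Timeout and
# `put_fn` stands in for a bound entity.put; names are illustrative.
import time

class Timeout(Exception):
    """Stand-in for db.Timeout raised by datastore calls."""

def put_with_retry(put_fn, retries=3, backoff=0.05):
    """Call put_fn, retrying on Timeout with exponential backoff."""
    for attempt in range(retries):
        try:
            return put_fn()
        except Timeout:
            if attempt == retries - 1:
                raise  # out of retries; surface the error
            time.sleep(backoff * (2 ** attempt))
```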

On Jun 27, 12:23 am, Stephen Mayer stephen.ma...@gmail.com wrote:
 Also noticing random errors in the appengine control panel ...

 Looks like this:

 Server Error
 A server error has occurred.

 On Jun 26, 11:15 pm, Stephen Mayer stephen.ma...@gmail.com wrote:

  Please note that sometimes I don't get any timeout and the exact same
  put request works fine.  Appreciate any assistance you can offer!

  Stephen

  On Jun 26, 11:14 pm, Stephen Mayer stephen.ma...@gmail.com wrote:

   Hi All ...

   Anyone know why I might be seeing random timeouts from the datastore?
   I'm inserting a very simple row and I'm the only one using my app ...
   so almost no load.

   ex. error message:
   
   Traceback (most recent call last):
     File /base/python_lib/versions/1/google/appengine/ext/webapp/
   __init__.py, line 503, in __call__
       handler.post(*groups)
     File /base/data/home/apps/myautomaticlife/4.334490812902965182/
   forecast/views.py, line 164, in post
       event.put()
     File /base/python_lib/versions/1/google/appengine/ext/db/
   __init__.py, line 696, in put
       return datastore.Put(self._entity)
     File /base/python_lib/versions/1/google/appengine/api/
   datastore.py, line 166, in Put
       raise _ToDatastoreError(err)
     File /base/python_lib/versions/1/google/appengine/api/
   datastore.py, line 2055, in _ToDatastoreError
       raise errors[err.application_error](err.error_detail)
   Timeout
   

   Any ideas?
   Stephen



[google-appengine] Re: Random Datastore Timeouts?

2009-06-27 Thread Paul Kinlan
The datastore is in read-only mode at the moment, is it not?
http://code.google.com/status/appengine

2009/6/27 Brandon Thomson gra...@gmail.com


 Timeouts are normal, you have to program your app to deal with them...

 The deployment problem right now is separate.

 On Jun 27, 12:23 am, Stephen Mayer stephen.ma...@gmail.com wrote:
  Also noticing random errors in the appengine control panel ...
 
  Looks like this:
 
  Server Error
  A server error has occurred.
 
  On Jun 26, 11:15 pm, Stephen Mayer stephen.ma...@gmail.com wrote:
 
   Please note that sometimes I don't get any timeout and the exact same
   put request works fine.  Appreciate any assistance you can offer!
 
   Stephen
 
   On Jun 26, 11:14 pm, Stephen Mayer stephen.ma...@gmail.com wrote:
 
Hi All ...
 
Anyone know why I might be seeing random timeouts from the datastore?
I'm inserting a very simple row and I'm the only one using my app ...
so almost no load.
 
ex. error message:

Traceback (most recent call last):
  File /base/python_lib/versions/1/google/appengine/ext/webapp/
__init__.py, line 503, in __call__
handler.post(*groups)
  File /base/data/home/apps/myautomaticlife/4.334490812902965182/
forecast/views.py, line 164, in post
event.put()
  File /base/python_lib/versions/1/google/appengine/ext/db/
__init__.py, line 696, in put
return datastore.Put(self._entity)
  File /base/python_lib/versions/1/google/appengine/api/
datastore.py, line 166, in Put
raise _ToDatastoreError(err)
  File /base/python_lib/versions/1/google/appengine/api/
datastore.py, line 2055, in _ToDatastoreError
raise errors[err.application_error](err.error_detail)
Timeout

 
Any ideas?
Stephen
 





[google-appengine] Re: upload_data authentication problem

2009-06-27 Thread John

Hi Nick,

  Does this mean that if my app is set to allow authentication from
all Google Accounts, I will not be able to use the Bulk Loader with
any e-mail address? Essentially, does this mean the bulk loader will
not work with this type of application?

  If so, how do I upload the existing data I have from a previous
Django-based application? I have production data I want to transfer
over to the App Engine datastore.

On May 5, 8:40 pm, Nick Johnson (Google) nick.john...@google.com
wrote:
 Hi John,

 Are you using a Google Apps account as an administrator? If your app
 is set to allow authentication from all domains, and you're using a
 Google Apps account for the administrator account, this is the
 behaviour you'll see with the bulk loader, even if the same account
 works fine for other commands like 'upload'. The reason you're not
 being prompted is likely because you've allowed appcfg.py to store
 your credentials.

 -Nick Johnson



[google-appengine] Re: bugs in index?

2009-06-27 Thread buger

And these 2 queries return one result! I mean the queries in the first
message.

On Jun 27, 1:19 pm, buger leons...@gmail.com wrote:
 Oohh, it's a more interesting problem than I thought. I rebuilt my
 indexes, and these 2 queries return one result!
 Let's look:

 SELECT * FROM Video WHERE artist = Key
 ('ag9tdXNpY3ZpZGVvYnVnZXJyFwsSBkFydGlzdCILYXNpZ3VyIHLDs3MM')

 This query returns 5 objects, and 4 of them have a status value equal
 to 2.

 SELECT * FROM Video WHERE artist = Key
 ('ag9tdXNpY3ZpZGVvYnVnZXJyFwsSBkFydGlzdCILYXNpZ3VyIHLDs3MM') AND
 status = 2
 This query returns only 1 object. Ooops, what is it? :)

 my appid is musicvideobuger

 On Jun 26, 2:26 pm, Nick Johnson (Google) nick.john...@google.com
 wrote:

  Hi buger,

  Is this in production, or on the dev_appserver? In production, your
  app does not appear to have the necessary indexes to execute the
  second query.

  -Nick Johnson

  On Thu, Jun 25, 2009 at 11:38 PM, bugerleons...@gmail.com wrote:

   SELECT * FROM Video WHERE artist = Key
   ('ag9tdXNpY3ZpZGVvYnVnZXJyFwsSBkFydGlzdCILYXNpZ3VyIHLDs3MM') AND
   status = 2
   This query gives me 4 results

   But this gives only 1 result! I just added an ordering:
   SELECT * FROM Video WHERE artist = Key
   ('ag9tdXNpY3ZpZGVvYnVnZXJyFwsSBkFydGlzdCILYXNpZ3VyIHLDs3MM') AND
   status = 2 ORDER BY created_at desc

   created_at is a DateTime property, and all these objects have it.

  --
  Nick Johnson, App Engine Developer Programs Engineer
  Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration
  Number: 368047
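For anyone following along: the ordered query would need a composite index roughly like the one below in index.yaml. This is a sketch inferred from the quoted GQL (the dev_appserver normally generates the entry for you when you run the query locally):

```yaml
indexes:
- kind: Video
  properties:
  - name: artist
  - name: status
  - name: created_at
    direction: desc
```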



[google-appengine] BlazeDS Spring Flex not working and no exceptions in log

2009-06-27 Thread steven.head...@gmail.com

I have followed these instructions to build my Flex+BlazeDS+Spring
application: http://java.dzone.com/articles/flex-remoting-google-app.

The app works fine within Eclipse, but when uploaded to the cloud it
stops working and provides no log information other than:

- - [27/Jun/2009:06:31:27 -0700] GET /favicon.ico HTTP/1.1 404 0 -
Mozilla/5.0 (Macintosh; U; PPC Mac OS X 10.4; en-US; rv:1.9.0.11)
Gecko/2009060214 Firefox/3.0.11,gzip(gfe)


I have changed the values in my log.properties to:

# Set the default logging level for all loggers to INFO
.level = INFO

# Set the logging level for the ORM loggers, specifically, to INFO
DataNucleus.JDO.level=INFO
DataNucleus.Persistence.level=INFO
DataNucleus.Cache.level=INFO
DataNucleus.MetaData.level=INFO
DataNucleus.General.level=INFO
DataNucleus.Utility.level=INFO
DataNucleus.Transaction.level=INFO
DataNucleus.Datastore.level=INFO
DataNucleus.ClassLoading.level=INFO
DataNucleus.Plugin.level=INFO
DataNucleus.ValueGeneration.level=INFO
DataNucleus.Enhancer.level=INFO
DataNucleus.SchemaTool.level=INFO

Still no other log information.

I then changed my BlazeDS build to reflect problems defined in this
article: 
http://martinzoldano.blogspot.com/2009/04/appengine-adobe-blazeds-fix.html.

Still no other error information. Any help would be appreciated.


Regards,


Steven H.



[google-appengine] Re: Transactionally updating multiple entities over 1MB

2009-06-27 Thread Andy Freeman

  Does that mean that db.put((e1, e2, e3,)) where all of the entities
  are 500kb will fail?

 Yes.

Thanks.

I'll take this opportunity to promote a couple of related feature
requests.

(1) We need a way to estimate entity sizes
http://code.google.com/p/googleappengine/issues/detail?id=1084

(2) We need a way to help predict when datastore operations will fail
http://code.google.com/p/googleappengine/issues/detail?id=917

I assume that db.get((k1, k2,)) can fail for size reasons when
db.get(k1) followed by db.get(k2) would each succeed.  Does db.get((k1,
k2,)) return at least one entity in that case?
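Until those feature requests land, one hedged workaround is to split a batch into sub-batches whose estimated sizes stay under the per-call limit. Note `size_of` is an assumed caller-supplied estimator (the SDK documents no official one, which is what issue 1084 asks for):

```python
# Sketch: chunk entities so no single API call exceeds ~1MB.
MAX_CALL_BYTES = 1000000  # the per-call limit discussed above

def batch_by_size(entities, size_of, limit=MAX_CALL_BYTES):
    """Yield lists of entities whose estimated total size fits one call."""
    batch, total = [], 0
    for entity in entities:
        size = size_of(entity)
        if batch and total + size > limit:
            yield batch          # flush before this entity would overflow
            batch, total = [], 0
        batch.append(entity)
        total += size
    if batch:
        yield batch

# each yielded list would then go to its own db.put(batch) call
```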



On Jun 26, 9:36 am, Nick Johnson (Google) nick.john...@google.com
wrote:
 On Fri, Jun 26, 2009 at 4:42 PM, Andy Freeman ana...@earthlink.net wrote:

    the 1MB limit applies only to single API calls

  Does that mean that db.put((e1, e2, e3,)) where all of the entities
  are 500kb will fail?

 Yes.



  Where are limits on the total size per call documented?

 http://code.google.com/appengine/docs/python/datastore/overview.html#...
  only mentions a limit on the size of individual entities and the total
  number of entities for batch methods.  The batch method documentation
  (http://code.google.com/appengine/docs/python/datastore/functions.html
  andhttp://code.google.com/appengine/docs/python/memcache/functions.html)
  does not mention any limits.

 You're right - we need to improve our documentation in that area. The 1MB
 limit applies to _all_ API calls.



  Is there a documented limit on the number of entities per memcache
  call?

 No.



  BTW - There is a typo in
 http://code.google.com/appengine/docs/python/memcache/overview.html#Q...
  .
  It says In addition to quotas, the following limits apply to the use
  of the Mail service: instead of Memcache service

 Thanks for the heads-up.

 -Nick Johnson







  On Jun 26, 7:28 am, Nick Johnson (Google) nick.john...@google.com
  wrote:
   Hi tav,

   Batch puts aren't transactional unless all the entities are in the
   same entity group. Transactions, however, _are_ transactional, and the
   1MB limit applies only to single API calls, so you can make multiple
   puts to the same entity group in a transaction.

   -Nick Johnson

   On Fri, Jun 26, 2009 at 8:53 AM, tavt...@espians.com wrote:

Hey guys and girls,

I've got a situation where I'd have to transactionally update
multiple entities which would cumulatively be greater than the 1MB
datastore API limit... is there a decent solution for this?

For example, let's say that I start off with entities E1, E2, E3 which
are all about 400kb each. All the entities are specific to a given
User. I grab them all on a remote node and do some calculations on
them to yield new computed entities E1', E2', and E3'.

Any failure of the remote node or the datastore is recoverable except
when the remote node tries to *update* the datastore... in that
situation, it'd have to batch the update into 2 separate .put() calls
to overcome the 1MB limit. And should the remote node die after the
first put(), we have a messy situation =)

My solution at the moment is to:

1. Create a UserRecord entity which has a 'version' attribute
corresponding to the latest versions of the related entities for any
given User.

2. Add a 'version' attribute to all the entities.

3. Whenever the remote node creates the computed new set of
entities, it creates them all with a new version number -- applying
the same version for all the entities in the same transaction.

4. These new entities are actually .put() as totally separate and new
entities, i.e. they do not overwrite the old entities.

5. Once a remote node successfully writes new versions of all the
entities relating to a User, it updates the UserRecord with the latest
version number.

6. From the remote node, delete all Entities related to a User which
don't have the latest version number.

7. Have a background thread check and do deletions of invalid versions
in case a remote node had died whilst doing step 4, 5 or 6...

I've skipped out the complications caused by multiple remote nodes
working on data relating to the same User -- but, overall, the
approach is pretty much the same.

Now, the advantage of this approach (as far as I can see) is that data
relating to a User is never *lost*. That is, data is never lost before
there is valid data to replace it.

However, the disadvantage is that for (unknown) periods of time, there
would be duplicate data sets for a given User... All of which is
caused by the fact that the datastore calls cannot exceed 1MB. =(

So queries will yield duplicate data -- gah!!

Is there a better approach to try at all? Thanks!

--
love, tav

plex:espians/tav | t...@espians.com | +44 (0) 7809 569 369
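The version-sweep in steps 5-7 above can be sketched datastore-free; the list-of-dicts entities below are purely illustrative (real code would query by User and compare against the UserRecord's version):

```python
# Illustrative sketch of tav's version sweep (steps 5-7).
def split_by_version(entities, current_version):
    """Partition a user's entities into (live, stale) sets (step 6)."""
    live = [e for e in entities if e["version"] == current_version]
    stale = [e for e in entities if e["version"] != current_version]
    return live, stale

def sweep(entities, current_version):
    """Return what survives after deleting stale versions (steps 6-7)."""
    live, stale = split_by_version(entities, current_version)
    # in real code: db.delete([e.key() for e in stale])
    return live
```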
   

[google-appengine] Re: Random Datastore Timeouts?

2009-06-27 Thread Stephen Mayer

I've noticed that the put request often eventually completes even though
the app thinks there's a timeout.  This means that an automatic retry
would be a bad idea ... what sort of error message do you return to
users when the datastore may have saved your data ... but we're not
sure ...?   Is this unusual behavior ... or is GAE really
unstable atm?

Stephen
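One hedged way around the "maybe it saved" problem: give the entity a deterministic key_name derived from the request, so a blind retry overwrites the same entity instead of creating a duplicate. The helper below is an illustrative scheme, not SDK API:

```python
# Sketch: a stable key name makes a retried put idempotent.
import hashlib

def event_key_name(user_id, payload):
    """Stable key name for one logical write (illustrative scheme)."""
    digest = hashlib.sha1(("%s:%s" % (user_id, payload)).encode("utf-8"))
    return "event-" + digest.hexdigest()

# with the real SDK this would be something like:
#   Event(key_name=event_key_name(uid, body), ...).put()
# so retrying after a Timeout can at worst rewrite the same entity
```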

On Jun 27, 1:59 am, Paul Kinlan paul.kin...@gmail.com wrote:
 The datastore is in read-only mode at the moment, is it not?
 http://code.google.com/status/appengine

 2009/6/27 Brandon Thomson gra...@gmail.com



  Timeouts are normal, you have to program your app to deal with them...

  The deployment problem right now is separate.

  On Jun 27, 12:23 am, Stephen Mayer stephen.ma...@gmail.com wrote:
   Also noticing random errors in the appengine control panel ...

   Looks like this:

   Server Error
   A server error has occurred.

   On Jun 26, 11:15 pm, Stephen Mayer stephen.ma...@gmail.com wrote:

Please note that sometimes I don't get any timeout and the exact same
put request works fine.  Appreciate any assistance you can offer!

Stephen

On Jun 26, 11:14 pm, Stephen Mayer stephen.ma...@gmail.com wrote:

 Hi All ...

 Anyone know why I might be seeing random timeouts from the datastore?
 I'm inserting a very simple row and I'm the only one using my app ...
 so almost no load.

 ex. error message:
 
 Traceback (most recent call last):
   File /base/python_lib/versions/1/google/appengine/ext/webapp/
 __init__.py, line 503, in __call__
     handler.post(*groups)
   File /base/data/home/apps/myautomaticlife/4.334490812902965182/
 forecast/views.py, line 164, in post
     event.put()
   File /base/python_lib/versions/1/google/appengine/ext/db/
 __init__.py, line 696, in put
     return datastore.Put(self._entity)
   File /base/python_lib/versions/1/google/appengine/api/
 datastore.py, line 166, in Put
     raise _ToDatastoreError(err)
   File /base/python_lib/versions/1/google/appengine/api/
 datastore.py, line 2055, in _ToDatastoreError
     raise errors[err.application_error](err.error_detail)
 Timeout
 

 Any ideas?
 Stephen



[google-appengine] Lots of errors, using tasks to delete all the data of an app, is my experience typical?

2009-06-27 Thread gae123

I am sharing my experience of deleting all the data of one of my apps.
We are talking about 500MB spread over a few thousand records
and about 15 Models. The summary is that it worked but caused many
errors and depleted my CPU time; read on for the details...

The approach was pretty simple. Iterate through all 15 kinds
and create one task for each kind. Then each of these tasks queries
the datastore for 10 entities of its kind, db.delete()s them, and then
queues one more similar task. This continues until the datastore
reports no more records of the kind. I use the default queue. I would
guess that 60-70% of the records are in two Models, so as expected I
noticed that pretty quickly only two tasks remained in the queue.
One more thing: all the data have the same ancestor.
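The chaining described above can be sketched independently of the SDK; `fetch_batch`, `delete_keys`, and `enqueue_self` stand in for `Kind.all(keys_only=True).fetch(10)`, `db.delete(keys)`, and `taskqueue.add(...)` respectively:

```python
# Sketch of one execution of a chained-delete task (stand-in callables).
BATCH_SIZE = 10

def delete_step(fetch_batch, delete_keys, enqueue_self):
    """One task run: delete a batch, requeue only if more may remain."""
    keys = fetch_batch(BATCH_SIZE)
    if not keys:
        return False          # kind is empty; the chain stops here
    delete_keys(keys)
    enqueue_self()            # queue the next task in the chain
    return True
```

Smaller batches and a backoff-style retry around `delete_keys` would likely reduce contention on the shared ancestor's entity group.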

At the end of the process the good news is that the algorithm worked:
no data left in the app, and the Dashboard reports:

96% of my CPU time depleted
10% of my datastore time depleted
748 tasks queued
other indicators healthy...

Now I see many, many logs with the following two traces, which I
think are what caused a lot of tasks to fail and be requeued,
eventually depleting almost all my CPU time. So what could I be doing
wrong? What could I do to avoid these errors?



  File /base/data/home/apps/neatschool-test/
0-7-22-81dafb9.334504032648079489/swplatform/controllers/admin.py,
line 89, in post
db.delete(res)
  File /base/python_lib/versions/1/google/appengine/ext/db/
__init__.py, line 1127, in delete
datastore.Delete(keys)
  File /base/python_lib/versions/1/google/appengine/api/
datastore.py, line 269, in Delete
raise _ToDatastoreError(err)
  File /base/python_lib/versions/1/google/appengine/api/
datastore.py, line 2055, in _ToDatastoreError
raise errors[err.application_error](err.error_detail)
TransactionFailedError: too much contention on these datastore
entities. please try again.



 File /base/data/home/apps/neatschool-test/
0-7-22-81dafb9.334504032648079489/swplatform/controllers/admin.py,
line 86, in post
results = qq.fetch(10)
  File /base/data/home/apps/neatschool-test/
0-7-22-81dafb9.334504032648079489/swplatform/util/db/query.py, line
118, in fetch
ents = self.query.fetch(limit, offset)
  File /base/python_lib/versions/1/google/appengine/ext/db/
__init__.py, line 1426, in fetch
raw = self._get_query().Get(limit, offset)
  File /base/python_lib/versions/1/google/appengine/api/
datastore.py, line 959, in Get
return self._Run(limit, offset)._Get(limit)
  File /base/python_lib/versions/1/google/appengine/api/
datastore.py, line 903, in _Run
_ToDatastoreError(err)
  File /base/python_lib/versions/1/google/appengine/api/
datastore.py, line 2055, in _ToDatastoreError
raise errors[err.application_error](err.error_detail)
Timeout








[google-appengine] The queued task never executes, and see 302 in the logs, what am I doing wrong?

2009-06-27 Thread gae123

Well, this is a tip about an issue that took me a while to
troubleshoot. If your app.yaml file has

   secure: always

for the URL of a task (possibly because it is also a web entry point),
the task will fail to execute and you will just see a redirect/302 in
the request logs. Then the task will be requeued.

Google folks, is my observation correct and if yes is it a bug or a
feature?

Thanks
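If the observation holds, one workaround sketch is to give the task URL its own handler entry without secure: always (the paths and script names below are illustrative, not from the original post):

```yaml
handlers:
- url: /tasks/.*
  script: tasks.py
  login: admin          # task queue requests can still reach admin-only URLs
- url: /.*
  script: main.py
  secure: always        # keep HTTPS for the web entry points
```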

