[google-appengine] Re: Ask GAE: XMPP and WebSockets in the foreseeable future?

2009-10-10 Thread Scott Ellis
It seems to me that the current implementation of the GAE/XMPP interface
would allow that to work as soon as it's working in the browser.
2009/10/10 Backpack georgen...@gmail.com


 XMPP bots are the best thing since sliced bread, but communicating
 with them from the browser is a total pain.

 With the upcoming WebSocket implementations in WebKit and Mozilla,
 how long will it take before GAE allows us to connect to our bots
 using this simple line of code:

 var socket = new WebSocket('wss://talk.google.com:5223');

 Can you imagine how that single line of code will revolutionize the
 web as we know it?


 


--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
Google App Engine group.
To post to this group, send email to google-appengine@googlegroups.com
To unsubscribe from this group, send email to 
google-appengine+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en
-~--~~~~--~~--~--~---



[google-appengine] Will reducing model size improve performance?

2009-10-10 Thread Jason Smith

Hi, group. My app's main cost (in dollars and response time) is in the
db.get([list, of, keys, here]) call in some very high-trafficked code.
I want to pare down the size of that model to the bare minimum with
the hope of reducing the time and CPU fee for this very common
activity. Many users who are experiencing growth in the app popularity
probably have this objective as well.

I have two questions that hopefully others are thinking about too.

1. Can I expect the API time of a db.get() with several hundred keys
to reduce roughly linearly as I reduce the size of the entity?
Currently the entity has the following data attached: 9 String, 9
Boolean, 8 Integer, 1 GeoPt, 2 DateTime, 1 Text (avg size ~100 bytes
FWIW), 1 Reference, 1 StringList (avg size 500 bytes). The goal is to
move the vast majority of this data to related classes so that the
core fetch of the main model will be quick.

2. If I do not change the name of the entity (i.e. just delete all the
db.*Property definitions in the model), will I still incur the same
high cost fetching existing entities? The documentation says that all
properties of a model are fetched simultaneously. Will the old
unneeded properties still transfer over RPC on my dime and while users
wait? In other words: if I want to reduce the size of my entities, is
it necessary to migrate the old entities to ones with the new
definition? If so, is it sufficient to re-put() the entity, or must I
save under a wholly new key?

Thanks very much to anyone who knows about this matter!



[google-appengine] Re: Will reducing model size improve performance?

2009-10-10 Thread Jason Smith

If you're into Stack Overflow, I have posted this question there,
slightly better edited and formatted. I will summarize any good
answers from there on this list.

http://stackoverflow.com/questions/1547750/improve-app-engine-performance-by-reducing-entity-size

On Oct 10, 6:44 pm, Jason Smith j...@proven-corporation.com wrote:
 Hi, group. My app's main cost (in dollars and response time) is in the
 db.get([list, of, keys, here]) call in some very high-trafficked code.
 I want to pare down the size of that model to the bare minimum with
 the hope of reducing the time and CPU fee for this very common
 activity. Many users who are experiencing growth in the app popularity
 probably have this objective as well.

 I have two questions that hopefully others are thinking about too.

 1. Can I expect the API time of a db.get() with several hundred keys
 to reduce roughly linearly as I reduce the size of the entity?
 Currently the entity has the following data attached: 9 String, 9
 Boolean, 8 Integer, 1 GeoPt, 2 DateTime, 1 Text (avg size ~100 bytes
 FWIW), 1 Reference, 1 StringList (avg size 500 bytes). The goal is to
 move the vast majority of this data to related classes so that the
 core fetch of the main model will be quick.

 2. If I do not change the name of the entity (i.e. just delete all the
 db.*Property definitions in the model), will I still incur the same
 high cost fetching existing entities? The documentation says that all
 properties of a model are fetched simultaneously. Will the old
 unneeded properties still transfer over RPC on my dime and while users
 wait? In other words: if I want to reduce the size of my entities, is
 it necessary to migrate the old entities to ones with the new
 definition? If so, is it sufficient to re-put() the entity, or must I
 save under a wholly new key?

 Thanks very much to anyone who knows about this matter!



[google-appengine] AppEngine sends header: X-XSS-Protection: 0

2009-10-10 Thread chadwackerman

Just noticed this. This disables the IE8 XSS security filter.

Many Google sites seem to be sending it. It's odd.

Regardless, on AppEngine it seems like this should be left to the app
to decide, not Google.
I'm seeing it on static pages so I don't think it's Django or webapp.

Anyone know what's up?



[google-appengine] Re: Ask GAE: XMPP and WebSockets in the foreseeable future?

2009-10-10 Thread niklasr

via http://apps.sameplace.cc/chat/chat.xhtml it responds well. Sockets
are old-fashioned, in my opinion; the fewer port numbers, the better.



[google-appengine] Re: Using transactions to avoid stale memcache entries.

2009-10-10 Thread Andy Freeman

 Update memcache after the transaction completes. There's still the
 possibility that your script could fail between the two events,

Updating memcache after the transaction completes can result in
persistently inconsistent memcache data even if there's no script
failure.  Consider:

def txn(key):
    a = db.get(key)
    if not a:
        return None
    a.count += 1
    a.put()
    return a

a = db.run_in_transaction(txn, key)
if a:
    memcache.set(str(a.key()), a)

Even if there are no script failures, the order that different
processes finish the transaction is not guaranteed to be the same as
the order that those processes do the memcache.set.  That
inconsistency lasts until the memcache data times out.  (IIRC, there's
actually no guarantee that memcache data is flushed when the timeout
expires.)
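A tiny pure-Python simulation of that interleaving (no App Engine SDK;
plain dicts stand in for the datastore and memcache, and all names here
are made up for illustration) makes the race concrete:

```python
# Two requests increment a counter transactionally; A commits first,
# B second, but their memcache.set calls land in the opposite order.
datastore = {'count': 0}
memcache = {}

def run_transaction():
    datastore['count'] += 1     # the commit point
    return datastore['count']

a_result = run_transaction()    # A commits: count -> 1
b_result = run_transaction()    # B commits: count -> 2

memcache['count'] = b_result    # B's cache update happens first...
memcache['count'] = a_result    # ...then A's stale value overwrites it

assert datastore['count'] == 2
assert memcache['count'] == 1   # stale until the cache entry expires
```

The datastore ends up correct, but the cache holds the older value
until it expires.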

 but there's
 no avoiding that without transactional semantics between the datastore and
 memcache.

While such transactional semantics between memcache and the datastore
would be sufficient, I don't think that they're necessary to satisfy
my requirement.  My existence proof that the requirement can be
satisfied without transactional semantics is the implementation that I
provided.  It only requires consistency checks at datastore operations
and that I address three specific script failures.  (Note that all
datastore operations after the one that runs into the conflict will be
rolled back/ignored, so there's a cost to delaying the check until
commit.  That said, I don't know if doing the consistency check once
at commit is significantly cheaper than doing it incrementally at each
datastore operation.)

The script failures that I need to address are machine, deadline, or
programming problems after/during the memcache.set and before the
commit.  The last problem is under my control and I think that I've
got a handle on deadlines.  I have to live with machine errors
everywhere else, so 

Datastore transactions are the only tool that I have to constrain the
order of operations in different processes.  I'd like them to be as
powerful as possible.


On Oct 9, 9:53 am, Nick Johnson (Google) nick.john...@google.com
wrote:
 Hi Andy,

 On Fri, Oct 9, 2009 at 5:08 PM, Andy Freeman ana...@earthlink.net wrote:

   They are raised inside a transaction, when a conflict is detected with
   another concurrent transaction. The transaction infrastructure will catch
   and retry these several times, and only raise it in the external code if
  it
   was unable to execute the transaction after several retries.

  Yes, but when are conflicts checked?  Specifically, is the error
  always raised by the statement in the user function that runs into the
  conflict or can it be raised later, say during transaction commit.

 Any datastore operation inside a transaction could raise this exception. It
 would be a bad idea to rely on _where_ this exception will be raised.







  I've looked at the SDK's implementation of
  RunInTransactionCustomRetries (in google/appengine/api/datastore.py).
  The except that catches the CONCURRENT_TRANSACTION exception protects
  the commit and not the execution of the user function.  That suggests
  that the user function is run to completion regardless of conflicts
  and that the conflict isn't acted upon until a commit is tried.

  However, your description and the documentation suggests the real
  implementation detects and acts on conflicts while running the user
  function.

  Here's a user function which demonstrates the difference.  (Yes, I
  picked an example that I care about.  I'm trying to ensure that
  memcache data is not too stale.)

  def txn():
     ...
     a.put()
     memcache.set('a', a.field)
     return a

  If the CONCURRENT_TRANSACTION exception is raised while txn is being
  run, specifically during a.put(), the memcache.set won't happen when
  db.run_in_transaction(txn) fails.  If that exception is raised after
  txn has exited and during commit (as the SDK code suggests), the
  memcache.set will happen whether or not db.run_in_transaction(txn)
  fails.

  If my understanding of the SDK code is correct and the real
  implementation works the same way, namely that conflicts are detected
  after the user function completes, how can I ensure that memcache data
  is not too stale?  (One way is to have that data expire reasonably
  quickly, but that reduces the value of memcache.)

 Update memcache after the transaction completes. There's still the
 possibility that your script could fail between the two events, but there's
 no avoiding that without transactional semantics between the datastore and
 memcache.

  Also, what's the definition of conflict?  Clearly there's a conflict
  between a user function that reads a given data store entity and one
  that writes the same entity.  However, what about the following?

  def txn1(a, b):
     # notice - no read for a or b
     a.put()
     b.put()
     return True

  Does the conflict detection system detect the conflict between
  transactions with 

[google-appengine] Re: Deleting / Hoarding of Application Ids

2009-10-10 Thread Andy Freeman

Note that yadayada.appspot.com is reserved for the owner of
yaday...@gmail.com.  This association seems reasonable, but means that
any name recycling/reclaiming for appengine must also address gmail
and perhaps other google accounts.

On Oct 8, 10:09 pm, Donzo para...@gmail.com wrote:
 Are there plans to enable deleting Application Ids?  Are there plans
 to eventually expire (auto-delete) Application Ids reserved but
 unused???

 I'm just getting into GAE and am extremely interested in using it for
 several web sites we now operate.  One issue is that almost all the
 GAE application ids corresponding to domains we own are already
 taken ... but none have a live site running.  For example, we own
 yadayada.com (not really), but I've found that yadayada.appspot.com is
 already reserved but doesn't have an active web site running.  This is
 true for almost all of my domains, some of them rather unique so I
 wonder if appspot.com is not seeing a lot of hoarding of good names
 since it's free.

 This is important to me because of the current inability to use https
 (ssl) AND my domain name for a GAE hosted web site.  I do need to
 direct my users to https for some pages ... and am willing to do so if
 I can get yadayada.appspot.com.  But since that's already taken, I
 would have to use something else ... which will spook some of my users
 into thinking its an impostor web site (asking for id and password no
 less!!!).



[google-appengine] Re: Ask GAE: XMPP and WebSockets in the foreseeable future?

2009-10-10 Thread PointBreak

While SamePlace is a cool app, it uses xmpp4moz, which uses sockets to
connect to GTalk; so it's still sockets, but inside a browser-specific
extension built on Mozilla's XPCOM and XUL.

That's why HTML5 brings WebSockets to the table: so we don't have to
rely on hacks like Java applets, Flash, or XPCOM to use sockets.

The chat part has been solved by Jabber/xmpp, the bot part has been
solved by GAE/xmpp and the collaboration part by Google/WAVE.

All we need is a single point of connection to bring it all to the
browser without hacks.


On Oct 10, 8:50 am, niklasr nikla...@gmail.com wrote:
 via http://apps.sameplace.cc/chat/chat.xhtml it responds well. sockets
 are oldfashioned for one opinion, the less numbers the better.



[google-appengine] Re: Will reducing model size improve performance?

2009-10-10 Thread Kevin Pierce
Hi,
1. I recommend using the same key_name, based on your logical primary
key, for all of those related models. Then you can generate keys for
whichever data you need and fetch the parts you want all at once. I'm
not sure what performance benefit you can expect.

2. If you put the entity again, it should overwrite the old unused
properties and keep them from being transferred over the wire.
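A rough sketch of that layout (plain Python dicts standing in for two
datastore kinds, since the SDK isn't needed to show the idea; all model
and key names here are made up):

```python
# Two "kinds" sharing one logical key_name: a lean core fetched on the
# hot path, and a heavy detail record fetched only on demand.
place_core = {}     # stands in for the small, hot model
place_detail = {}   # stands in for the rarely needed bulk data

def put_place(key_name, name, description):
    place_core[key_name] = {'name': name}
    place_detail[key_name] = {'description': description}

put_place('cafe-42', 'Cafe 42', 'long descriptive text ' * 100)

# Hot path: batch-fetch only the small cores by key_name.
hot = [place_core[k] for k in ('cafe-42',)]
assert hot[0]['name'] == 'Cafe 42'

# Slow path: fetch the heavy details only when actually needed.
detail = place_detail['cafe-42']
```

Because both kinds share the key_name, either batch can be fetched
directly by key with no query.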

On Sat, Oct 10, 2009 at 5:53 AM, Jason Smith j...@proven-corporation.comwrote:


 If you're into SO, I have posted this question there, slightly better
 edited and formatted. I will summarize any good answers there in this
 list.


 http://stackoverflow.com/questions/1547750/improve-app-engine-performance-by-reducing-entity-size

 On Oct 10, 6:44 pm, Jason Smith j...@proven-corporation.com wrote:
  Hi, group. My app's main cost (in dollars and response time) is in the
  db.get([list, of, keys, here]) call in some very high-trafficked code.
  I want to pare down the size of that model to the bare minimum with
  the hope of reducing the time and CPU fee for this very common
  activity. Many users who are experiencing growth in the app popularity
  probably have this objective as well.
 
  I have two questions that hopefully others are thinking about too.
 
  1. Can I expect the API time of a db.get() with several hundred keys
  to reduce roughly linearly as I reduce the size of the entity?
  Currently the entity has the following data attached: 9 String, 9
  Boolean, 8 Integer, 1 GeoPt, 2 DateTime, 1 Text (avg size ~100 bytes
  FWIW), 1 Reference, 1 StringList (avg size 500 bytes). The goal is to
  move the vast majority of this data to related classes so that the
  core fetch of the main model will be quick.
 
  2. If I do not change the name of the entity (i.e. just delete all the
  db.*Property definitions in the model), will I still incur the same
  high cost fetching existing entities? The documentation says that all
  properties of a model are fetched simultaneously. Will the old
  unneeded properties still transfer over RPC on my dime and while users
  wait? In other words: if I want to reduce the size of my entities, is
  it necessary to migrate the old entities to ones with the new
  definition? If so, is it sufficient to re-put() the entity, or must I
  save under a wholly new key?
 
  Thanks very much to anyone who knows about this matter!
 



-- 
Kevin Pierce
Software Architect
VendAsta Technologies Inc.
kpie...@vendasta.com
(306)955.5512 ext 103
www.vendasta.com




[google-appengine] Re: Will reducing model size improve performance?

2009-10-10 Thread Andy Freeman

 In other words: if I want to reduce the size of my entities, is
 it necessary to migrate the old entities to ones with the new
 definition?

I'm pretty sure that the answer to that is yes.

  If so, is it sufficient to re-put() the entity, or must I
 save under a wholly new key?

I think that it should be sufficient to re-put(), but I decided to
test that hypothesis.

It isn't sufficient in the SDK - the SDK admin console continues to
show values for properties that you've deleted from the model
definition after the re-put().  Yes, I checked to make sure that those
properties didn't have values before the re-put().

I did the get and re-put() in a transaction, namely:

def txn(key):
    obj = Model.get(key)
    obj.put()
    return obj
assert db.run_in_transaction(txn, key)

I tried two things to get around this problem.  The first was to add
db.delete(obj.key()) right before obj.put().  (You can't do obj.delete
because that trashes the obj.)

The second was to add obj.old_property = None right before the
obj.put() (old_property is the name of the property that I deleted
from Model's definition.)

Neither one worked.  According to the SDK's datastore viewer, existing
instances of Model continued to have values for old_property after I
updated them with that transaction even with the two changes, together
or separately.

If this is also true of the production datastore, this is a big deal.
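One hypothetical way to picture that behavior (a plain-Python sketch of
a writer that only touches declared properties; this is not the SDK's
actual serialization code, just an illustration of the symptom):

```python
# If a re-put() only writes the properties the model still declares,
# undeclared ones are never overwritten or removed, so stale values
# survive. 'old_property' is a made-up name for illustration.
stored_entity = {'name': 'x', 'old_property': 'stale'}  # what's on disk
declared_properties = ('name',)                         # slimmed model

def naive_reput(entity):
    # writes back only declared properties; never deletes the others
    for prop in declared_properties:
        entity[prop] = entity[prop]

naive_reput(stored_entity)
assert 'old_property' in stored_entity   # the stale value survives

# An explicit delete of the stale key is what actually clears it.
stored_entity.pop('old_property', None)
assert 'old_property' not in stored_entity
```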


On Oct 10, 4:44 am, Jason Smith j...@proven-corporation.com wrote:
 Hi, group. My app's main cost (in dollars and response time) is in the
 db.get([list, of, keys, here]) call in some very high-trafficked code.
 I want to pare down the size of that model to the bare minimum with
 the hope of reducing the time and CPU fee for this very common
 activity. Many users who are experiencing growth in the app popularity
 probably have this objective as well.

 I have two questions that hopefully others are thinking about too.

 1. Can I expect the API time of a db.get() with several hundred keys
 to reduce roughly linearly as I reduce the size of the entity?
 Currently the entity has the following data attached: 9 String, 9
 Boolean, 8 Integer, 1 GeoPt, 2 DateTime, 1 Text (avg size ~100 bytes
 FWIW), 1 Reference, 1 StringList (avg size 500 bytes). The goal is to
 move the vast majority of this data to related classes so that the
 core fetch of the main model will be quick.

 2. If I do not change the name of the entity (i.e. just delete all the
 db.*Property definitions in the model), will I still incur the same
 high cost fetching existing entities? The documentation says that all
 properties of a model are fetched simultaneously. Will the old
 unneeded properties still transfer over RPC on my dime and while users
 wait? In other words: if I want to reduce the size of my entities, is
 it necessary to migrate the old entities to ones with the new
 definition? If so, is it sufficient to re-put() the entity, or must I
 save under a wholly new key?

 Thanks very much to anyone who knows about this matter!



[google-appengine] Re: Will reducing model size improve performance?

2009-10-10 Thread Jason Smith

Thanks for the help guys. I think this is an important matter to have
cleared up.

It's bedtime here (GMT+7) however tomorrow I think I will do some
benchmarks along the lines of the example I wrote up in the SO
question.

At this point I would think the safest thing would be to completely
change the model name, thereby guaranteeing that you will be writing
entities with fresh keys. However I suspect it's not necessary to go
that far. I'm thinking that on the production datastore, changing the
model definition and then re-put()ing the entity will be what's
required to realize a speed benefit when reducing the number of
properties on a model. But the facts will speak for themselves.

On Oct 11, 12:17 am, Andy Freeman ana...@earthlink.net wrote:
  In other words: if I want to reduce the size of my entities, is
  it necessary to migrate the old entities to ones with the new
  definition?

 I'm pretty sure that the answer to that is yes.

               If so, is it sufficient to re-put() the entity, or must I
  save under a wholly new key?

 I think that it should be sufficient re-put() but decided to test that
 hypothesis.

 It isn't sufficient in the SDK - the SDK admin console continues to
 show values for properties that you've deleted from the model
 definition after the re-put().  Yes, I checked to make sure that those
 properties didn't have values before the re-put().

 I did the get and re-put() in a transaction, namely:

 def txn(key):
     obj = Model.get(key)
     obj.put()
 assert db.run_in_transaction(txn, key)

 I tried two things to get around this problem.  The first was to add
 db.delete(obj.key()) right before obj.put().  (You can't do obj.delete
 because that trashes the obj.)

 The second was to add obj.old_property = None right before the
 obj.put() (old_property is the name of the property that I deleted
 from Model's definition.)

 Neither one worked.  According to the SDK's datastore viewer, existing
 instances of Model continued to have values for old_property after I
 updated them with that transaction even with the two changes, together
 or separately.

 If this is also true of the production datastore, this is a big deal.

 On Oct 10, 4:44 am, Jason Smith j...@proven-corporation.com wrote:



  Hi, group. My app's main cost (in dollars and response time) is in the
  db.get([list, of, keys, here]) call in some very high-trafficked code.
  I want to pare down the size of that model to the bare minimum with
  the hope of reducing the time and CPU fee for this very common
  activity. Many users who are experiencing growth in the app popularity
  probably have this objective as well.

  I have two questions that hopefully others are thinking about too.

  1. Can I expect the API time of a db.get() with several hundred keys
  to reduce roughly linearly as I reduce the size of the entity?
  Currently the entity has the following data attached: 9 String, 9
  Boolean, 8 Integer, 1 GeoPt, 2 DateTime, 1 Text (avg size ~100 bytes
  FWIW), 1 Reference, 1 StringList (avg size 500 bytes). The goal is to
  move the vast majority of this data to related classes so that the
  core fetch of the main model will be quick.

  2. If I do not change the name of the entity (i.e. just delete all the
  db.*Property definitions in the model), will I still incur the same
  high cost fetching existing entities? The documentation says that all
  properties of a model are fetched simultaneously. Will the old
  unneeded properties still transfer over RPC on my dime and while users
  wait? In other words: if I want to reduce the size of my entities, is
  it necessary to migrate the old entities to ones with the new
  definition? If so, is it sufficient to re-put() the entity, or must I
  save under a wholly new key?

  Thanks very much to anyone who knows about this matter!



[google-appengine] Re: Will reducing model size improve performance?

2009-10-10 Thread Nick Johnson (Google)
On Sat, Oct 10, 2009 at 6:27 PM, Jason Smith j...@proven-corporation.comwrote:


 Thanks for the help guys. I think this is an important matter to have
 cleared up.

 It's bedtime here (GMT+7) however tomorrow I think I will do some
 benchmarks along the lines of the example I wrote up in the SO
 question.

 At this point I would think the safest thing would be to completely
 change the model name, thereby guaranteeing that you will be writing
 entities with fresh keys. However I suspect it's not necessary to go
 that far. I'm thinking that on the production datastore, changing the
 model definition and then re-put()ing the entity will be what's
 required to realize a speed benefit when reducing the number of
 properties on a model. But the facts will speak for themselves.


There's no need to use a new model name: You can simply create new entities
to replace the old ones, under the current model name. If you're using key
names, you can construct a new entity with the same values as the old ones,
and store that.

You can also use the low-level API in google.appengine.api.datastore; this
provides a dict-like interface from which you can delete unwanted fields.
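For illustration, a sketch of that dict-style cleanup (plain Python
here; on App Engine the entity would come from the low-level API's Get
call, and 'old_property' is a made-up name):

```python
# The low-level datastore API hands back an entity that behaves like a
# dict, so stale fields can be deleted before re-storing. A plain dict
# stands in for the entity in this sketch.
entity = {'name': 'x', 'old_property': 'stale'}  # as if freshly fetched

if 'old_property' in entity:
    del entity['old_property']

# Re-storing the entity would then write back only the slimmed fields.
assert 'old_property' not in entity
```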

-Nick Johnson


 On Oct 11, 12:17 am, Andy Freeman ana...@earthlink.net wrote:
   In other words: if I want to reduce the size of my entities, is
   it necessary to migrate the old entities to ones with the new
   definition?
 
  I'm pretty sure that the answer to that is yes.
 
If so, is it sufficient to re-put() the entity, or must I
   save under a wholly new key?
 
  I think that it should be sufficient re-put() but decided to test that
  hypothesis.
 
  It isn't sufficient in the SDK - the SDK admin console continues to
  show values for properties that you've deleted from the model
  definition after the re-put().  Yes, I checked to make sure that those
  properties didn't have values before the re-put().
 
  I did the get and re-put() in a transaction, namely:
 
  def txn(key):
  obj = Model.get(key)
  obj.put()
  assert db.run_in_transaction(txn, key)
 
  I tried two things to get around this problem.  The first was to add
  db.delete(obj.key()) right before obj.put().  (You can't do obj.delete
  because that trashes the obj.)
 
  The second was to add obj.old_property = None right before the
  obj.put() (old_property is the name of the property that I deleted
  from Model's definition.)
 
  Neither one worked.  According to the SDK's datastore viewer, existing
  instances of Model continued to have values for old_property after I
  updated them with that transaction even with the two changes, together
  or separately.
 
  If this is also true of the production datastore, this is a big deal.
 
  On Oct 10, 4:44 am, Jason Smith j...@proven-corporation.com wrote:
 
 
 
   Hi, group. My app's main cost (in dollars and response time) is in the
   db.get([list, of, keys, here]) call in some very high-trafficked code.
   I want to pare down the size of that model to the bare minimum with
   the hope of reducing the time and CPU fee for this very common
   activity. Many users who are experiencing growth in the app popularity
   probably have this objective as well.
 
   I have two questions that hopefully others are thinking about too.
 
   1. Can I expect the API time of a db.get() with several hundred keys
   to reduce roughly linearly as I reduce the size of the entity?
   Currently the entity has the following data attached: 9 String, 9
   Boolean, 8 Integer, 1 GeoPt, 2 DateTime, 1 Text (avg size ~100 bytes
   FWIW), 1 Reference, 1 StringList (avg size 500 bytes). The goal is to
   move the vast majority of this data to related classes so that the
   core fetch of the main model will be quick.
 
   2. If I do not change the name of the entity (i.e. just delete all the
   db.*Property definitions in the model), will I still incur the same
   high cost fetching existing entities? The documentation says that all
   properties of a model are fetched simultaneously. Will the old
   unneeded properties still transfer over RPC on my dime and while users
   wait? In other words: if I want to reduce the size of my entities, is
   it necessary to migrate the old entities to ones with the new
   definition? If so, is it sufficient to re-put() the entity, or must I
   save under a wholly new key?
 
   Thanks very much to anyone who knows about this matter!
 



-- 
Nick Johnson, Developer Programs Engineer, App Engine
Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
368047


[google-appengine] Re: Deleting / Hoarding of Application Ids

2009-10-10 Thread OvermindDL1

On Sat, Oct 10, 2009 at 9:08 AM, Andy Freeman ana...@earthlink.net wrote:

 Note that the yadayada.appspot.com is reserved for the owner of
 yaday...@gmail.com.  This association seems reasonable, but means that
 any name recycling/reclaiming for appengine must also address gmail
 and perhaps other google accounts.

So how do we use the appspot name that matches our own account name
(which is what I initially tried to do, but it said it already
existed...)?




[google-appengine] Re: Help: my Google App Engine is broken!

2009-10-10 Thread monnand
ego008 wrote:
 app_id

 2009/10/7 皇家元林 hjy...@gmail.com


 Admin, my Google App Engine is broken!
 I can't upload my micolog blog. What should I do?

This isn't the official Google App Engine list, either... the admins here
can't do anything about App Engine.


   


-- 
Regards
 
Monnand
Email: monn...@gmail.com
GTalk: monn...@gmail.com





[google-appengine] something about the youtube api (upload)

2009-10-10 Thread Nsource

hello everyone, i'm new.

I need to upload a video to YouTube with the YouTube API.

My problem is that every time I open this page it waits for a long, long
time (more than 3-4 hours). It should show something after the video is
uploaded, but there isn't any output.

But when I go to YOUTUBE.COM, I can see the video that I uploaded.

This is my code:


<?php
require_once 'Zend/Loader.php';
// the Zend dir must be in your include_path
Zend_Loader::loadClass('Zend_Gdata_YouTube');
Zend_Loader::loadClass('Zend_Gdata_AuthSub');
Zend_Loader::loadClass('Zend_Gdata_ClientLogin');

$authenticationURL = 'https://www.google.com/youtube/accounts/ClientLogin';
$httpClient = Zend_Gdata_ClientLogin::getHttpClient(
    $username = 'osource1...@gmail.com',
    $password = 'pass1234',
    $service = 'youtube',
    $client = null,
    $source = 'MySource', // a short string identifying your application
    $loginToken = null,
    $loginCaptcha = null,
    $authenticationURL);

$myDeveloperKey = 'AI39si7-1seWd-crQBVHK_XPwVfcJPCujLFX9bKfdgqMunKO6xk9Ow0cLLSuA6XLQGlMDT_yoN6Vin9iY8YJISUkC7RLpUWdFg';

$httpClient->setHeaders('X-GData-Key', "key={$myDeveloperKey}");
$yt = new Zend_Gdata_YouTube($httpClient);

function printVideoFeed($videoFeed, $displayTitle = null)
{
  $count = 1;
  if ($displayTitle === null) {
    $displayTitle = $videoFeed->title->text;
  }
  echo '<h2>' . $displayTitle . "</h2>\n";
  echo "<pre>\n";
  foreach ($videoFeed as $videoEntry) {
    echo 'Entry # ' . $count . "\n";
    printVideoEntry($videoEntry);
    echo "\n";
    $count++;
  }
  echo "</pre>\n";
}

function printVideoEntry($videoEntry)
{
  // the videoEntry object contains many helper functions
  // that access the underlying mediaGroup object
  echo 'Video: ' . $videoEntry->getVideoTitle() . "\n";
  echo 'Video ID: ' . $videoEntry->getVideoId() . "\n";
  echo 'Updated: ' . $videoEntry->getUpdated() . "\n";
  echo 'Description: ' . $videoEntry->getVideoDescription() . "\n";
  echo 'Category: ' . $videoEntry->getVideoCategory() . "\n";
  echo 'Tags: ' . implode(', ', $videoEntry->getVideoTags()) . "\n";
  echo 'Watch page: ' . $videoEntry->getVideoWatchPageUrl() . "\n";
  echo 'Flash Player Url: ' . $videoEntry->getFlashPlayerUrl() . "\n";
  echo 'Duration: ' . $videoEntry->getVideoDuration() . "\n";
  echo 'View count: ' . $videoEntry->getVideoViewCount() . "\n";
  echo 'Rating: ' . $videoEntry->getVideoRatingInfo() . "\n";
  echo 'Geo Location: ' . $videoEntry->getVideoGeoLocation() . "\n";
  echo 'Recorded on: ' . $videoEntry->getVideoRecorded() . "\n";

  // see the paragraph above this function for more information on the
  // 'mediaGroup' object. in the following code, we use the mediaGroup
  // object directly to retrieve its 'Mobile RTSP link' child
  foreach ($videoEntry->mediaGroup->content as $content) {
    if ($content->type === 'video/3gpp') {
      echo 'Mobile RTSP link: ' . $content->url . "\n";
    }
  }

  echo "Thumbnails:\n";
  $videoThumbnails = $videoEntry->getVideoThumbnails();

  foreach ($videoThumbnails as $videoThumbnail) {
    echo $videoThumbnail['time'] . ' - ' . $videoThumbnail['url'];
    echo ' height=' . $videoThumbnail['height'];
    echo ' width=' . $videoThumbnail['width'] . "\n";
  }
}
//printVideoFeed($yt->getUserUploads('default'));
//exit();
// create a new VideoEntry object
$myVideoEntry = new Zend_Gdata_YouTube_VideoEntry();
// create a new Zend_Gdata_App_MediaFileSource object
$filesource = $yt->newMediaFileSource('cc.avi');
$filesource->setContentType('video/avi');
// set slug header
$filesource->setSlug('cc.avi');
// add the filesource to the video entry
$myVideoEntry->setMediaSource($filesource);
$myVideoEntry->setVideoTitle('My Test Movie');
$myVideoEntry->setVideoDescription('My Test Movie');
// The category must be a valid YouTube category!
$myVideoEntry->setVideoCategory('Autos');
// Set keywords. Please note that this must be a comma-separated string
// and that individual keywords cannot contain whitespace
$myVideoEntry->setVideoTags('cars, funny');
// set some developer tags -- this is optional
// (see Searching by Developer Tags for more details)
$myVideoEntry->setVideoDeveloperTags(array('mydevtag', 'anotherdevtag'));
// set the video's location -- this is also optional
$yt->registerPackage('Zend_Gdata_Geo');
$yt->registerPackage('Zend_Gdata_Geo_Extension');
$where = $yt->newGeoRssWhere();
$position = $yt->newGmlPos('37.0 -122.0');
$where->point = $yt->newGmlPoint($position);
$myVideoEntry->setWhere($where);
// upload URI for the currently authenticated user
$uploadUrl = 'http://uploads.gdata.youtube.com/feeds/api/users/default/uploads';
// try to upload the video, catching a Zend_Gdata_App_HttpException,
// if available, or just a regular Zend_Gdata_App_Exception

[google-appengine] Re: Unable to upload new application and unable to access console

2009-10-10 Thread Jeff S (Google)
Hi Jacob,

Is your email account a Google Apps account? Since the error states that the
password doesn't match, I'm wondering if you could have a Google Apps
account and a Google Account with the same email address but different
passwords. Does that seem like a possibility?

Thank you,

Jeff

On Wed, Oct 7, 2009 at 1:57 PM, jacobg ja...@cookiejunkie.com wrote:


 Hi,

 Earlier today I created an application. I went through the SMS
 verification process and entered the ID and other application details.
 When I hit the submit button it took me back to the create page
 (https://appengine.google.com/start) without giving me any feedback. I
 tried accessing the admin console, but it always took me back to the
 Create Application wizard.

 I looked around on the forums, and people mentioned that I needed to
 upload the application to GAE in order to see it in the console. When
 I tried to do that, it asked me for the account credentials three
 times and then said com.google.appengine.tools.admin.ServerConnection
 $ClientLoginException: Email jacobatcookiejunkiedotcom and
 password do not match. I tried this three times without success. I
 also verified the credentials by logging in and out of my Google
 Account several times and it worked fine.

 When I try to go through the Create Application wizard again it asks
 me to verify my mobile number, but then says it has already been used!
 So now I'm stuck, unable to access my new application or to create new
 ones. Please help me!

 

