[09:01am] scudder: marzia_google and I are here from Google
[09:02am] scudder is now known as scudder_google.
[09:02am] marzia_google: good morning
[09:02am] tonyarkles: hi!
[09:02am] Cyndre: Morning, and thank you for your time
[09:02am] DineshV: hi
[09:02am] martimartino: evening!
[09:02am] SDragon: 1st question: are the naked domain issues resolved
/ are there any plans to officially support naked domains in the near
future?
[09:03am] jeverling: hi everybody!
[09:03am] marzia_google: though we would like to eventually support
naked domains
[09:03am] marzia_google: it's not something that is the highest
priority, they were taken away so that we could do effective load
balancing on requests
[09:04am] marzia_google: so the preferred method of handling naked
domains is to set up a 302
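The 302 redirect marzia suggests for naked domains can be sketched as a tiny framework-agnostic WSGI app (a minimal sketch; `example.com` is a hypothetical domain, not one from the chat):

```python
def redirect_naked_domain(environ, start_response):
    """Redirect naked-domain requests to the www subdomain with a 302.

    A minimal WSGI sketch of the workaround described in the chat;
    'example.com' is a hypothetical placeholder domain.
    """
    host = environ.get('HTTP_HOST', '')
    path = environ.get('PATH_INFO', '/')
    if host == 'example.com':
        # Naked domain: send the client to the www host instead.
        start_response('302 Found',
                       [('Location', 'http://www.example.com' + path)])
        return [b'']
    # Already on a supported host: serve normally.
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'ok']
```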
[09:04am] VM: hi!
[09:05am] Cyndre: when dealing with multiple pages is it better to
handle them in one script, or a script per page?
[09:05am] SDragon: is geographical load balancing implemented? i.e.
requests from europe handled within europe, etc; and if so, are there
any data-synchronization issues we should pay attention to?
[09:06am] marzia_google: hmmmm, i don't have a concrete answer for
that question (re: one or multiple scripts)
[09:06am] marzia_google: certainly at some point multiple scripts are necessary
[09:06am] prasannaj: hi
[09:06am] DineshV: we have been developing an app development platform
on top of gae (along the lines of coghead and longjump) for over 6
months now.
[09:07am] scudder_google: SDragon: at the moment app engine doesn't do
very much geographic load balancing
[09:07am] martimartino: I've not found much in the docs about
enforcing unique properties etc. what's the best way to enforce data
constraints - can you do them at the model level, or should checks
just be done when creating/saving entities?
[09:08am] DineshV: while trying to improve performance, we found out
memcache and queries are not performant as we actually expected it to
be.
[09:08am] DineshV: we used cprofile to profile the code in gae
[09:08am] Cyndre: is there an issue with storing gigs of data in the
datastore?  asking because it's the lowest set constraint, 0.5 GB
[09:08am] marzia_google: concerning profiling, if you could elaborate
that would be helpful
[09:09am] scudder_google: martimartino: checking your data to fit
constraints on write is generally the preferred model
[09:09am] marzia_google: concerning storage, there is no concern, 1/2
gig is the free quota
[09:09am] marzia_google: which currently we can bump
[09:09am] martimartino: scudder: ok, thanks for that!
[09:09am] scudder_google: what kinds of constraints are you interested in?
[09:09am] marzia_google: and soon you should be able to pay for more
[09:09am] Cyndre: any idea how soon?  that was my next question
[09:09am] SDragon: which of the upcoming runtime languages are in the
most mature phase / expected to be released within the current release
window?
[09:10am] martimartino: marzia_google: Mainly enforcing unique groups
of properties
[09:10am] marzia_google: before the end of march, as we've mentioned
paying is currently in the trusted tester phase
[09:11am] scudder_google: SDragon: no comment
[09:11am] DineshV: marzia_google: we are creating close to 400 objects
in a single transaction (to create a complete page). we use memcache
as a second level cache
[09:11am] Cyndre: marzia_google: Thanks, I feel much better about
using GAE now.  ty
[09:11am] DineshV: and we do maintain a first level cache
[09:11am] marzia_google: yes, you will not be able to create 400
objects reliably in a single transaction
[09:12am] scudder_google: martimartino: so you are saying that you
want to ensure that a certain property in an entity is unique across
all entities of that kind?
[09:12am] marzia_google: the datastore is optimized for quickly
reading information, writes require a bit more work
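Since very large transactions like the 400-object case above are unreliable, one common workaround (my assumption, not spelled out by the Googlers here) is to batch the writes outside a single transaction. A plain-Python sketch, with `save_batch` standing in for a hypothetical batch put such as `db.put(list_of_entities)`:

```python
def chunked(items, size):
    """Yield successive fixed-size chunks of a list."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def save_all(entities, save_batch, batch_size=50):
    """Write entities in small batches rather than one huge transaction.

    save_batch is a stand-in for a batch write (e.g. db.put on a list);
    batch_size=50 is an illustrative value, not an App Engine limit.
    """
    for batch in chunked(entities, batch_size):
        save_batch(batch)
```

Note the batches are not atomic with respect to each other; if the whole page-creation must be all-or-nothing, this trade-off matters.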
[09:12am] noodlez_: work queues/long running processes are not on the
roadmap, does this means they won't be available anytime soon (at
least not in Q1)?
[09:12am] Cyndre: Ive noticed that with index building - I had 4 items
in it and it took 7 minutes to order it......
[09:13am] VM: is it advisable to use memcache for caching data on top
of Bigtable?
[09:13am] marzia_google: we are working on updating the roadmap with
additional information for the upcoming years, while long running
processes are something we are working on, they aren't slated for Q1
[09:14am] martimartino: scudder: basically I want to do the equivalent
of unique_together in a rdb
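A common datastore idiom for a `unique_together` constraint (an assumption on my part, not stated by the Googlers here) is to derive the entity's key name from the unique property group, so a second insert with the same values collides. A plain-Python sketch, with a dict standing in for the datastore and the check done on write, as scudder recommends:

```python
class UniqueViolation(Exception):
    """Raised when the unique property group already exists."""

def composite_key(*parts):
    # Join the 'unique together' property values into one key name.
    return '|'.join(str(p) for p in parts)

def insert_unique(store, entity, *unique_values):
    """Insert entity only if no entity with the same key name exists.

    store is a dict standing in for the datastore; on App Engine this
    would be a transactional get-then-put keyed on a key_name.
    """
    key = composite_key(*unique_values)
    if key in store:
        raise UniqueViolation(key)
    store[key] = entity
    return key
```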
[09:14am] marzia_google: re: index building, it's an offline process
so it can't necessarily be reliably timed
[09:14am] marzia_google: i would read 'How Index Building Works'
[09:14am] marzia_google: to get a better idea of what happens
[09:14am] marzia_google: or see ryan+appengine's comments in the
groups, which are the most detailed with regard to the datastore
[09:14am] DineshV: marzia_google: I am trying to get profile data. but
the profile data that comes with local sdk and gae points to 2
different issues.
[09:15am] muthu_ joined the chat room.
[09:15am] mib_wuv62gu8 joined the chat room.
[09:15am] marzia_google: VM: I'm not sure what you mean by caching
data on top of big table
[09:15am] marzia_google: profiling data is only reliable in production
[09:15am] dankles joined the chat room.
[09:15am] marzia_google: since the SDKs performance characteristics
differ drastically from production
[09:16am] VM: marzia: it's something like building a second level cache
[09:16am] martimartino: the other sql trick I would like to use is
selecting on a regexp, so that I can match the start of any word in a
sentence. Any tips on how to do this?
[09:16am] noodlez_: it seems not all requests marked as "high cpu" in
the logs are counted as such in the quota. can you give us some
intuition about which ones are counted? is there a different quota for
high datastore cpu requests?
[09:17am] dankles: i 2nd noodlez_ question
[09:17am] VM: marzia: so that I don't need to hit the database every
time, I will get it directly from memcache (second level cache)
[09:17am] tonyarkles: re noodlez_ comment: has anything changed
recently for how high cpu requests are counted?
[09:18am] scudder_google: martimartino: one approach you could use, is
to specify a property to search on which has the first 5 characters in
the sentence
[09:18am] marzia_google: VM: yes, then this is definitely something
you should be using memcache for
[09:18am] marzia_google: concerning high CPU warnings
[09:18am] scudder_google: martimartino: then you could perform an
exact match on that
[09:18am] Cyndre: noodlez_ speaking of which, in the dashboard my
script says I'm about to run out of time, but only used 1.84 hours of
cpu time
[09:18am] scudder_google: while the larger glob of text is not indexed
[09:19am] marzia_google: the requests that are counted toward high CPU
quota are those that say 'This request uses xx times the amount of
CPU'
[09:19am] ryan_google joined the chat room.
[09:19am] DineshV: yes. in production the profiler points to proxy
wait, and as per your post, means waiting for some api response
[09:19am] martimartino: scudder: that sounds like a nice trick! What
I'm trying to do is a live search kind of thing though, so each query,
the number of characters will increase...
[09:19am] moraes: how long is an application kept in cache before the
server reloads it?
[09:19am] DineshV: out of 3 seconds, proxy wait is close to 2.1sec
[09:20am] DineshV: and initially we suspected our code
[09:20am] VM: marzia: but when I deployed it to the GAE cloud, it
performs very slowly; the more I call the memcache api, the more it
says apiproxy:wait
[09:20am] DineshV: but, when commenting out memcache and queries in
our app, it died down to .09
[09:20am] ryan_google: moraes: are you worried about delay between
deploying a version and seeing it running live?
[09:21am] warreninaustinte: i have a few questions about GData feed in
App Engine
[09:21am] sumit joined the chat room.
[09:21am] dan_google joined the chat room.
[09:21am] scudder_google: warreninaustinte: I can help with that
[09:21am] marzia_google: well retrieving things from memcache will
result in api_proxy wait time, but should reduce the overall time to
complete a request
[09:22am] marzia_google: it should generally be much faster than
retrieving data from the datastore
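The second-level-cache pattern VM describes is usually written as a get-then-fill ("cache-aside") loop. A sketch with a plain dict standing in for the memcache client and `fetch` standing in for a datastore read (both are stand-ins, not App Engine APIs):

```python
_cache = {}  # stands in for the memcache service

def get_or_fetch(key, fetch):
    """Return the cached value for key, filling the cache on a miss.

    fetch is a stand-in for the slow path (e.g. a datastore query).
    """
    value = _cache.get(key)
    if value is None:
        value = fetch()       # slow path: hit the datastore
        _cache[key] = value   # fill the cache for next time
    return value
```

As marzia notes, each cache lookup still costs an RPC (apiproxy wait) in production, but the overall request should be faster than querying the datastore every time.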
[09:22am] moraes: ryan_google: no, i'm actually thinking if i should
make some initialization stuff less heavy if it reloads too often
[09:22am] ryan_google: ah, i see
[09:22am] ryan_google: i assume you're using main() runtime caching
[09:22am] martimartino: scudder: If I supply the query "fr" I would
want to match a sentence containing any words starting with "fr", eg:
"my brain is fried" and "my best friend" would be matches
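scudder's first-N-characters property and this live-search case are both commonly served by the prefix-range trick (a groups-era idiom, not spelled out in this chat): index each word and query `word >= prefix AND word < prefix + u'\ufffd'`. A pure-Python sketch of the range logic:

```python
def prefix_range(prefix):
    # The datastore idiom: prop >= prefix AND prop < prefix + u'\ufffd'
    # (u'\ufffd' sorts after every character that can follow the prefix).
    return prefix, prefix + u'\ufffd'

def words_matching(sentence, prefix):
    """Return the words in sentence that start with prefix."""
    lo, hi = prefix_range(prefix)
    return [w for w in sentence.split() if lo <= w < hi]
```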
[09:22am] moraes: ryan_google: , yes
[09:22am] DineshV: is there any immediate plan to improve the
performance of memcache
[09:23am] ryan_google: there's no hard and fast rule, and we don't
publish certain details of the runtime architecture
[09:23am] dankles: this was mentioned in a recent post:  basic DS
write operations seem to take up many 100s of ms-cpu, often ~1000 for
a single write.  is this going to be optimized at all, or should we
budget to pay for this?
[09:23am] VM: marzia: compared to a direct hit on the datastore,
memcache is better. But it still spends more time in apiproxy.wait
[09:23am] whaley joined the chat room.
[09:24am] ryan_google: but in general, the ratio of cold to warm
requests is way low, so don't worry too much about expensive
initialization if it's runtime cached
[09:24am] scudder_google: martimartino: have you looked at SearchableEntity
[09:24am] Indra83: Hi, I have user entities and each of them has a
score which keeps changing.. now i want to compute their ranks based
on the score... what's the best way to implement this?
[09:24am] ryan_google: dankles: we're still tuning the cost of all
operations, including datastore, so you can definitely expect to see
improvements there
[09:24am] moraes: ok, good. thanks.
[09:24am] martimartino: scudder: no. Thank you, I'll check it out!
[09:24am] DineshV: marzia:
http://jquerysample.appspot.com/engine/withmetadata for profiling data
[09:24am] ryan_google: dankles: to be clear, though, that only means
we're working on getting them to be more accurate, not necessarily
that they'll all drop
[09:24am] scudder_google: it isn't a perfect solution but it is a good
start to look for an approach that might fit what you need
[09:25am] DineshV: please do view source for clean formatted data
[09:25am] marzia_google: so, i would say, generally we are happy with
the performance of memcache
[09:25am] scudder_google:
http://code.google.com/p/googleappengine/source/browse/trunk/google/appengine/ext/search/__init__.py
[09:25am] marzia_google: and while it is the case that there is
apiproxy.wait time involved
[09:25am] marzia_google: it shouldn't be too concerning
[09:26am] ryan_google: indra83: good question! that's a difficult
problem. we actually have a ranking implementation internally that
would do what you want, and we definitely want to open source it
[09:26am] ryan_google: i'll re-ping the people who wrote it
[09:26am] dankles: ryan_google: what do you mean by more accurate?
[09:26am] DineshV: we are more worried about performance, as in app
dev platform, performance is one of the key driving points.
[09:26am] martimartino: scudder: I'll defo take a look. thanks!
[09:27am] ryan_google: dankles: i see, you really were asking if they'd go down
[09:27am] Indra83: ryan_google: that would be great.. if i get it
before i reach 1000 users i dont have to rewrite my code
[09:27am] ryan_google: the short answer is, disk seeks are expensive,
and disk writes are expensive, and puts incur lots of both
[09:28am] ryan_google: so while you may see changes in the cost of
puts, possibly big changes, we can't change that fundamental rule of
thumb
[09:28am] dankles: ryan_google: if a request is handled in <200ms, it
just seems odd that there was 1000ms-cpu used.  is this expected?
[09:28am] dankles: that would imply that 5 servers are churning away
handling the request in parallel
[09:28am] ryan_google: dankles: sure! it's a distributed system,
particularly the datastore so many machines may have worked on your
request in parallel
[09:28am] marzia_google: we are also very concerned with performance,
and i would always recommend using memcache
[09:29am] dankles: so is it a fair guideline that a page doesn't write
more than one or two entities?
[09:30am] DineshV: marzia_google: is there any timeline for improving
the performance of memcache and query
[09:30am] DineshV: marzia_google: otherwise we will think about
changing our design to accommodate
[09:30am] ryan_google: dankles: that really depends on the app and the request
[09:31am] ryan_google: some requests for some apps will simply need to
write lots of entities
[09:31am] marzia_google: no, we don't have any plans to alter the
performance of memcache
[09:31am] ryan_google: but yes, in general, the read/write ratio of
requests should be high, and of the requests that do writes, if you
can keep as many as possible doing as few writes as possible, that's
always good
[09:31am] ryan_google: but again, that all varies widely by app
[09:31am] dankles: ryan_google: and in those cases, are you suggesting
that they would just go over quota and hope there weren't many?
[09:32am] Adhir left the chat room. (Connection timed out)
[09:32am] ryan_google: like cpu cost, we're also tweaking quotas right now
[09:32am] ryan_google: specifically the high cpu quota
[09:33am] ryan_google: (we're definitely aware that it's a pain point,
and we don't like that any more than you all do)
[09:33am] dankles: i'm just finding that for even small entities, it's
often ~1000ms-cpu for a write, which has caused me to change a lot of
my logic around in ways that make the code less maintainable
[09:33am] crewnerd: +1 on dankles concerns
[09:33am] ryan_google: so you can expect that the high cpu quota won't
hurt nearly as much in the future as it does now
[09:33am] ryan_google: understood. writes will always cost a
noticeable amount, since they incur a number of disk seeks and writes
for the indices
[09:33am] ryan_google: that won't change
[09:33am] Indra83: one more thing.. i keep getting "This request used
a high amount of CPU, and was roughly 1.3..." warning messages but
they dont show up as High CPU requests... are both of them different?
[09:34am] scudder_google: also, the High CPU limit applies to runtime
CPU, and datastore CPU usage is not counted towards that limit
[09:34am] ryan_google: +1
[09:34am] dankles: scudder_google: what what? wait, really?
[09:34am] Cyndre: each request is only taking 30 devices and storing
the data - should I look at using more requests and fewer devices per
request?
[09:35am] DineshV: marzia_google:we believe what we develop is very
significant, and probably the biggest app on gae. we would like some
guidance, as we change design quite often after understanding some
difficulties.
[09:35am] scudder_google: dankles: yup, see this FAQ item
http://code.google.com/appengine/kb/general.html#highcpu
[09:36am] DineshV: we already spoke to rafe kaplan when he was in
india. he is also very interested in what we are doing here.
[09:36am] dan_google: Datastore CPU quota counts toward CPU Time
quota, but not toward High CPU Requests.
[09:36am] marzia_google: the groups is the best source of advice
concerning app engine
[09:36am] scudder_google: Indra83: when you say that they don't show
up, where are you looking for them? in the logs, quota dashboard...
[09:36am] dankles: scudder_google: ok i did know that, sorry.  what's
confusing is that there isn't a breakout of these times anywhere.
[09:36am] marzia_google:
http://groups.google.com/group/google-appengine
[09:37am] dankles: and you get these red flag warnings that don't
really mean anything
[09:37am] dankles: just bc the DS used up CPU
[09:37am] VM: is there any profiling tool to profile our app running
in GAE cloud
[09:37am] Indra83: scudder_google: the warning message comes up in the
logs.. and it's quite possible as I'm parsing some HTML... but it
doesn't show up in the 'Quota Details' in the dashboard
[09:38am] ianbicking joined the chat room.
[09:38am] scudder_google: Indra83: ah ok, the dashboard may not show
them if they are infrequent enough
[09:38am] tonyarkles: VM: cprofile
[09:38am] dankles: Indra83: those credits also get refreshed every
minute, maybe you're not checking fast enough?
[09:38am] scudder_google: since the High CPU Request limit refills at
a rate of 2 credits per minute
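The refill behavior scudder describes is a token bucket: credits accumulate at a fixed rate up to a cap. A sketch of the mechanics (refill_per_min=2 matches the rate quoted in the chat; the cap is a made-up illustrative value, not a documented quota):

```python
class CreditBucket:
    """Token-bucket model of the High CPU Request credit refill.

    The 2-per-minute refill rate comes from the chat; the cap is a
    hypothetical number chosen only for illustration.
    """
    def __init__(self, cap=30, refill_per_min=2):
        self.cap = cap
        self.refill_per_min = refill_per_min
        self.credits = cap

    def tick(self, minutes):
        # Refill at the fixed rate, never exceeding the cap.
        self.credits = min(self.cap,
                           self.credits + self.refill_per_min * minutes)

    def spend(self):
        # Each high-CPU request consumes one credit if available.
        if self.credits < 1:
            return False
        self.credits -= 1
        return True
```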
[09:39am] ryan_google: vm: also
http://code.google.com/appengine/kb/commontasks.html#profiling
[09:40am] dankles: btw the faq entry
http://code.google.com/appengine/kb/general.html#highcpu is
info-dense, but could really use a re-writing for clarity
[09:40am] pingooo joined the chat room.
[09:40am] dankles: maybe a pie chart or something
[09:40am] dankles: venn diagram
[09:40am] Cyndre: on the dashboard under current load I am getting
warnings that the script uses a high amount of cpu and may soon
exceed its quota - is this to be ignored, or does it mean that I
should make my scripts run faster?
[09:41am] dankles: like "where do my mc-cpu go?" illustrated
[09:41am] noodlez_: scudder_google: I think the High CPU requests
warnings in the logs count the datastore CPU time as well, and the
quota page shows them correctly
[09:42am] marzia_google: if you are getting warnings that you will run
out of quota, then you should definitely look at optimizing those
requests
[09:42am] DineshV: marzia_google: is the performance drop in memcache
because of the number of objects that are stored in memcache?
[09:43am] tav_ joined the chat room.
[09:43am] Indra83: dankles: i tried to check again but cant reproduce them...
[09:43am] scudder_google: noodlez_: yes the log entries for CPU
warnings contain a couple of different numbers, one of which points to
runtime only (x.y times  over the...) other numbers tend to include
the datastore CPU usage
[09:43am] DineshV: and what is the maximum number of objects that can
be created in a single transaction?
[09:43am] rwilliamz joined the chat room.
[09:43am] Indra83: have you guys increased the CPU threshold before
you show log messages?
[09:43am] Cyndre: marzia_google: I am submitting 30 devices with 8
pieces of data each, and parsing with this -
http://pastebin.com/d4f9eeab9  would I be better off sending 3 times
as many requests with only 10 devices each?
[09:44am] ryan_google: cyndre: we're not sure we understand what you
mean by "devices"
[09:45am] scudder_google: Indra83: yes we did
[09:45am] marzia_google: the more items you store in memcache the
longer it should take, but it shouldn't take that long in general
[09:45am] scudder_google: we have increased the threshold for what is
counted as a high CPU request
[09:45am] martimartino: another bit of advice guys if poss! my django
app stored quite a lot of stuff in the session. I've not found much
about sessions yet on appengine, but I guess you'd advise putting as
much as possible into cookies?
[09:45am] marzia_google: there is no hard and fast rule as to the
number of items that can be created in a transaction
[09:45am] Cyndre: each device is a cpe that I have polled data off of
and then submitted to GAE
[09:45am] marzia_google: 'it depends on the size and shape of your data'
[09:45am] ryan_google: dineshv: there's no limit on entities put per
transaction. (i think i answered that on the group...?)
[09:45am] Indra83: scudder_google: Thank you!
[09:45am] ryan_google: cyndre: so each device is...an entity? (we
don't know your app, so you'll have better luck if you use generic app
engine terminology.)
[09:46am] ericsk: hello, has one of you ever heard about GAEO
http://doc.gaeo.org/ ?
[09:46am] dankles: ericsk: cool icon
[09:47am] ryan_google: gaeo looks cool, thanks for making it!
[09:47am] ericsk: I want to know how you think about it
[09:47am] Cyndre: okay, I have 30 different items with 8 pieces of
data on each, I then combine all 30 into a post, submit to gae, parse
and store it
[09:47am] DineshV: marzia_google: all are expando models
[09:47am] ericsk: ryan_google: thanks
[09:47am] ryan_google: dineshv: expando vs model doesn't really matter
[09:48am] ryan_google: cyndre: oh, i see, you're asking if splitting
them into multiple requests will help with the cpu warning you're
seeing
[09:48am] Cyndre: yea
[09:48am] DineshV: marzia_google: objects are not very small. close to
20 string/double props.
[09:48am] ryan_google: yes, it might, if those are the specific
requests that are causing the warning
[09:48am] ryan_google: dineshv: are you seeing a problem, ie a
timeout? or are you just curious?
[09:49am] DineshV: no timeout. but there is a significant diff in
performance when using mock and memcache/gfs
[09:49am] rwilliamz_ joined the chat room.
[09:49am] ryan_google: dineshv: by mock, you mean the dev_appserver
[09:50am] Cyndre: are you guys planning an api that I can just tie
into remotely to dump data into the datastore? (this way I only need
to submit, can parse on client side)
[09:50am] ryan_google: dineshv: that will always be true. they're
nothing like each other
[09:50am] tav left the chat room. (Read error: 110 (Connection timed out))
[09:50am] ryan_google: cyndre: we're definitely working on a better
bulk uploader, but nothing quite like what you describe
[09:51am] dankles: i'm using JSONPropertys (on top of TextProperty),
which means lots of parsing in/out of DS.  is there any way to
optimize this, eg using faster native json libs, etc?
[09:51am] Cyndre: thanks - going to have to check out the bulk
uploader - currently submitting 1700 items with 8 pieces of data
hourly, hopefully soon to be 45k items
[09:51am] dankles: or i was thinking of switching to ReprProperty and
storing repr(obj) and evaling on the way out... maybe faster.
[09:52am] ryan_google: dankles: sounds like you might know more about
optimizing json parsing than us
[09:53am] ericsk: dankles: simplejson ?
[09:53am] dankles: ryan_google: well i'm just using
django.utils.simplejson .. is that native code?
[09:53am] scudder_google: dankles: we don't have immediate plans to
use a C JSON library, for now it is pure Python, so run with whichever
you find to be faster
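For what dankles describes, either serializer can back a text property; a sketch of the two roundtrips using the stdlib json module (shown with `ast.literal_eval` rather than bare `eval`, which is safer for the repr approach; neither is presented as the chat's recommendation):

```python
import ast
import json

def to_text_json(obj):
    # JSON path: portable, but pure-Python parsing here.
    return json.dumps(obj, sort_keys=True)

def from_text_json(text):
    return json.loads(text)

def to_text_repr(obj):
    # repr path: Python-only, parsed by the native compiler machinery.
    return repr(obj)

def from_text_repr(text):
    # literal_eval only accepts Python literals, unlike eval().
    return ast.literal_eval(text)
```

Whichever is faster in production is best settled by profiling, as the Googlers suggest.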
[09:54am] moraes: hmmm. c json would be nice. json is too commonly used.
[09:54am] DineshV: ryan_google: when using memcache, there is
significant api proxy wait time, which is not the case when we mock
memcache with our own first level cache.
[09:54am] dankles: i'll probably profile the ReprProperty approach
then... it feels like it would be faster, as that's native
parsing/dumping.
[09:54am] DineshV: obviously we can't use first level cache as
alternative to memcache
[09:54am] ryan_google: dineshv: again, assuming by mock you mean
dev_appserver, there's no meaningful comparison between dev_appserver
and prod, at all
[09:55am] ryan_google: they're entirely different
[09:55am] DineshV: mock is not dev_appserver
[09:55am] DineshV: both are in prod
[09:55am] DineshV: mock I mean a datastructure we use as first level cache
[09:55am] DineshV: in prod
[09:55am] ryan_google: so, "mock" is your own in-runtime cache?
[09:55am] ryan_google: ah, ok then
[09:55am] ryan_google: sure, that makes sense. with the real memcache,
you have to make an RPC to another machine, which is running a
memcache server
[09:56am] ryan_google: as with any app engine API
[09:56am] DineshV: understand that
[09:56am] ryan_google: your mock is running in the same process, so it
doesn't incur the network overhead
[09:56am] dankles: google guys: as the hour is winding down, is there
anything cool / new / special about GAE you want to share with us?
[09:56am] sumit: there are some funds around some of the other
platforms like Facebook, Salesforce.com to encourage development. Is
there / will there be one around AppEngine?
[09:56am] DineshV: so to do with queries. but, the difference is very
significant
[09:56am] Cyndre: I'd just like to thank you guys for your time,
patience, and gae   Thanks for the informative chat.
[09:56am] moraes: +1 for specials
[09:57am] ryan_google: dineshv: sure! a memory lookup will be orders
of magnitude faster than a network round trip
[09:57am] martimartino: well said Cyndre, thanks ladies and gents!
[09:57am] tonyarkles: thanks a pile everyone!  I really appreciate
that you take the time to sit down with us
[09:57am] rizumu joined the chat room.
[09:57am] ryan_google: sumit: good question! i don't think we have any
product or business people on the chat, so we probably don't know
[09:58am] ryan_google: dankles: one thing i would mention is remote_api
[09:58am] ryan_google: a new feature
[09:58am] ryan_google: in the sdk
[09:58am] ryan_google: we're hoping to put up an article soon
[09:58am] ryan_google: it's basically an api call proxy
[09:58am] nor3: how do i always manage to walk in on these dev chats
at the end of them?
[09:58am] aa_: yeah has someone logged it?
[09:59am] ryan_google: you can write a python script that uses the app
engine APIs, e.g. datastore, memcache, users, etc, that you run on
your desktop
[09:59am] marzia_google: it's logged, and we'll publish the transcript
[09:59am] scudder_google: aa: yes, and it will be posted on the discussion group
[09:59am] ryan_google: remote_api proxies those API calls and runs
them against your prod app's data, memcache, etc
[09:59am] scudder_google: ha, marzia beat me to it
[09:59am] marzia_google: also, the schedule for these chats is on the group
[09:59am] aa_: ok, thanks, sorry, this api call proxy thing sounds awesome
[09:59am] ericsk: ryan_google: something like RPC ?
[09:59am] ryan_google: one interesting example use case for
remote_api, combined with a __key__ query, is mapping over your
datastore and doing something for each entity
[10:00am] ryan_google: you don't have to deal with splitting up the
entities across requests, and keeping track of them, since the python
script is running on your local machine
[10:00am] ryan_google: so it can run for hours, or even days, if necessary
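The __key__-query mapping ryan describes is the classic paginate-by-key loop: fetch a batch ordered by key, remember the last key seen, then query for keys greater than it. A pure-Python sketch over an in-memory store (the dict and `process` callback are stand-ins, not remote_api calls):

```python
def map_over_entities(store, process, batch_size=100):
    """Visit every entity in key order, one batch at a time.

    store: dict of key -> entity, standing in for the datastore.
    Mirrors the remote_api idiom: WHERE __key__ > last ORDER BY __key__.
    """
    last_key = None
    while True:
        # Next batch of keys strictly greater than the last one seen.
        keys = sorted(k for k in store if last_key is None or k > last_key)
        batch = keys[:batch_size]
        if not batch:
            break
        for key in batch:
            process(key, store[key])
        last_key = batch[-1]
```

Because the loop restarts from `last_key` each round, the script can be interrupted and resumed, which is what makes the run-for-days usage practical.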
[10:00am] crewnerd: ah - a poor man's long-running process?
[10:00am] ryan_google: yup
[10:00am] noodlez_: remote_api sounds great
[10:00am] dankles: sounds nice
[10:00am] ryan_google: using your own machine
[10:00am] aa_: self-hosted long-running process
[10:00am] ryan_google: we'll get an article up soon, but in the
meantime, feel free to take a look in the sdk. it's in
google/appengine/ext/remote_api/.
[10:00am] ericsk: cool
[10:01am] moraes: or a external server etc. interesting.
[10:01am] ryan_google: (fair warning, i *think* it's all there and
working in the 1.1.9 sdk, but i'm not entirely sure. if not, it'll
definitely be there and work in the next release.)
[10:01am] dankles: ok guys thank you much.  remember thru the bitching
and questions, we still appreciate all the hard work you put in!
[10:01am] Indra83: neat.. i can do all CPU intensive stuff on my machine
[10:01am] ryan_google: aww, thanks, warm fuzzies!
[10:01am] ryan_google: bye all
[10:02am] VM: thanks
[10:02am] noodlez_: thanks a lot guys! you are doing an awesome job with GAE!
[10:02am] DineshV: thanks everybody. thanks for your time
[10:02am] crewnerd left the chat room. ("ChatZilla 0.9.84 [Firefox
3.0.5/2008120121]")
[10:02am] moraes: bye, and thanks!
[10:02am] marzia_google: thanks everyone for coming!
[10:02am] TooAngel1 left the chat room. (Success)
[10:02am] VM: bye all
[10:02am] Indra83: thanks!
[10:02am] ericsk: thanks!
[10:02am] angerman left the chat room.
[10:02am] aa_: yes, thanks
[10:02am] scudder_google: thank you all, great questions and
suggestions, happy coding
[10:02am] • aa_ wonders if this remote api thing can be triggered by
appengine itself, I guess using an http get as a trigger would work.
That could be quite fun
