[google-appengine] Re: What are your top three issues in appengine?

2009-10-08 Thread Joe Bowman

Well, here are the reasons I'm not using appengine for my big project,
so I guess these would be my top issues:

1. The datastore timeout issue: we need far more reliability from the
datastore. A timeout on a put can be abstracted away with memcache
(provided the memcache entry lasts long enough for a retry; see the
sketch after this list), but a timeout on a read is something you
really can't work around, other than failing the entire request.

2. Larger file support. In my case, I want users to be able to upload
media such as photos.

3. The option to purchase dedicated memcache space. This would go
along with item 1, and it would also mean that functions such as HTTP
sessions and caching could be handled entirely in memcache. The
problem now is that with memcache shared amongst instances you can't
be sure your data will persist, as it would if you were running your
own environment.
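Here's a rough sketch of what I mean by abstracting a put timeout with
memcache. The model-to-protobuf round trip is the stock db API; the
key name, retry count, and five-minute memcache lifetime are just
illustrative choices.

from google.appengine.api import memcache
from google.appengine.datastore import entity_pb
from google.appengine.ext import db


def reliable_put(entity, stash_key, retries=3):
    """Retry a datastore put on timeout; if every attempt fails, stash the
    serialized entity in memcache so a later request can replay the write."""
    for _ in range(retries):
        try:
            return entity.put()
        except db.Timeout:
            pass
    # Park the encoded entity for up to five minutes and signal the caller.
    memcache.set(stash_key, db.model_to_protobuf(entity).Encode(), time=300)
    return None


def replay_stashed_put(stash_key):
    """Call this from a later request to replay a write that timed out."""
    encoded = memcache.get(stash_key)
    if encoded is not None:
        entity = db.model_from_protobuf(entity_pb.EntityProto(encoded))
        entity.put()
        memcache.delete(stash_key)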



[google-appengine] Re: Why Google AppEngine sucks

2009-09-26 Thread Joe Bowman

Here are my thoughts on the matter, as posted a few weeks ago:

http://joerussbowman.tumblr.com/post/182818817/why-im-dropping-google-appengine-for-my-primary

Basically, it depends on whether or not appengine is the right tool
for the job. If you do a lot of reading and writing to the backend
datastore, as in the case of HTTP sessions, then I'd probably not
recommend it. I believe a higher-profile app will see fewer errors
because it's kept hot most of the time; for low-use apps that see more
cold boots on requests, you really have to be careful.

I will say that since whatever fix happened for the imports issue,
I've seen a little more reliability from my apps, but it hasn't been
long enough (or used enough) for me to really stand behind that
impression. My app, Django using appengine-patch and the gaeutilities
session, was seeing enough problems that even I stopped using it, and
I have since decided to move off of App Engine for that specific
project.



[google-appengine] Re: Timeouts have increased since maintenance on August 18th ?

2009-08-28 Thread Joe Bowman

For the past couple of days I've been seeing lots of datastore
timeouts. The confusing thing is that it seems to be application
specific; it's only one app where I'm seeing the problem, and all the
others appear to be running fine.

On Aug 28, 6:58 am, Sylvain  wrote:
> Hi,
>
> Today, I've checked the log for one of my apps and I've noticed that
> one of my handlers produces a lot of timeouts since the maintenance on
> August 18th.
>
> Just after the message "Datastore writes are temporarily unavailable."
> is gone, a lot of timeouts are raised, and now the number seems to be
> very high.
>
> Did you notice such behavior ?
>
> Regards



[google-appengine] Re: How can I use cron to make something happen every hour on the hour?

2009-05-11 Thread Joe Bowman

Does "every 1 hours" not work?

Or if you need the specific time, "every hour 00"

I'd try those.
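For reference, a cron.yaml entry along these lines is what I mean; the
description and handler URL are placeholders for whatever your job
actually uses.

cron:
- description: hourly job
  url: /tasks/hourly
  schedule: every 1 hours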



On May 10, 5:01 pm, Luke  wrote:
> Yeah I suppose thats only 24*60 = 1440 requests, not bad.
>
> On May 10, 1:51 pm, Sylvain  wrote:
>
> > You can do this :
> > every 1 minutes  (or 2 minutes if you don't need to be very precise)
> > and check the time ?
>
> > if time != 9h (10,11,...):
> >   return
>
> > It is not perfect but it could work.
>
> > Regards ?
>
> > On 10 mai, 21:29, Luke  wrote:
>
> > > I need something to happen every hour on the hour (e.g. 9:00, 10:00,
> > > 11:00) but the cron parser does not appear to support this. I can make
> > > 24 entries of the form every day 00:00, every day 01:00, every day
> > > 02:00 etc. but I'm only allowed 20 cron.yaml entries by google.
> > > Incidentally, its annoying that appcfg.py cron_info doesn't appear to
> > > have the same requirements as appcfg.py update (cron_info didn't
> > > inform me about the 20 entry limitation).
>
> > > Thanks,
> > > Luke



[google-appengine] Re: Still no full-text search? Mystified by the priorities.

2009-04-29 Thread Joe Bowman

What about Yahoo! Boss? You can restrict it to searching a site, and
while it's not documented, it has functionality such as inurl and
inpath which you could use to pull out the specific data you need. The
one trick would be making sure Yahoo crawls the proper paths, but I'm
sure there are ways to get those into their crawler list.
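For what it's worth, here's a rough sketch of calling BOSS from App
Engine via urlfetch. The endpoint, version, and parameter names are
from memory of the 2009-era BOSS v1 API, and YOUR_BOSS_APP_ID plus the
site value are placeholders, so treat all of that as assumptions to
verify against Yahoo's docs.

import logging
import urllib

from django.utils import simplejson  # simplejson ships with the SDK's Django
from google.appengine.api import urlfetch

# Assumed BOSS v1 endpoint and parameters; YOUR_BOSS_APP_ID is a placeholder.
BOSS_URL = ('http://boss.yahooapis.com/ysearch/web/v1/%s'
            '?appid=YOUR_BOSS_APP_ID&sites=%s&format=json')


def site_search(query, site='www.example.com'):
    """Run a site-restricted web search through Yahoo! BOSS."""
    result = urlfetch.fetch(BOSS_URL % (urllib.quote(query), site))
    if result.status_code != 200:
        logging.error('BOSS returned status %d', result.status_code)
        return None
    # The shape of the JSON response is whatever Yahoo documents; this just
    # hands the parsed structure back to the caller.
    return simplejson.loads(result.content)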

On Apr 29, 4:58 pm, "Thomas McKay - www.winebythebar.com"
 wrote:
> I concur. Awkward putting the GAE badge on my homepage and then adding
> caveats to the search fields.
>
> On Apr 29, 4:37 pm, dartdog  wrote:
>
> > I don't get Google's reticence to even comment on this issue and give
> > some guidance as to when and how we might be looking for a solution.
>
> > > > I would think a Search API that leveraged Google's search
> > > > infrastructure would be GAE's killer app.



[google-appengine] Re: Should I take my website somewhere else? - blocked in China

2009-04-07 Thread Joe Bowman

Not to mention that the "threats" consist of actions that were
suggested as an alternative, a suggestion that was then rejected as
unacceptable. So it is quite confusing.

On Apr 7, 2:16 pm, Andy Freeman  wrote:
> > Some user reported a problem and wanted to know if Google had any plan
> > to solve it. That equates to wanting a "guarantee" in your world?  Some
> > kind of twisted world you live in there.
>
> When considering a plan to solve a problem, I think that it's
> reasonable to consider whether said plan will actually solve the
> problem.  Why?  Because if a plan doesn't solve the problem, the
> problem still exists.
>
> I am willing to assume that Google is doing what it can reasonably do
> about this.  The continued complaints suggest that the results of
> those efforts are inadequate.  And, we've seen "threats" regarding
> what will happen if Google doesn't come through.  Maybe those people
> will be satisfied by something short of a guarantee, but 
>
> And, as has been noted, a Google representative posted a solution and
> was ignored.
>
> On Apr 6, 11:33 pm, Andy  wrote:
>
> > > Yes, I do.
>
> > I'm glad you finally learn the word "obligation". Too bad you didn't
> > learn it earlier when you spewed your nonsense that obligation can
> > only come from "laws and contracts".
>
> > Feel free to consult a dictionary first next time when you find
> > yourself once again tempted to use a big word you don't understand.
>
> > > I'm not angry.
>
> > Good for you. Definitely worth reporting back to your anger management
> > counselor
>
> > > I'm merely pointing out that Google's
> > > capabilities in this area are limited, that they need to take their
> > > complaints elsewhere if they want guarantees.
>
> > Who's talking about "guarantees"?
>
> > Some user reported a problem and wanted to know if Google had any plan
> > to solve it. That equates to wanting a "guarantee" in your world? Some
> > kind of twisted world you live in there.
>
> > In fact the only person who even brought up the word "guarantee" is
> > you.
>
> > Do you always argue against your own strawman like that?
>
> > > Do you really believe that Google can honor a promise that a given
> > > site won't be blocked if the Chinese govt wants to block said site?
> > > (Feel free to assume that the site is hosted in China.)
>
> > Who's talking about "promise that a given site won't be blocked" other
> > than you?
>
> > Once again you're the only person to use words like "guarantees" and
> > "promise"
>
> > You must be really busy arguing with your own strawman like that...
>
> > > I merely pointed out that Google can't do as they ask
>
> > And you're the spokesperson of Google? self-appointed?
>
> > This is what the OP asked: "Does Google have a plan for dealing with
> > this?"
> > No different than any other threads that are also about reporting
> > problems and asking for solutions.
>
> > For whatever reason such a simple question bothers you tremendously.
> > To such a degree that you felt compelled to spew nonsense such as
> > "Google can't do as they ask", when in fact you have
> > no standing to speak for Google on what they can or cannot do.
>
> > So the real question is why does the simple question "Does Google have
> > a plan for dealing with this?" bother you so much?



[google-appengine] Re: Should I take my website somewhere else? - blocked in China

2009-04-07 Thread Joe Bowman

http://code.google.com/p/googleappengine/issues/detail?id=1072

On Apr 7, 11:39 am, WallyDD  wrote:
> Just to add some irony to this.
>
> Google is doing some developer days in Beijing and they are going to
> talk about appengine.
> And just to really demonstrate how aware Google is of this entire
> issue they have advertised this on blogspot.com, which is also blocked
> in China:
> http://google-code-updates.blogspot.com/2009/04/google-developer-days...
>
> To answer Andys question.
> Does Google have a plan for dealing with this? I don't think so.
>
> On Apr 6, 10:23 pm, WallyDD  wrote:
>
> > Thanks for the answer Joe.
>
> > I have to agree it is not a turnkey solution and from the look of
> > things people are probably better off giving up on GAE and finding an
> > alternate host. The general feeling I find on the web is that Amazons
> > service is better suited for the international market.
>
> > On Apr 6, 3:59 pm, Joe Bowman  wrote:
>
> > > Get a server and IP that is available in China, but outside of the
> > > chinese firewall. Configure it to proxy you appspot.com domain. It
> > > gets tricky handling cookies and session state and such doing this
> > > though. Not a turnkey solution. Basically all requests to your
> > > appengine application coming from users using the proxy, will be seen
> > > as the proxy machine not the individual client machines. There are
> > > some proxy passthroughs you can do depending on the software you
> > > choose to handle this.
>
> > > Of course you'll have to pay for the bandwidth usage going through the
> > > proxy as well.
>
> > > On Apr 6, 12:35 pm, WallyDD  wrote:
>
> > > > The internet is indeed a funny place.
> > > > I did respond with a question on how to set this up but have received
> > > > no answer?
>
> > > > Any ideas anyone?
>
> > > > On Apr 6, 3:03 am, Paddy Foran  wrote:
>
> > > > > I'd just like to point out how funny it is that people keep banging on
> > > > > for Google to respond, and in their banging on for Google to respond,
> > > > > they missed Google's actual response.
>
> > > > > >> Is there any google staff who is responsible for GAE promotion and
> > > > > >> technology to say something here?
>
> > > > > >> How can I access to my Google Apps via my own domain directly, e.g.
> > > > > >> how can access via mail.my_domain.com instead of mail.google.com/a/
> > > > > >> my_domain.com?
>
> > > > > >One way to address this is to run a proxy server elsewhere, which 
> > > > > >will
> > > > > >allow your site to have it's own unique IP, rather than the shared 
> > > > > >IPs
> > > > > >of Google.
>
> > > > > >-Brett
> > > > > >App Engine Team
>
> > > > > Please note the "App Engine Team" signature. That means Brett (at
> > > > > least claims he) is from Google.
>
> > > > > Poor Brett was ignored, as people clamoured for Brett to comment.
>
> > > > > This is why I love the internet. It amuses me to no end.
>
> > > > > On Apr 6, 12:48 am, Andy Freeman  wrote:
>
> > > > > > > No company is willing to be a pawn in the game of politics between
> > > > > > > Google and China.
>
> > > > > > That sounds reasonable, but what can Google do to stop the Chinese
> > > > > > govt from blocking?
>
> > > > > > (1) Google can't tell the Chinese govt what to do.
>
> > > > > > (2) The Chinese govt appears to be technically competent and 
> > > > > > controls
> > > > > > the relevant connections, both from the outside and from internal
> > > > > > datacenters.
>
> > > > > > (3) Google can propose agreements, but China is a soverign entity 
> > > > > > and
> > > > > > and can do what it pleases wrt internal matters.  (Other posters 
> > > > > > have
> > > > > > suggested that buying dinner for the appropriate official would 
> > > > > > cause
> > > > > > the blocking to go away.  I don't see why the Chinese govt would 
> > > > > > find
> > > > > > such an agreement binding.)
>
> > > > > > Yes, one can argue that Google &q

[google-appengine] Re: Should I take my website somewhere else? - blocked in China

2009-04-06 Thread Joe Bowman

Get a server and IP that is reachable from China but sits outside the
Chinese firewall, and configure it to proxy your appspot.com domain.
It gets tricky handling cookies, session state, and the like this way,
so it's not a turnkey solution. Basically, all requests to your
appengine application coming from users on the proxy will appear to
come from the proxy machine rather than from the individual client
machines. There are some proxy pass-throughs you can do, depending on
the software you choose to handle this.

Of course, you'll also have to pay for the bandwidth usage going
through the proxy.
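If it helps, here's a minimal sketch of the proxy side, assuming nginx
on the outside host; the hostnames are placeholders, and the
X-Forwarded-For header is the kind of pass-through I mean (your app
still has to read and trust it).

# Reverse-proxy sketch (nginx assumed; hostnames are placeholders).
# Runs on a host reachable from inside China and forwards everything
# to the App Engine app.
server {
    listen 80;
    server_name cn.example.com;

    location / {
        proxy_pass http://your-app-id.appspot.com;
        # App Engine routes on the Host header, so set it explicitly.
        proxy_set_header Host your-app-id.appspot.com;
        # Pass the real client address along, since otherwise the app
        # only ever sees the proxy's IP.
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}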

On Apr 6, 12:35 pm, WallyDD  wrote:
> The internet is indeed a funny place.
> I did respond with a question on how to set this up but have received
> no answer?
>
> Any ideas anyone?
>
> On Apr 6, 3:03 am, Paddy Foran  wrote:
>
> > I'd just like to point out how funny it is that people keep banging on
> > for Google to respond, and in their banging on for Google to respond,
> > they missed Google's actual response.
>
> > >> Is there any google staff who is responsible for GAE promotion and
> > >> technology to say something here?
>
> > >> How can I access to my Google Apps via my own domain directly, e.g.
> > >> how can access via mail.my_domain.com instead of mail.google.com/a/
> > >> my_domain.com?
>
> > >One way to address this is to run a proxy server elsewhere, which will
> > >allow your site to have it's own unique IP, rather than the shared IPs
> > >of Google.
>
> > >-Brett
> > >App Engine Team
>
> > Please note the "App Engine Team" signature. That means Brett (at
> > least claims he) is from Google.
>
> > Poor Brett was ignored, as people clamoured for Brett to comment.
>
> > This is why I love the internet. It amuses me to no end.
>
> > On Apr 6, 12:48 am, Andy Freeman  wrote:
>
> > > > No company is willing to be a pawn in the game of politics between
> > > > Google and China.
>
> > > That sounds reasonable, but what can Google do to stop the Chinese
> > > govt from blocking?
>
> > > (1) Google can't tell the Chinese govt what to do.
>
> > > (2) The Chinese govt appears to be technically competent and controls
> > > the relevant connections, both from the outside and from internal
> > > datacenters.
>
> > > (3) Google can propose agreements, but China is a sovereign entity
> > > and can do what it pleases wrt internal matters.  (Other posters have
> > > suggested that buying dinner for the appropriate official would cause
> > > the blocking to go away.  I don't see why the Chinese govt would find
> > > such an agreement binding.)
>
> > > Yes, one can argue that Google "needs" the Chinese govt to not block,
> > > but that doesn't imply that Google can do anything to stop the Chinese
> > > govt from blocking.  Google's needs do not obligate the Chinese govt.
>
> > > On Apr 5, 3:16 pm, WallyDD  wrote:
>
> > > > Google is more or less obligated to solve this issue.
>
> > > > No company is willing to be a pawn in the game of politics between
> > > > Google and China.
> > > > Name a single company (that has any international presence) who would
> > > > be willing to use GAE knowing full well that it is blocked in its
> > > > current form?
> > > > This issue has nothing to do with the Chinese government and there is
> > > > no way Google will point the finger at them.
>
> > > > Perhaps google can also take on all the other countries that are
> > > > blocking GAE and while they are at it they can point fingers at
> > > > corporate america and their firewalls?
> > > > You have to remember that at the moment this is a "preview release".
>
> > > > I don't really understand why you persist with this argument. You have
> > > > raised some valid points which should be looked at and considered in
> > > > the scheme of things but most of the diatribe you present here seems
> > > > aimed at China/Chinese Government. I have always found prejudices
> > > > cloud peoples judgement.
>
> > > > To sumarise how this problem will probably be viewed;
> > > > Google created a dns based system (for GAE addressing) which puts
> > > > everything though ghs.google.com. This system works really well and
> > > > from my experience it was very clever and efficient. However it has an
> > > > issue with firewalls that got overlooked. Google has just recently
> > > > been made aware of this problem.
>
> > > > On Apr 5, 12:53 pm, Andy Freeman  wrote:
>
> > > > > > Feel free to hair-split the word "obligation".
>
> > > > > It's the plain meaning of the word.  I apologise for not knowing that
> > > > > you didn't know what it meant when you wrote that Google had an
> > > > > obligation to make GAE available in China.  Are there other statements
> > > > > that you made without understanding their meaning?
>
> > > > > China availability issue is one of the few issues where folks claim
> > > > > that/act like Google has an obligation even though it's an issue where
> > > > > Google has very little capability to change things.
>
> > > > > > That's why I want to hear fr

[google-appengine] Re: Should I take my website somewhere else? - blocked in China

2009-04-05 Thread Joe Bowman

Plenty of companies would be willing to deal with not being able to
support customers in China, whether because they only sell products
within their own countries, or because they're startups that will move
off of appengine to support Chinese customers once the need and the
funding arise.

So far, I've only seen two people complaining about the Chinese
firewall and appengine, and a couple of others voicing the opinion
that it isn't Google's obligation to support it. So really, when you
count how many members this group has, I'd venture a guess that most
just don't care either way. So maybe we're all wasting our time. If
Google was going to respond, they'd have done it days ago.

I'd say do as they ask, file an issue, and move on.

On Apr 5, 6:16 pm, WallyDD  wrote:
> Google is more or less obligated to solve this issue.
>
> No company is willing to be a pawn in the game of politics between
> Google and China.
> Name a single company (that has any international presence) who would
> be willing to use GAE knowing full well that it is blocked in its
> current form?
> This issue has nothing to do with the Chinese government and there is
> no way Google will point the finger at them.
>
> Perhaps google can also take on all the other countries that are
> blocking GAE and while they are at it they can point fingers at
> corporate america and their firewalls?
> You have to remember that at the moment this is a "preview release".
>
> I don't really understand why you persist with this argument. You have
> raised some valid points which should be looked at and considered in
> the scheme of things but most of the diatribe you present here seems
> aimed at China/Chinese Government. I have always found prejudices
> cloud peoples judgement.
>
> To summarise how this problem will probably be viewed:
> Google created a dns based system (for GAE addressing) which puts
> everything through ghs.google.com. This system works really well and
> from my experience it was very clever and efficient. However it has an
> issue with firewalls that got overlooked. Google has just recently
> been made aware of this problem.
>
> On Apr 5, 12:53 pm, Andy Freeman  wrote:
>
> > > Feel free to hair-split the word "obligation".
>
> > It's the plain meaning of the word.  I apologise for not knowing that
> > you didn't know what it meant when you wrote that Google had an
> > obligation to make GAE available in China.  Are there other statements
> > that you made without understanding their meaning?
>
> > China availability issue is one of the few issues where folks claim
> > that/act like Google has an obligation even though it's an issue where
> > Google has very little capability to change things.
>
> > > That's why I want to hear from a Google representative on their plan.
>
> > I predict that if Google says anything, it will be roughly equivalent
> > to "we're doing what we can".  At that point, you'll have to decide if
> > the results, which will vary with the whim of the Chinese govt, are
> > adequate for your purposes.
>
> > Of course, if you're better at dealing with the Chinese govt than
> > Google is
>
> > > Now just accept that fact and act accordingly.
>
> > And the basis for this order is...
>
> > On Apr 4, 6:11 pm, Andy  wrote:
>
> > > > I'm someone who understands that obligations come from laws and
> > > > contracts.  Feel free to point to the relevant chapter and verse that
>
> > > > However, absent a contract and/or a law, Google isn't obligated to
> > > > make GAE applications visible in China.
>
> > > Feel free to hair-split the word "obligation".
>
> > > Does Google have the legal obligation to solve this problem? No. Just
> > > like Google doesn't have any legal obligation to improve this service
> > > or add any new features. Does that mean users should stop posting any
> > > thread that's about improving GAE?
>
> > > Does that mean you're going to start polluting every single thread in
> > > this forum by posting your 'Google has no legal obligation to do this"
> > > drivel?
>
> > > > Good for you.  And Google may, or may not, offer such an option.  Note
> > > > "may not" - they're under no obligation to do so.  (I don't presume to
> > > > know the risks and costs of offering such an option.  After all, China
> > > > can block at the edge of the data centers, impose conditions, or even
> > > > shut them down.)
>
> > > Another zero-value drivel.
>
> > > Yes Google may or may not offer that solution, just like they may or
> > > may not offer any solution to any other problems raised in this forum
>
> > > That's why I want to hear from a Google representative on their plan.
> > > Your speculation on what Google may or may not do is just that,
> > > worthless speculation that serves no purpose in this discussion.
>
> > > You're right to not "presume to know" though, seeing how you don't
> > > know anything in this matter.
>
> > > Now just accept that fact and act accordingly.

[google-appengine] Re: Should I take my website somewhere else? - blocked in China

2009-04-03 Thread Joe Bowman

My "take you business elsewhere" was offered a suggestion of how to
work around the fact that applications on GAE are being blocked by
national firewalls. Nothing more, nothing less. I also did not do any
moral/political preaching, if you read my response, I broke it down
into dollars and cents, the moral/political reasons are the reasons it
doesn't make sense when dollars come into play.

To reiterate:

 - In order for the national firewalls not to block appengine, Google
would have to enforce application rules that comply with those
firewalls. That means they'd have to allow fewer applications, and
spend the money and invest the manpower in the application-approval
workflows needed to support such restrictions. On top of that, the
political and reputation damage would be enough to drive even more
people away from their product.

In the end, it's not about politics or morals; it's about money, which
is what corporations the size of Google are always about. They
wouldn't be where they are today, in such a short time, if they
weren't.

The ability to purchase a static IP is a nice thought, but the setup
costs, which would need to be passed on to the user, may be more than
you're willing to pay. Something to think about: this is a cloud-based
infrastructure with deployments worldwide, and setting up individual
static IPs is a much larger task than what your dedicated/VPS service
offerings have to deal with. I'd suggest you file a feature request,
which is a lot more likely to get noticed than any reply in this
thread.

On Apr 3, 2:10 am, Andy  wrote:
> I want to to hear from Google whether it has done anything to solve
> this problem or whether it has any plan to do so.
>
> I don't want to hear pompous speech from a self-appointed non-google
> spokesperson on his "political/moral" drivels and that he "encourage
> me to take my business elsewhere".
>
> So no, there's no pot and kettle here at all.
>
> And no, there's no need for google to "subvert the great firewall" in
> order to solve this problem. Google could talk to the authorities in
> China to see what can be done to get unblocked. It could give App
> Engine users the option to move their sites to google's data centers
> in China. It could start selling static IP hosting.  Plenty of
> solutions - just because you don't know about them doesn't mean they
> don't exist.
>
> On Apr 3, 1:54 am, Andy Freeman  wrote:
>
> > > This is a forum for people to share information on GAE and solve
>
> > problems.
>
> > Pot, kettle and all that unless you know how Google can subvert the
> > "great firewall".
>
> > On Apr 2, 8:48 pm, Andy  wrote:
>
> > > No one is interested in hearing your "political/moral" preaching.
>
> > > This is a forum for people to share information on GAE and solve
> > > problems. If you have anything of value to add to the discussion, feel
> > > free to add your bits. If not, you won't be missed.
>
> > > So you "encourage me to take my business elsewhere"?
>
> > > Who are you - are you the spokesperson of Google? Is that the Google
> > > official position on this matter?
>
> > > Or was that just another failed attempt of you at self-aggrandizement?
>
> > > On Apr 2, 7:53 pm, Joe Bowman  wrote:
>
> > > > China and the other countries block content that they deem
> > > > unacceptable for their citizens. In order to get appengine off the
> > > > blacklist, they would have to disallow people to create applications
> > > > which would be deemed offensive to those countries.
>
> > > > First, looking at it from the pure technical/business view, this would
> > > > require that applications no longer post immediately, and be under
> > > > review at each update at a minimum. This would potentially decrease
> > > > the amount of applications served (thus decreasing revenue) while
> > > > increasing costs to support the system.
>
> > > > From the political/moral view, Google has been a staunch supporter of
> > > > rights to speech, and it wasn't that long ago that they were chastised
> > > > for bending their own rules to support China at all by allowing the
> > > > filtering of search results. Further expansion of their products
> > > > having such filtering imposed by them would lead to more reputation
> > > > damage. Reputation damage also costs money.
>
> > > > So really, from two different perspectives, there's no business sense
> > > > in worrying about if appengine appl

[google-appengine] Re: Should I take my website somewhere else? - blocked in China

2009-04-02 Thread Joe Bowman

China and the other countries block content that they deem
unacceptable for their citizens. In order to get appengine off the
blacklist, Google would have to disallow people from creating
applications that those countries would deem offensive.

First, looking at it from a purely technical/business view, this would
require that applications no longer go live immediately and instead be
reviewed at each update, at a minimum. This would potentially decrease
the number of applications served (thus decreasing revenue) while
increasing the cost of supporting the system.

From the political/moral view, Google has been a staunch supporter of
free speech, and it wasn't that long ago that they were chastised for
bending their own rules to support China at all by allowing the
filtering of search results. Further expansion of their products
having such filtering imposed on them would lead to more reputation
damage, and reputation damage also costs money.

So really, from two different perspectives, there's no business sense
in worrying about whether appengine applications are being firewalled
by 6 of the 150+ countries that exist in the world. As a customer you
have every right to take your business elsewhere, and if making your
application available in those 6 countries is that important to you, I
encourage you to do so. Not every web application is going to be
appropriate for appengine.

There are 6 countries that block appengine, and appengine only lets
you write programs in Python. Which is really the limiting factor of
the application environment?

On Apr 2, 7:16 pm, Andy Freeman  wrote:
> > Why shouldn't this be google's problem?
>
> Suppose that I sold raincoats and you wanted to buy one of my
> raincoats.  If someone else got between us and stopped me from
> delivering raincoats to you, who would you hold responsible?
>
> Google isn't doing the blocking.
>
> Yes, Google may be able to make more money if it can get around the
> blocking, but that doesn't change the fact that the blocks are not
> under Google's control.
>
> In other words, blocking may be a problem, that is an issue, for
> Google, but it isn't Google's problem, that is, something that Google
> has some obligation to do act upon.
>
> On Apr 2, 3:38 pm, Andy  wrote:
>
> > Why shouldn't this be google's problem?
>
> > Google's hosting platform is being blocked by the country with the
> > largest internet population in the world. You think that's not a major
> > problem?
>
> > I've used plenty of hosting sites that are perfectly accessible from
> > China. So obviously this is a problem for Google.
>
> > On Apr 2, 11:18 am, Barry Hunter  wrote:
>
> > > And why is this Google's problem?



[google-appengine] Re: Should I take my website somewhere else? - blocked in China

2009-04-02 Thread Joe Bowman

Most shared hosting providers don't have the customer base Google
already has, because they don't offer those services for free. Also,
just because you haven't run into the 'bad neighbor' issue doesn't
mean it's not common; it really depends on which service providers you
were using.

It's not really a DNS issue on Google's side. Static IPs are
expensive, and if you haven't been keeping up with the IPv4 dilemma of
the past few years, we're almost out of IPv4 addresses worldwide, let
alone Google being able to give each app its own IP for free (which is
basically what you're asking for).

This really is an issue where, if your application requires a
dedicated IP, then yes, I imagine you do want to go with a different
hosting provider who is willing to provide (and charge you for) that.

On Apr 2, 2:04 pm, WallyDD  wrote:
> Barry,
>
> The issue is with the way google deals with dns.
> The issue is very much googles as it means a lot of people will not be
> able to develop on Google app engine. Most larger websites have no
> choice but to steer clear of Google application engine.
>
> I would love to take the issue up with  the maintainers of these
> firewalls. Please would you be so kind as to provide me with their
> contact information?
>
> On Apr 2, 1:49 pm, Barry Hunter  wrote:
>
> > On 02/04/2009, WallyDD  wrote:
>
> > >  I have custom domain. I have never had anything blocked in many years
> > >  before migrating to google app engine.
>
> > >  The 'bad neighbour' issue has never been a problem for me in the past.
>
> > What do you want, a medal?
>
> > really that was just blind luck. (or maybe you weren't on a shared
> > system, or a highly segregated system, far removed from your
> > neighbours, either way just lucky)
>
> > I still contend this isn't Google's problem; take up your grievance
> > with the maintainers of the firewalls.
>
> > >  On Apr 2, 11:18 am, Barry Hunter  wrote:
> > >  > And why is this Google's problem?
>
> > >  > Presumably you are a victim of those countries over-zealous blocking
> > >  > (presuming you don't think your site is getting blocked itself)
>
> > >  > Any shared hosting will suffer this 'bad neighbour' issue, AppEngine
> > >  > happens to be a rather large hosting provider, so its quite likely.
>
> > >  > They could make the problem less likely to occur by using larger pool
> > >  > of IP addresses, and hashing the domain to specific IPs, as I imagine
> > >  > it would be impractical to offer unique IPs, but there is little
> > >  > incentive to do so (IMHO)
>
> > >  > You might get slightly better results using a custom domain if you
> > >  > don't already.
>
> > > > On 02/04/2009, WallyDD  wrote:
>
> > >  > >  List of countries where any website hosted on google app engine is 
> > > not
> > >  > >  accessible;
> > >  > >  China
> > >  > >  Iran
> > >  > >  Sudan
> > >  > >  Syria
> > >  > >  Indonesia is blocked by most providers
> > >  > >  Cuba
>
> > >  > >  On Apr 2, 10:48 am, WallyDD  wrote:
>
> > >  > > > Hello,
>
> > >  > >  > My website (on google app engine) is blocked in China where I 
> > > used to
> > >  > >  > get a lot of traffic from. I only just realised this from looking 
> > > at
> > >  > >  > the logs and noting that traffic from china has crawled to 
> > > standstill.
> > >  > >  > I imagine my website is blocked in other countries as well thanks 
> > > to
> > >  > >  > this blocking technique.
>
> > >  > >  > Does Google have a plan for dealing with this?
>
> > >  > >  > Any chance of a response from someone at google? I would really 
> > > like
> > >  > >  > to know if this is being dealt with seriously?
>
> > >  > >  > This doesn't just apply to my website, it applies to every site on
> > >  > >  > google app engine.
>
> > >  > --
>
> > > > Barry
>
> > >  > -www.nearby.org.uk-www.geograph.org.uk-
>
> > --
> > Barry
>
> > -www.nearby.org.uk-www.geograph.org.uk-



[google-appengine] Re: How does GAE's cost compare to that of commercial hosting?

2009-04-02 Thread Joe Bowman

Really, can that conversation be kept in one thread instead of
spreading FUD through multiple threads about unrelated topics? If you
are going to repeatedly bring that up in other threads, then at least
link to the original thread about the concern instead of making vague
references to being "blocked in half a dozen countries (and the number
is growing)".

For those that need more information on what WallyDD is referring to,
here's a link to the thread he started on the topic.
http://groups.google.com/group/google-appengine/browse_thread/thread/4771c58c4f5a6dd7

On Apr 2, 2:10 pm, WallyDD  wrote:
> Don't forget to keep in mind that your site(s)/applications will be
> automatically blocked in half a dozen countries (and the number is
> growing) if you choose GAE.
>
> The cost of migrating to and from GAE is also quite high as you will
> need to rewrite some code to make full use of the datastore.
>
> It is a great service and very affordable otherwise.
>
> On Apr 1, 5:37 am, Andy  wrote:
>
> > Does anyone have rough ballpark estimates on how does GAE's cost
> > compare to that of commercial hosting?
>
> > For example, the traffic supported by GAE's free quotas -- is that
> > generally higher or less than the traffic supported by a $5/mo shared
> > hosting account?
>
> > And what about the traffic supported by a $40/mo VPS account on a
> > typical web hosting company -- how much would that cost if the site
> > was hosted on App Engine?
>
> > What about a $200/mo dedicated server hosting -- what's the GAE
> > equivalent?
>
> > I know it depends on a lot of factors, but if anyone has any ballpark
> > estimates or experiences they're willing to share I'd really
> > appreciate it.
>
> > I'm starting a project and try to decide between GAE and standard web
> > hosting. The project will start small, but if the traffic grows I want
> > to see which would be a better option for me. GAE is less flexible and
> > creates a certain amount of lock-in, but at least I want to know if
> > it'll end up cheaper or more expensive as traffic grows



[google-appengine] Re: Parallel urlfetch utility class / function.

2009-03-18 Thread Joe Bowman

Well, you'll never get truly parallel execution of the callbacks,
given that even if they run in the same thread as the urlfetch, each
fetch takes a different amount of time. I'm not sure whether the
callbacks run in the core thread or not; that's where they'd run if
you see them executing sequentially. I don't have access to a machine
to look at this until tonight, and even then I'm not sure I'll have
the time. However, if I were to look at it, I'd probably try these two
approaches.

Since you're checking thread hashes, you could check the hash of the
thread the urlfetch uses, and see if the callback thread hash matches.

You could also do something like this:

Record a urlfetch-start timestamp.
Do the urlfetch.
Record a urlfetch-complete timestamp.
Record a callback-start timestamp.

Compare the urlfetch-start timestamps to confirm the fetches all start
at the same time, and compare each urlfetch-complete timestamp to the
corresponding callback-start timestamp to see whether the callback
really starts as soon as its fetch finishes. A rough sketch follows.
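Something along these lines is what I have in mind, reusing the
Fetcher interface from David's snippet elsewhere in this thread
(start(url, callback) and wait()); treat that interface and the
placeholder URLs as assumptions.

import time

import megafetch  # David's helper; start(url, callback)/wait() assumed

timings = {}


def timed_start(fetcher, url):
    """Record when each fetch starts and when its callback fires."""
    timings[url] = {'start': time.time()}

    def callback(result):
        timings[url]['callback'] = time.time()

    fetcher.start(url, callback)


fetcher = megafetch.Fetcher()
for url in ('http://example.com/a', 'http://example.com/b'):  # placeholders
    timed_start(fetcher, url)
fetcher.wait()
wait_done = time.time()

# If the callback timestamps all cluster just before wait_done, the callbacks
# are effectively running serially in the main thread; if they spread out
# between the start times and wait_done, they fire as each fetch completes.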

On Mar 18, 8:11 am, bFlood  wrote:
> hey david,joe
>
> I've got the async datastore Get working but I'm not sure the
> callbacks are being run on a background thread. they appear to be when
> you examine something like the thread local storage (hashes are all
> unique) but then if you insert just a simple time.sleep they appear to
> run serially. (note - while not completely new to async code, this is
> my first run with python so I'm not sure of the threading contentions
> of something like sleep or logging.debug)
>
> I would like to be able to run some code just after the fetch for each
> entity, the hope is that this would be run in parallel
>
> any thoughts?
>
> cheers
> brian
>
> On Mar 18, 6:14 am, Joe Bowman  wrote:
>
> > Ah ha.. thanks David.
>
> > And for the views, if I really wanted to launch everything at once, I
> > could map my boss, youtube, twitter, etc etc pulls to their own urls,
> > and use megafetch in my master view to pull those urls all at once
> > too.
>
> > On Mar 18, 5:14 am, David Wilson  wrote:
>
> > > Hey Joe,
>
> > > With the gdata package you can do something like this instead:
>
> > > As usual, completely untested code, but looks about right..
>
> > > from youtube import YouTubeVideoFeedFromString
>
> > > def get_feeds_async(usernames):
> > >     fetcher = megafetch.Fetcher()
> > >     output = {}
>
> > >     def cb(username, result):
> > >         if isinstance(result, Exception):
> > >             logging.error('could not fetch: %s', result)
> > >             content = None
> > >         else:
> > >             content = YouTubeVideoFeedFromString(result.content)
> > >         output[username] = content
>
> > >     for username in usernames:
> > >         url = 'http://gdata.youtube.com/feeds/api/users/%s/uploads'%\
> > >             (username,)
> > >         fetcher.start(url, lambda result: cb(username, result))
>
> > >     fetcher.wait()
> > >     return output
>
> > > feeds = get_feeds_async([ 'davemw', 'waverlyflams', 'googletechtalks',
> > >                           'TheOnion', 'winterelaxation' ])
>
> > > # feeds is now a mapping of usernames to YouTubeVideoFeed instances,
> > > or None if could not be fetched.
>
> > > 2009/3/18 Joe Bowman :
>
> > > > This may be a really dumb question, but.. I'm still learning so...
>
> > > > Is there a way to do something other than a direct api call
> > > > asynchronously? I'm writing a script that pulls from multiple sources,
> > > > sometimes with higher level calls that use urlfetch, such as gdata.
> > > > Since I'm attempting to pull from multiple sources, and sometimes
> > > > multiple urls from each source, I'm trying to figure out if it's
> > > > possible to run other methods at the same time.
>
> > > > For example, I want to pull a youtube entry for several different
> > > > authors. The youtube api doesn't allow multiple authors in a request
> > > > (I have a enhancement request in for that though), so I need to do a
> > > > yt_service.GetYouTubeVideoFeed() for each author, then splice them
> > > > together into one feed. As I'm also working with Boss, and eventually
> > > > Twitter, I'll have feeds to pull from those sources as well.
>
> > > > My current application layout is using appengine-pa

[google-appengine] Re: Parallel urlfetch utility class / function.

2009-03-18 Thread Joe Bowman

Ah ha.. thanks David.

And for the views, if I really wanted to launch everything at once, I
could map my boss, youtube, twitter, and other pulls to their own
URLs, and use megafetch in my master view to pull those URLs all at
once too.
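Roughly what I have in mind, reusing the Fetcher interface from the
snippet quoted below (start(url, callback) and wait(), which I'm
assuming here); the view URLs are placeholders for my per-source
handlers.

import logging

import megafetch  # David's helper; interface assumed from the quote below


def fetch_views(urls):
    """Hit each backend view URL in parallel and collect the response bodies."""
    fetcher = megafetch.Fetcher()
    bodies = {}

    def make_callback(url):
        def callback(result):
            if isinstance(result, Exception):
                logging.error('fetch of %s failed: %s', url, result)
                bodies[url] = None
            else:
                bodies[url] = result.content
        return callback

    for url in urls:
        fetcher.start(url, make_callback(url))

    fetcher.wait()
    return bodies

# e.g. fetch_views(['http://myapp.appspot.com/views/boss',
#                   'http://myapp.appspot.com/views/youtube',
#                   'http://myapp.appspot.com/views/twitter'])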

On Mar 18, 5:14 am, David Wilson  wrote:
> Hey Joe,
>
> With the gdata package you can do something like this instead:
>
> As usual, completely untested code, but looks about right..
>
> from youtube import YouTubeVideoFeedFromString
>
> def get_feeds_async(usernames):
>     fetcher = megafetch.Fetcher()
>     output = {}
>
>     def cb(username, result):
>         if isinstance(result, Exception):
>             logging.error('could not fetch: %s', result)
>             content = None
>         else:
>             content = YouTubeVideoFeedFromString(result.content)
>         output[username] = content
>
>     for username in usernames:
>         url = 'http://gdata.youtube.com/feeds/api/users/%s/uploads'%\
>             (username,)
>         fetcher.start(url, lambda result: cb(username, result))
>
>     fetcher.wait()
>     return output
>
> feeds = get_feeds_async([ 'davemw', 'waverlyflams', 'googletechtalks',
>                           'TheOnion', 'winterelaxation' ])
>
> # feeds is now a mapping of usernames to YouTubeVideoFeed instances,
> or None if could not be fetched.
>
> 2009/3/18 Joe Bowman :
>
>
>
> > This may be a really dumb question, but.. I'm still learning so...
>
> > Is there a way to do something other than a direct api call
> > asynchronously? I'm writing a script that pulls from multiple sources,
> > sometimes with higher level calls that use urlfetch, such as gdata.
> > Since I'm attempting to pull from multiple sources, and sometimes
> > multiple urls from each source, I'm trying to figure out if it's
> > possible to run other methods at the same time.
>
> > For example, I want to pull a youtube entry for several different
> > authors. The youtube api doesn't allow multiple authors in a request
> > (I have a enhancement request in for that though), so I need to do a
> > yt_service.GetYouTubeVideoFeed() for each author, then splice them
> > together into one feed. As I'm also working with Boss, and eventually
> > Twitter, I'll have feeds to pull from those sources as well.
>
> > My current application layout is using appengine-patch to provide
> > django. I've set up a Boss and Youtube "model" with get methods that
> > handle getting the data. So I can do something similar to:
>
> > web_results = models.Boss.get(request.GET['term'], start=start)
> > news_results = models.Boss.get(request.GET['term'], vertical="news",
> > start=start)
> > youtube = models.Youtube.get(request.GET['term'], start=start)
>
> > Ideally, I'd like some of those models to be able to do asynchronous
> > tasks within their get function, and then also, I'd like to run the
> > above requests at the same, which should really speed the request up.
>
> > On Mar 17, 9:20 am, Joe Bowman  wrote:
> >> Thanks,
>
> >> I'm going to give it a go for urlfetch calls for one project I'm
> >> working on this week.
>
> >> Not sure when I'd be able to include it in gaeutiltiies for cron and
> >> such, that project is currently lower on my priority list at the
> >> moment, but can't wait until I get a chance to play with it. Another
> >> idea I had for it is the ROTmodel (retry on timeout model) in the
> >> project, which could speed that process up.
>
> >> On Mar 17, 9:11 am, David Wilson  wrote:
>
> >> > 2009/3/16 Joe Bowman :
>
> >> > > Wow that's great. The SDK might be problematic for you, as it appears
> >> > > to be very single threaded, I know for a fact it can't reply to
> >> > > requests to itself.
>
> >> > > Out of curiosity, are you still using base urlfetch, or is it your own
> >> > > creation? While when Google releases their scheduled tasks
> >> > > functionality it will be less of an issue, if your solution had the
> >> > > ability to fire off urlfetch calls and not wait for a response, it
> >> > > could be a perfect fit for the gaeutilities cron utility.
>
> >> > > Currently it grabs a list of tasks it's supposed to run on request,
> >> > > sets a timestamp, runs one, the compares now() to the timestamp and if
> >> > > the 

[google-appengine] Re: Parallel urlfetch utility class / function.

2009-03-17 Thread Joe Bowman

This may be a really dumb question, but.. I'm still learning so...

Is there a way to do something other than a direct api call
asynchronously? I'm writing a script that pulls from multiple sources,
sometimes with higher level calls that use urlfetch, such as gdata.
Since I'm attempting to pull from multiple sources, and sometimes
multiple urls from each source, I'm trying to figure out if it's
possible to run other methods at the same time.

For example, I want to pull a youtube entry for several different
authors. The youtube api doesn't allow multiple authors in a request
(I have a enhancement request in for that though), so I need to do a
yt_service.GetYouTubeVideoFeed() for each author, then splice them
together into one feed. As I'm also working with Boss, and eventually
Twitter, I'll have feeds to pull from those sources as well.

My current application layout is using appengine-patch to provide
django. I've set up a Boss and Youtube "model" with get methods that
handle getting the data. So I can do something similar to:

web_results = models.Boss.get(request.GET['term'], start=start)
news_results = models.Boss.get(request.GET['term'], vertical="news",
start=start)
youtube = models.Youtube.get(request.GET['term'], start=start)

Ideally, I'd like some of those models to be able to do asynchronous
tasks within their get function, and then also, I'd like to run the
above requests at the same, which should really speed the request up.


On Mar 17, 9:20 am, Joe Bowman  wrote:
> Thanks,
>
> I'm going to give it a go for urlfetch calls for one project I'm
> working on this week.
>
> Not sure when I'd be able to include it in gaeutiltiies for cron and
> such, that project is currently lower on my priority list at the
> moment, but can't wait until I get a chance to play with it. Another
> idea I had for it is the ROTmodel (retry on timeout model) in the
> project, which could speed that process up.
>
> On Mar 17, 9:11 am, David Wilson  wrote:
>
> > 2009/3/16 Joe Bowman :
>
> > > Wow that's great. The SDK might be problematic for you, as it appears
> > > to be very single threaded, I know for a fact it can't reply to
> > > requests to itself.
>
> > > Out of curiosity, are you still using base urlfetch, or is it your own
> > > creation? While when Google releases their scheduled tasks
> > > functionality it will be less of an issue, if your solution had the
> > > ability to fire off urlfetch calls and not wait for a response, it
> > > could be a perfect fit for the gaeutilities cron utility.
>
> > > Currently it grabs a list of tasks it's supposed to run on request,
> > > sets a timestamp, runs one, the compares now() to the timestamp and if
> > > the timedelta is more than 1 second, stops running tasks and finishes
> > > the request. It already appears your project would be perfect for
> > > running all necessary tasks at once, and the MIT License I believe is
> > > compatible with the BSD license I've released gaeutilities, so would
> > > you have any personal objection to me including it in gaeutilities at
> > > some point, with proper attribution of course?
>
> > Sorry I missed this in the first reply - yeah work away! :)
>
> > David
>
> > > If you haven't seen that project, its url is
> > > http://gaeutilities.appspot.com/
>
> > > On Mar 16, 11:03 am, David Wilson  wrote:
> > >> Joe,
>
> > >> I've only tested it in production. ;)
>
> > >> The code should work serially on the SDK, but I haven't tried yet.
>
> > >> David.
>
> > >> 2009/3/16 Joe Bowman :
>
> > >> > Does the batch fetching working on live appengine applications, or
> > >> > only on the SDK?
>
> > >> > On Mar 16, 10:19 am, David Wilson  wrote:
> > >> >> I have no idea how definitive this is, but literally it means wall
> > >> >> clock time seems to be how CPU cost is measured. I guess this makes
> > >> >> sense for a few different reasons.
>
> > >> >> I found some internal function
> > >> >> "google3.apphosting.runtime._apphosting_runtime___python__apiproxy.get_request_cpu_usage"
> > >> >> with the docstring:
>
> > >> >>     Returns the number of megacycles used so far by this request.
> > >> >>     Does not include CPU used by API calls.
>
> > >> >> Calling it, then running time.sleep(5), then calling it a

[google-appengine] Re: Parallel urlfetch utility class / function.

2009-03-17 Thread Joe Bowman

Thanks,

I'm going to give it a go for urlfetch calls for one project I'm
working on this week.

Not sure when I'd be able to include it in gaeutilities for cron and
such; that project is currently lower on my priority list, but I can't
wait until I get a chance to play with it. Another idea I had for it
is the ROTmodel (retry-on-timeout model) in the project, which could
speed that process up.

On Mar 17, 9:11 am, David Wilson  wrote:
> 2009/3/16 Joe Bowman :
>
>
>
>
>
> > Wow that's great. The SDK might be problematic for you, as it appears
> > to be very single threaded, I know for a fact it can't reply to
> > requests to itself.
>
> > Out of curiosity, are you still using base urlfetch, or is it your own
> > creation? While when Google releases their scheduled tasks
> > functionality it will be less of an issue, if your solution had the
> > ability to fire off urlfetch calls and not wait for a response, it
> > could be a perfect fit for the gaeutilities cron utility.
>
> > Currently it grabs a list of tasks it's supposed to run on request,
> > sets a timestamp, runs one, the compares now() to the timestamp and if
> > the timedelta is more than 1 second, stops running tasks and finishes
> > the request. It already appears your project would be perfect for
> > running all necessary tasks at once, and the MIT License I believe is
> > compatible with the BSD license I've released gaeutilities, so would
> > you have any personal objection to me including it in gaeutilities at
> > some point, with proper attribution of course?
>
> Sorry I missed this in the first reply - yeah work away! :)
>
> David
>
>
>
>
>
> > If you haven't seen that project, its url is http://gaeutilities.appspot.com/
>
> > On Mar 16, 11:03 am, David Wilson  wrote:
> >> Joe,
>
> >> I've only tested it in production. ;)
>
> >> The code should work serially on the SDK, but I haven't tried yet.
>
> >> David.
>
> >> 2009/3/16 Joe Bowman :
>
> >> > Does the batch fetching working on live appengine applications, or
> >> > only on the SDK?
>
> >> > On Mar 16, 10:19 am, David Wilson  wrote:
> >> >> I have no idea how definitive this is, but literally it means wall
> >> >> clock time seems to be how CPU cost is measured. I guess this makes
> >> >> sense for a few different reasons.
>
> >> >> I found some internal function
> >> >> "google3.apphosting.runtime._apphosting_runtime___python__apiproxy.get_request_cpu_usage"
> >> >> with the docstring:
>
> >> >>     Returns the number of megacycles used so far by this request.
> >> >>     Does not include CPU used by API calls.
>
> >> >> Calling it, then running time.sleep(5), then calling it again,
> >> >> indicates thousands of megacycles used, yet in real terms the CPU was
> >> >> probably doing nothing. I guess Datastore CPU, etc., is added on top
> >> >> of this, but it seems to suggest to me that if you can drastically
> >> >> reduce request time, quota usage should drop too.
>
> >> >> I have yet to do any kind of rough measurements of Datastore CPU, so
> >> >> I'm not sure how correct this all is.
>
> >> >> David.
>
> >> >>  - One of the guys on IRC suggested this means that per-request cost
> >> >> is scaled during peak usage (and thus internal services running
> >> >> slower).
>
> >> >> 2009/3/16 peterk :
>
> >> >> > A couple of questions re. CPU usage..
>
> >> >> > "CPU time quota appears to be calculated based on literal time"
>
> >> >> > Can you clarify what you mean here? I presume each async request eats
> >> >> > into your CPU budget. But you say:
>
> >> >> > "since you can burn a whole lot more AppEngine CPU more cheaply using
> >> >> > the async api"
>
> >> >> > Can you clarify how that's the case?
>
> >> >> > I would guess as long as you're being billed for the cpu-ms spent in
> >> >> > your asynchronous calls, Google would let you hang yourself with them
> >> >> > when it comes to billing.. :) so I presume they'd let you squeeze in
> >> >> > as many as your original request, and its limit, will allow for?
>
> >> >> > Thanks again.
>
> >> >> &

[google-appengine] Re: Accessing the datastore

2009-03-16 Thread Joe Bowman

I think you're looking for this: 
http://code.google.com/appengine/articles/remote_api.html
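
For reference, the client-side setup that article describes boils down to
something like this untested sketch (the app id, hostname, and SDK path are
placeholders; the server side needs the matching /remote_api handler from
the article in app.yaml):

import getpass
import sys

# Assumes a local copy of the App Engine SDK; adjust the path.
sys.path.append('/path/to/google_appengine')

from google.appengine.ext.remote_api import remote_api_stub

def auth_func():
    # Credentials of an administrator of the app.
    return raw_input('Email: '), getpass.getpass('Password: ')

remote_api_stub.ConfigureRemoteDatastore(
    'your-app-id', '/remote_api', auth_func, 'your-app-id.appspot.com')

# From here on, ordinary datastore calls from this local script run against
# the live datastore, so the heavy computation can happen at home and the
# results can be written back with normal Model.put() calls.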

On Mar 16, 1:01 am, pokiman  wrote:
> Hello All,
>
> Is there a way to access the google datastore via an external python
> program? What I want to basically do is run a computationally
> intensive process on my home machine and update the google datastore
> periodically, preferably automatically.
>
> Thanks in advance.



[google-appengine] Re: Parallel urlfetch utility class / function.

2009-03-16 Thread Joe Bowman

I imagine keeping the request open until everything is done isn't
going to go away any time soon; it's how http responses work, and the
scheduled tasks on the roadmap would be better suited to providing
support for that. I also agree that the batch put and get
functionality is, for the most part, already there.

My experience from mass delete scripts has been that delete is
extremely heavy. Before the runtime length was extended, I settled on
75 as the number of entities that could safely be deleted in a request
without, for the most part, encountering timeouts. I ended up using
javascript with a simple protocol (responses of "there's more" and
"all done") in order to delete 10k+ objects at a time. During that
time I also noticed that repeated writing to the datastore (or
deleting, in my case) caused other errors that looked like throttling,
so that's something else you may encounter if you continue to work on
asynchronous datastore calls.
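
Something like the following would implement that protocol (an untested
sketch; MyModel is a placeholder kind, and 75 is just the chunk size
mentioned above):

from google.appengine.ext import db, webapp

class MyModel(db.Model):
    # Placeholder kind; substitute whatever you are purging.
    pass

BATCH_SIZE = 75

class PurgeHandler(webapp.RequestHandler):
    def post(self):
        entities = MyModel.all().fetch(BATCH_SIZE)
        if not entities:
            self.response.out.write('all done')
            return
        db.delete(entities)  # one batch delete call per request
        self.response.out.write("there's more")

application = webapp.WSGIApplication([('/purge', PurgeHandler)])

The javascript side just keeps POSTing to /purge until it reads "all done".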

On Mar 16, 1:12 pm, David Wilson  wrote:
> I forgot to mention, AppEngine does not close the request until all
> asynchronous requests have ended. This means it's not truly "fire and
> forget". Regardless of whether you're waiting for a response or not,
> if a request is in progress, the HTTP response body is not returned to
> the client.
>
> I created a simple function this morning to call datastore_v3.Delete
> on a set of key objects, it appeared to work but I didn't test beyond
> ensuring the callback didn't receive an exception. Pretty untested
> code here: <http://pastie.org/417496>.
>
> For simple uses, calling the Datastore asynchronously probably isn't
> all that useful anyway, since unlike urlfetch, you can already
> minimize latency by making batch calls at the start/end of your
> request for all the keys you want to load/save.
> useful to use it to concurrently commit a bunch of different
> transactions, but the code for this is less trivial than the urlfetch
> case. Probably best to see what the AppEngine team themselves provide
> for this. ;)
>
> David.
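
(The batch-call point above amounts to something like this untested sketch:
db.get() and db.put() each accept a list, so a request's worth of reads and
writes can be done in two RPCs.)

from google.appengine.ext import db

def load_mutate_save(keys, mutate):
    # One Get RPC for all keys, skipping any that no longer exist.
    entities = [e for e in db.get(keys) if e is not None]
    for entity in entities:
        mutate(entity)    # work on the entities in memory
    db.put(entities)      # one Put RPC to save everything
    return entities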
>
> 2009/3/16 bFlood :
>
>
>
>
>
> > @joe - fire/forget - you can just skip the fetcher.wait() call (which
> > calls AsyncAPIProxy.wait). I'm not sure if you would need a valid
> > callback, but even if you did it could be a simple stub that does
> > nothing.
>
> > @david - have you made this work with datastore calls yet? having some
> > issues trying to figure out how to set pbrequest/pbresponse variables
>
> > cheers
> > brian
>
> > On Mar 16, 12:05 pm, Joe Bowman  wrote:
> >> Wow, that's great. The SDK might be problematic for you, as it appears
> >> to be very single threaded; I know for a fact it can't reply to
> >> requests to itself.
>
> >> Out of curiosity, are you still using base urlfetch, or is it your own
> >> creation? While it will be less of an issue once Google releases their
> >> scheduled tasks functionality, if your solution had the ability to fire
> >> off urlfetch calls without waiting for a response, it could be a
> >> perfect fit for the gaeutilities cron utility.
>
> >> Currently it grabs the list of tasks it's supposed to run on request,
> >> sets a timestamp, runs one, then compares now() to the timestamp; if
> >> the timedelta is more than 1 second, it stops running tasks and
> >> finishes the request. It appears your project would be perfect for
> >> running all the necessary tasks at once, and the MIT License is, I
> >> believe, compatible with the BSD license I've released gaeutilities
> >> under, so would you have any personal objection to me including it in
> >> gaeutilities at some point, with proper attribution of course?
>
> >> If you haven't seen that project, its url is
> >> http://gaeutilities.appspot.com/
>
> >> On Mar 16, 11:03 am, David Wilson  wrote:
>
> >> > Joe,
>
> >> > I've only tested it in production. ;)
>
> >> > The code should work serially on the SDK, but I haven't tried yet.
>
> >> > David.
>
> >> > 2009/3/16 Joe Bowman :
>
> >> > > Does the batch fetching work on live appengine applications, or
> >> > > only on the SDK?
>
> >> > > On Mar 16, 10:19 am, David Wilson  wrote:
> >> > >> I have no idea how definitive this is, but literally it means wall
> >> > >> clock time seems to be how CPU cost is measured. I guess this makes
> >> > >> sense for a few different reasons.
>
> >> >

[google-appengine] Re: Parallel urlfetch utility class / function.

2009-03-16 Thread Joe Bowman

Wow, that's great. The SDK might be problematic for you, as it appears
to be very single threaded; I know for a fact it can't reply to
requests to itself.

Out of curiosity, are you still using base urlfetch, or is it your own
creation? While it will be less of an issue once Google releases their
scheduled tasks functionality, if your solution had the ability to fire
off urlfetch calls without waiting for a response, it could be a
perfect fit for the gaeutilities cron utility.

Currently it grabs the list of tasks it's supposed to run on request,
sets a timestamp, runs one, then compares now() to the timestamp; if
the timedelta is more than 1 second, it stops running tasks and
finishes the request. It appears your project would be perfect for
running all the necessary tasks at once, and the MIT License is, I
believe, compatible with the BSD license I've released gaeutilities
under, so would you have any personal objection to me including it in
gaeutilities at some point, with proper attribution of course?

If you haven't seen that project, its url is http://gaeutilities.appspot.com/
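
The loop described above boils down to roughly this (an untested sketch;
the real cron utility loads its task list from the datastore rather than
taking a list of callables):

import datetime

def run_tasks_within_budget(tasks, budget_seconds=1):
    started = datetime.datetime.now()
    for task in tasks:    # tasks: zero-argument callables (placeholder)
        task()
        elapsed = datetime.datetime.now() - started
        if elapsed > datetime.timedelta(seconds=budget_seconds):
            break         # stop here and finish the request; the rest
                          # get picked up on a later hit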

On Mar 16, 11:03 am, David Wilson  wrote:
> Joe,
>
> I've only tested it in production. ;)
>
> The code should work serially on the SDK, but I haven't tried yet.
>
> David.
>
> 2009/3/16 Joe Bowman :
>
>
>
>
>
> > Does the batch fetching work on live appengine applications, or
> > only on the SDK?
>
> > On Mar 16, 10:19 am, David Wilson  wrote:
> >> I have no idea how definitive this is, but literally it means wall
> >> clock time seems to be how CPU cost is measured. I guess this makes
> >> sense for a few different reasons.
>
> >> I found some internal function
> >> "google3.apphosting.runtime._apphosting_runtime___python__apiproxy.get_request_cpu_usage"
> >> with the docstring:
>
> >>     Returns the number of megacycles used so far by this request.
> >>     Does not include CPU used by API calls.
>
> >> Calling it, then running time.sleep(5), then calling it again,
> >> indicates thousands of megacycles used, yet in real terms the CPU was
> >> probably doing nothing. I guess Datastore CPU, etc., is added on top
> >> of this, but it seems to suggest to me that if you can drastically
> >> reduce request time, quota usage should drop too.
>
> >> I have yet to do any kind of rough measurements of Datastore CPU, so
> >> I'm not sure how correct this all is.
>
> >> David.
>
> >>  - One of the guys on IRC suggested this means that per-request cost
> >> is scaled during peak usage (and thus internal services running
> >> slower).
>
> >> 2009/3/16 peterk :
>
> >> > A couple of questions re. CPU usage..
>
> >> > "CPU time quota appears to be calculated based on literal time"
>
> >> > Can you clarify what you mean here? I presume each async request eats
> >> > into your CPU budget. But you say:
>
> >> > "since you can burn a whole lot more AppEngine CPU more cheaply using
> >> > the async api"
>
> >> > Can you clarify how that's the case?
>
> >> > I would guess as long as you're being billed for the cpu-ms spent in
> >> > your asynchronous calls, Google would let you hang yourself with them
> >> > when it comes to billing.. :) so I presume they'd let you squeeze in
> >> > as many as your original request, and its limit, will allow for?
>
> >> > Thanks again.
>
> >> > On Mar 16, 2:00 pm, David Wilson  wrote:
> >> >> It's completely undocumented (at this stage, anyway), but definitely
> >> seems to work. A few notes I've gathered:
>
> >> >>  - CPU time quota appears to be calculated based on literal time,
> >> >> rather than e.g. the UNIX concept of "time spent in running state".
>
> >> >>  - I can fetch 100 URLs in 1.3 seconds from a machine colocated in
> >> >> Germany using the asynchronous API. I can't begin to imagine how slow
> >> >> (and therefore expensive in monetary terms) this would be using the
> >> >> standard API.
>
> >> >>  - The user-specified callback function appears to be invoked in a
> >> >> separate thread; the RPC isn't "complete" until this callback
> >> >> completes. The callback thread is still subject to the request
> >> >> deadline.
>
> >> >>  - It's a standard interface, and seems to have no parallel
> >> >> re

[google-appengine] Re: Parallel urlfetch utility class / function.

2009-03-16 Thread Joe Bowman

Does the batch fetching work on live appengine applications, or
only on the SDK?

On Mar 16, 10:19 am, David Wilson  wrote:
> I have no idea how definitive this is, but literally it means wall
> clock time seems to be how CPU cost is measured. I guess this makes
> sense for a few different reasons.
>
> I found some internal function
> "google3.apphosting.runtime._apphosting_runtime___python__apiproxy.get_request_cpu_usage"
> with the docstring:
>
>     Returns the number of megacycles used so far by this request.
>     Does not include CPU used by API calls.
>
> Calling it, then running time.sleep(5), then calling it again,
> indicates thousands of megacycles used, yet in real terms the CPU was
> probably doing nothing. I guess Datastore CPU, etc., is added on top
> of this, but it seems to suggest to me that if you can drastically
> reduce request time, quota usage should drop too.
>
> I have yet to do any kind of rough measurements of Datastore CPU, so
> I'm not sure how correct this all is.
>
> David.
>
>  - One of the guys on IRC suggested this means that per-request cost
> is scaled during peak usage (and thus internal services running
> slower).
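
(Spelled out, the sleep experiment above looks roughly like this; the
dotted module name is the undocumented internal hook quoted in this
thread, so whether it can be imported at all is an assumption.)

import time

def request_megacycles():
    # Undocumented internal hook; not a supported API.
    name = ('google3.apphosting.runtime.'
            '_apphosting_runtime___python__apiproxy')
    mod = __import__(name, fromlist=['get_request_cpu_usage'])
    return mod.get_request_cpu_usage()

before = request_megacycles()
time.sleep(5)   # no real CPU work happens here
after = request_megacycles()
# The delta reported is thousands of megacycles, which is what suggests the
# counter tracks wall-clock time rather than CPU actually burned.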
>
> 2009/3/16 peterk :
>
>
>
>
>
> > A couple of questions re. CPU usage..
>
> > "CPU time quota appears to be calculated based on literal time"
>
> > Can you clarify what you mean here? I presume each async request eats
> > into your CPU budget. But you say:
>
> > "since you can burn a whole lot more AppEngine CPU more cheaply using
> > the async api"
>
> > Can you clarify how that's the case?
>
> > I would guess as long as you're being billed for the cpu-ms spent in
> > your asynchronous calls, Google would let you hang yourself with them
> > when it comes to billing.. :) so I presume they'd let you squeeze in
> > as many as your original request, and its limit, will allow for?
>
> > Thanks again.
>
> > On Mar 16, 2:00 pm, David Wilson  wrote:
> >> It's completely undocumented (at this stage, anyway), but definitely
> >> seems to work. A few notes I've gathered:
>
> >>  - CPU time quota appears to be calculated based on literal time,
> >> rather than e.g. the UNIX concept of "time spent in running state".
>
> >>  - I can fetch 100 URLs in 1.3 seconds from a machine colocated in
> >> Germany using the asynchronous API. I can't begin to imagine how slow
> >> (and therefore expensive in monetary terms) this would be using the
> >> standard API.
>
> >>  - The user-specified callback function appears to be invoked in a
> >> separate thread; the RPC isn't "complete" until this callback
> >> completes. The callback thread is still subject to the request
> >> deadline.
>
> >>  - It's a standard interface, and seems to have no parallel
> >> restrictions at least for urlfetch and Datastore. However, I imagine
> >> that it's possible restrictions may be placed here at some later
> >> stage, since you can burn a whole lot more AppEngine CPU more cheaply
> >> using the async api.
>
> >>  - It's "standard" only insomuch as you have to fiddle with
> >> AppEngine-internal protocolbuffer definitions for each service type.
> >> This mostly means copy-pasting the standard sync call code from the
> >> SDK, and hacking it to use pubsubhubub's proxy code.
>
> >> Per the last point, you might be better waiting for an officially
> >> sanctioned API for doing this, albeit I doubt the protocolbuffer
> >> definitions change all that often.
>
> >> Thanks for Brett Slatkin & co. for doing the digging required to get
> >> the async stuff working! :)
>
> >> David.
>
> >> 2009/3/16 peterk :
>
> >> > Very neat.. Thank you.
>
> >> > Just to clarify, can we use this for all API calls? Datastore too? I
> >> > didn't look very closely at the async proxy in pubsubhubub..
>
> >> > Asynchronous calls available on all apis might give a lot to chew
> >> > on.. :) It's been a while since I've worked with async function calls
> >> > or threading, might have to dig up some old notes to see where I could
> >> > extract gains from it in my app. Some common cases might be worth the
> >> > community documenting for all to benefit from, too.
>
> >> > On Mar 16, 1:26 pm, David Wilson  wrote:
> >> >> I've created a Google Code project to contain some batch utilities I'm
> >> >> working on, based on async_apiproxy.py from pubsubhubbub[0]. The
> >> >> project currently contains just a modified async_apiproxy.py that
> >> >> doesn't require dummy google3 modules on the local machine, and a
> >> >> megafetch.py, for batch-fetching URLs.
>
> >> >>    http://code.google.com/p/appengine-async-tools/
>
> >> >> David
>
> >> >> [0]http://code.google.com/p/pubsubhubbub/source/browse/trunk/hub/async_a...
>
> >> >> --
> >> >> It is better to be wrong than to be vague.
> >> >>   — Freeman Dyson
>
> >> --
> >> It is better to be wrong than to be vague.
> >>   — Freeman Dyson
>
> --
> It is better to be wrong than to be vague.
>   — Freeman Dyson
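
For comparison, later SDKs expose a documented asynchronous urlfetch API
(create_rpc / make_fetch_call). A minimal sketch of fetching several URLs
in parallel with that documented API, rather than the hand-rolled proxy
discussed above:

from google.appengine.api import urlfetch

def fetch_in_parallel(urls, deadline=10):
    rpcs = []
    for url in urls:
        rpc = urlfetch.create_rpc(deadline=deadline)
        urlfetch.make_fetch_call(rpc, url)   # starts the fetch, returns immediately
        rpcs.append((url, rpc))
    results = {}
    for url, rpc in rpcs:
        try:
            results[url] = rpc.get_result()  # blocks until this fetch completes
        except urlfetch.Error:
            results[url] = None
    return results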

[google-appengine] Re: Can Appengine provide Browser information..?

2009-03-16 Thread Joe Bowman

import os

os.environ['REMOTE_ADDR'] is the IP
os.environ['HTTP_USER_AGENT'] is the user agent

I'm sure there are more; those are the two I needed when I created the
sessions utility.

Note: I've found that they don't always populate, more than likely a
per-browser issue. I was confused to see REMOTE_ADDR not always
populated.
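
A small sketch of reading both values in a webapp handler, using .get()
since, as noted, they don't always populate:

import os
from google.appengine.ext import webapp

class WhoAmIHandler(webapp.RequestHandler):
    def get(self):
        ip = os.environ.get('REMOTE_ADDR', 'unknown')
        agent = os.environ.get('HTTP_USER_AGENT', 'unknown')
        self.response.headers['Content-Type'] = 'text/plain'
        self.response.out.write('ip: %s\nuser agent: %s\n' % (ip, agent))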

On Mar 15, 12:06 pm, xml2jsonp  wrote:
> Using JavaScript:
>
> if (/msie/.test(navigator.userAgent.toLowerCase())
> && !/opera/.test(navigator.userAgent.toLowerCase())) {
> [...]
>
> On Mar 15, 7:56 am, jago  wrote:
>
> > Hi,
>
> > Can I write some Python code that creates an HTML which prints if the
> > client is running Firefox or IE ?



[google-appengine] Re: using memcache for caching query results

2009-03-16 Thread Joe Bowman

Check out the cache utility in gaeutilities. 
http://gaeutilities.appspot.com/cache

Looking at the demo, it appears I need to update that page. Anyhow,
cache uses both the datastore and the memcache.

When you write a cache entry, it writes to the datastore, then to
memcache.
When you attempt to read a cache entry, it first tries memcache, then
the datastore.
If there's a hit in the datastore, but not the memcache, it populates
the memcache.

It supports timeout functionality ("my cache hit is only good for 5
minutes"), and it can be used like a standard dictionary object:

c = cache.Cache()
c['cachehit'] = "test value"
if 'cachehit' in c:
    do_something()

It was originally written before appengine had memcache support, and
was updated when that was provided.
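
The write-through/read-through behavior described above looks roughly like
this simplified sketch (not gaeutilities' actual code; it skips the timeout
handling and stores plain strings only):

from google.appengine.api import memcache
from google.appengine.ext import db

class CacheEntry(db.Model):
    value = db.TextProperty()

def cache_set(key, value):
    CacheEntry(key_name=key, value=value).put()   # datastore first
    memcache.set(key, value)                      # then memcache

def cache_get(key):
    value = memcache.get(key)
    if value is not None:
        return value                              # memcache hit
    entry = CacheEntry.get_by_key_name(key)
    if entry is not None:
        memcache.set(key, entry.value)            # repopulate memcache
        return entry.value
    return None                                   # miss in both tiers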

On Mar 3, 8:02 am, Jonathan  wrote:
> I am using a restful interface for an ajax application and want to be
> able to store the results of queries in memcache, as much of this data
> is read much more often than it is written, but it is occasionally
> written.
>
> I have been trying to think of strategies for how to do this, whilst
> also maintaining the ability to invalidate the cache when necessary.
>
> so for example:
> the user requests page 1 of their objects (0-9) and I store them with
> a key of "modelName-userName-pageNum"
> the user requests page 2 of their objects (10-19) and I store them
> with a key of "modelName-userName-pageNum"
> the user modifies an object on page 2, (or deletes it, or creates a
> new one) and I want to invalidate all "modelName-userName" cached
> lists.
>
> how do I do this???
>
> jonathan
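
(Since memcache has no delete-by-prefix, one common way to get that
"invalidate every modelName-userName page" behavior is to fold a generation
marker into the key and replace it on writes. A sketch under that
assumption, not something gaeutilities does for you:)

import time
from google.appengine.api import memcache

def _gen_key(model_name, user_name):
    return 'gen:%s:%s' % (model_name, user_name)

def _generation(model_name, user_name):
    gen = memcache.get(_gen_key(model_name, user_name))
    if gen is None:
        gen = int(time.time())                    # start a fresh namespace
        memcache.set(_gen_key(model_name, user_name), gen)
    return gen

def page_cache_key(model_name, user_name, page_num):
    return '%s-%s-%s-page%s' % (model_name, user_name,
                                _generation(model_name, user_name), page_num)

def invalidate_pages(model_name, user_name):
    # A new generation makes every previously cached page key unreachable;
    # the stale entries just expire out of memcache on their own.
    memcache.set(_gen_key(model_name, user_name), int(time.time()))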



[google-appengine] Re: gaeutilities Session objects in Django app

2009-03-11 Thread Joe Bowman

Hi,

gaeutilities includes a session middleware

Just add it in your settings.py

For example, for one app I have

MIDDLEWARE_CLASSES = (
    'django.middleware.common.CommonMiddleware',
    'common.appengine_utilities.django-middleware.middleware.SessionMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
)

Note: this has only been verified with appengine-patch, and not the
django 0.96 bundled with appengine. Not saying it doesn't work, just
that I've never tried it.

Also, the middleware takes advantage of the new session cookie writer.
This is a large performance improvement, since it stores all session
data in cookies for anonymous requests. In order to switch to the
datastore-backed solution for your logged-in users, you'll need to
reset the session when they log in. The easiest way to do this is,
after you've authenticated the user, to call request.session.save() to
convert the session to the datastore-backed solution.

ie:

user = auth.authenticate()
request.session.save()
auth.login(request, user)

On Mar 11, 12:40 pm, Ritesh Nadhani  wrote:
> Hi
>
> So I was reading the session utility at
> http://code.google.com/p/gaeutilities/wiki/Session. I am using Django
> instead of web.py.
>
> The sample shows code:
>
> self.session = Session()
>
> In django, we don't get an object; rather, a method is called. How can I
> store the session object similar to self.session?



[google-appengine] Re: Somewhat Disappointed.

2009-03-09 Thread Joe Bowman

Server side javascript would be awesome, I have to admit.

The challenge you're running into of having to use both python and
javascript is just the nature of the game right now in most shops.
Don't forget CSS, and making sure both your CSS and javascript work
across all browsers. Ugh.

On Mar 9, 12:12 am, Owen  wrote:
> OK, I *love* GAE getting outa beta and charging.  Yup, I *wanna* pay,
> so that I can get what I'd like.
>
> BUT: the downside is that they haven't offered me what I want.  Django
> templates are fine, as is Python.  But on the client side I still have
> to wrestle with Javascript .. which is also a fine language.
>
> So what's my beef?  That I gotta use *two* languages.  Wimp that I am,
> I'm sorta getting tired of this.  I've written a fairly complicated
> GAE app, with Google Maps (and lately a few JS libraries), and I still
> get the two wonderful languages mixed up.
>
> So if I'm going to start paying, I want some love.  Either:
>
> 1 - A Python environment that emits Javascript for the browser (think
> GWT)
> .. or
> 2 - A server-side Javascript solution, like Aptana & Jaxer .
>
> After looking at the *huge* advances in Javascript code, I'm tempted
> to move from 1 to 2.  Lively Kernel is really nice, as is the JS
> version of Processing.org's graphics.
>
> But Google, as much as I love ya, you're still in beta.  I like your
> approach much more than Amazon's, and I think you're getting there, but
> you're puzzling a lot of us who see you using GWT for high end apps,
> but not giving us GAE folks the full Monty.
>
> So what's your next move?
>
>    -- Owen



[google-appengine] Re: ANNOUCEMENT: gaeutilities version 1.2.1 is now available. session bugfix

2009-03-08 Thread Joe Bowman

Forgot to post the url: http://gaeutilities.appspot.com

On Mar 8, 7:50 pm, "bowman.jos...@gmail.com" 
wrote:
> Not much new in this release. There was an issue where entities with
> empty sid values were getting created along with valid new sessions;
> this bug has been fixed.
> cache has also been expanded to have a has_key() method.
> Work has begun on a pagination library for gaeutilities; more
> information can be found in this post:
> http://groups.google.com/group/appengine-utilities/browse_thread/thre...
>
> Users of version 1.2 (and any other version) are strongly recommended
> to upgrade if you are using session.