[google-appengine] can i access appspot via freenet?

2013-03-06 Thread saintthor
Or, do you have plans to support this in the future?





[google-appengine] Re: App Engine Billing Migration Discounted Instance Hours

2013-03-06 Thread Krzysztof
Good question, I would also like to know the answer.

On Friday, 8 February 2013 at 15:18:35 UTC+1, Jason Cone wrote:

 On Tuesday, we were notified via email of a future change in the way that 
 App Engine handles their billing (changing from weekly to monthly). On 
 Wednesday, I sent a question regarding the change's impact on how 
 discounted instance hours are handled to 
 google-cloud-b...@google.com and have yet to receive a response. 
 Perhaps a Google person here could 
 answer my question (which I'll just copy-and-paste):

 How will these changes affect the purchasing of discounted instance hours? 
 Will that remain a weekly allocation that can be modified week-to-week or 
 will that, too, change to a monthly allocation (and, presumably, be capable 
 of being modified only once a month)?

 Thanks,

 Jason






[google-appengine] Re: Blobstore filename created in MapReduce job too long to create BlobKey

2013-03-06 Thread bmurr
Well, I think I have it solved.

I was using an older version of the mapreduce library, which uses 
mapreduce.lib.files to interact with the blobstore.
The newer version of mapreduce uses google.appengine.api.files instead, 
which doesn't cause this problem.

These two libraries seem pretty similar -- so I'm not sure what precisely 
was causing the issue.
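For reference, a minimal sketch of the import difference described above; the
helper function name is hypothetical, and both modules are assumed to expose
the same files.blobstore helpers:

    # Older mapreduce releases bundled their own copy of the Files API:
    #     from mapreduce.lib import files
    # Newer releases use the SDK module directly, which avoids the problem:
    from google.appengine.api import files

    def create_output_file(mime_type='application/octet-stream'):
        # Create a writable blobstore file and return its internal file name.
        return files.blobstore.create(mime_type=mime_type)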



On Saturday, March 2, 2013 4:25:17 AM UTC, Jamie Niemasik wrote:

 I've been receiving intermittent errors from MapReduce jobs. I'm running 
 Python 2.7.

 The specific error is "BadValueError: name must be under 500 bytes", which 
 is raised when calling datastore.Key.from_path() within 
 blobstore.get_blob_key(); the filename being provided is way too long to 
 make a key from.

 This all occurs within the code in the mapreduce package… nothing in my 
 code seems to affect it.

 Some of the filenames are 288 bytes long, while some are 992. The M/R spec 
 name and id in each case is nearly the same and is very short; I don't see 
 where this variance comes from.

 The sequence of events is this:
 mapreduce.output_writers.init_job() creates a reasonable, short filename 
 and passes it to files.blobstore.create()
 create() calls files.file._create('blobstore', …, filename)
 _create() sets up an rpc with that filename and calls _make_call('Create', 
 ...)

 And that call sometimes returns a filename that's 288 bytes, sometimes 
 992. I have no idea why or how to work around this — any help would be 
 appreciated.

 Thanks,
 Jamie
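For anyone trying to reproduce this, a minimal diagnostic sketch using the same
(now deprecated) Files API mentioned above; the function name is hypothetical.
It creates a blobstore file the way the output writer does and logs how long
the returned internal file name is:

    import logging

    from google.appengine.api import files

    def log_blobstore_filename_length():
        # files.blobstore.create() returns the internal writable file name
        # whose length is at issue in this thread.
        file_name = files.blobstore.create(mime_type='text/plain')
        logging.info('blobstore file name is %d bytes long', len(file_name))
        return file_name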






Re: [google-appengine] Re: Blobstore filename created in MapReduce job too long to create BlobKey

2013-03-06 Thread Jamie Niemasik
: )  I tried the same thing last night, but wasn't ready to declare victory
because I saw new errors. Happily, it turns out those were from old tasks
(with retry counts nearing 1000) whose states were not compatible with the
new MR code. I've purged the queue and cleaned out the associated blobs and
everything's humming along now.

When I first started using MR, I had to make a lot of modifications to get
it working with py2.7 and NDB, using two different versions of the MR lib
that I found. I was hesitant to touch that code since it had been working
perfectly for so long, until sometime in February when I started
experiencing these errors. I'm still not sure why that happened.

But, happily, all I had to do to the svn version of MR this time around was
change mapreduce/main.py to mapreduce.main.APP in include.yaml. Nice!
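For anyone making the same change, a hedged illustration of what that
include.yaml handler entry can look like; the URL pattern and login setting
are taken from a typical copy of the library and may differ in yours:

    handlers:
    - url: /mapreduce(/.*)?
      # old CGI-style reference:
      #   script: mapreduce/main.py
      # new WSGI reference for the python27 runtime:
      script: mapreduce.main.APP
      login: admin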

On Wed, Mar 6, 2013 at 5:13 AM, bmurr bmur...@tethras.com wrote:

 Well, I think I have it solved.

 I was using an older version of the mapreduce library, which uses
 mapreduce.lib.files to interact with the blobstore.
 The newer version of mapreduce uses google.appengine.api.files instead,
 which doesn't cause this problem.

 These two libraries seem pretty similar -- so I'm not sure what precisely
 was causing the issue.



 On Saturday, March 2, 2013 4:25:17 AM UTC, Jamie Niemasik wrote:

 I've been receiving intermittent errors from MapReduce jobs. I'm running
 Python 2.7.

 The specific error is "BadValueError: name must be under 500 bytes", which
 is raised when calling datastore.Key.from_path() within
 blobstore.get_blob_key(); the filename being provided is way too long to
 make a key from.

 This all occurs within the code in the mapreduce package… nothing in my
 code seems to affect it.

 Some of the filenames are 288 bytes long, while some are 992. The M/R
 spec name and id in each case is nearly the same and is very short; I don't
 see where this variance comes from.

 The sequence of events is this:
 mapreduce.output_writers.init_job() creates a reasonable, short
 filename and passes it to files.blobstore.create()
 create() calls files.file._create('blobstore', …, filename)
 _create() sets up an rpc with that filename and calls
 _make_call('Create', ...)

 And that call sometimes returns a filename that's 288 bytes, sometimes
 992. I have no idea why or how to work around this — any help would be
 appreciated.

 Thanks,
 Jamie









Re: [google-appengine] Error: Server Error

2013-03-06 Thread Julie Smith
Please check your log files from the admin console.



On 5 March 2013 23:41, 血未冷 zhiwenzho...@gmail.com wrote:

 Error: Server Error
 The server encountered an error and could not complete your request.

 If the problem persists, please report your problem
 (http://code.google.com/appengine/community.html) and mention this error
 message and the query that caused it.

 What is this problem?









Re: [google-appengine] Re: Datastore Read Operations Quota Whacked-Out?

2013-03-06 Thread Faisal Raja
I just encountered this today and have been looking all over the place to 
find out what consumed my quota. I tried Appstats to check whether what I 
executed in the remote API shell was consuming that much, but everything 
works fine locally. The difference in my case is that I don't have a filter; 
the script just needs to loop over all the data, which is around 12k 
entities, but it stopped at around 1k with an over-quota error. My local 
test of the same script over 1.5k entities shows this in Appstats: 263 
RPCs, cost=405950, billed_ops=[DATASTORE_READ:1505, DATASTORE_WRITE:3006].

So I could probably try adding a filter to the keys and see if the temporary 
fix works, but there might actually be something weird with 
remote_api_shell.py.
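For illustration, a minimal sketch of that kind of full scan, written against
ndb with a hypothetical model name; iterating keys-only keeps the per-entity
read cost down while still touching every row:

    from google.appengine.ext import ndb

    class MyModel(ndb.Model):          # hypothetical model
        name = ndb.StringProperty()

    def count_all(batch_size=500):
        # Keys-only iteration: small operations instead of full entity reads.
        count = 0
        for _key in MyModel.query().iter(keys_only=True,
                                         batch_size=batch_size):
            count += 1
        return count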

On Friday, November 2, 2012 9:32:53 AM UTC-7, Brian wrote:

 I think I have narrowed the problem down to differences in 
 query behavior between the local dev SDK and production appspot. (FYI -- I 
 have used SDKs starting at 1.3.1 through 1.7.2.) Here's what I've found: 

 Via the local dev_server, if you execute a python api query using a filter 
 for a string property set to an empty string, and the local datastore does 
 not contain a match (because they have string values with lengths > 0), then 
 the query will return with no results, just as one would expect. 

 However, if you execute the same query via production appspot on an exact 
 copy of the data, then one of the following results will occur:

 1) The query will never finish and after running for several hours will 
 consume all the datastore read quota until it crashes due to a lack of 
 quota. Typically, the error message will be: 

 google.appengine.runtime.apiproxy_errors.OverQuotaError: The API call 
 datastore_v3.RunQuery() required more quota than is available.


 2) The query will crash due to a time-out error. Typically, the error 
 message will be:

 urllib2.URLError: urlopen error (10060, 'Operation timed out')


 If anyone else runs into this problem, here's the solution that (so-far) 
 is working for me:

 Change the query filter from = '' (empty string) to an inequality filter 
 against ' ' (the space character). Then order (sort) the query results by 
 that same property. Since using an inequality in the filter will not work 
 with the standard (hidden) entity indexes, you will need to build custom 
 indexes.

 If anyone knows of a better solution, let me know!
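For illustration, a sketch of that workaround against the old db API, with
hypothetical model and property names. The comparison operator was lost in the
archived message; '<' is used here because the empty string sorts before a
space, but adjust it to whatever the original filter required. The inequality
plus the matching sort order needs a composite index in index.yaml:

    from google.appengine.ext import db

    class MyModel(db.Model):            # hypothetical model
        my_prop = db.StringProperty()   # hypothetical property

    # Instead of filtering on the empty string:
    #     q = MyModel.all().filter('my_prop =', '')
    # use an inequality against a single space and sort on the same property:
    q = MyModel.all().filter('my_prop <', ' ').order('my_prop')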


  






[google-appengine] gae mapreduce problem

2013-03-06 Thread Jonas Heyden
I am fiddling around with GAE mapreduce and have one question:

Is it possible to change a variable only for a certain job in mapreduce?

The reason I am asking is:

The input CSV and output CSV of my MapReduce job are supposed to have the
same header row; however, the header row ends up somewhere in the output
CSV, but never at the top. To get the right header row, I inserted a counter
into my reduce function that checks the current iteration of the reduce job
and, if it is 0, passes the hard-coded header row to the pipeline. The
counter gets reset when the output CSV is stored in the blobstore.

The problem: more often than not the counter resets itself randomly,
probably because I had to define it as a global variable (reduce_counter = 0)
outside of the function.

Is there any method to chain a variable/parameter to a job or is there any
better way to get the header_row?

I don't think that I can work with the DictReader or csv module as the
output is stored in the blobstore and blobstore objects cannot be altered
as far as I know.
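One possible workaround, offered only as a sketch: since a finalized blob
cannot be modified, write a new blobstore file after the job finishes that
starts with the hard-coded header row and then copies the job's output blob
into it. This uses the (since deprecated) Files API and BlobReader; the header
constant and function name below are hypothetical:

    from google.appengine.api import files
    from google.appengine.ext import blobstore

    HEADER_ROW = 'col_a,col_b,col_c\n'   # hypothetical header row

    def prepend_header(output_blob_key):
        # Create a fresh CSV blob: header first, then the MapReduce output.
        file_name = files.blobstore.create(mime_type='text/csv')
        with files.open(file_name, 'a') as f:
            f.write(HEADER_ROW)
            reader = blobstore.BlobReader(output_blob_key)
            while True:
                chunk = reader.read(65536)
                if not chunk:
                    break
                f.write(chunk)
        files.finalize(file_name)
        # Return the key of the new, header-first blob.
        return files.blobstore.get_blob_key(file_name)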

You can find my code at www.github.com/jvdheyden/ste, in the main.py file.

Thanks!


-- 
Jonas von der Heyden
+49 163 2464010
http://de.linkedin.com/in/jvheyden





[google-appengine] Re: Blobstore filename created in MapReduce job too long to create BlobKey

2013-03-06 Thread Ryan Huebsch
Hi,

We've filed an issue
(https://code.google.com/p/googleappengine/issues/detail?id=8932) to track
this, and we are investigating the problem.


On Friday, March 1, 2013 8:25:17 PM UTC-8, Jamie Niemasik wrote:

 I've been receiving intermittent errors from MapReduce jobs. I'm running 
 Python 2.7.

 The specific error is "BadValueError: name must be under 500 bytes", which 
 is raised when calling datastore.Key.from_path() within 
 blobstore.get_blob_key(); the filename being provided is way too long to 
 make a key from.

 This all occurs within the code in the mapreduce package… nothing in my 
 code seems to affect it.

 Some of the filenames are 288 bytes long, while some are 992. The M/R spec 
 name and id in each case is nearly the same and is very short; I don't see 
 where this variance comes from.

 The sequence of events is this:
 mapreduce.output_writers.init_job() creates a reasonable, short filename 
 and passes it to files.blobstore.create()
 create() calls files.file._create('blobstore', …, filename)
 _create() sets up an rpc with that filename and calls _make_call('Create', 
 ...)

 And that call sometimes returns a filename that's 288 bytes, sometimes 
 992. I have no idea why or how to work around this — any help would be 
 appreciated.

 Thanks,
 Jamie






[google-appengine] Re: Can anyone confirm or deny that around 1st March, Google App Engine tripled the length of Blobstore filenames from around 300 chars up to around 900 chars?

2013-03-06 Thread Ryan Huebsch
We've filed an issue
(https://code.google.com/p/googleappengine/issues/detail?id=8932) to track
this and have begun investigating.

On Tuesday, March 5, 2013 6:39:44 AM UTC-8, Andrew Bindon wrote:

 Can anyone confirm or deny that around 1st March, Google App Engine 
 tripled the length of Blobstore filenames from around 300 chars up to 
 around 900 chars?






[google-appengine] Will I be able to create a Google App (static hosting) with a domain name I used as an alias on a Google Apps for Business primary domain?

2013-03-06 Thread Ken Langston
So if I...
1. Started a Google Apps for Business plan with aaa.com (example) and 
updated MX email settings on GoDaddy
2. Then added bbb.com as an *alias domain* and updated MX email settings on 
GoDaddy

Will I be able to set up static hosting for both domains as Google Apps 
(appspot) with their real domain names (not an appspot subdomain)?

Thanks all for your help!





[google-appengine] Any plan to remove inactive app to free app identifier?

2013-03-06 Thread Edward Fung
I tried to register an identifier for my new project (e.g. 
ihatetheworld.appspot.com) and can't find an available name without a 
hyphen...
I wonder whether that is because bots or people have registered all the 
names and are waiting for someone to buy them.
How many app identifiers are registered without any page actually being 
handled by the app and with no activity for a long period of time?
Is Google going to remove or disable such apps?





[google-appengine] AppScale IRC Office Hours Wednesday 8am PST

2013-03-06 Thread Hannah Anderson
Hi all,
  Join us (AppScale, the open source implementation of App Engine) on IRC 
tomorrow morning at #appscale on freenode.net with any questions or feature 
requests about AppScale, or just to say 'hi'! We're holding office hours 
every Wednesday from 8am PST to 9am PST, so feel free to join us and drop us 
a line!





[google-appengine] Re: Alternate to Conversion API

2013-03-06 Thread Retze Faber
We run an online service that certainly offers the HTML to PDF conversion 
capabilities you had and more. It's quite affordable and very easy to 
implement.

Have a look at http://www.htm2pdf.co.uk/features for the different options 
we offer.

On Tuesday, August 21, 2012 1:00:57 PM UTC+2, aswath wrote:

 Hello,
 We were deeply involved in using the Conversion API for HTML to PDF 
 conversion. Then I suddenly got the email from Google about the plan to 
 decommission it in November 2012.

 Does anyone have suggestions for doing HTML to PDF conversion that is 
 compatible with Google App Engine for Java?


 Regards
 -Aswath
 www.AccountingGuru.in






[google-appengine] Re: Any plan to remove inactive app to free app identifier?

2013-03-06 Thread Joakim
I don't think Google will ever release a taken App ID into the wild.
And besides that, not only does your App ID have to be unique on AppEngine, 
it also cannot be the name of a gmail account, for example.
The solution is to buy a normal domain and tie that to your app. At that 
point, the App ID doesn't matter at all.

On Wednesday, March 6, 2013 4:24:57 PM UTC+1, Edward Fung wrote:

 I tried to register an identifier for my new project (e.g. 
 ihatetheworld.appspot.com) and can't find an available name without a 
 hyphen...
 I wonder whether that is because bots or people have registered all the 
 names and are waiting for someone to buy them.
 How many app identifiers are registered without any page actually being 
 handled by the app and with no activity for a long period of time?
 Is Google going to remove or disable such apps?






[google-appengine] Re: Many cold start problems recently.

2013-03-06 Thread Cesium
Tapir,

This is an ongoing issue we have suffered with.

Get used to it. There will be no (useful) response from G.

David

On Tuesday, March 5, 2013 6:52:28 PM UTC-7, Tapir wrote:
 There is obviously a resident instance there, but the GAE scheduler often 
 ignores it completely and always creates a new instance, then lets the new 
 instance handle the new request.
 What is the purpose of the resident instance? If the scheduler always 
 creates a new instance, then why put a resident instance there?

 It really makes the user experience very bad.





[google-appengine] Re: App Engine Billing Migration Discounted Instance Hours

2013-03-06 Thread Steven Klein
Discounted Instance Hours will continue to be reserved and modifiable on a 
weekly basis.  However, charging of your credit card will switch to monthly.

On Wednesday, March 6, 2013 1:44:30 AM UTC-8, Krzysztof wrote:

 Good question, I would also like to know the answer.

 On Friday, 8 February 2013 at 15:18:35 UTC+1, Jason Cone wrote:

 On Tuesday, we were notified via email of a future change in the way that 
 App Engine handles their billing (changing from weekly to monthly). On 
 Wednesday, I sent a question regarding the change's impact on how 
 discounted instance hours are handled to google-cloud-b...@google.com and 
 have yet to receive a response. Perhaps a Google person here could 
 answer my question (which I'll just copy-and-paste):

 How will these changes affect the purchasing of discounted instance 
 hours? Will that remain a weekly allocation that can be modified 
 week-to-week or will that, too, change to a monthly allocation (and, 
 presumably, be capable of being modified only once a month)?

 Thanks,

 Jason


