Re: [google-appengine] Understanding the value proposition of App Engine for data processing

2015-08-11 Thread 'Tom Kaitchuck' via Google App Engine
I think you may be more interested in Cloud Dataflow:
https://cloud.google.com/dataflow/what-is-google-cloud-dataflow

On Tue, Aug 11, 2015 at 11:30 AM, Duo Zmo duo...@gmail.com wrote:

 I'm just digging into map reduce on Google App Engine, and my early
 results are discouraging. I had in mind that I'd process about 10GB of data
 for an analysis I want to do, and I didn't even think that'd be that big a
 deal (given all the talk about petabyte-scale storage and such), but it's
 currently looking impossible.

 I did a simple word count mapreduce on some Gutenberg books (63MB zipped,
 166MB unzipped), once using Google's Python mapreduce example (
 https://cloud.google.com/appengine/docs/python/dataprocessing/) and once
 using the dumb-as-rocks standalone Python scripts posted at the top of
 Michael Noll's Hadoop tutorial (
 http://www.michael-noll.com/tutorials/writing-an-hadoop-mapreduce-program-in-python/
 ).
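 For context, standalone scripts in that style amount to little more than
 the following (a minimal sketch of the same word-count idea, not the
 tutorial's exact code):

```python
from collections import defaultdict

def mapper(lines):
    # Map phase: emit a (word, 1) pair for every word seen.
    for line in lines:
        for word in line.split():
            yield (word, 1)

def reducer(pairs):
    # Reduce phase: sum the counts for each word.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

# The whole "job" is just function composition over an iterable of lines:
counts = reducer(mapper(["the quick brown fox", "the lazy dog"]))
```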

 Experimental results:

 Simple Python: 1 minute 22 seconds
 GAE dev server: 2 hours 17 minutes 12 seconds

 Given the staggering difference in run time, even if computation in the
 cloud were free, I'd still opt to compute locally unless my hand were
 forced somehow (e.g. input files that didn't fit on my disk). Of course,
 the computation is not free, which means you're not only enduring all that
 overhead, but paying for it too.

 I did try running this same test in production, i.e. on Google Cloud's
 infrastructure. At first it failed, because just getting the job started
 exceeded the 128MB memory limit for the free tier. I turned on billing,
 bumped up the instance class to F4, and let it go. It chewed through the
 free tier quickly, then about USD$8 of instance time before one of the
 shuffle-merge shards seemed to enter an infinite loop (ran for 2 hours, no
 errors in logs). I aborted and gave up at that point.

 Everything I hear about cloud computing makes it sound like the gleaming,
 glossy future, but these results make it seem expensive and slow. $8+ to
 do a mapreduce across 60MB of data just doesn't seem like a good deal to
 me. At that rate, there's no way I can afford to process my 10GB dataset on
 App Engine. I understand that with the pipeline model you get fault
 tolerance and status reports and basic job management, but none of that is
 worth the expense or a 100x performance hit.

 I think there are two possible problems going on here:

 1) I made a technical mistake in my experiment and my results are invalid
 2) I'm not understanding the benefits / value proposition of App Engine

 Are my results consistent with what others would expect? Do either or both
 of my candidate explanations ring true? What else am I not considering? I
 think this is more a discussion topic than a discrete ask-and-answer, which
 is why I'm posting here instead of Stack Overflow.

 --
 You received this message because you are subscribed to the Google Groups
 Google App Engine group.
 To unsubscribe from this group and stop receiving emails from it, send an
 email to google-appengine+unsubscr...@googlegroups.com.
 To post to this group, send email to google-appengine@googlegroups.com.
 Visit this group at http://groups.google.com/group/google-appengine.
 To view this discussion on the web visit
 https://groups.google.com/d/msgid/google-appengine/467aec55-e518-40b8-81b4-d62fb54a3dcb%40googlegroups.com.
 For more options, visit https://groups.google.com/d/optout.




Re: [google-appengine] MapReduce not stopping MR controllers when completed - frontend charges increasing in bill

2015-07-15 Thread 'Tom Kaitchuck' via Google App Engine
I think this may be your problem:
https://github.com/GoogleCloudPlatform/appengine-mapreduce/issues/69

On Wed, Jul 15, 2015 at 9:03 AM, Camilo Silva camilo.si...@citrix.com
wrote:

 So I've been working with Google's MapReduce library for quite a while on
 some Python App Engine projects. To this day, I cannot comprehend why
 there are mrcontrollers alive doing callbacks every time right after all
 processing is done (i.e., all shards terminated successfully). I always
 have to go back and purge the task queue so that unnecessary frontend calls
 are stopped -- this is an issue because it affects billing.
 Any feedback is welcome.
 Thanks for your help.



Re: [google-appengine] Saving an entity from a Reducer invoking my DAO (DatastoreService.put)

2015-07-09 Thread 'Tom Kaitchuck' via Google App Engine
Correct. You still have to deal with slice / shard retries. But it should
work fine.

On Wed, Jul 8, 2015 at 11:58 PM, Antonio Fornié Casarrubios 
antonio.for...@gmail.com wrote:

 Thanks Tom. So is that all? The only difference/disadvantage of this
 approach would be lower performance?


 On Wednesday, July 8, 2015 at 20:52:25 (UTC+2), Tom Kaitchuck wrote:

 The point of DatastoreMutationPool is to provide batching of updates to
 increase throughput. It is fine to use something else.

 On Wed, Jul 8, 2015 at 7:34 AM, Antonio Fornié Casarrubios 
 antonio...@gmail.com wrote:


 Hi all. I didn't find any answer or info for this:

 *Context*: Java Google App Engine project, using Datastore. For a
 certain kind, Sale, I save the entities in two ways:


1. From a typical SaleDAO#save(saleEntity)
2. During MapReduce, from my SaleReducer#emit(saleEntity)


 So I was investigating how to reuse some behavior as part of both cases,
 so I made something like:

 class SaleDAO {
     private DatastoreService datastore;

     public void save(Entity entity) {
         // My common behavior that I want to reuse
         datastore.put(entity);
         // Some more common behavior that I want to reuse
     }
 }



 And then for the MapReduce part, I extended OutputWriter to call my DAO,
 instead of the DatastoreMutationPool#put(entity) method provided for that
 purpose.


 public class SaleOutputWriter extends OutputWriter<Entity> {
     @Override
     public void write(Entity entity) {
         // pool.put(entity); -- invoke the DAO instead
         saleDAO.save(entity);
     }
 }




    1. My surprise is that it works! Well, the MR jobs seem to go to a
    strange state, but the entities are correctly saved. Still, I'm 99% sure
    that this must be a bad practice or bring some problems. Can somebody
    tell me?
    2. If that's not the case, why are we given one API for MapReduce and
    another for normal persistence? Would invoking DatastoreMutationPool.put
    be the same as DatastoreService.put (any implementation)?

 Honestly, this idea sounded crazy in my head and now that it seems to
 work I want to know more about it.



Re: [google-appengine] Saving an entity from a Reducer invoking my DAO (DatastoreService.put)

2015-07-08 Thread 'Tom Kaitchuck' via Google App Engine
The point of DatastoreMutationPool is to provide batching of updates to
increase throughput. It is fine to use something else.
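To illustrate the batching point (an illustrative sketch only, not the
library's actual implementation; `batch_put` is a stand-in for a batched
datastore write such as a multi-entity `DatastoreService.put`):

```python
class BufferingPool:
    """Sketch of what a mutation pool does: buffer writes and flush
    them in batches so many entities share one RPC."""
    def __init__(self, batch_put, batch_size=100):
        self.batch_put = batch_put      # callable taking a list of entities
        self.batch_size = batch_size
        self.buffer = []

    def put(self, entity):
        self.buffer.append(entity)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.buffer:
            self.batch_put(self.buffer)  # one batched write for the group
            self.buffer = []
```

Compared with calling the underlying write once per entity (as a plain DAO
would), the trade-off is mainly that entities sit in the buffer until a
flush, which is why throughput improves.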

On Wed, Jul 8, 2015 at 7:34 AM, Antonio Fornié Casarrubios 
antonio.for...@gmail.com wrote:


 Hi all. I didn't find any answer or info for this:

 *Context*: Java Google App Engine project, using Datastore. For a certain
 kind, Sale, I save the entities in two ways:


1. From a typical SaleDAO#save(saleEntity)
2. During MapReduce, from my SaleReducer#emit(saleEntity)


 So I was investigating how to reuse some behavior as part of both cases,
 so I made something like:

 class SaleDAO {
     private DatastoreService datastore;

     public void save(Entity entity) {
         // My common behavior that I want to reuse
         datastore.put(entity);
         // Some more common behavior that I want to reuse
     }
 }



 And then for the MapReduce part, I extended OutputWriter to call my DAO,
 instead of the DatastoreMutationPool#put(entity) method provided for that
 purpose.


 public class SaleOutputWriter extends OutputWriter<Entity> {
     @Override
     public void write(Entity entity) {
         // pool.put(entity); -- invoke the DAO instead
         saleDAO.save(entity);
     }
 }




    1. My surprise is that it works! Well, the MR jobs seem to go to a
    strange state, but the entities are correctly saved. Still, I'm 99% sure
    that this must be a bad practice or bring some problems. Can somebody
    tell me?
    2. If that's not the case, why are we given one API for MapReduce and
    another for normal persistence? Would invoking DatastoreMutationPool.put
    be the same as DatastoreService.put (any implementation)?

 Honestly, this idea sounded crazy in my head and now that it seems to
 work I want to know more about it.



Re: [google-appengine] App Engine Java Map Reduce Completion Callback

2015-02-04 Thread 'Tom Kaitchuck' via Google App Engine
You can use pipelines:
https://github.com/GoogleCloudPlatform/appengine-pipelines
to chain things to run after a MR job. One of the included examples does
this:
https://github.com/GoogleCloudPlatform/appengine-mapreduce/blob/master/java/example/src/com/google/appengine/demos/mapreduce/entitycount/ChainedMapReduceJob.java

On Thu, Jan 29, 2015 at 10:21 AM, Jim jeb62...@gmail.com wrote:

 Hello everyone,

 I've just started working with Java Map Reduce on App Engine, and I want
 to take advantage of the Completion Callback function so I can write a
 servlet that processes the final output of my Map Reduce job.  I'd like to
 drop the output data into a task and then persist it into my datastore.
 Ikai Lan's tutorial has been very helpful to me as I work my way through
 this, however it was written 4 1/2 years ago and the Map Reduce library has
 progressed a lot since then.

 In particular, Ikai's page
 http://ikaisays.com/2010/07/09/using-the-java-mapper-framework-for-app-engine/
 shows how to define a URI for a callback after completion of your Map
 Reduce job.  He shows how to define a job in mapreduce.xml which includes a
 setting for mapreduce.appengine.donecallback.url.   However, with the
 version I'm using, configuration of jobs is done in code using the
 MapReduceSpecification builder.  I can't find any reference there or in any
 of the other configuration objects for input, mapper, reducer, output, etc.
 to set a URI callback like Ikai describes when using the old mapreduce.xml
 config file.

 I have a feeling I'm overlooking something simple.  Can somebody point me
 in the right direction?

 If the callback method is no longer available, I'm thinking a custom
 output method would be the right place to capture my final output and drop
 to the task queue.  Any other ideas?

 Thanks for your help,

 Jim



Re: [google-appengine] MapReduce import woes

2014-11-13 Thread 'Tom Kaitchuck' via Google App Engine
Take a look at the build.sh script:
https://github.com/GoogleCloudPlatform/appengine-mapreduce/blob/master/python/build.sh
It compiles (and runs) the demo application using the checked out Mapreduce.
It currently has an issue that makes it kind of annoying:
https://github.com/GoogleCloudPlatform/appengine-mapreduce/issues/17

However if you just want to depend on MapReduce in your application, the
easiest thing is to get it from Pypi.



Re: [google-appengine] Will cloud Dataflows replace the MapReduce/Pipeline API?

2014-07-29 Thread 'Tom Kaitchuck' via Google App Engine
Yes. Multiple namespaces are not directly supported. Given a finite known
set, the simplest thing to do would be to construct an InMemoryInput and
have the mapper operate on a namespace. You could also, for instance,
create a new input class that iterates over namespaces.

Both of the above are predicated on the idea that the mapper can operate on
a namespace as a single item. If this is not practical, there is not much
that can be done. It is not possible to issue / shard a datastore query
that crosses namespaces, so there is no good way to shard the data.
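As a sketch of the "finite known set" idea (a hypothetical helper, not part
of the library's API): distribute the known namespaces across shards up
front, then let each mapper invocation treat one namespace as a single
input item.

```python
def shard_namespaces(namespaces, shard_count):
    # Round-robin the known namespaces across shards; each assigned
    # namespace is then processed by the mapper as one input item.
    shards = [[] for _ in range(shard_count)]
    for i, ns in enumerate(namespaces):
        shards[i % shard_count].append(ns)
    return shards
```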


On Tue, Jul 29, 2014 at 4:26 AM, Aswath Satrasala 
aswath.satras...@gmail.com wrote:

 That is correct. It supports one namespace. If there are 1000 namespaces,
 then we have to set up 1000 mapreduce jobs and monitor them.
 That is more work and less convenient if you are doing data processing
 often.
 Please star this issue if you are using namespaces and mapreduce:
 https://code.google.com/p/appengine-mapreduce/issues/detail?id=108




 On Fri, Jul 25, 2014 at 12:13 AM, 'Tom Kaitchuck' via Google App Engine 
 google-appengine@googlegroups.com wrote:

 Mapreduce does support namespaces:

 https://code.google.com/p/appengine-mapreduce/source/browse/trunk/java/src/main/java/com/google/appengine/tools/mapreduce/inputs/DatastoreInput.java#28


 On Thu, Jul 24, 2014 at 12:28 AM, Aswath Satrasala 
 aswath.satras...@gmail.com wrote:

 Currently, mapreduce does not support namespaces well and takes a long
 time to set up and code.
 Any idea when Dataflow will be available to the public? Will it support
 processing namespaced data in App Engine?

 -Aswath



 On Fri, Jun 27, 2014 at 12:18 AM, 'Tom Kaitchuck' via Google App Engine
 google-appengine@googlegroups.com wrote:

 No



Re: [google-appengine] Will cloud Dataflows replace the MapReduce/Pipeline API?

2014-07-24 Thread 'Tom Kaitchuck' via Google App Engine
Mapreduce does support namespaces:
https://code.google.com/p/appengine-mapreduce/source/browse/trunk/java/src/main/java/com/google/appengine/tools/mapreduce/inputs/DatastoreInput.java#28


On Thu, Jul 24, 2014 at 12:28 AM, Aswath Satrasala 
aswath.satras...@gmail.com wrote:

 Currently, mapreduce does not support namespaces well and takes a long
 time to set up and code.
 Any idea when Dataflow will be available to the public? Will it support
 processing namespaced data in App Engine?

 -Aswath



 On Fri, Jun 27, 2014 at 12:18 AM, 'Tom Kaitchuck' via Google App Engine 
 google-appengine@googlegroups.com wrote:

 No



Re: [google-appengine] Will cloud Dataflows replace the MapReduce/Pipeline API?

2014-06-26 Thread 'Tom Kaitchuck' via Google App Engine
No



Re: [google-appengine] HTTP 429 Returned for /mapreduce/workerCallback task

2014-06-20 Thread 'Tom Kaitchuck' via Google App Engine
The 429 is the mapreduce framework pushing back to the task queue because
there is not enough memory on the instance to handle the number of requests
it was given. This isn't necessarily a problem, and if there are no other
issues, can be safely ignored. However if it is occurring a lot, you could
improve performance by lowering max-concurrent-requests in your modules
configuration:
https://developers.google.com/appengine/docs/java/modules/#Java_Configuration

You should be able to see the progress of the shards in the lower right
panel of the UI. Provided all of the shards are making progress, you
probably have nothing to worry about.
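For a Java module, the setting mentioned above goes in appengine-web.xml; a
sketch might look like the following (the module name and the value 8 are
only examples, not recommendations):

```xml
<appengine-web-app xmlns="http://appengine.google.com/ns/1.0">
  <module>mapreduce-worker</module>
  <automatic-scaling>
    <!-- Fewer concurrent requests per instance leaves more memory
         headroom for each mapreduce worker task. -->
    <max-concurrent-requests>8</max-concurrent-requests>
  </automatic-scaling>
</appengine-web-app>
```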


On Mon, Jun 9, 2014 at 9:56 AM, Bill Speirs bill.spe...@gmail.com wrote:


 I'm attempting to run a map-reduce job from Java. The job kicked off and
 started without a hitch, but now it's stuck waiting on ~34 tasks to finish.
 The job is stuck in the ExamineStatusAndReturnResult phase, after the
 ShardedJob phase. I see ~34 tasks in my queue all of the
 form: POST /mapreduce/workerCallback/map-*hex-numbers* They all say that
 the previous run returned a 429 Too Many Requests (
 http://tools.ietf.org/html/rfc6585#section-4). I'm guessing I've hit some
 kind of limit/quota, but I cannot tell what/where this quota is.

 How can I find out what is causing the 429 response code?

 Thanks!



Re: [google-appengine] Re: Best way to update 400,000 entities at once?

2014-02-28 Thread Tom Kaitchuck
This is exactly the sort of task that MapReduce was meant for. It should be
a lot easier than managing the partitioning, error recovery, etc. yourself.
Take a look at our new docs:
https://developers.google.com/appengine/docs/java/dataprocessing/mapreduce_library
Hopefully they make it less overwhelming.


On Thu, Feb 27, 2014 at 5:04 AM, de Witte wd.dewi...@gmail.com wrote:

 Use a backend instance and keep it running until done.

 Or

 Use two tasks: one for retrieving 1000 keys at a time, and a second one
 to update the entities in batches of 1000.

 I've done this for 300,000 entities in less than 20 minutes. ~300 tasks
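 The two-task pattern above boils down to something like this (an
 illustrative sketch; `fetch_keys` and `update_batch` are hypothetical
 stand-ins for a keys-only datastore query and a batched read-modify-write):

```python
def plan_batches(keys, batch_size=1000):
    # Split the full key list into chunks; each chunk becomes one task.
    return [keys[i:i + batch_size] for i in range(0, len(keys), batch_size)]

def run_update(fetch_keys, update_batch, batch_size=1000):
    # Task 1 gathers the keys; each batch is then handed to a task that
    # runs update_batch (e.g. batch get, mutate, batch put).
    for batch in plan_batches(fetch_keys(), batch_size):
        update_batch(batch)
```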

 On Friday, February 7, 2014 at 22:43:33 UTC+1, Keith Lea wrote:

 Hi everyone,

 I'm a long time App Engine user for my app's backend, but I'm really
 still a novice about the datastore.

 I'd like to add a new property (and index) for all entities of a certain
 type. I have about 400,000 of this type of entity in the datastore, and I'd
 like to load each one, add a property, and save it back to the datastore.
 400,000 times.

 This will obviously take a long time, so I'd really like to split it up
 into ~100 tasks that each take 1/100th of the entities (~4,000 entities)
 and perform this operation.

 But I really don't know how to do this using queries, and the Java
 MapReduce library is overwhelmingly complicated.

 So how can I create 100 tasks that each take a unique chunk of the
 entities to operate on? Is this called sharding? Is there a way for a
 task to say give me entity #200,000 thru #204,000? (My entity's keys are
 strings, which were generated by my application and generally look like
 928348-com.example-iOS.)

 I'm using Java and Objectify btw. Thanks for any help or guidance!!

 Keith



Re: [google-appengine] Re: 1.9.0 Pre-Release SDKs are now available.

2014-02-28 Thread Tom Kaitchuck
The new docs are here:
https://developers.google.com/appengine/docs/java/dataprocessing/
These are a replacement for the ones on code.google.com
Kirill: Your compatibility question is addressed here:
https://developers.google.com/appengine/docs/java/dataprocessing/mapreduce_update


On Mon, Feb 10, 2014 at 4:50 PM, Kirill Lebedev
k.lebe...@electionear.comwrote:

 Thanks for this information. Actually, one of our App Engine apps is
 already on 1.9.0, and I have a compatibility question based on your notes:

 We are actively using the Java version of the MapReduce library. We are
 still on the 0.2 version (Blobstore-based intermediate storage and
 InMemory Shuffling). We tried to update our system to 0.3 and 0.4
 (current SVN state) and it was not successful. Moreover, the 0.4 SVN
 version has import com.google.appengine.api.labs.modules.ModulesService;
 and import com.google.appengine.api.labs.modules.ModulesServiceFactory;
 imports in MapReduceJob.java. So how will this release affect MapReduce
 compatibility? Will the 0.2 version still work on 1.9.0? Do you have any
 plans to publicly release the 0.4 version of Java MapReduce reflecting
 the 1.9.0 changes? This is critical because our code relies on those
 libraries.

 Thanks,
 Kirill Lebedev

 On Tuesday, February 4, 2014 at 17:43:41 UTC-8, Richmond Manzana wrote:

 We want to inform you that the pre-release SDKs for Python, PHP and Java
 are now available.

 As previously announced in a Google Code site announcement
 (http://google-opensource.blogspot.com/2013/05/a-change-to-google-code-download-service.html),
 new App Engine binaries are no longer available at:
 http://code.google.com/p/googleappengine/downloads/list

 Older binaries will remain available at the code.google.com site.

 1.9.0 Pre-release SDKs are now available at these links:

 App Engine 1.9.0 Java prerelease SDK:
 http://commondatastorage.googleapis.com/appengine-sdks%2Ffeatured%2Fappengine-java-sdk-1.9.0_prerelease.zip

 App Engine 1.9.0 Python prerelease SDK:
 http://commondatastorage.googleapis.com/appengine-sdks%2Ffeatured%2Fgoogle_appengine-1.9.0_prerelease.zip

 App Engine 1.9.0 PHP prerelease SDK:
 http://commondatastorage.googleapis.com/appengine-sdks%2Ffeatured%2Fgoogle_appengine-php-sdk-1.9.0_prerelease.zip

 In the future, please look forward to finding the latest binaries at
 https://developers.google.com/appengine/downloads


 Also, please see the pre-release notes below.

 Cheers,

 Richmond Manzana
 Technical Program Manager
 Google App Engine

 App Engine SDK - Pre-Release Notes

 Version 1.9.0

 Python & PHP
 ==
 - New App Engine Application Identifiers must now start with a letter,
   in addition to the existing requirements that the identifier be 6-30
   characters consisting of letters, numbers, and hyphens, and not start
   or end with a hyphen.

 Python
 ==
 - The size limit on the Search API is now computed and enforced on a
   per-index basis, rather than for the app as a whole. The per-index
   limit is now 10GB. There is no fixed limit on the number of indexes,
   or on the total amount of Search API storage an application may use.
 - Users now have the ability to embed images in emails via the Content-Id
   attachment header.
 https://code.google.com/p/googleappengine/issues/detail?id=965
 https://code.google.com/p/googleappengine/issues/detail?id=10503
 - Fixed an issue with NDB backup/restore corrupting certain compressed
   entities.
 https://code.google.com/p/googleappengine/issues/detail?id=8599

 PHP
 ==
 - The PHP interpreter has been upgraded from PHP 5.4.19 to PHP 5.4.22.
 - Autoloading is now available in the SDK, so developers will no longer
   need to explicitly require SDK files.
 - Expanded the php.ini setting google_appengine.allow_include_gs_buckets
   to allow a path filter to be included for improved security.
 - A warning message now appears if an application moves a user-uploaded
   file to a Google Cloud Storage bucket/path, because the code may be
   included and lead to a local file inclusion vulnerability.
 - Added API functions CloudStorageTools::getMetadata() and
   CloudStorageTools::getContentType() for retrieving the metadata and
   content type of Google Cloud Storage objects.
 https://code.google.com/p/googleappengine/issues/detail?id=10182
 - Fixed an issue with GCS folders not displaying correctly in Developers
   Console.
 - Fixed an issue with PHP_SELF and SCRIPT_NAME not being implemented
 correctly.
 https://code.google.com/p/googleappengine/issues/detail?id=9989
 https://code.google.com/p/googleappengine/issues/detail?id=10478

 Java
 ==
 - Java 6 applications cannot be deployed to Google App Engine from any
 version
   of the SDK. Existing Java 6 applications will continue to run. If you
 are
   still relying on a Java 6 application in Google App Engine, we strongly
   encourage you to start 

Re: [google-appengine] Re: Issue with Cloud Storage GCS API on local Development Server

2013-12-19 Thread Tom Kaitchuck
Namespaces are a datastore-specific concept. It caused trouble in this
instance because the local mock of GCS uses the datastore for its metadata.
Real GCS does not depend on datastore, so this problem will not occur with
deployed code.
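
The workaround described below (save the current namespace, switch to the
empty one for the GCS calls, then restore) can be packaged as a small
helper. Here is a runnable sketch of that pattern, using a hypothetical
ThreadLocal as a stand-in for the SDK's NamespaceManager; the class and
method names are mine, not from the SDK:

```java
import java.util.function.Supplier;

public class NamespaceScope {
  // Hypothetical stand-in for com.google.appengine.api.NamespaceManager,
  // which tracks a per-thread "current namespace".
  private static final ThreadLocal<String> CURRENT = new ThreadLocal<>();

  public static void set(String ns) { CURRENT.set(ns); }

  public static String get() { return CURRENT.get(); }

  // Run a task in the empty (null) namespace, restoring the caller's
  // namespace afterwards, even if the task throws.
  public static <T> T inEmptyNamespace(Supplier<T> task) {
    String saved = get();
    set(null);
    try {
      return task.get();
    } finally {
      set(saved);
    }
  }

  public static void main(String[] args) {
    set("5629499534213120");
    String inside = inEmptyNamespace(NamespaceScope::get);
    System.out.println("inside=" + inside + " after=" + get());
    // prints: inside=null after=5629499534213120
  }
}
```

Wrapping each GCS read/write this way keeps the restore in a finally
block, so an exception mid-call cannot leave the thread stuck in the
wrong namespace.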


On Fri, Dec 13, 2013 at 6:05 AM, Troy High troy.h...@metablock.com wrote:

 I was having the exact same problem with namespaces and the local dev
 server not honoring them for some entries. I applied the same workaround as
 Ken and am able to retrieve files programmatically.

 So is this a bug in the Dev server implementation or does this mean cloud
 storage does not officially support namespaces?  I haven't pushed this code
 to the google servers yet so I am not sure if I would encounter the same
 issue without the workaround.

 Thanks,
 Troy


  On Friday, November 22, 2013 12:40:05 PM UTC-5, k...@form-runner.com wrote:

 Nailed the culprit!
 We make extensive use of namespaces, and in my original post, there
 turns out to be an unfortunate interaction between namespaces and the GCS
 client API.

 Down in my original post, where I laid out all the data I could find in
 the Development Console, you will see that there are entries in the empty
 namespace and a different namespace:

 In __GsFileInfo__ (in the empty Namespace)
 ….blah … blah

 In Namespace 5629499534213120 (where everything ran):
 In _ah_FakeCloudStorate_formrunnerbucket-r7yh23nb2:
 …. blah…. blah

  The code for writing into GCS ran in namespace 5629499534213120, but it
  wrote part of its data into the empty namespace.

 The (successful) experiment was to ensure that all the GCS API code
 (read and write) runs in the empty namespace, conceptually thus:

  private byte[] readFromFile(GcsFilename fullGcsFilename) throws IOException
  {
    String in_ns = NamespaceManager.get();
    NamespaceManager.set(null);
    int fileSize = (int) gcsService.getMetadata(fullGcsFilename).getLength();
    ByteBuffer result = ByteBuffer.allocate(fileSize);
    GcsInputChannel readChannel = gcsService.openReadChannel(fullGcsFilename, 0);
    try {
      readChannel.read(result);
    } finally {
      readChannel.close();
    }
    byte[] toreturn = result.array();
    NamespaceManager.set(in_ns);
    return toreturn;
  }

 Now the FileNotFound does not occur, and I do get (some) bytes back,
 about 1/10 of the 1.2MB.  Presumably when I switch to ObjectStreaming, I'll
 get all.
 Cheers,
 --Ken





Re: [google-appengine] Issue with Cloud Storage GCS API on local Development Server

2013-11-21 Thread Tom Kaitchuck
I'm a bit confused by your statement.
If you want to run in the DevAppserver or in AppEngine then you don't need
to use the LocalServiceTestHelper at all. The LocalExample only does that
as a demo of how to run it as a local executable or if you want to use it
within Junit.
If you don't include such a line, as in GcsExampleServlet, then it should
work as part of your deployed application.


On Thu, Nov 21, 2013 at 3:41 PM, Ken Bowen k...@form-runner.com wrote:

 Thanks Tom.
 I went through pretty much the same servlet exercise a couple of hours
 before seeing your post this afternoon.
 The problem was that I missed the point in the docs that the code /must/
 run in a servlet.
 (I'm not clear where it's stated, but I guessed from the comment about the
 test harness in LocalExample.)
 I assumed that it would be ok to run in a deployed app on AppEngine.
 Thanks again,
 --Ken

 On Nov 21, 2013, at 2:51 PM, Tom Kaitchuck wrote:

  Using the 1.8.8 version of the SDK and depending on appengine-gcs-client
 0.3.3, the following servlet based on your above example works:
 
  public class GcsTest extends HttpServlet {
private void writeToFile(GcsService gcsService, GcsFilename
 fullGcsFilename, byte[] content)
throws IOException {
   System.out.println("writeToFile:full=" + fullGcsFilename.toString());
  GcsOutputChannel outputChannel =
  gcsService.createOrReplace(fullGcsFilename,
 GcsFileOptions.getDefaultInstance());
  outputChannel.write(ByteBuffer.wrap(content));
  outputChannel.close();
}
 
 
private byte[] readFromFile(GcsService gcsService, GcsFilename
 fullGcsFilename)
throws IOException {
   System.out.println("readFromFile:full=" +
  fullGcsFilename.toString());
  int fileSize = (int)
 gcsService.getMetadata(fullGcsFilename).getLength();
  ByteBuffer result = ByteBuffer.allocate(fileSize);
  GcsInputChannel readChannel =
 gcsService.openReadChannel(fullGcsFilename, 0);
  try {
readChannel.read(result);
  } finally {
readChannel.close();
  }
  return result.array();
}
 
@Override
public void doGet(HttpServletRequest req, HttpServletResponse resp)
 throws IOException {
  GcsService gcsService = GcsServiceFactory.createGcsService();
   GcsFilename fullGcsFilename = new GcsFilename("Foo", "Bar");
  byte[] content = new byte[] {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
  writeToFile(gcsService, fullGcsFilename, content);
  byte[] result = readFromFile(gcsService, fullGcsFilename);
  PrintWriter writer = resp.getWriter();
  writer.append(Arrays.toString(content));
  writer.append(Arrays.toString(result));
}
  }
 
  I'm not really sure what could be different about your setup to cause
 that. Try creating a new project with the minimal possible dependencies and
 run the servlet above and see what happens.
 
 
 
  On Tue, Nov 19, 2013 at 12:57 PM, k...@form-runner.com wrote:
  I've encountered a problem using the Google Cloud Storage GCS Client
 API, running on the local development server.  I'm trying to write the
 bytes from a PDF file, and then read them back.
 
  The code appears to write the (local fake)GCS file ok: (1) There appears
 to be an appropriate entry in the Development Console, and (2) there's a
 physical file in ~war/WEB-APP/appengine-generated (details below).
  However, when I attempt to read the bytes from the GCS file, it throws a
 FileNotFound exception when it attempts to get the metadata (filesize).
 
  First, here's the core code, versions of GCS client read/write, with my
 debugging stmts left in for reference:
 
  private void writeToFile(GcsFilename fullGcsFilename, byte[] content)
throws IOException
  {
  System.out.println("writeToFile:full=" + fullGcsFilename.toString());
  GcsOutputChannel outputChannel =
gcsService.createOrReplace(fullGcsFilename,
 GcsFileOptions.getDefaultInstance());
  outputChannel.write(ByteBuffer.wrap(content));
  outputChannel.close();
  }
 
  private byte[] readFromFile(GcsFilename fullGcsFilename)
throws IOException
  {
  System.out.println("readFromFile:full=" + fullGcsFilename.toString());
  int fileSize = (int)
 gcsService.getMetadata(fullGcsFilename).getLength();   [*][Exception thrown
 here]
  ByteBuffer result = ByteBuffer.allocate(fileSize);
  GcsInputChannel readChannel =
 gcsService.openReadChannel(fullGcsFilename, 0);
  try {
readChannel.read(result);
  } finally {
readChannel.close();
  }
  return result.array();
  }
 
  Here's the debugging output (in/out filenames appear to be the same):
  
  writeToFile:full=GcsFilename(formrunnerbucket-r7yh23nb2,
 FA/MasterFormStore-6649846324789248)
 
  
  readFromFile:full=GcsFilename(formrunnerbucket-r7yh23nb2,
 FA/MasterFormStore-6649846324789248)
 
  
  Here's the observed results:
 
  IN ~war/WEB-APP/appengine-generated:
 
  -rw-r--r--  1 ken  staff  1679407 Nov 19 09:19

Re: [google-appengine] Issue with Cloud Storage GCS API on local Development Server

2013-11-21 Thread Tom Kaitchuck
Correct. It does not have to be directly in the servlet. For example App
Engine MapReduce: https://code.google.com/p/appengine-mapreduce/
uses the GCS client to write out data, but it is many levels removed from a
servlet.


On Thu, Nov 21, 2013 at 4:44 PM, Ken Bowen k...@form-runner.com wrote:

 Hi Tom,

 Re:
  you don't need to use the LocalServiceTestHelper

 Understood.  I ran LocalExample once when I began, just to follow the
 documentation.

 The original description I posted is from a running app on the DevServer,
 no TestHelper involved.  In fact, the test jars are not even in the project.

 After posting, I was reviewing LocalExample to make comparisons, and the
 phrase

 ...run locally as opposed to in a deployed servlet

 caught my eye.  So I wrote a version of what I posted as a servlet (grabs
 a 1.2MB PDF out of a resource, stores it with the GCS library, retrieves
 it, and writes out the number of bytes it got).  That works fine.

 If you say this doesn't /have/ to be directly in a servlet, I'll dig in
 further to my original.

 Cheers,
 --Ken

 On Nov 21, 2013, at 5:07 PM, Tom Kaitchuck wrote:

  I'm a bit confused by your statement.
  If you want to run in the DevAppserver or in AppEngine then you don't
 need to use the LocalServiceTestHelper at all. The LocalExample only does
 that as a demo of how to run it as a local executable or if you want to use
 it within Junit.
  If you don't include such a line, like in GcsExampleServlet then it
 should work as part of your deployed application.
 
 
  On Thu, Nov 21, 2013 at 3:41 PM, Ken Bowen k...@form-runner.com wrote:
  Thanks Tom.
  I went through pretty much the same servlet exercise a couple of hours
 before seeing your post this afternoon.
  The problem was that I missed the point in the docs that the code
 /must/ run in a servlet.
  (I'm not clear where it's stated, but I guessed from the comment about
 the test harness in LocalExample.)
  I assumed that it would be ok to run in a deployed app on AppEngine.
  Thanks again,
  --Ken
 
  On Nov 21, 2013, at 2:51 PM, Tom Kaitchuck wrote:
 
   Using the 1.8.8 version of the SDK and depending on
 appengine-gcs-client 0.3.3, the following servlet based on your above
 example works:
  
   public class GcsTest extends HttpServlet {
 private void writeToFile(GcsService gcsService, GcsFilename
 fullGcsFilename, byte[] content)
 throws IOException {
    System.out.println("writeToFile:full=" +
  fullGcsFilename.toString());
   GcsOutputChannel outputChannel =
   gcsService.createOrReplace(fullGcsFilename,
 GcsFileOptions.getDefaultInstance());
   outputChannel.write(ByteBuffer.wrap(content));
   outputChannel.close();
 }
  
  
 private byte[] readFromFile(GcsService gcsService, GcsFilename
 fullGcsFilename)
 throws IOException {
    System.out.println("readFromFile:full=" +
  fullGcsFilename.toString());
   int fileSize = (int)
 gcsService.getMetadata(fullGcsFilename).getLength();
   ByteBuffer result = ByteBuffer.allocate(fileSize);
   GcsInputChannel readChannel =
 gcsService.openReadChannel(fullGcsFilename, 0);
   try {
 readChannel.read(result);
   } finally {
 readChannel.close();
   }
   return result.array();
 }
  
 @Override
 public void doGet(HttpServletRequest req, HttpServletResponse resp)
 throws IOException {
   GcsService gcsService = GcsServiceFactory.createGcsService();
    GcsFilename fullGcsFilename = new GcsFilename("Foo", "Bar");
   byte[] content = new byte[] {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
   writeToFile(gcsService, fullGcsFilename, content);
   byte[] result = readFromFile(gcsService, fullGcsFilename);
   PrintWriter writer = resp.getWriter();
   writer.append(Arrays.toString(content));
   writer.append(Arrays.toString(result));
 }
   }
  
   I'm not really sure what could be different about your setup to cause
 that. Try creating a new project with the minimal possible dependencies and
 run the servlet above and see what happens.
  
  
  
   On Tue, Nov 19, 2013 at 12:57 PM, k...@form-runner.com wrote:
   I've encountered a problem using the Google Cloud Storage GCS Client
 API, running on the local development server.  I'm trying to write the
 bytes from a PDF file, and then read them back.
  
   The code appears to write the (local fake)GCS file ok: (1) There
 appears to be an appropriate entry in the Development Console, and (2)
 there's a physical file in ~war/WEB-APP/appengine-generated (details
 below).  However, when I attempt to read the bytes from the GCS file, it
 throws a FileNotFound exception when it attempts to get the metadata
 (filesize).
  
   First, here's the core code, versions of GCS client read/write, with
 my debugging stmts left in for reference:
  
   private void writeToFile(GcsFilename fullGcsFilename, byte[] content)
 throws IOException
   {
   System.out.println(writeToFile:full

Re: [google-appengine] Issue with Cloud Storage GCS API on local Development Server

2013-11-21 Thread Tom Kaitchuck
Using the 1.8.8 version of the SDK and depending on appengine-gcs-client
0.3.3, the following servlet based on your above example works:

public class GcsTest extends HttpServlet {
  private void writeToFile(GcsService gcsService, GcsFilename
fullGcsFilename, byte[] content)
  throws IOException {
System.out.println("writeToFile:full=" + fullGcsFilename.toString());
GcsOutputChannel outputChannel =
gcsService.createOrReplace(fullGcsFilename,
GcsFileOptions.getDefaultInstance());
outputChannel.write(ByteBuffer.wrap(content));
outputChannel.close();
  }


  private byte[] readFromFile(GcsService gcsService, GcsFilename
fullGcsFilename)
  throws IOException {
System.out.println("readFromFile:full=" + fullGcsFilename.toString());
int fileSize = (int)
gcsService.getMetadata(fullGcsFilename).getLength();
ByteBuffer result = ByteBuffer.allocate(fileSize);
GcsInputChannel readChannel =
gcsService.openReadChannel(fullGcsFilename, 0);
try {
  readChannel.read(result);
} finally {
  readChannel.close();
}
return result.array();
  }

  @Override
  public void doGet(HttpServletRequest req, HttpServletResponse resp)
throws IOException {
GcsService gcsService = GcsServiceFactory.createGcsService();
GcsFilename fullGcsFilename = new GcsFilename("Foo", "Bar");
byte[] content = new byte[] {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
writeToFile(gcsService, fullGcsFilename, content);
byte[] result = readFromFile(gcsService, fullGcsFilename);
PrintWriter writer = resp.getWriter();
writer.append(Arrays.toString(content));
writer.append(Arrays.toString(result));
  }
}

I'm not really sure what could be different about your setup to cause that.
Try creating a new project with the minimal possible dependencies and run
the servlet above and see what happens.



On Tue, Nov 19, 2013 at 12:57 PM, k...@form-runner.com wrote:

 I've encountered a problem using the Google Cloud Storage GCS Client API,
 running on the local development server.  I'm trying to write the bytes
 from a PDF file, and then read them back.

 The code appears to write the (local fake)GCS file ok: (1) There appears
 to be an appropriate entry in the Development Console, and (2) there's a
 physical file in ~war/WEB-APP/appengine-generated (details below).
  However, when I attempt to read the bytes from the GCS file, it throws a
 FileNotFound exception when it attempts to get the metadata (filesize).

 First, here's the core code, versions of GCS client read/write, with my
 debugging stmts left in for reference:

 private void writeToFile(GcsFilename fullGcsFilename, byte[] content)
  throws IOException
 {
 System.out.println("writeToFile:full=" + fullGcsFilename.toString());
 GcsOutputChannel outputChannel =
   gcsService.createOrReplace(fullGcsFilename,
 GcsFileOptions.getDefaultInstance());
 outputChannel.write(ByteBuffer.wrap(content));
 outputChannel.close();
 }

 private byte[] readFromFile(GcsFilename fullGcsFilename)
 throws IOException
 {
 System.out.println("readFromFile:full=" + fullGcsFilename.toString());
 int fileSize = (int)
 gcsService.getMetadata(fullGcsFilename).getLength();   [*][Exception thrown
 here]
 ByteBuffer result = ByteBuffer.allocate(fileSize);
 GcsInputChannel readChannel =
 gcsService.openReadChannel(fullGcsFilename, 0);
 try {
   readChannel.read(result);
 } finally {
   readChannel.close();
 }
 return result.array();
 }

 Here's the debugging output (in/out filenames appear to be the same):
 
 writeToFile:full=GcsFilename(formrunnerbucket-r7yh23nb2,
 FA/MasterFormStore-6649846324789248)

 
 readFromFile:full=GcsFilename(formrunnerbucket-r7yh23nb2,
 FA/MasterFormStore-6649846324789248)

 
 Here's the observed results:

 IN ~war/WEB-APP/appengine-generated:

 -rw-r--r--  1 ken  staff  1679407 Nov 19 09:19
 encoded_gs_key:L2dzL2Zvcm1ydW5uZXJidWNrZXQtcjd5aDIzbmIyL0ZBL01hc3RlckZvcm1TdG9yZS02NjQ5ODQ2MzI0Nzg5MjQ4
 This is the expected PDF file, which can be opened (with Preview on a Mac).

 
 In the Development Console (http://localhost:/_ah/admin/datastore):
 In __GsFileInfo__ (in the empty Namespace)

 Key:
 ag5mb3JtcnVubmVyLWhyZHJ7CxIOX19Hc0ZpbGVJbmZvX18iZ2VuY29kZWRfZ3Nfa2V5OkwyZHpMMlp2Y20xeWRXNXVaWEppZFdOclpYUXRjamQ1YURJemJtSXlMMFpCTDAxaGMzUmxja1p2Y20xVGRHOXlaUzAyTmpRNU9EUTJNekkwTnpnNU1qUTQM

 ID/Name:
 encoded_gs_key:L2dzL2Zvcm1ydW5uZXJidWNrZXQtcjd5aDIzbmIyL0ZBL01hc3RlckZvcm1TdG9yZS02NjQ5ODQ2MzI0Nzg5MjQ4

 content_type: application/octet-stream

 filename:
  /gs/formrunnerbucket-r7yh23nb2/FA/MasterFormStore-6649846324789248

 size: 1679407

 [[The ID/Name appears to be identical to the filename appearing in
 ~war/WEB-APP/appengine-generated]]

 ---
 In the Development Console (http://localhost:/_ah/admin/datastore),
 in Namespace 5629499534213120 (where everything ran):
 In _ah_FakeCloudStorate_formrunnerbucket-r7yh23nb2:

 Key:
 

Re: [google-appengine] Have a front end for Cloud Storage.

2013-11-06 Thread Tom Kaitchuck
The App Engine GCS client (https://code.google.com/p/appengine-gcs-client/)
has no size restriction. If you want to upload to/from App Engine, it is
the preferred solution.
If you want users to upload and download externally, the preferred solution
is to use createUploadUrl:
https://developers.google.com/appengine/docs/java/javadoc/com/google/appengine/api/blobstore/BlobstoreService#createUploadUrl(java.lang.String,
com.google.appengine.api.blobstore.UploadOptions)
for uploading data and serve:
https://developers.google.com/appengine/docs/java/javadoc/com/google/appengine/api/blobstore/BlobstoreService#serve(com.google.appengine.api.blobstore.BlobKey,
HttpServletResponse)
for download.

The advantage of createUploadUrl and serve over having the user post and
get from the bucket directly (which of course works) is that they give your
application more control: you can run code in response to each request,
which lets you do things like custom permissions, logging, rate limiting,
etc.
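
A minimal sketch of that flow, assuming the standard App Engine Java
Blobstore API (the servlet class, URL path, and form field name here are
hypothetical, and the sketch compiles only against the App Engine SDK):

```java
import java.io.IOException;
import java.util.List;
import java.util.Map;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import com.google.appengine.api.blobstore.BlobKey;
import com.google.appengine.api.blobstore.BlobstoreService;
import com.google.appengine.api.blobstore.BlobstoreServiceFactory;

// Hypothetical handler: clients GET a one-time upload URL, POST their
// file to it, and App Engine re-dispatches the completed upload here.
public class UploadServlet extends HttpServlet {
  private final BlobstoreService blobstore =
      BlobstoreServiceFactory.getBlobstoreService();

  @Override
  public void doGet(HttpServletRequest req, HttpServletResponse resp)
      throws IOException {
    // Hand out the URL the client's upload form should post to.
    resp.getWriter().print(blobstore.createUploadUrl("/upload"));
  }

  @Override
  public void doPost(HttpServletRequest req, HttpServletResponse resp)
      throws IOException {
    // Your code runs on every completed upload: this is where custom
    // permissions, logging, or rate limiting can be enforced.
    Map<String, List<BlobKey>> uploads = blobstore.getUploads(req);
    BlobKey key = uploads.get("file").get(0); // "file" = form field name
    blobstore.serve(key, resp);               // stream the blob back out
  }
}
```

Serving the blob back through the same servlet is only to keep the sketch
short; a real app would typically record the BlobKey and serve downloads
from a separate handler.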



On Wed, Nov 6, 2013 at 5:59 AM, Vinny P vinny...@gmail.com wrote:

 On Tue, Nov 5, 2013 at 3:56 PM, Pushpinder Jaswal 
 pushpinder.jas...@gmail.com wrote:

 I would like to know the best API and practices for this purpose, also
 how should I proceed in this scenario for example have a client to add
 objects through App Engine to cloud storage or directly add objects to
 cloud storage. (I would like to mention that the objects that I am talking
 about would be more than 32 Mb in size.)



 Both options are fine, but the better option would probably be to add
 objects through the App Engine service. You can use the Blobstore upload
 handler to write to GCS. Also, this way there is one central point to
 manage permissioning, bucket management, etc.

  Files larger than 32MB are fine; just note that you'll have to use the
  Blobstore upload and serving options.


 On Tue, Nov 5, 2013 at 3:56 PM, Pushpinder Jaswal 
 pushpinder.jas...@gmail.com wrote:

 I would like have options to maintain the permissions and deleting
 buckets as well.



 That can be easily done through the XML/JSON Cloud Storage API, or by
 using the Java client library.


 -
 -Vinny P
  Technology & Media Advisor
 Chicago, IL

 App Engine Code Samples: http://www.learntogoogleit.com





Re: [google-appengine] Will 1.8.2 Datastore Admin Scheduled Backup tool support the new Cloud Storage Client Library?

2013-07-09 Thread Tom Kaitchuck
You don't have to use the datastore backup format to get data into
BigQuery. (You could if you wanted; the code is open source.) But BigQuery
supports JSON and CSV directly, which is often easier.
We actually have a guide on extracting data from Datastore and using a
MapReduce to load it into BigQuery here:
https://developers.google.com/bigquery/articles/datastoretobigquery
This guide was specifically made because it is quite common to want to
either import only a subset of the data into BigQuery or to first run some
transform on it.
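
As a concrete illustration of the "transform first" point: a mapper that
emits CSV for a BigQuery load job needs RFC 4180 style quoting, which
BigQuery accepts. A minimal, hypothetical helper (class and method names
are mine, not from the guide):

```java
public class BqCsv {
  // Quote a single field: wrap it in double quotes when it contains a
  // comma, quote, or newline, doubling any embedded quotes (RFC 4180).
  static String quote(String field) {
    if (field.contains(",") || field.contains("\"") || field.contains("\n")) {
      return "\"" + field.replace("\"", "\"\"") + "\"";
    }
    return field;
  }

  // Join quoted fields into one CSV record for BigQuery ingestion.
  public static String toCsvLine(String... fields) {
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < fields.length; i++) {
      if (i > 0) sb.append(',');
      sb.append(quote(fields[i]));
    }
    return sb.toString();
  }

  public static void main(String[] args) {
    System.out.println(toCsvLine("6649846324789248", "plain", "a \"quoted\", value"));
    // prints: 6649846324789248,plain,"a ""quoted"", value"
  }
}
```

Emitting one such line per entity from the mapper yields a file that a
CSV load job can ingest directly, with no DATASTORE_BACKUP format needed.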

Following the guide, it shouldn't take much effort to get things up and
working. In SVN there is already an output writer (still being tested)
included in the mapreduce library that writes using the GCS client library,
which should allow you to run without using the Files API at all. (You can
use it now; its name starts with an _ as we are still testing it. Once that
work is completed it will be renamed to remove the _.)

We are actively working on baking all of this in, to make things much
easier, but I can't commit to a specific date. If your only concern is
whether this is going to happen in a timely way, then you should wait. That
being said, running your own MR a la the example above is a good option
anyway, as it gives you more control over exactly what you are putting into
BigQuery and the format it is in.



On Tue, Jul 9, 2013 at 6:30 AM, Jason Collins jason.a.coll...@gmail.com wrote:

 Thanks Tom.

 The next part of my story is that we use the backup files with a BigQuery
 ingestion job - that is, the BigQuery ingestion job uses the native output
 from the Datastore Admin Scheduled Backups stored on Cloud Storage.
 ('sourceFormat': 'DATASTORE_BACKUP')

 Presumably, I'd also have to replicate the format if I were to roll my own
 GCS/MapReduce hybrid and continue to use the same BigQuery ingestion
 approach.

 Any suggestions on that front? Or maybe just an approximate ETA and save
 me a bunch of work? ;)

 j


 On Monday, 8 July 2013 18:23:38 UTC-6, Tom Kaitchuck wrote:

 This is something we are working hard on. We're updating many code paths
 to fix a lot of issues and migrate over to the GCS client. Changes won't
 roll out in one big release, rather updates will be released as they are
 completed. If you don't want to wait, it is absolutely supported to use the
 GCS client to write out data from within your own MapReduce.


  On Thu, Jul 4, 2013 at 9:01 AM, Jason Collins jason.a...@gmail.com wrote:

  Our backups are really flaky and take a long time, apparently because
  of the sketchy Files API link.

 Will the GAE 1.8.2 release of Datastore Admin Scheduled Backup tool
 support the new Cloud Storage Client Library?

 This is all wrapped up in MapReduce/Pipelines, so there are a lot of
 moving parts. My goal is to simply use stock, Google-supplied tools and
 have this stuff be resilient and predictable.

 Any info is appreciated. If something in this area is not imminent, we
 will have to start looking for alternate ways to achieve reliable backups.

 Thanks,
 j













Re: [google-appengine] How to upload files to blobstore programmatically , without form.

2013-07-09 Thread Tom Kaitchuck
If you want to write to blobstore programmatically you can
call createUploadUrl as many times as you like and pass those to whatever
is going to do the upload. This is supported and not going away.
If you want to upload to GCS you can use the Manager as Vinny mentioned, or
you can use gsutil from a command line or script:
https://developers.google.com/storage/docs/gsutil
or you can use the GCS client from within your App Engine application.


On Mon, Jul 1, 2013 at 11:40 AM, Vinny P vinny...@gmail.com wrote:

 On Mon, Jul 1, 2013 at 6:40 AM, omair.shams...@arbisoft.com wrote:

  Hi! I want to upload many files to the GAE blobstore, but I want to do
  that programmatically instead of using the form and browsing for the
  file: for example, uploading all files in a particular folder to the GAE
  blobstore. Is there any method to do so?



 Writing programmatically to the blobstore is deprecated. What you can do
 is create a Google Cloud Storage bucket, and use the GCS Manager (
 https://developers.google.com/storage/docs/gsmanager ) to upload files.
 GCS Manager supports drag and drop, so you can simply drag files from your
 computer to your storage bucket.

 -
 -Vinny P
  Technology & Media Advisor
 Chicago, IL

 App Engine Code Samples: http://www.learntogoogleit.com











Re: [google-appengine] Will 1.8.2 Datastore Admin Scheduled Backup tool support the new Cloud Storage Client Library?

2013-07-08 Thread Tom Kaitchuck
This is something we are working hard on. We're updating many code paths to
fix a lot of issues and migrate over to the GCS client. Changes won't roll
out in one big release, rather updates will be released as they are
completed. If you don't want to wait, it is absolutely supported to use the
GCS client to write out data from within your own MapReduce.


On Thu, Jul 4, 2013 at 9:01 AM, Jason Collins jason.a.coll...@gmail.com wrote:

 Our backups are really flaky and take a long time, apparently because of
 the sketchy Files API link.

 Will the GAE 1.8.2 release of Datastore Admin Scheduled Backup tool
 support the new Cloud Storage Client Library?

 This is all wrapped up in MapReduce/Pipelines, so there are a lot of
 moving parts. My goal is to simply use stock, Google-supplied tools and
 have this stuff be resilient and predictable.

 Any info is appreciated. If something in this area is not imminent, we
 will have to start looking for alternate ways to achieve reliable backups.

 Thanks,
 j









Re: [google-appengine] appengine-gcs-client library and acl

2013-06-18 Thread Tom Kaitchuck
It is not currently possible to do that with that library. However you can
set and edit permissions in much more complex ways using this library:
https://developers.google.com/storage/docs/json_api/v1/api-lib/java

There is an ongoing discussion as to how these two libraries should
interrelate. If you have any thoughts on that, please email me off-list.
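The distinction in this thread — a predefined (canned) ACL rides along as the x-goog-acl request header, while fine-grained per-user grants cannot be expressed that way and need the JSON API — can be sketched as follows. This is an illustrative sketch, not library code: `build_upload_headers` and `CANNED_ACLS` are hypothetical names; the header name and the canned-ACL values are taken from the GCS access control documentation linked above.

```python
# Canned ACLs accepted by the x-goog-acl header, per the GCS
# access control docs (assumption: list may not be exhaustive).
CANNED_ACLS = {
    "private", "public-read", "public-read-write",
    "authenticated-read", "bucket-owner-read", "bucket-owner-full-control",
}

def build_upload_headers(mime_type, acl=None):
    """Build request headers for a GCS XML-API style upload.

    A predefined (canned) ACL travels as the x-goog-acl header; anything
    more fine-grained (e.g. a grant to a single user) cannot be expressed
    this way and needs the JSON API instead.
    """
    if acl is not None and acl not in CANNED_ACLS:
        raise ValueError("not a predefined ACL; use the JSON API: %r" % acl)
    headers = {"Content-Type": mime_type}
    if acl is not None:
        headers["x-goog-acl"] = acl
    return headers
```

So `acl("public-read")` in GcsFileOptions simply selects one of these canned values; there is no header value that means "only this one app", which is why the question above has no answer within the appengine-gcs-client library alone.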


On Thu, Jun 13, 2013 at 2:08 PM, dragan slice.of.life@gmail.com wrote:

 I analyzed the code a bit. It seems to me that the acl method only sets
 the x-goog-acl header which is only good for applying *predefined* ACLs
 to buckets and objects. I need something like this:
 https://developers.google.com/storage/docs/accesscontrol#aclquery
 Is it possible to do this with the current library?


 On Wednesday, June 12, 2013 11:57:30 PM UTC+2, Tom Kaitchuck wrote:

 Whatever you specify as part of the GcsFileOptions will be passed
 directly to GCS. So you can refer to the GCS access control documentation
 for how to set things and how various ACLs will be interpreted:
 https://developers.google.com/storage/docs/accesscontrol#About-Access-Control-Lists
 If you go to the cloud console, you should be able to see the email
 addresses associated with your account including the one that is used for
 the App Engine Application's identity.


 On Wed, Jun 12, 2013 at 4:17 AM, dragan slice.of...@gmail.com wrote:

 I'm trying to convert my current code from the File API to the Cloud
 Storage API (using the appengine-gcs-client library:
 https://code.google.com/p/appengine-gcs-client/).
 I want to ensure that only the GAE application has access to the
 newly created files on Cloud Storage. How can I do that? The examples I
 have seen all use the following code or something similar:

 GcsFileOptions options = new GcsFileOptions.Builder().mimeType("text/xml").acl("public-read").build();


 This makes the file accessible to everybody for reading. What should I
 put in the acl call to make the file accessible only to the GAE
 application? I figured that I can add the application to the team as an
 editor, but I cannot figure out the next step. A code snippet would be nice.
 Thanks in advance.













Re: [google-appengine] appengine-gcs-client library and acl

2013-06-12 Thread Tom Kaitchuck
Whatever you specify as part of the GcsFileOptions will be passed directly
to GCS. So you can refer to the GCS access control documentation for how to
set things and how various ACLs will be interpreted:
https://developers.google.com/storage/docs/accesscontrol#About-Access-Control-Lists
If you go to the cloud console, you should be able to see the email
addresses associated with your account including the one that is used for
the App Engine Application's identity.


On Wed, Jun 12, 2013 at 4:17 AM, dragan slice.of.life@gmail.com wrote:

 I'm trying to convert my current code from the File API to the Cloud
 Storage API (using the appengine-gcs-client library:
 https://code.google.com/p/appengine-gcs-client/).
 I want to ensure that only the GAE application has access to the
 newly created files on Cloud Storage. How can I do that? The examples I
 have seen all use the following code or something similar:

 GcsFileOptions options = new GcsFileOptions.Builder().mimeType("text/xml").acl("public-read").build();


 This makes the file accessible to everybody for reading. What should I put
 in the acl call to make the file accessible only to the GAE application? I
 figured that I can add the application to the team as an editor, but I
 cannot figure out the next step. A code snippet would be nice.
 Thanks in advance.









Re: [google-appengine] Re: 1.8.1 Pre-release SDKs Available.

2013-06-10 Thread Tom Kaitchuck
Vinny:
The intended replacement for the Files API is not the library you link to,
but the one here: https://code.google.com/p/appengine-gcs-client/
It is very deliberately modeled after the Files API.


On Sat, Jun 8, 2013 at 8:33 AM, Vinny P vinny...@gmail.com wrote:

 Hi Chris, thanks for stopping by.

 *My need: Better libraries.*

 What I liked about the Files API (and particularly in regards to the
 Blobstore) is that it made writing files so unbelievably easy. For example,
 in Java all I needed to do was get an instance of FileService, then I could
 write and read using openWriteChannel/openReadChannel. The Files API
 handled the dirty part of configuring access to the datastore, managing the
 write, etc. Frankly, I think the Files API is one of the best engineered
 parts of GAE (give whoever wrote that API a raise and a promotion,
 please!).

 But you look at the javadoc for the Java Cloud Storage library, and it's
 an utter mess. See for yourself:
 https://developers.google.com/resources/api-libraries/documentation/storage/v1beta2/java/latest/
  .
 For one, there's not enough examples. Two, I have to mess around with
 BucketAccessControls and Builders and a whole mess of things. Chris, I just
 want to write some files to persistent storage, I don't want to have to
 micromanage everything else and deal with extra fluff. I'll micromanage if
 I have to, but the Blobstore took care of that for me.

 Get the guy who wrote the Files API and put him to work on writing the GCS
 library.

 -
 -Vinny P
 Technology & Media Advisor
 Chicago, IL

 My Go side project: http://invalidmail.com/


 On Saturday, June 8, 2013 1:04:23 AM UTC-5, Chris Ramsdale wrote:

 a bunch of great feedback that we'll continue to address.  in regards to
 timeline, we have no plans of decommissioning this API before end of year.
 that said, assuming the following:

- App Engine <-> Google Cloud Storage performance is equivalent to
(if not better than) App Engine <-> Blobstore
- all blobs were auto-migrated over to Cloud Storage (free of charge)
- all existing URLs just worked

 what would keep you from migrating over to a Cloud Storage-based solution?

 -- Chris









Re: [google-appengine] Re: 1.8.1 Pre-release SDKs Available.

2013-06-10 Thread Tom Kaitchuck
Jeff: Replies inline


On Sat, Jun 8, 2013 at 9:49 AM, Jeff Schnitzer j...@infohazard.org wrote:


 Some questions:

  * Will the current upload mechanism be preserved? Looking through the
 docs it appears the answer is that you create a signed url directly into
 GCS and have the client POST/PUT it, which seems like it should be
 compatible with the existing BlobstoreService.getUploadUrl() approach. But
 how do we get notification when the upload is complete? Right now the
 blobstore upload mechanism gives us a callback, and I do important things
 on this callback.


This will continue to function as it does now. This API is not affected.


  * Will this work with the image service the way the blobstore does now? I
 transform, resize, and crop images on the fly - this rarely-lauded feature
 is actually one of my favorite parts of GAE.


Yes. In fact you can already use the image service with files in GCS and
blobstore using the same API.


  * Will existing blobstore-based image urls be preserved? I have a lot of
 these in my datastore.


Today's announcement will have no effect on files stored in Blobstore.


  * What does the GAE dev environment do with the GCS apis? What about the
 Local Unit Testing framework?


It will work and use local disk as the backing store.


 As long as there are sane answers to these questions, I have no objection
 to GCS... although it will require that I rewrite some code:

  * I read PDF data out of the blobstore using the files api, send it off
 to a service for transformation into an image, then write the image back to
 the blobstore. This sounds pretty straightforward with GCS.

  * I de-dup all uploaded images using the hash, and track image
 references. This means I have a lot of data referencing BlobKeys in the
 datastore. This brings up the question, if data is migrated from Blobstore
 to GCS, what are the new keys? Will it be clear how to migrate this data?


You can generate a BlobKey from an item in GCS, so this code would not need
to change much.
Data migration is not being done, nor is it necessary for you to plan for
at this time.


 I don't object to rewriting code as long as the migration path is clear. I
 can appreciate consolidating development effort around a single
 blobstore-ish offering.

 Thanks,
 Jeff







Re: [google-appengine] Re: 1.8.1 Pre-release SDKs Available.

2013-06-10 Thread Tom Kaitchuck
Jon, the three criteria you specify are all available today:
You can use the Images API with files in GCS by getting a BlobKey for them,
which can be done by calling getBlobKey.
File upload is supported.
Headers can be specified at upload time.
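The dynamic-resize convention referred to in this thread (the =s parameter) amounts to appending a size suffix to an image serving URL. A minimal sketch, assuming only that serving URLs accept a trailing =sNN size option as the Images API documents; `resize_url` is a hypothetical helper and the URL in the usage note is made up for illustration:

```python
def resize_url(serving_url, size):
    """Return a serving URL asking for the image at the given max dimension.

    Serving URLs (from get_serving_url on a blob or GCS object) accept a
    trailing size option such as "=s200"; this drops any existing option
    and appends a new one.
    """
    base = serving_url.split("=")[0]   # strip any existing "=s..." option
    return "%s=s%d" % (base, size)
```

For example, `resize_url("http://lh3.googleusercontent.com/abc", 200)` yields a URL ending in `=s200` that the image frontend would serve resized, which is the "dynamic resize" behavior jon asks about below.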

On Sat, Jun 8, 2013 at 5:26 PM, jon jonni.g...@gmail.com wrote:

 I will migrate to GCS if:
 * All conditions stated by Chris Ramsdale are met
 * It is fully compatible with Blobstore's dynamic resize feature (e.g. the
 =s parameter still works)
 * It allows for a file upload (especially from mobile apps) to be
 completed in one HTTP request
 * It sets far future expiry header






Re: [google-appengine] Re: 1.8.1 Pre-release SDKs Available.

2013-06-05 Thread Tom Kaitchuck
Jon: The example you are pointing to is using two APIs. In Java, things
under the package com.google.appengine.api.blobstore are _not_ affected by
this announcement.
Things under the package com.google.appengine.api.files _are_ affected by
this announcement.
We will be providing a migration guide for how to change code to use the
Google Cloud Storage Client Library:
https://code.google.com/p/appengine-gcs-client/
This library is in Preview as of 1.8.1 and as such is guaranteed to move to
GA.

Just to emphasize, this is a deprecation announcement, not a decommission.
So your application will _not_ break with 1.8.1, but you should begin
looking at how to port it to the new library.
If you have questions about how to do this, feel free to create a thread on
this group.



On Wed, Jun 5, 2013 at 3:28 AM, jon jonni.g...@gmail.com wrote:

 Chris just to clarify, this is *not* being deprecated is it?
 https://developers.google.com/appengine/docs/java/blobstore/overview#Writing_Files_to_the_Blobstore

 I sure hope not because I'm using it heavily in my apps.









Re: [google-appengine] MapReduce Failures

2013-05-30 Thread Tom Kaitchuck
A RetrySliceError will result in a retry. If it is not being retried, it
could be that you have the max reattempts on your task queue set too low.
(Because MapReduce manages retries based on its own configuration, it is
safe to set this to unlimited.) Also you may want to take a look at shard
retry: https://code.google.com/p/appengine-mapreduce/wiki/PythonShardRetry
which is a new feature designed to make Python MapReduce more reliable.
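The interaction described here can be sketched generically: the queue retries a failing slice with exponential backoff starting from min_backoff_seconds, and if the queue's attempt limit is exhausted before a retry succeeds, the shard fails permanently. This is an illustrative sketch, not MapReduce or Task Queue code; `run_with_retries` is a hypothetical helper.

```python
import time

def run_with_retries(task, max_attempts, min_backoff_seconds=1,
                     sleep=time.sleep):
    """Retry a task with exponential backoff, mirroring task-queue retries.

    A low max_attempts can abort the work before a transient failure
    clears, which is why setting the queue's retry limit high (or
    unlimited) is the safe choice when MapReduce manages its own retries.
    """
    delay = min_backoff_seconds
    for attempt in range(1, max_attempts + 1):
        try:
            return task()
        except Exception:
            if attempt == max_attempts:
                raise            # out of attempts: fails permanently
            sleep(delay)
            delay *= 2           # exponential backoff between attempts
```

With min_backoff_seconds=1 the waits go 1s, 2s, 4s, ...; lowering it, as Ranjit did, just shortens the gaps between reattempts.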


On Fri, May 24, 2013 at 8:20 AM, Ranjit Chacko rjcha...@gmail.com wrote:

 I'm seeing shards abruptly fail in my MR jobs for no apparent reason and
 without retrying:

 task_name=appengine-mrshard-1581047187783C3601732-14-2-retry-0
 app_engine_release=1.8.0 instance=00c61b117c53a40e120ac864168a3fe51c2ce

 Shard 1581047187783C3601732-14 failed permanently.

 Is there some adjustment I can make to my queue parameters to avoid or
 reduce these issues?

 Recently I had been having problems with MR jobs throwing UnknownErrors
 and ApplicationError followed by RetrySliceErrors, and setting the
 min_backoff_seconds to 1 seemed to help with reducing the retry errors.











[google-appengine] Re: Attention Java MapReduce users

2013-05-29 Thread Tom Kaitchuck
My fault. I forgot to update the revision of the GCS client the ant
build.xml loads. If you want the fix right away just change the build.xml
to have the line where it defines the property gcsversion to be
r54. (You will also need to delete the jar if it has been previously
downloaded into the same workspace.)

I'll push out an update later today.


On Wed, May 29, 2013 at 1:32 PM, Carter Maslan car...@maslan.com wrote:

 We get compile errors when we build the latest version of the Java
 MapReduce library against Java SDK 1.8.0 with ant. The error is
 "Builder() has private access". Is there an existing fix for that?

 [javac]
 /mr2/appengine-mapreduce-read-only/java/src/com/google/appengine/tools/mapreduce/outputs/GoogleCloudStorageFileOutputWriter.java:39:
 Builder() has private access in
 com.google.appengine.tools.cloudstorage.GcsFileOptions.Builder
 [javac] GCS_SERVICE.createOrReplace(file, new
 GcsFileOptions.Builder().mimeType(mimeType).build());
 [javac]   ^
 [javac]
 /mr2/appengine-mapreduce-read-only/java/src/com/google/appengine/tools/mapreduce/outputs/GoogleCloudStorageFileOutputWriter.java:39:
 cannot find symbol
 [javac] symbol  : method mimeType(java.lang.String)
 [javac] location: class
 com.google.appengine.tools.cloudstorage.GcsFileOptions.Builder
 [javac] GCS_SERVICE.createOrReplace(file, new
 GcsFileOptions.Builder().mimeType(mimeType).build());
 [javac]
 ^





 On Wed, May 22, 2013 at 4:48 PM, Tim Jones palan...@gmail.com wrote:

 Awesome, I downloaded a few minutes ago and have been running against it
 with no problems.  Thanks!


 On Wed, May 22, 2013 at 4:46 PM, Tom Kaitchuck tkaitch...@google.comwrote:

 The issue mentioned above (The NPE when the last item was already
 written) has been fixed in the version 464 in the public svn.
 https://code.google.com/p/appengine-mapreduce/source/detail?r=464


 On Mon, May 20, 2013 at 6:32 PM, Tom Kaitchuck tkaitch...@google.comwrote:

 This is a bug I am working on. It occurs when the last record to be
 written encounters a keyOrderingException (Meaning it was already written
 but an ACK was not received so it was retried.) So it should be rare and
 the retry of the shuffle that was added should cause it to be harmless.

 If you are seeing any broader problems send me a message off-list with
 your appId.

 I'll post an update here when a patch has been pushed out.



 On Mon, May 20, 2013 at 4:28 PM, Tim Jones palan...@gmail.com wrote:

 After upgrading, I'm getting the following NullPointerException in the
 InMemoryShuffler.  Is this a known issue?

 Caused by: java.lang.NullPointerException
 at
 com.google.appengine.tools.mapreduce.impl.InMemoryShuffleJob$1.run(InMemoryShuffleJob.java:234)
  at
 com.google.appengine.tools.mapreduce.impl.InMemoryShuffleJob$1.run(InMemoryShuffleJob.java:231)
 at
 com.google.appengine.tools.mapreduce.impl.util.RetryHelper.doRetry(RetryHelper.java:62)
  at
 com.google.appengine.tools.mapreduce.impl.util.RetryHelper.runWithRetries(RetryHelper.java:101)
 at
 com.google.appengine.tools.mapreduce.impl.InMemoryShuffleJob.closeFinally(InMemoryShuffleJob.java:231)
  at
 com.google.appengine.tools.mapreduce.impl.InMemoryShuffleJob.writeOutput(InMemoryShuffleJob.java:227)
 at
 com.google.appengine.tools.mapreduce.impl.InMemoryShuffleJob.writeOutputs(InMemoryShuffleJob.java:243)
  at
 com.google.appengine.tools.mapreduce.impl.InMemoryShuffleJob.run(InMemoryShuffleJob.java:253)
 at
 com.google.appengine.tools.mapreduce.impl.InMemoryShuffleJob.run(InMemoryShuffleJob.java:42)
  ... 47 more

 On Wednesday, May 1, 2013 1:32:51 PM UTC-7, Tom Kaitchuck wrote:

 If you are using the experimental Java MapReduce library for App
 Engine, you are strongly encouraged to update to the latest version of the
 library in the public svn:
 https://code.google.com/p/appengine-mapreduce/source/checkout


 Background:

 We are rolling out a fix to a long standing interaction bug between
 the experimental MapReduce library and the experimental Files API that, 
 in
 certain circumstances, results in dropped data. Specifically this bug can
 cause some records emitted by the Map to be excluded from the input to
 Reduce.

 The bugfix involves patches to both the Files API and Java MapReduce.
 Unfortunately older versions of the Java MapReduce library running 
 against
 the patched Files API will drop Map output under more common circumstances.
 The Files API fix will roll out on its own (no action required by you), but
 in order to avoid dropped data you must update to the latest version
 of the Java MapReduce library:
 https://code.google.com/p/appengine-mapreduce/source/checkout

 We apologize for the trouble. Rest assured we are working
 aggressively to move MapReduce into a fully supported state.


 Tom Kaitchuck on behalf

[google-appengine] Re: Attention Java MapReduce users

2013-05-29 Thread Tom Kaitchuck
You should be able to sync to the SVN and get this update (r466)


On Wed, May 29, 2013 at 3:52 PM, Tom Kaitchuck tkaitch...@google.comwrote:

 My fault. I forgot to update the revision of the GCS client the ant
 build.xml loads. If you want the fix right away just change the build.xml
 to have the line where it defines the property gcsversion to be
 r54. (You will also need to delete the jar if it has been previously
 downloaded into the same workspace.)

 I'll push out an update later today.


 On Wed, May 29, 2013 at 1:32 PM, Carter Maslan car...@maslan.com wrote:

 We get compile errors when we build the latest version of the Java
 MapReduce library against Java SDK 1.8.0 with ant. The error is
 "Builder() has private access". Is there an existing fix for that?

 [javac]
 /mr2/appengine-mapreduce-read-only/java/src/com/google/appengine/tools/mapreduce/outputs/GoogleCloudStorageFileOutputWriter.java:39:
 Builder() has private access in
 com.google.appengine.tools.cloudstorage.GcsFileOptions.Builder
 [javac] GCS_SERVICE.createOrReplace(file, new
 GcsFileOptions.Builder().mimeType(mimeType).build());
 [javac]   ^
 [javac]
 /mr2/appengine-mapreduce-read-only/java/src/com/google/appengine/tools/mapreduce/outputs/GoogleCloudStorageFileOutputWriter.java:39:
 cannot find symbol
 [javac] symbol  : method mimeType(java.lang.String)
 [javac] location: class
 com.google.appengine.tools.cloudstorage.GcsFileOptions.Builder
 [javac] GCS_SERVICE.createOrReplace(file, new
 GcsFileOptions.Builder().mimeType(mimeType).build());
 [javac]
 ^





 On Wed, May 22, 2013 at 4:48 PM, Tim Jones palan...@gmail.com wrote:

 Awesome, I downloaded a few minutes ago and have been running against it
 with no problems.  Thanks!


 On Wed, May 22, 2013 at 4:46 PM, Tom Kaitchuck tkaitch...@google.comwrote:

 The issue mentioned above (The NPE when the last item was already
 written) has been fixed in the version 464 in the public svn.
 https://code.google.com/p/appengine-mapreduce/source/detail?r=464


 On Mon, May 20, 2013 at 6:32 PM, Tom Kaitchuck 
 tkaitch...@google.comwrote:

 This is a bug I am working on. It occurs when the last record to be
 written encounters a keyOrderingException (Meaning it was already written
 but an ACK was not received so it was retried.) So it should be rare and
 the retry of the shuffle that was added should cause it to be harmless.

 If you are seeing any broader problems send me a message off-list with
 your appId.

 I'll post an update here when a patch has been pushed out.



 On Mon, May 20, 2013 at 4:28 PM, Tim Jones palan...@gmail.com wrote:

 After upgrading, I'm getting the following NullPointerException in
 the InMemoryShuffler.  Is this a known issue?

 Caused by: java.lang.NullPointerException
 at
 com.google.appengine.tools.mapreduce.impl.InMemoryShuffleJob$1.run(InMemoryShuffleJob.java:234)
  at
 com.google.appengine.tools.mapreduce.impl.InMemoryShuffleJob$1.run(InMemoryShuffleJob.java:231)
 at
 com.google.appengine.tools.mapreduce.impl.util.RetryHelper.doRetry(RetryHelper.java:62)
  at
 com.google.appengine.tools.mapreduce.impl.util.RetryHelper.runWithRetries(RetryHelper.java:101)
 at
 com.google.appengine.tools.mapreduce.impl.InMemoryShuffleJob.closeFinally(InMemoryShuffleJob.java:231)
  at
 com.google.appengine.tools.mapreduce.impl.InMemoryShuffleJob.writeOutput(InMemoryShuffleJob.java:227)
 at
 com.google.appengine.tools.mapreduce.impl.InMemoryShuffleJob.writeOutputs(InMemoryShuffleJob.java:243)
  at
 com.google.appengine.tools.mapreduce.impl.InMemoryShuffleJob.run(InMemoryShuffleJob.java:253)
 at
 com.google.appengine.tools.mapreduce.impl.InMemoryShuffleJob.run(InMemoryShuffleJob.java:42)
  ... 47 more

 On Wednesday, May 1, 2013 1:32:51 PM UTC-7, Tom Kaitchuck wrote:

 If you are using the experimental Java MapReduce library for App
 Engine, you are strongly encouraged to update to the latest version of the
 library in the public svn:
 https://code.google.com/p/appengine-mapreduce/source/checkout


 Background:

 We are rolling out a fix to a long standing interaction bug between
 the experimental MapReduce library and the experimental Files API that, 
 in
 certain circumstances, results in dropped data. Specifically this bug 
 can
 cause some records emitted by the Map to be excluded from the input to
 Reduce.

 The bugfix involves patches to both the Files API and Java
 MapReduce. Unfortunately older versions of the Java MapReduce library
 running against the patched Files API will drop Map output under more
 common circumstances. The Files API fix will roll out on its own (no
 action required by you), but in order to avoid dropped data you must
 update to the latest version of the Java MapReduce library:
 https://code.google.com/p/appengine-mapreduce/source/checkout

[google-appengine] Re: Attention Java MapReduce users

2013-05-22 Thread Tom Kaitchuck
The issue mentioned above (The NPE when the last item was already written)
has been fixed in the version 464 in the public svn.
https://code.google.com/p/appengine-mapreduce/source/detail?r=464


On Mon, May 20, 2013 at 6:32 PM, Tom Kaitchuck tkaitch...@google.comwrote:

 This is a bug I am working on. It occurs when the last record to be
 written encounters a keyOrderingException (Meaning it was already written
 but an ACK was not received so it was retried.) So it should be rare and
 the retry of the shuffle that was added should cause it to be harmless.

 If you are seeing any broader problems send me a message off-list with
 your appId.

 I'll post an update here when a patch has been pushed out.



 On Mon, May 20, 2013 at 4:28 PM, Tim Jones palan...@gmail.com wrote:

 After upgrading, I'm getting the following NullPointerException in the
 InMemoryShuffler.  Is this a known issue?

 Caused by: java.lang.NullPointerException
 at
 com.google.appengine.tools.mapreduce.impl.InMemoryShuffleJob$1.run(InMemoryShuffleJob.java:234)
  at
 com.google.appengine.tools.mapreduce.impl.InMemoryShuffleJob$1.run(InMemoryShuffleJob.java:231)
 at
 com.google.appengine.tools.mapreduce.impl.util.RetryHelper.doRetry(RetryHelper.java:62)
  at
 com.google.appengine.tools.mapreduce.impl.util.RetryHelper.runWithRetries(RetryHelper.java:101)
 at
 com.google.appengine.tools.mapreduce.impl.InMemoryShuffleJob.closeFinally(InMemoryShuffleJob.java:231)
  at
 com.google.appengine.tools.mapreduce.impl.InMemoryShuffleJob.writeOutput(InMemoryShuffleJob.java:227)
 at
 com.google.appengine.tools.mapreduce.impl.InMemoryShuffleJob.writeOutputs(InMemoryShuffleJob.java:243)
  at
 com.google.appengine.tools.mapreduce.impl.InMemoryShuffleJob.run(InMemoryShuffleJob.java:253)
 at
 com.google.appengine.tools.mapreduce.impl.InMemoryShuffleJob.run(InMemoryShuffleJob.java:42)
  ... 47 more

 On Wednesday, May 1, 2013 1:32:51 PM UTC-7, Tom Kaitchuck wrote:

 If you are using the experimental Java MapReduce library for App Engine,
 you are strongly encouraged to update to the latest version of the library
 in the public svn:
 https://code.google.com/p/appengine-mapreduce/source/checkout


 Background:

 We are rolling out a fix to a long standing interaction bug between the
 experimental MapReduce library and the experimental Files API that, in
 certain circumstances, results in dropped data. Specifically this bug can
 cause some records emitted by the Map to be excluded from the input to
 Reduce.

 The bugfix involves patches to both the Files API and Java MapReduce.
 Unfortunately older versions of the Java MapReduce library running against
 the patched Files API will drop Map output under more common circumstances.
 The Files API fix will roll out on its own (no action required by you), but
 in order to avoid dropped data you must update to the latest version of
 the Java MapReduce library:
 https://code.google.com/p/appengine-mapreduce/source/checkout

 We apologize for the trouble. Rest assured we are working aggressively
 to move MapReduce into a fully supported state.


 Tom Kaitchuck on behalf of the Google App Engine Team

  --
 You received this message because you are subscribed to the Google Groups
 Google App Engine Pipeline API group.
 To unsubscribe from this group and stop receiving emails from it, send an
 email to app-engine-pipeline-api+unsubscr...@googlegroups.com.

 For more options, visit https://groups.google.com/groups/opt_out.






-- 
You received this message because you are subscribed to the Google Groups 
Google App Engine group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to google-appengine+unsubscr...@googlegroups.com.
To post to this group, send email to google-appengine@googlegroups.com.
Visit this group at http://groups.google.com/group/google-appengine?hl=en.
For more options, visit https://groups.google.com/groups/opt_out.




[google-appengine] Re: Attention Java MapReduce users

2013-05-20 Thread Tom Kaitchuck
This is a bug I am working on. It occurs when the last record to be written
encounters a keyOrderingException (meaning it was already written, but an
ACK was not received, so the write was retried). So it should be rare, and
the retry of the shuffle that was added should make it harmless.
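To illustrate why that retried write is harmless, here is a plain-Python sketch (the class and exception names are invented for illustration, not the library's own):

```python
# Illustrative sketch, not the actual MapReduce internals: a sink that
# requires strictly increasing keys rejects a blind retry of a record
# whose ACK was lost, so the retry cannot introduce a duplicate.

class KeyOrderingError(Exception):
    """Raised when a key is written at or below the last key seen (hypothetical name)."""

class OrderedSink:
    """Stand-in for the shuffle output, which writes keys in sorted order."""
    def __init__(self):
        self.records = []
        self.last_key = None

    def write(self, key, value):
        if self.last_key is not None and key <= self.last_key:
            raise KeyOrderingError(key)  # duplicate or out-of-order write
        self.records.append((key, value))
        self.last_key = key

def write_with_retry(sink, key, value):
    """First attempt 'loses' its ACK, so the caller retries; the retry of an
    already-written record trips KeyOrderingError, which is safe to ignore."""
    try:
        sink.write(key, value)   # succeeds, but pretend the ACK was lost
        sink.write(key, value)   # blind retry of the same record
    except KeyOrderingError:
        pass                     # already written: ignore
    return sink.records

records = write_with_retry(OrderedSink(), "k1", "v1")
# The record appears exactly once despite the retry.
```

The key point is that an ordered sink doubles as duplicate detection: the retry of an already-acknowledged record is rejected rather than written twice.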

If you are seeing any broader problems send me a message off-list with your
appId.

I'll post an update here when a patch has been pushed out.



On Mon, May 20, 2013 at 4:28 PM, Tim Jones palan...@gmail.com wrote:

 After upgrading, I'm getting the following NullPointerException in the
 InMemoryShuffler.  Is this a known issue?

 Caused by: java.lang.NullPointerException
 at com.google.appengine.tools.mapreduce.impl.InMemoryShuffleJob$1.run(InMemoryShuffleJob.java:234)
 at com.google.appengine.tools.mapreduce.impl.InMemoryShuffleJob$1.run(InMemoryShuffleJob.java:231)
 at com.google.appengine.tools.mapreduce.impl.util.RetryHelper.doRetry(RetryHelper.java:62)
 at com.google.appengine.tools.mapreduce.impl.util.RetryHelper.runWithRetries(RetryHelper.java:101)
 at com.google.appengine.tools.mapreduce.impl.InMemoryShuffleJob.closeFinally(InMemoryShuffleJob.java:231)
 at com.google.appengine.tools.mapreduce.impl.InMemoryShuffleJob.writeOutput(InMemoryShuffleJob.java:227)
 at com.google.appengine.tools.mapreduce.impl.InMemoryShuffleJob.writeOutputs(InMemoryShuffleJob.java:243)
 at com.google.appengine.tools.mapreduce.impl.InMemoryShuffleJob.run(InMemoryShuffleJob.java:253)
 at com.google.appengine.tools.mapreduce.impl.InMemoryShuffleJob.run(InMemoryShuffleJob.java:42)
 ... 47 more

 On Wednesday, May 1, 2013 1:32:51 PM UTC-7, Tom Kaitchuck wrote:

 If you are using the experimental Java MapReduce library for App Engine,
 you are strongly encouraged to update to the latest version of the library
 in the public svn: https://code.google.com/p/appengine-mapreduce/source/checkout


 Background:

 We are rolling out a fix to a long standing interaction bug between the
 experimental MapReduce library and the experimental Files API that, in
 certain circumstances, results in dropped data. Specifically this bug can
 cause some records emitted by the Map to be excluded from the input to
 Reduce.

 The bugfix involves patches to both the Files API and Java MapReduce.
 Unfortunately older versions of the Java MapReduce library running against
 the patched Files API will drop Map output under more common circumstances.
 The Files API fix will roll out on its own (no action required by you), but
 in order to avoid dropped data you must update to the latest version of
 the Java MapReduce library: https://code.google.com/p/appengine-mapreduce/source/checkout

 We apologize for the trouble. Rest assured we are working aggressively to
 move MapReduce into a fully supported state.


 Tom Kaitchuck on behalf of the Google App Engine Team









Re: [google-appengine] Re: Attention Java MapReduce users

2013-05-10 Thread Tom Kaitchuck
Ah, yes. If the writable name is > 500 bytes, it is hashed before being
stored, to avoid the key-length limitation. This is done as follows:
Hashing.sha512().hashString(creationHandle, Charsets.US_ASCII).toString()

So the easiest way to access the file would probably be to use the
Files API: just pass it the writable name and let it do this for you;
then you'll have a file handle with the finalized name, which you can copy
and access via the Blobstore API if you like. To do this, construct an
AppEngineFile with the name part set to the writable file name (including the
"writable:" prefix), and then invoke FileServiceImpl.getBlobKey().
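For reference, Guava's Hashing.sha512().hashString(name, Charsets.US_ASCII).toString() produces the lowercase hex SHA-512 digest of the ASCII bytes, which you can reproduce with Python's hashlib (a sketch; the handle string is just an example):

```python
import hashlib

def hashed_store_name(writable_name: str) -> str:
    # Equivalent of Guava's Hashing.sha512().hashString(name, Charsets.US_ASCII).toString():
    # lowercase hex digest of the SHA-512 hash of the ASCII-encoded name.
    return hashlib.sha512(writable_name.encode("ascii")).hexdigest()

digest = hashed_store_name("writable:example-handle")
# 128 hex characters, well under the 500-byte datastore key limit.
```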



On Fri, May 10, 2013 at 8:35 AM, Eric Jahn e...@ejahn.net wrote:

 Tom,
 The files I'm searching for in the datastore viewer were finalized, and I was
 able to search for and find more recent finalized blobs using SELECT * FROM
 __BlobFileIndex__ WHERE __key__=KEY('__BlobFileIndex__', 'SoMeKeyHere');
 and for still-writable ones this works:
 SELECT * FROM __BlobFileIndex__ WHERE __key__=KEY('__BlobFileIndex__',
 'writable:SoMeKeyHere');
 However, for the keys of the missing file, the query in the datastore
 viewer returns "name must be under 500 bytes".  Those 500-byte keys were
 successful for accessing/writing to the original files with my App Engine
 code before they were finalized, so why is this length a problem now?  Or
 was exceedingly long key generation part of the Files API bug?  I still
 need to access those files, since they took a lot of expensive backend time
 to generate.  Thanks!  -Eric


 On Tuesday, May 7, 2013 3:13:45 PM UTC-4, Tom Kaitchuck wrote:

 If you have the writable file name you can use the datastore viewer to
 find the finalized file name.
 Go into the datastore viewer and enter the gql query: SELECT * FROM
 __BlobFileIndex__
 This will show you the mapping. Then you can narrow it down by specifying
 the ID/Name as the writable file name.


 On Sun, May 5, 2013 at 10:00 PM, Eric Jahn er...@ejahn.net wrote:

  Tom,
 This is great news.  I have one lingering problem as a result of the
 Files API Bug.  Before the Files API fix, I had persisted the file service
 urls whilst I had been writing to them, and then finalized them
 successfully.  But, because of this bug I couldn't retrieve a blobstore key
 by passing these urls to BlobKey.getKeyString(). (btw, I'm not using Java
 MapReduce, just the App Engine Files API and Blobstore.)  Is there a way I
 can somehow retrieve my finalized blobstore files which aren't appearing in
 my App Engine dashboard Blobstore viewer?  If I start with a new file, I
 see them appear, but this is now after the Files API bug fix, I presume.
 Thanks for any thoughts.  -Eric



 On Wednesday, May 1, 2013 5:42:53 PM UTC-4, Tom Kaitchuck wrote:

 This is something we are aware of and are working on for future
 releases.

 For this update we encourage you to download and deploy the new code
 right away.













Re: [google-appengine] Python Datastore copy. MapReduce

2013-05-10 Thread Tom Kaitchuck
It has not yet been deployed, but it will be.


On Fri, May 10, 2013 at 4:53 PM, Jay jbaker.w...@gmail.com wrote:

 Thanks Tom. Does that mean that the fix is deployed live on app engine too?


 On Thursday, May 9, 2013 7:51:46 PM UTC-5, Tom Kaitchuck wrote:

 Sorry that was an internal number. What I should have said is that this
 is fixed in SVN revision 438. If you sync to the latest version you should
 get the update.


 On Thu, May 9, 2013 at 12:07 PM, Tom Kaitchuck tkait...@google.com wrote:

  This should be resolved. Update to include cl/45189830.


 On Thu, May 9, 2013 at 10:14 AM, Jay jbake...@gmail.com wrote:

 I am seeing the same.


 On Tuesday, May 7, 2013 3:16:19 PM UTC-5, Sandeep wrote:

 Hi,

 I am trying to copy data from one application to another; I am using
 the Datastore Admin to migrate data.

 I have configured the application to receive data. I used to do this
 earlier, but today I see a weird error and 0% of tasks are successful.

 /_ah/mapreduce/kickoffjob_callback 500

   File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/mapreduce/model.py", line 805, in to_dict
     "input_reader_state": self.input_reader.to_json_str(),
   File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/mapreduce/model.py", line 165, in to_json_str
     json = self.to_json()
   File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/mapreduce/input_readers.py", line 2148, in to_json
     json_dict = super(DatastoreKeyInputReader, self).to_json()
 TypeError: super(type, obj): obj must be an instance or subtype of type

 Can anyone share if there is any problem with the API?

 Thanks in advance.
 My bad: hoping the data would get transferred, I deleted the other database :(.
 --
 Regards
 Sandeep Koduri














Re: [google-appengine] Python Datastore copy. MapReduce

2013-05-09 Thread Tom Kaitchuck
This should be resolved. Update to include cl/45189830.


On Thu, May 9, 2013 at 10:14 AM, Jay jbaker.w...@gmail.com wrote:

 I am seeing the same.


 On Tuesday, May 7, 2013 3:16:19 PM UTC-5, Sandeep wrote:

 Hi,

 I am trying to copy data from one application to another; I am using
 the Datastore Admin to migrate data.

 I have configured the application to receive data. I used to do this
 earlier, but today I see a weird error and 0% of tasks are successful.

 /_ah/mapreduce/kickoffjob_callback 500

   File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/mapreduce/model.py", line 805, in to_dict
     "input_reader_state": self.input_reader.to_json_str(),
   File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/mapreduce/model.py", line 165, in to_json_str
     json = self.to_json()
   File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/mapreduce/input_readers.py", line 2148, in to_json
     json_dict = super(DatastoreKeyInputReader, self).to_json()
 TypeError: super(type, obj): obj must be an instance or subtype of type

 Can anyone share if there is any problem with the API?

 Thanks in advance.
 My bad: hoping the data would get transferred, I deleted the other database :(.
 --
 Regards
 Sandeep Koduri









Re: [google-appengine] Python Datastore copy. MapReduce

2013-05-09 Thread Tom Kaitchuck
Sorry that was an internal number. What I should have said is that this is
fixed in SVN revision 438. If you sync to the latest version you should get
the update.


On Thu, May 9, 2013 at 12:07 PM, Tom Kaitchuck tkaitch...@google.com wrote:

 This should be resolved. Update to include cl/45189830.


 On Thu, May 9, 2013 at 10:14 AM, Jay jbaker.w...@gmail.com wrote:

 I am seeing the same.


 On Tuesday, May 7, 2013 3:16:19 PM UTC-5, Sandeep wrote:

 Hi,

 I am trying to copy data from one application to another; I am using
 the Datastore Admin to migrate data.

 I have configured the application to receive data. I used to do this
 earlier, but today I see a weird error and 0% of tasks are successful.

 /_ah/mapreduce/kickoffjob_callback 500

   File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/mapreduce/model.py", line 805, in to_dict
     "input_reader_state": self.input_reader.to_json_str(),
   File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/mapreduce/model.py", line 165, in to_json_str
     json = self.to_json()
   File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/mapreduce/input_readers.py", line 2148, in to_json
     json_dict = super(DatastoreKeyInputReader, self).to_json()
 TypeError: super(type, obj): obj must be an instance or subtype of type

 Can anyone share if there is any problem with the API?

 Thanks in advance.
 My bad: hoping the data would get transferred, I deleted the other database :(.
 --
 Regards
 Sandeep Koduri











Re: [google-appengine] Re: Attention Java MapReduce users

2013-05-08 Thread Tom Kaitchuck
Ronoaldo: Correct, the old API version should not be affected.


On Wed, May 8, 2013 at 12:30 PM, Ronoaldo José de Lana Pereira 
rpere...@beneficiofacil.com.br wrote:

 Tom,

 Thanks for the update. We have some old code running with the first
 version of the Mapper API (the one that doesn't have a reducer); specifically,
 we are pinned to r218. Since the newer versions aren't API compatible
 with the new implementation of Map/Reduce, and we were using only the
 mappers to perform our operations, we see no problems so far.

 Might this old Mapper API also be affected by the new version of the
 Files API?

 Kind regards,

 Em terça-feira, 7 de maio de 2013 16h13min45s UTC-3, Tom Kaitchuck
 escreveu:

 If you have the writable file name you can use the datastore viewer to
 find the finalized file name.
 Go into the datastore viewer and enter the gql query: SELECT * FROM
 __BlobFileIndex__
 This will show you the mapping. Then you can narrow it down by specifying
 the ID/Name as the writable file name.


 On Sun, May 5, 2013 at 10:00 PM, Eric Jahn er...@ejahn.net wrote:

  Tom,
 This is great news.  I have one lingering problem as a result of the
 Files API Bug.  Before the Files API fix, I had persisted the file service
 urls whilst I had been writing to them, and then finalized them
 successfully.  But, because of this bug I couldn't retrieve a blobstore key
 by passing these urls to BlobKey.getKeyString(). (btw, I'm not using Java
 MapReduce, just the App Engine Files API and Blobstore.)  Is there a way I
 can somehow retrieve my finalized blobstore files which aren't appearing in
 my App Engine dashboard Blobstore viewer?  If I start with a new file, I
 see them appear, but this is now after the Files API bug fix, I presume.
 Thanks for any thoughts.  -Eric



 On Wednesday, May 1, 2013 5:42:53 PM UTC-4, Tom Kaitchuck wrote:

 This is something we are aware of and are working on for future
 releases.

 For this update we encourage you to download and deploy the new code
 right away.













Re: [google-appengine] Python Datastore copy. MapReduce

2013-05-08 Thread Tom Kaitchuck
Thanks goes to my teammate yey@. This is b/8790576


On Tue, May 7, 2013 at 1:16 PM, Sandeep Koduri sandeep.kod...@gmail.com wrote:

 Hi,

 I am trying to copy data from one application to another; I am using
 the Datastore Admin to migrate data.

 I have configured the application to receive data. I used to do this
 earlier, but today I see a weird error and 0% of tasks are successful.

 /_ah/mapreduce/kickoffjob_callback 500

   File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/mapreduce/model.py", line 805, in to_dict
     "input_reader_state": self.input_reader.to_json_str(),
   File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/mapreduce/model.py", line 165, in to_json_str
     json = self.to_json()
   File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/mapreduce/input_readers.py", line 2148, in to_json
     json_dict = super(DatastoreKeyInputReader, self).to_json()
 TypeError: super(type, obj): obj must be an instance or subtype of type

 Can anyone share if there is any problem with the API?

 Thanks in advance.
 My bad: hoping the data would get transferred, I deleted the other database :(.
 --
 Regards
 Sandeep Koduri
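As an aside, that particular TypeError is easy to reproduce in plain Python whenever the class passed to super() is no longer in the instance's MRO, for example after a module reload rebinds the class name (a standalone sketch, not the App Engine code itself):

```python
class Reader:
    def to_json(self):
        return {}

class DatastoreKeyInputReader(Reader):  # stand-in name for illustration
    pass

StaleClass = DatastoreKeyInputReader

# Simulate a module reload: the name is rebound to a brand-new class object,
# so instances of the new class are not instances of the old one.
class DatastoreKeyInputReader(Reader):
    pass

obj = DatastoreKeyInputReader()
try:
    super(StaleClass, obj).to_json()  # obj is not an instance of StaleClass
except TypeError as e:
    message = str(e)
# CPython reports: super(type, obj): obj must be an instance or subtype of type
```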









Re: [google-appengine] Re: Attention Java MapReduce users

2013-05-07 Thread Tom Kaitchuck
If you have the writable file name you can use the datastore viewer to find
the finalized file name.
Go into the datastore viewer and enter the gql query: SELECT * FROM
__BlobFileIndex__
This will show you the mapping. Then you can narrow it down by specifying
the ID/Name as the writable file name.
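Concretely, the two lookups look like this in the datastore viewer ('SoMeKeyHere' is a placeholder for your own handle):

```
SELECT * FROM __BlobFileIndex__
SELECT * FROM __BlobFileIndex__ WHERE __key__ = KEY('__BlobFileIndex__', 'writable:SoMeKeyHere')
```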


On Sun, May 5, 2013 at 10:00 PM, Eric Jahn e...@ejahn.net wrote:

 Tom,
 This is great news.  I have one lingering problem as a result of the Files
 API Bug.  Before the Files API fix, I had persisted the file service urls
 whilst I had been writing to them, and then finalized them successfully.
 But, because of this bug I couldn't retrieve a blobstore key by passing
 these urls to BlobKey.getKeyString(). (btw, I'm not using Java MapReduce,
 just the App Engine Files API and Blobstore.)  Is there a way I can somehow
 retrieve my finalized blobstore files which aren't appearing in my App
 Engine dashboard Blobstore viewer?  If I start with a new file, I see them
 appear, but this is now after the Files API bug fix, I presume.  Thanks for
 any thoughts.  -Eric



 On Wednesday, May 1, 2013 5:42:53 PM UTC-4, Tom Kaitchuck wrote:

 This is something we are aware of and are working on for future releases.

 For this update we encourage you to download and deploy the new code
 right away.









[google-appengine] Re: namespaces support for java mapper

2013-05-07 Thread Tom Kaitchuck
The Java MapReduce currently does not have any code to take advantage of
namespaces. The Python implementation does.


On Mon, May 6, 2013 at 10:34 AM, alexh alexanderhar...@gmail.com wrote:

 Hi - any updates on this one? I'd like to know the answer as well.


 On Friday, March 15, 2013 7:02:32 AM UTC-5, aswath wrote:

 Hello,
 Does java mapper have the support for namespaces?

 I am using namespaces, and the last time I tried, the mapper was not supporting
 namespaces; that is, the entities in the namespaces were not processed.

 -Aswath









[google-appengine] Attention Java MapReduce users

2013-05-01 Thread Tom Kaitchuck
If you are using the experimental Java MapReduce library for App Engine,
you are strongly encouraged to update to the latest version of the library
in the public svn:
https://code.google.com/p/appengine-mapreduce/source/checkout


Background:

We are rolling out a fix to a long standing interaction bug between the
experimental MapReduce library and the experimental Files API that, in
certain circumstances, results in dropped data. Specifically this bug can
cause some records emitted by the Map to be excluded from the input to
Reduce.

The bugfix involves patches to both the Files API and Java MapReduce.
Unfortunately older versions of the Java MapReduce library running against
the patched Files API will drop Map output under more common circumstances.
The Files API fix will roll out on its own (no action required by you), but
in order to avoid dropped data you must update to the latest version of the
Java MapReduce library: https://code.google.com/p/appengine-mapreduce/source/checkout

We apologize for the trouble. Rest assured we are working aggressively to
move MapReduce into a fully supported state.


Tom Kaitchuck on behalf of the Google App Engine Team





[google-appengine] Re: Attention Java MapReduce users

2013-05-01 Thread Tom Kaitchuck
This is something we are aware of and are working on for future releases.

For this update we encourage you to download and deploy the new code right 
away.





Re: [google-appengine] Re: New client library for Google Cloud Storage available for testing

2013-04-30 Thread Tom Kaitchuck
To answer some of the questions on this thread:

We are looking at the App Engine GCS Client library as a potential
successor to the Files API. We believe it to be stable enough for general
usage, but it is still a new library. For new users starting to write new
code we recommend using GCS Client. If it's not meeting your needs or you
encounter problems please file a bug at
https://code.google.com/p/appengine-gcs-client/issues/list

It is still the case that using the GCS client counts against URLFetch
quota. We are actively working on this, and hope to have a fix soon.





Re: [google-appengine] export JSON to Cloud Storage

2013-04-04 Thread Tom Kaitchuck
The file is in LevelDB format by default: https://code.google.com/p/leveldb/
The MapReduce code contains support for reading and writing it, so that it
can be passed between one pipeline and another.
If you want to control the file format yourself I would recommend using the
Appengine GCS client: https://code.google.com/p/appengine-gcs-client/
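If you want human-readable output rather than LevelDB-framed records, one option is to do the framing yourself as newline-delimited JSON before handing bytes to the writer (a plain-Python sketch of the framing only; the actual GCS write is elided):

```python
import io
import json

def write_ndjson(records, stream):
    # One JSON document per line ("newline-delimited JSON"), which any
    # text tool can read back, unlike the LevelDB record framing.
    for record in records:
        stream.write(json.dumps(record, sort_keys=True))
        stream.write("\n")

def read_ndjson(stream):
    return [json.loads(line) for line in stream if line.strip()]

buf = io.StringIO()
write_ndjson([{"kind": "Entity", "id": 1}, {"kind": "Entity", "id": 2}], buf)
buf.seek(0)
round_tripped = read_ndjson(buf)
# round_tripped is the original list of records
```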


On Wed, Mar 27, 2013 at 6:25 AM, Ranjit Chacko rjcha...@gmail.com wrote:

 I'm trying to create JSON representations of my models in the datastore
 and then write them to a file in Cloud Storage in a MapperPipeline.

 The map method looks something like this:

 def map_method(entity):
   json_message = make_json_message(entity)
   yield(json_message)


 and the params for the output_writer look something like this:

  "output_writer": {
      "filesystem": "gs",
      "gs_bucket_name": "json_export",
      "mime_type": "text/utf-8"
  }

 The pipeline completes successfully, but the file I'm getting in Cloud
 Storage seems to be in a binary format.

 How do I get a JSON file out that I can read?









Re: [google-appengine] New client library for Google Cloud Storage available for testing

2013-01-31 Thread Tom Kaitchuck
Yes, that is currently the case. It's mentioned in the known issues section
here: https://code.google.com/p/appengine-gcs-client/wiki/KnownIssues
We consider this a problem and are working on it.

On Wednesday, January 30, 2013 5:07:40 PM UTC-8, Mac wrote:

 Currently, using the Files API, the traffic between Cloud Storage and App 
 Engine is free. So when App Engine serves
 the file retrieved, that's the only bandwidth cost.

 In this new system, if App Engine retrieves the file using urlFetch, 
 doesn't that mean we pay twice? Once for Cloud Storage serving the file to 
 App Engine's urlFetch, and once again when App Engine serves the file to 
 the public?


 On Tue, Jan 29, 2013 at 5:15 PM, Tom Kaitchuck tkait...@google.com wrote:

 *Greetings,

 We’ve been hard at work on improving access to Google Cloud Storage from 
 App Engine, and today we’re making the first version of our new Google 
 Cloud Storage Client Library for App Engine available for developers to 
 test. This client library contains much of the functionality available in 
 the Files API, but provides key stability improvements and a better overall 
 developer experience.  In the upcoming months we’ll continue to make 
 improvements with the goal of making this library the preferred way of 
 accessing Google Cloud Storage from App Engine.

 To get started, check out our documentation at 
 https://code.google.com/p/appengine-gcs-client/.

 If you have any questions, comments or feedback please feel free to file 
 bugs or feature requests at 
 https://code.google.com/p/appengine-gcs-client/issues/list. 

 Thanks for your continued support of App Engine, we look forward to your 
 feedback!

 Regards,
 Tom, on behalf of the Google App Engine Team*





 -- 
 Omnem crede diem tibi diluxisse supremum. 






[google-appengine] New client library for Google Cloud Storage available for testing

2013-01-29 Thread Tom Kaitchuck
*Greetings,

We’ve been hard at work on improving access to Google Cloud Storage from 
App Engine, and today we’re making the first version of our new Google 
Cloud Storage Client Library for App Engine available for developers to 
test. This client library contains much of the functionality available in 
the Files API, but provides key stability improvements and a better overall 
developer experience.  In the upcoming months we’ll continue to make 
improvements with the goal of making this library the preferred way of 
accessing Google Cloud Storage from App Engine.

To get started, check out our documentation at 
https://code.google.com/p/appengine-gcs-client/.

If you have any questions, comments or feedback please feel free to file 
bugs or feature requests at 
https://code.google.com/p/appengine-gcs-client/issues/list. 

Thanks for your continued support of App Engine, we look forward to your 
feedback!

Regards,
Tom, on behalf of the Google App Engine Team*
