[ https://issues.apache.org/jira/browse/BEAM-991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15679617#comment-15679617 ]

Vikas Kedigehalli edited comment on BEAM-991 at 11/19/16 5:53 PM:
------------------------------------------------------------------

Joshua, all good solutions. 

I would prefer the 3rd one: using 'getSerializedSize' to measure the approximate 
byte size and flushing when it reaches ~10MB 
(https://github.com/apache/incubator-beam/blob/master/sdks/java/io/google-cloud-platform/src/main/java/org/apache/beam/sdk/io/gcp/datastore/DatastoreV1.java#L863).

Computing getSerializedSize shouldn't be a problem: the value is memoized by 
protobuf, and protobuf will compute it later for serialization anyway, so we 
shouldn't incur any additional performance penalty. 

PS: You are more than welcome to submit a Pull Request to Apache Beam if you are 
interested in contributing. :)
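
To illustrate the idea, here is a rough sketch of what size-aware batching could 
look like. The class and constant names (e.g. SizeAwareMutationBuffer, 
DATASTORE_BATCH_UPDATE_BYTES_LIMIT) are just illustrative, not the actual 
DatastoreV1.java code; it assumes the com.google.datastore.v1.Mutation protos the 
connector already uses.

{code:java}
import com.google.datastore.v1.Mutation;
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only; names and limits are placeholders, not DatastoreV1.java.
class SizeAwareMutationBuffer {
  private static final int DATASTORE_BATCH_UPDATE_ENTITIES_LIMIT = 500;
  // Stay comfortably below the 10MB per-request limit.
  private static final long DATASTORE_BATCH_UPDATE_BYTES_LIMIT = 9L * 1024 * 1024;

  private final List<Mutation> mutations = new ArrayList<>();
  private long mutationsSizeBytes = 0;

  /** Adds a mutation, flushing first if the batch would grow too large. */
  void add(Mutation mutation) {
    // getSerializedSize() is memoized by protobuf, so this check is cheap.
    long size = mutation.getSerializedSize();
    if (mutations.size() >= DATASTORE_BATCH_UPDATE_ENTITIES_LIMIT
        || mutationsSizeBytes + size > DATASTORE_BATCH_UPDATE_BYTES_LIMIT) {
      flush();
    }
    mutations.add(mutation);
    mutationsSizeBytes += size;
  }

  /** Sends the buffered mutations in a single Commit request (RPC elided here). */
  void flush() {
    if (mutations.isEmpty()) {
      return;
    }
    // ... issue the Datastore Commit RPC with `mutations` ...
    mutations.clear();
    mutationsSizeBytes = 0;
  }
}
{code}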


> DatastoreIO Write should flush early for large batches
> ------------------------------------------------------
>
>                 Key: BEAM-991
>                 URL: https://issues.apache.org/jira/browse/BEAM-991
>             Project: Beam
>          Issue Type: Bug
>          Components: sdk-java-gcp
>            Reporter: Vikas Kedigehalli
>            Assignee: Vikas Kedigehalli
>
> If entities are large (avg size > 20KB), then a single batched write (500 
> entities) would exceed the Datastore size limit for a single request (10MB); see 
> https://cloud.google.com/datastore/docs/concepts/limits.
> First reported in: 
> http://stackoverflow.com/questions/40156400/why-does-dataflow-erratically-fail-in-datastore-access



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
