[ https://issues.apache.org/jira/browse/BEAM-2768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16145446#comment-16145446 ]

Reuven Lax commented on BEAM-2768:
----------------------------------

Which runner are you using? I can't see anything wrong with the code, and I 
can't reproduce this with the DirectRunner tests, which do verify that job ids 
are not reused.

One possibility is that your workers are failing for a different reason, and 
then retrying the insert of the same table partition with the same job id. This 
is by design: we want duplicate inserts to fail. Do you have fuller logs from 
your workers?
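
To illustrate the pattern described above, here is a rough sketch with hypothetical helper names (not Beam's actual internals): the load-job id is derived only from the job prefix, destination table, and partition, so a retried attempt re-submits the same id, and a 409 from the jobs.insert call tells the retry that the load was already started rather than silently creating a second load job.

{code:java}
import java.io.IOException;

public class DeterministicJobIdSketch {

  // Hypothetical: the id is built purely from the step/partition identity,
  // with no randomness, so retries of the same partition collide on purpose.
  static String loadJobId(String jobIdPrefix, String tableId, int partition) {
    return jobIdPrefix + "_" + tableId + "_" + partition;
  }

  // Hypothetical submission loop: a 409 ("duplicate") means an earlier
  // attempt already created this exact job, so the retry treats it as done
  // instead of loading the same data twice.
  static void submitLoad(String jobId) throws IOException {
    try {
      insertLoadJob(jobId); // stand-in for the BigQuery jobs.insert call
    } catch (IOException e) {
      if (e.getMessage() != null && e.getMessage().contains("409")) {
        return; // already submitted by a previous attempt
      }
      throw e;
    }
  }

  // Stub so the sketch is self-contained; a real implementation would call
  // the BigQuery API here.
  static void insertLoadJob(String jobId) throws IOException {}
}
{code}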

> Fix bigquery.WriteTables generating non-unique job identifiers
> --------------------------------------------------------------
>
>                 Key: BEAM-2768
>                 URL: https://issues.apache.org/jira/browse/BEAM-2768
>             Project: Beam
>          Issue Type: Bug
>          Components: beam-model
>    Affects Versions: 2.0.0
>            Reporter: Matti Remes
>            Assignee: Reuven Lax
>
> This is a result of BigQueryIO not creating unique job ids for batch inserts, 
> so the BigQuery API responds with a 409 conflict error:
> {code:java}
> Request failed with code 409, will NOT retry: 
> https://www.googleapis.com/bigquery/v2/projects/<project_id>/jobs
> {code}
> The jobs are initiated in the step BatchLoads/SinglePartitionWriteTables, 
> called by the step's WriteTables ParDo:
> https://github.com/apache/beam/blob/master/sdks/java/io/google-cloud-platform/src/main/java/org/apache/beam/sdk/io/gcp/bigquery/BatchLoads.java#L511-L521
> https://github.com/apache/beam/blob/master/sdks/java/io/google-cloud-platform/src/main/java/org/apache/beam/sdk/io/gcp/bigquery/WriteTables.java#L148
> It would probably be a good idea to append a UUID as part of the job id (see the sketch below).
> Edit: This is a major bug that blocks using BigQuery as a sink for bounded input.
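> A minimal sketch of that suggestion, assuming a hypothetical helper that builds the per-partition load-job id (the names are illustrative, not BigQueryIO's actual code):
> {code:java}
> import java.util.UUID;
> 
> public class UniqueJobIdSketch {
>   // Hypothetical helper: salt the deterministic prefix_table_partition id
>   // with a random UUID so a retried partition never re-submits an earlier
>   // attempt's job id and never trips BigQuery's 409 duplicate check.
>   static String uniqueLoadJobId(String jobIdPrefix, String tableId, int partition) {
>     return jobIdPrefix + "_" + tableId + "_" + partition + "_"
>         + UUID.randomUUID().toString().replace("-", "");
>   }
> }
> {code}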



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
