[ https://issues.apache.org/jira/browse/BEAM-5040?focusedWorklogId=128378&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-128378 ]

ASF GitHub Bot logged work on BEAM-5040:
----------------------------------------

                Author: ASF GitHub Bot
            Created on: 28/Jul/18 01:29
            Start Date: 28/Jul/18 01:29
    Worklog Time Spent: 10m 
      Work Description: reuvenlax commented on issue #6080: [BEAM-5040] Fix retry bug for BigQuery jobs.
URL: https://github.com/apache/beam/pull/6080#issuecomment-408573241
 
 
   retest this please

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 128378)
    Time Spent: 20m  (was: 10m)

> BigQueryIO retries infinitely in WriteTable and WriteRename
> -----------------------------------------------------------
>
>                 Key: BEAM-5040
>                 URL: https://issues.apache.org/jira/browse/BEAM-5040
>             Project: Beam
>          Issue Type: Bug
>          Components: io-java-gcp
>    Affects Versions: 2.5.0
>            Reporter: Reuven Lax
>            Assignee: Reuven Lax
>            Priority: Major
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> BigQueryIO retries infinitely in WriteTable and WriteRename
> Several failure scenarios exist with the current code:
>  1. It's possible for a load job to return failure even though it actually
> succeeded (e.g. the reply might have timed out). In this case, BigQueryIO
> will retry the job, which will fail again (because the job id has already been
> used), leading to indefinite retries. Correct behavior is to stop retrying, as
> the load job has already succeeded.
>  2. It's possible for a load job to be accepted by BigQuery, but then to fail
> on the BigQuery side. In this case a retry with the same job id will fail, as
> that job id has already been used. BigQueryIO will sometimes detect this, but
> if the worker has restarted it will instead issue a load with the old job id
> and go into a retry loop. Correct behavior is to generate a new deterministic
> job id and retry using that new job id (see the sketch after this list).
>  3. In many cases of worker restart, BigQueryIO ends up in infinite retry
> loops.
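>
> A minimal sketch of the intended retry behavior, assuming a hypothetical
> BigQueryJobService abstraction (lookupJob and submitLoadJob are illustrative
> stand-ins, not actual BigQueryIO or BigQuery client methods): before
> resubmitting, check whether a job with the current id already exists; if it
> succeeded, stop; otherwise retry under a new deterministic id derived from a
> retry index, giving up after a bounded number of attempts.
>
>     import java.util.Optional;
>
>     public class DeterministicJobRetrySketch {
>
>       /** Assumed terminal outcomes of a load job, for this sketch only. */
>       enum JobOutcome { SUCCEEDED, FAILED }
>
>       /** Hypothetical minimal view of the BigQuery jobs API. */
>       interface BigQueryJobService {
>         /** Returns the outcome of a previously submitted job with this id, if any. */
>         Optional<JobOutcome> lookupJob(String jobId);
>
>         /** Submits a load job under the given id and returns its terminal outcome. */
>         JobOutcome submitLoadJob(String jobId);
>       }
>
>       /**
>        * Retries a load job under deterministic ids of the form baseJobId + "_" + retryIndex,
>        * so a restarted worker regenerates the same id sequence instead of reissuing a
>        * stale id forever.
>        */
>       static boolean runWithRetries(BigQueryJobService service, String baseJobId, int maxRetries) {
>         for (int retryIndex = 0; retryIndex < maxRetries; retryIndex++) {
>           String jobId = baseJobId + "_" + retryIndex;
>
>           // Scenario 1: the submit call may have appeared to fail even though the job
>           // succeeded (e.g. a timed-out reply), so look for an existing job first.
>           Optional<JobOutcome> existing = service.lookupJob(jobId);
>           if (existing.isPresent()) {
>             if (existing.get() == JobOutcome.SUCCEEDED) {
>               return true; // Already succeeded; stop retrying.
>             }
>             // Scenario 2: the job was accepted but failed on the BigQuery side. That id
>             // is used up, so fall through to the next deterministic id.
>             continue;
>           }
>
>           if (service.submitLoadJob(jobId) == JobOutcome.SUCCEEDED) {
>             return true;
>           }
>           // Failed attempt: loop again with retryIndex + 1, i.e. a fresh deterministic id.
>         }
>         return false; // Bounded retries instead of retrying indefinitely (scenario 3).
>       }
>     }
>
> The key point is that the id sequence is a pure function of the base job id and
> the retry index, so a restarted worker resumes the same sequence rather than
> looping on a stale id.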



