[ 
https://issues.apache.org/jira/browse/BEAM-13931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pablo Estrada updated BEAM-13931:
---------------------------------
    Priority: P0  (was: P2)

> BigQueryIO is sending rows that are too large to Deadletter Queue even on 
> RETRY_ALWAYS
> --------------------------------------------------------------------------------------
>
>                 Key: BEAM-13931
>                 URL: https://issues.apache.org/jira/browse/BEAM-13931
>             Project: Beam
>          Issue Type: Bug
>          Components: io-java-gcp
>    Affects Versions: 2.35.0, 2.36.0
>            Reporter: Pablo Estrada
>            Priority: P0
>             Fix For: 2.36.0, 2.37.0
>
>
> Note that BQ does not support requests over a certain size, and rows that 
> exceed that size may be output to a dead-letter queue, from which users can 
> retrieve them with 
> [WriteResult.getFailedInsertsWithErr|https://beam.apache.org/releases/javadoc/2.36.0/org/apache/beam/sdk/io/gcp/bigquery/WriteResult.html#getFailedInsertsWithErr--]
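>
> As a sketch of how users consume that DLQ (assuming a PCollection<TableRow> 
> named rows and an illustrative table spec; getFailedInsertsWithErr requires 
> withExtendedErrorInfo):
> {code:java}
> import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
> import org.apache.beam.sdk.io.gcp.bigquery.BigQueryInsertError;
> import org.apache.beam.sdk.io.gcp.bigquery.InsertRetryPolicy;
> import org.apache.beam.sdk.io.gcp.bigquery.WriteResult;
> import org.apache.beam.sdk.values.PCollection;
>
> // Write with streaming inserts; rows that fail (with error details) come
> // back on a side output instead of failing the pipeline.
> WriteResult result =
>     rows.apply(
>         BigQueryIO.writeTableRows()
>             .to("my-project:my_dataset.my_table")
>             .withMethod(BigQueryIO.Write.Method.STREAMING_INSERTS)
>             .withFailedInsertRetryPolicy(InsertRetryPolicy.retryTransientErrors())
>             .withExtendedErrorInfo());
> PCollection<BigQueryInsertError> failedRows = result.getFailedInsertsWithErr();
> {code}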
> A change went into Beam that outputs oversized rows to the BQIO dead-letter 
> queue even when the insert retry policy is RETRY_ALWAYS, i.e. when they are 
> meant to be retried indefinitely:
> [https://github.com/apache/beam/commit/1f08d1f3ddc2e7bc7341be4b29bdafaec18de9cc#diff-26dbe8f625f702ae3edacdbc02b12acc6e423542fe16835229e22ef8eb4e109cR979-R989]
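>
> For reference, RETRY_ALWAYS corresponds to InsertRetryPolicy.alwaysRetry(); 
> a minimal sketch of the affected configuration (table spec illustrative):
> {code:java}
> import com.google.api.services.bigquery.model.TableRow;
> import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
> import org.apache.beam.sdk.io.gcp.bigquery.InsertRetryPolicy;
>
> // With alwaysRetry(), failed rows are supposed to be retried forever and
> // never emitted to the DLQ; after the change above, rows exceeding the
> // streaming batch size limit are emitted to the DLQ anyway.
> BigQueryIO.Write<TableRow> write =
>     BigQueryIO.writeTableRows()
>         .to("my-project:my_dataset.my_table")
>         .withMethod(BigQueryIO.Write.Method.STREAMING_INSERTS)
>         .withFailedInsertRetryPolicy(InsertRetryPolicy.alwaysRetry());
> {code}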
>  
>  
> A workaround is to set this pipeline option to a larger value: 
> [https://github.com/apache/beam/blob/1f08d1f3ddc2e7bc7341be4b29bdafaec18de9cc/sdks/java/io/google-cloud-platform/src/main/java/org/apache/beam/sdk/io/gcp/bigquery/BigQueryOptions.java#L70]
>
> Currently it defaults to 64KB, which is relatively small. Setting it to 1MB 
> or 5MB should work around the issue (the value must be larger than the 
> maximum row size); gRPC should support request sizes of up to 10MB.
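>
> A sketch of the workaround (assuming the option at that line is 
> maxStreamingBatchSize, in bytes; verify the getter/setter name in your Beam 
> version). The same value can be passed on the command line, e.g. 
> --maxStreamingBatchSize=5242880:
> {code:java}
> import org.apache.beam.sdk.Pipeline;
> import org.apache.beam.sdk.io.gcp.bigquery.BigQueryOptions;
> import org.apache.beam.sdk.options.PipelineOptionsFactory;
>
> BigQueryOptions options =
>     PipelineOptionsFactory.fromArgs(args).as(BigQueryOptions.class);
> // Larger than the maximum expected row size, under the ~10MB request cap.
> options.setMaxStreamingBatchSize(5L * 1024L * 1024L);  // 5MB
> Pipeline p = Pipeline.create(options);
> {code}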
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)
