This is an automated email from the ASF dual-hosted git repository.

ahmedabualsaud pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/beam.git


The following commit(s) were added to refs/heads/master by this push:
     new 0b61035f36f Increase retry backoff for Storage API batch to survive AppendRows quota refill (#31837)
0b61035f36f is described below

commit 0b61035f36fb099a8dcd39978c71779a2e81f957
Author: Ahmed Abualsaud <65791736+ahmedab...@users.noreply.github.com>
AuthorDate: Mon Jul 15 15:23:04 2024 -0400

    Increase retry backoff for Storage API batch to survive AppendRows quota refill (#31837)
    
    * Increase retry backoff for Storage API batch
    
    * longer waits for quota error only
    
    * cleanup
    
    * add to CHANGES.md
    
    * no need for quota backoff. just increase allowed retries
    
    * cleanup
---
 CHANGES.md                                                             | 3 ++-
 .../beam/sdk/io/gcp/bigquery/StorageApiWriteUnshardedRecords.java      | 2 +-
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/CHANGES.md b/CHANGES.md
index fc94877a2bb..243596e6f2e 100644
--- a/CHANGES.md
+++ b/CHANGES.md
@@ -68,6 +68,7 @@
 
 * Multiple RunInference instances can now share the same model instance by setting the model_identifier parameter (Python) ([#31665](https://github.com/apache/beam/issues/31665)).
 * Added options to control the number of Storage API multiplexing connections ([#31721](https://github.com/apache/beam/pull/31721))
+* [BigQueryIO] Better handling for batch Storage Write API when it hits AppendRows throughput quota ([#31837](https://github.com/apache/beam/pull/31837))
 * [IcebergIO] All specified catalog properties are passed through to the connector ([#31726](https://github.com/apache/beam/pull/31726))
 * Removed a 3rd party LGPL dependency from the Go SDK ([#31765](https://github.com/apache/beam/issues/31765)).
 * Support for MapState and SetState when using Dataflow Runner v1 with Streaming Engine (Java) ([[#18200](https://github.com/apache/beam/issues/18200)])
@@ -83,7 +84,7 @@
 
 ## Bugfixes
 
-* Fixed a bug in BigQueryIO batch Storage Write API that frequently exhausted concurrent connections quota ([#31710](https://github.com/apache/beam/pull/31710))
+* [BigQueryIO] Fixed a bug in batch Storage Write API that frequently exhausted concurrent connections quota ([#31710](https://github.com/apache/beam/pull/31710))
 * Fixed X (Java/Python) ([#X](https://github.com/apache/beam/issues/X)).
 
 ## Security Fixes
diff --git a/sdks/java/io/google-cloud-platform/src/main/java/org/apache/beam/sdk/io/gcp/bigquery/StorageApiWriteUnshardedRecords.java b/sdks/java/io/google-cloud-platform/src/main/java/org/apache/beam/sdk/io/gcp/bigquery/StorageApiWriteUnshardedRecords.java
index 21c1d961e84..f0c4a56ed3d 100644
--- a/sdks/java/io/google-cloud-platform/src/main/java/org/apache/beam/sdk/io/gcp/bigquery/StorageApiWriteUnshardedRecords.java
+++ b/sdks/java/io/google-cloud-platform/src/main/java/org/apache/beam/sdk/io/gcp/bigquery/StorageApiWriteUnshardedRecords.java
@@ -771,7 +771,7 @@ public class StorageApiWriteUnshardedRecords<DestinationT, ElementT>
                 invalidateWriteStream();
                 allowedRetry = 5;
               } else {
-                allowedRetry = 10;
+                allowedRetry = 35;
               }
 
               // Maximum number of times we retry before we fail the work item.
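
Note on the change above: the diff leaves the existing retry/backoff machinery in place and only raises the allowed retry count from 10 to 35 for errors that do not invalidate the write stream, so a batch AppendRows request that hits throughput quota keeps retrying long enough for the quota to refill instead of failing the work item. The snippet below is a minimal, hypothetical sketch of that general pattern in plain Java; it is not Beam's implementation, and the names (callWithRetries, appendOnce, maxRetries, the backoff constants) are illustrative assumptions only.

import java.util.concurrent.Callable;
import java.util.concurrent.ThreadLocalRandom;

public class BoundedRetrySketch {

  // Illustrative constants: the retry budget is sized so that, with capped
  // exponential backoff, the total wait can outlast a quota refill window.
  private static final long INITIAL_BACKOFF_MS = 1_000L;
  private static final long MAX_BACKOFF_MS = 60_000L;

  static <T> T callWithRetries(Callable<T> appendOnce, int maxRetries) throws Exception {
    long backoffMs = INITIAL_BACKOFF_MS;
    for (int attempt = 0; ; attempt++) {
      try {
        return appendOnce.call(); // stands in for a single AppendRows attempt
      } catch (Exception retriableError) {
        if (attempt >= maxRetries) {
          throw retriableError; // retry budget exhausted: fail the work item
        }
        // Sleep with jitter, then double the backoff up to a cap.
        long jitteredMs = backoffMs / 2 + ThreadLocalRandom.current().nextLong(backoffMs / 2 + 1);
        Thread.sleep(jitteredMs);
        backoffMs = Math.min(backoffMs * 2, MAX_BACKOFF_MS);
      }
    }
  }

  public static void main(String[] args) throws Exception {
    // Example: allow up to 35 retries, mirroring the new allowedRetry value.
    String result = callWithRetries(() -> "appended", 35);
    System.out.println(result);
  }
}

With a larger retry budget, the cumulative backed-off waits span a longer window, which is the effect the commit relies on to ride out a temporary AppendRows throughput quota exhaustion.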
