derrickaw commented on code in PR #35567:
URL: https://github.com/apache/beam/pull/35567#discussion_r2209090386
##########
sdks/java/io/google-cloud-platform/src/main/java/org/apache/beam/sdk/io/gcp/bigquery/BigQueryServicesImpl.java:
##########
@@ -1149,22 +1157,61 @@ <T> long insertAll(
// If this row's encoding by itself is larger than the maximum row payload,
// then it's impossible to insert into BigQuery, and so we send it out
// through the dead-letter queue.
-if (nextRowSize >= MAX_BQ_ROW_PAYLOAD) {
+if (nextRowSize >= MAX_BQ_ROW_PAYLOAD_BYTES) {
InsertErrors error =
    new InsertErrors()
        .setErrors(ImmutableList.of(new ErrorProto().setReason("row-too-large")));
// We verify whether the retryPolicy parameter expects us to retry. If it
// does, then it will return true. Otherwise it will return false.
-Boolean isRetry = retryPolicy.shouldRetry(new InsertRetryPolicy.Context(error));
-if (isRetry) {
+if (retryPolicy.shouldRetry(new InsertRetryPolicy.Context(error))) {
+  // Obtain table schema
+  TableSchema tableSchema = null;
+  try {
+    String tableSpec = BigQueryHelpers.toTableSpec(ref);
+    if (tableSchemaCache.containsKey(tableSpec)) {
+      tableSchema = tableSchemaCache.get(tableSpec);
+    } else {
+      Table table = getTable(ref);
+      if (table != null) {
+        tableSchema = TableRowToStorageApiProto.schemaToProtoTableSchema(table.getSchema());
+        tableSchemaCache.put(tableSpec, tableSchema);
+      }
+    }
+  } catch (Exception e) {
+    LOG.warn("Could not fetch table schema for {}.", ref, e);
+  }
+
+  // Create BigQuery schema map to use for formatting
+  String rowDetails;
+  try {
+    if (tableSchema != null) {
+      // Creates bqSchemaMap containing the field name, field type, and
+      // possibly field mode if available.
+      Map<String, String> bqSchemaMap =
+          tableSchema.getFieldsList().stream()
+              .collect(Collectors.toMap(f -> f.getName(), f -> f.getType().name()));
+      rowDetails = formatRowWithSchema(row, bqSchemaMap);
Review Comment:
The issue someone ran into was that the row schema was incorrect - basically
an incorrect field, which caused the large row. So I was trying to head off a
future situation where maybe the type was incorrect as well. Maybe I should
just fall back to returning the row fields and leave it at that?
Thanks.
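The fallback proposed above can be sketched roughly as follows. This is only an illustration, not code from the PR: `RowFormatterSketch` and `formatRow` are hypothetical names, and a plain `Map<String, Object>` stands in for BigQuery's `TableRow` so the snippet is self-contained. When the schema map (field name to type name, as built in the diff) knows a field, the type is included; otherwise, or when the schema fetch failed entirely, the raw name=value pairs are emitted, which still surfaces the unexpected field that caused the oversized row.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class RowFormatterSketch {

  // Hypothetical fallback formatter: annotate each field with its declared
  // BigQuery type when the schema map has it, and degrade gracefully to a
  // bare name=value pair when it does not (or when bqSchemaMap is null
  // because the schema fetch failed).
  static String formatRow(Map<String, Object> row, Map<String, String> bqSchemaMap) {
    return row.entrySet().stream()
        .map(e -> {
          String type = bqSchemaMap == null ? null : bqSchemaMap.get(e.getKey());
          return type == null
              ? e.getKey() + "=" + e.getValue()
              : e.getKey() + " (" + type + ")=" + e.getValue();
        })
        .collect(Collectors.joining(", "));
  }

  public static void main(String[] args) {
    // A row containing a field ("payload") the table schema does not declare.
    Map<String, Object> row = new LinkedHashMap<>();
    row.put("id", 42L);
    row.put("payload", "big");
    Map<String, String> schema = Map.of("id", "INT64");

    System.out.println(formatRow(row, schema)); // schema known for "id" only
    System.out.println(formatRow(row, null));   // schema fetch failed: raw fields
  }
}
```

Printing the raw fields even without a schema keeps the dead-letter output useful in exactly the failure mode described: the mismatched field still shows up by name, so the user can spot it.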
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]