DanielMorales9 commented on issue #32746:
URL: https://github.com/apache/beam/issues/32746#issuecomment-2407162082

I used 2.60.0-SNAPSHOT with a triggering frequency of 60s, but after some time the errors show up again:
```
Operation ongoing in step Managed.ManagedTransform/ManagedSchemaTransformProvider.ManagedSchemaTransform/IcebergWriteSchemaTransformProvider.IcebergWriteSchemaTransform/IcebergIO.WriteRows/Write Rows to Destinations/AppendFilesToTables/Append metadata updates to tables for at least 20m00s without outputting or completing in state process in thread DataflowWorkUnits-175:17462430d947b538 with id 229
  at [email protected]/jdk.internal.misc.Unsafe.park(Native Method)
  at [email protected]/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
  at [email protected]/java.util.concurrent.FutureTask.awaitDone(FutureTask.java:447)
  at [email protected]/java.util.concurrent.FutureTask.get(FutureTask.java:190)
  at app//com.google.cloud.hadoop.util.BaseAbstractGoogleAsyncWriteChannel.waitForCompletionAndThrowIfUploadFailed(BaseAbstractGoogleAsyncWriteChannel.java:247)
  at app//com.google.cloud.hadoop.util.BaseAbstractGoogleAsyncWriteChannel.close(BaseAbstractGoogleAsyncWriteChannel.java:168)
  at [email protected]/java.nio.channels.Channels$1.close(Channels.java:177)
  at [email protected]/java.io.FilterOutputStream.close(FilterOutputStream.java:188)
  at app//com.google.cloud.hadoop.fs.gcs.GoogleHadoopOutputStream.close(GoogleHadoopOutputStream.java:119)
  at app//org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
  at app//org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
  at app//org.apache.iceberg.hadoop.HadoopStreams$HadoopPositionOutputStream.close(HadoopStreams.java:188)
  at [email protected]/java.io.FilterOutputStream.close(FilterOutputStream.java:188)
  at [email protected]/java.io.FilterOutputStream.close(FilterOutputStream.java:188)
  at app//org.apache.avro.file.DataFileWriter.close(DataFileWriter.java:461)
  at app//org.apache.iceberg.avro.AvroFileAppender.close(AvroFileAppender.java:94)
  at app//org.apache.iceberg.ManifestWriter.close(ManifestWriter.java:213)
  at app//org.apache.iceberg.ManifestFiles.copyManifestInternal(ManifestFiles.java:337)
  at app//org.apache.iceberg.ManifestFiles.copyAppendManifest(ManifestFiles.java:264)
  at app//org.apache.iceberg.MergingSnapshotProducer.copyManifest(MergingSnapshotProducer.java:288)
  at app//org.apache.iceberg.MergingSnapshotProducer.add(MergingSnapshotProducer.java:279)
  at app//org.apache.iceberg.MergeAppend.appendManifest(MergeAppend.java:68)
  at app//org.apache.beam.sdk.io.iceberg.AppendFilesToTables$AppendFilesToTablesDoFn.processElement(AppendFilesToTables.java:104)
  at app//org.apache.beam.sdk.io.iceberg.AppendFilesToTables$AppendFilesToTablesDoFn$DoFnInvoker.invokeProcessElement(Unknown Source)
```
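For context, the write is set up roughly like this (a minimal sketch, not the exact pipeline: the table name, catalog name, warehouse path, and upstream source are placeholders, the Hadoop catalog is an assumption, and the triggering-frequency key follows the managed Iceberg streaming-write configuration):

```java
import java.util.Map;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.managed.Managed;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.Row;

public class IcebergStreamingWriteSketch {

  public static void main(String[] args) {
    Pipeline pipeline = Pipeline.create();

    // Placeholder: in the real job this is an unbounded PCollection<Row> with a
    // schema attached (events read from the streaming source and mapped to Rows).
    PCollection<Row> events = readEvents(pipeline);

    // Assumption: Hadoop catalog on GCS; all names below are placeholders.
    Map<String, String> catalogProps =
        Map.of(
            "type", "hadoop",
            "warehouse", "gs://<bucket-name>/eu");

    Map<String, Object> config =
        Map.of(
            "table", "aqueduct_internal.stream_373_events",
            "catalog_name", "my_catalog",
            "catalog_properties", catalogProps,
            // The 60s frequency mentioned above, which controls how often
            // file writes are committed in streaming mode.
            "triggering_frequency_seconds", 60);

    events.apply(Managed.write(Managed.ICEBERG).withConfig(config));

    pipeline.run();
  }

  // Hypothetical helper standing in for the actual unbounded source.
  private static PCollection<Row> readEvents(Pipeline pipeline) {
    throw new UnsupportedOperationException("replace with the real streaming source");
  }
}
```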
   
However, it looks like it eventually succeeds and one snapshot is produced:
```
2024-10-11 10:46:14.495000+0000, 3353657249539184893, 7738049219100488985, 'append', 'gs://<bucket-name>/eu/aqueduct_internal/stream_373_events-f47ac10b-58cc-4372-a567-0e02b2c3d478/metadata/snap-3353657249539184893-1-eea56100-7206-4546-95d6-923db083982f.avro', {'changed-partition-count': '1', 'added-data-files': '5527', 'total-equality-deletes': '0', 'added-records': '325676', 'total-position-deletes': '0', 'added-files-size': '288080637', 'total-delete-files': '0', 'total-files-size': '1856738078', 'total-records': '570029', 'total-data-files': '89258'}
```

