[ https://issues.apache.org/jira/browse/BEAM-11134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17548934#comment-17548934 ]
Danny McCormick commented on BEAM-11134:
----------------------------------------
This issue has been migrated to https://github.com/apache/beam/issues/20544
> Using WriteToBigQuery FILE_LOADS in a streaming pipeline does not delete temp
> tables
> ------------------------------------------------------------------------------------
>
> Key: BEAM-11134
> URL: https://issues.apache.org/jira/browse/BEAM-11134
> Project: Beam
> Issue Type: Bug
> Components: io-py-gcp
> Affects Versions: 2.24.0
> Environment: Running on DataflowRunner on GCP Dataflow.
> Reporter: Luke Kavenagh
> Priority: P3
> Labels: beam, dataflow, gcp, python
>
> Using the {{FILE_LOADS}} method in {{WriteToBigQuery}} initially appears to
> work: load jobs are submitted and (at least sometimes) succeed, and the data
> lands in the correct tables.
> But the temporary tables that were created are never deleted. Often the data
> is never even copied from the temp tables to the destination.
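>
> For context, here is a minimal sketch of the kind of streaming pipeline that
> hits this (the project, subscription, table, and schema names are
> placeholders, not taken from the original report):
> {code:python}
> import json
>
> import apache_beam as beam
> from apache_beam.options.pipeline_options import PipelineOptions
>
> options = PipelineOptions(streaming=True)
>
> with beam.Pipeline(options=options) as p:
>     (p
>      | 'Read' >> beam.io.ReadFromPubSub(
>            subscription='projects/my-project/subscriptions/my-sub')
>      | 'Parse' >> beam.Map(lambda msg: json.loads(msg.decode('utf-8')))
>      | 'Write' >> beam.io.WriteToBigQuery(
>            table='my-project:my_dataset.my_table',
>            schema='id:INTEGER,name:STRING',
>            method=beam.io.WriteToBigQuery.Method.FILE_LOADS,
>            triggering_frequency=300,  # run a load job every 5 minutes
>            create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
>            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND))
> {code}
>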
> In the code
> (https://github.com/apache/beam/blob/aca9099acca969dc217ab183782e5270347cd354/sdks/python/apache_beam/io/gcp/bigquery_file_loads.py#L846)
> ...it appears that after submitting the load jobs, Beam should wait for them
> to finish, then copy the data from the temp tables and delete them; however,
> when used in a streaming pipeline, it does not seem to complete these steps.
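>
> As a stopgap, the leftover temp tables can be deleted manually with the
> BigQuery client. A minimal sketch, assuming the temp tables share an
> identifiable name prefix ({{beam_load}} here is a guess; check the actual
> table names in the dataset before deleting anything):
> {code:python}
> from google.cloud import bigquery
>
> client = bigquery.Client(project='my-project')  # placeholder project
> for table in client.list_tables('my-project.my_dataset'):
>     # 'beam_load' is an assumed prefix; verify against your dataset.
>     if table.table_id.startswith('beam_load'):
>         client.delete_table(table.reference, not_found_ok=True)
>         print('deleted', table.table_id)
> {code}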
>
> In case it's not clear, this is for the Python SDK.
>
> For reference:
> https://stackoverflow.com/questions/64526500/using-writetobigquery-file-loads-in-a-streaming-pipeline-just-creates-a-lot-of-t/64543619#64543619