[ https://issues.apache.org/jira/browse/BEAM-383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15407287#comment-15407287 ]
ASF GitHub Bot commented on BEAM-383:
-------------------------------------

Github user asfgit closed the pull request at:

    https://github.com/apache/incubator-beam/pull/707

> BigQueryIO: update sink to shard into multiple write jobs
> ---------------------------------------------------------
>
>          Key: BEAM-383
>          URL: https://issues.apache.org/jira/browse/BEAM-383
>      Project: Beam
>   Issue Type: Bug
>   Components: sdk-java-gcp
>     Reporter: Daniel Halperin
>     Assignee: Ian Zhou
>
> BigQuery has global limits on both the number of files that can be written in a
> single load job and the total bytes across those files. BigQueryIO.Write should be
> modified to chunk its input into multiple smaller jobs that stay within these
> limits, write the results to temporary tables, and then atomically copy them into
> the destination table.
> This functionality will let us safely stay within BigQuery's load job limits.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
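A minimal sketch of the chunking logic the issue describes, in plain Java. The limit constants, the FileInfo class, and shardIntoJobs are hypothetical names for illustration only; the actual quota values and the Beam/BigQuery client plumbing are not shown here.

    import java.util.ArrayList;
    import java.util.List;

    public class LoadJobSharder {
      // Hypothetical limits for illustration; the real values come from
      // BigQuery's documented load-job quotas.
      static final int MAX_FILES_PER_JOB = 10_000;
      static final long MAX_BYTES_PER_JOB = 11L * 1024 * 1024 * 1024 * 1024;

      // Minimal stand-in for a staged file awaiting load.
      static class FileInfo {
        final String path;
        final long sizeBytes;
        FileInfo(String path, long sizeBytes) {
          this.path = path;
          this.sizeBytes = sizeBytes;
        }
      }

      // Greedily partition the staged files into groups, each of which
      // respects both the file-count and total-byte limits. A single file
      // larger than the byte limit still gets its own group.
      static List<List<FileInfo>> shardIntoJobs(List<FileInfo> files) {
        List<List<FileInfo>> jobs = new ArrayList<>();
        List<FileInfo> current = new ArrayList<>();
        long currentBytes = 0;
        for (FileInfo f : files) {
          boolean tooManyFiles = current.size() + 1 > MAX_FILES_PER_JOB;
          boolean tooManyBytes = currentBytes + f.sizeBytes > MAX_BYTES_PER_JOB;
          if (!current.isEmpty() && (tooManyFiles || tooManyBytes)) {
            jobs.add(current);          // flush the full group as one load job
            current = new ArrayList<>();
            currentBytes = 0;
          }
          current.add(f);
          currentBytes += f.sizeBytes;
        }
        if (!current.isEmpty()) {
          jobs.add(current);
        }
        return jobs;
      }
    }

Under this scheme, each group would be loaded into its own temporary table by a separate load job, and a single copy job would then merge the temporary tables into the destination table, so the final write appears atomic to readers.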