pabloem commented on a change in pull request #12203:
URL: https://github.com/apache/beam/pull/12203#discussion_r460207893



##########
File path: sdks/python/apache_beam/io/gcp/big_query_query_to_table_it_test.py
##########
@@ -305,7 +303,8 @@ def test_big_query_new_types_native(self):
         'use_standard_sql': False,
         'native': True,
         'wait_until_finish_duration': WAIT_UNTIL_FINISH_DURATION_MS,
-        'on_success_matcher': all_of(*pipeline_verifiers)
+        'on_success_matcher': all_of(*pipeline_verifiers),
+        'experiments': 'use_dataflow_bq_sink',

Review comment:
      I'm trying to preserve the same test coverage after the change, and
remove the BQSink tests later on. Would this be fine?
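
   A rough sketch (not code from this PR) of how an extra_opts dict like the
   one in the diff above, including the new 'experiments' entry, could be
   turned into pipeline arguments. The as_args helper is a simplified stand-in
   for Beam's TestPipeline.get_full_options_as_args, WAIT_UNTIL_FINISH_DURATION_MS
   is a placeholder value, and the on_success_matcher entry is omitted because
   matchers are not plain strings.

       WAIT_UNTIL_FINISH_DURATION_MS = 15 * 60 * 1000  # placeholder duration (ms)

       extra_opts = {
           'use_standard_sql': False,
           'native': True,
           'wait_until_finish_duration': WAIT_UNTIL_FINISH_DURATION_MS,
           'experiments': 'use_dataflow_bq_sink',
       }

       def as_args(opts):
           # Simplified: the real test helper also merges --test-pipeline-options.
           return ['--%s=%s' % (key, value) for key, value in opts.items()]

       print(as_args(extra_opts))
       # ['--use_standard_sql=False', '--native=True',
       #  '--wait_until_finish_duration=900000', '--experiments=use_dataflow_bq_sink']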
   

##########
File path: CHANGES.md
##########
@@ -55,6 +55,9 @@
 
 * New overloads for BigtableIO.Read.withKeyRange() and BigtableIO.Read.withRowFilter()
   methods that take ValueProvider as a parameter (Java) ([BEAM-10283](https://issues.apache.org/jira/browse/BEAM-10283)).
+* The WriteToBigQuery transform (Python) in Dataflow Batch no longer relies on BigQuerySink by default. It relies on
+  a new, fully-featured transform based on file loads into BigQuery. To revert the behavior to the old implementation,
+  you may use `--experiments=use_dataflow_bq_sink`.

Review comment:
       Done everywhere. Thanks!
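
   A minimal sketch (not from the PR) of how the revert flag documented in the
   CHANGES.md entry above could be supplied when building pipeline options; the
   runner, project, and temp_location values are placeholders.

       from apache_beam.options.pipeline_options import DebugOptions, PipelineOptions

       options = PipelineOptions([
           '--runner=DataflowRunner',
           '--project=my-project',                # placeholder project
           '--temp_location=gs://my-bucket/tmp',  # placeholder bucket
           '--experiments=use_dataflow_bq_sink',  # opt back into the old native sink
       ])

       # The experiment surfaces on DebugOptions and can be inspected before run().
       assert 'use_dataflow_bq_sink' in (options.view_as(DebugOptions).experiments or [])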




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

