Hi,

I'm writing a simple streaming Beam application. The job does the
following tasks:

1. Reads data from a GCS bucket (project 1) and loads it into a Kafka topic
2. Reads data from the Kafka topic and loads it into BigQuery (project 3)

Composer is running in project 1.
Dataflow is running in project 2.
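
Roughly, the pipeline itself looks like the sketch below. The bucket, topic,
broker, and table names are placeholders and the real code is more involved;
this is just to show the shape of the two tasks:

    import json

    import apache_beam as beam
    from apache_beam.io import ReadFromText
    from apache_beam.io.gcp.bigquery import BigQueryDisposition, WriteToBigQuery
    from apache_beam.io.kafka import ReadFromKafka, WriteToKafka
    from apache_beam.options.pipeline_options import PipelineOptions

    # Placeholder values -- the real ones are project-specific.
    BOOTSTRAP_SERVERS = "kafka-broker:9092"
    TOPIC = "events"


    def run(argv=None):
        options = PipelineOptions(argv, streaming=True, save_main_session=True)

        with beam.Pipeline(options=options) as p:
            # Task 1: GCS bucket (project 1) -> Kafka topic
            (
                p
                | "ReadFromGCS" >> ReadFromText("gs://project1-bucket/input/*.json")
                | "ToKafkaRecord" >> beam.Map(lambda line: (b"", line.encode("utf-8")))
                | "WriteToKafka" >> WriteToKafka(
                    producer_config={"bootstrap.servers": BOOTSTRAP_SERVERS},
                    topic=TOPIC,
                )
            )

            # Task 2: Kafka topic -> BigQuery (project 3)
            (
                p
                | "ReadFromKafka" >> ReadFromKafka(
                    consumer_config={"bootstrap.servers": BOOTSTRAP_SERVERS},
                    topics=[TOPIC],
                )
                | "ParseJson" >> beam.Map(lambda kv: json.loads(kv[1].decode("utf-8")))
                | "WriteToBigQuery" >> WriteToBigQuery(
                    table="project-3:dataset.table",
                    write_disposition=BigQueryDisposition.WRITE_APPEND,
                )
            )


    if __name__ == "__main__":
        run()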

I'm launching the pipeline from Composer with BeamRunPythonPipelineOperator
and a DataflowConfiguration.
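
The task definition looks something like this. Project IDs, bucket paths,
region, and the connection ID are placeholders, and the connection part is
exactly what I'm unsure about:

    from datetime import datetime

    from airflow import DAG
    from airflow.providers.apache.beam.operators.beam import (
        BeamRunPythonPipelineOperator,
    )
    from airflow.providers.google.cloud.operators.dataflow import DataflowConfiguration

    with DAG(
        dag_id="gcs_kafka_bq_streaming",   # placeholder DAG id
        start_date=datetime(2022, 1, 1),
        schedule_interval=None,
        catchup=False,
    ) as dag:

        launch_pipeline = BeamRunPythonPipelineOperator(
            task_id="launch_streaming_pipeline",
            py_file="gs://project1-bucket/pipelines/gcs_kafka_bq.py",   # placeholder
            runner="DataflowRunner",
            pipeline_options={
                "streaming": True,
                "temp_location": "gs://project2-dataflow-bucket/tmp/",  # placeholder
            },
            # Connection used to fetch the py_file and talk to the GCP APIs.
            gcp_conn_id="google_cloud_default",
            dataflow_config=DataflowConfiguration(
                job_name="gcs-kafka-bq-stream",
                project_id="project-2",       # the project Dataflow runs in
                location="us-central1",       # placeholder region
                wait_until_finished=False,    # it's a streaming job
                # DataflowConfiguration also accepts its own gcp_conn_id
            ),
        )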

Is this the right setup? What should gcp_conn_id be, given that the
projects differ? Any suggestions?

Utkarsh
