Hello Ravi,

I am not sure I follow what you are trying to do, but BigQueryIO.writeTableRows is a sink and only returns insertion errors, so there is nothing downstream of it that you can keep chaining reads from.
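As far as I know, the only PCollection you can get back out of that write is the failed inserts, and even that only applies when the sink uses streaming inserts. Something like:

// WriteResult is org.apache.beam.sdk.io.gcp.bigquery.WriteResult
WriteResult writeResult =
    table_A_records.apply(
        BigQueryIO.writeTableRows()
            .to("project:dataset.table_B")
            .withMethod(BigQueryIO.Write.Method.STREAMING_INSERTS));

// Only the rows that failed to insert come back, not the written data.
PCollection<TableRow> failedRows = writeResult.getFailedInserts();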
If you already have table_A_records, why read it again from BigQuery? You can apply any intermediate transforms directly to table_A_records and write the result to table_C, even while you are also writing it to the staging table (table_B). That way you also get some parallelism (rough sketch at the bottom of this mail).

Best,
Bruno

On Tue, Jun 14, 2022 at 11:12 AM Ravi Kapoor <[email protected]> wrote:

> Team,
> Any update on this?
>
> On Mon, Jun 13, 2022 at 8:39 PM Ravi Kapoor <[email protected]> wrote:
>
>> Hi Team,
>>
>> I am currently using Beam in my project with the Dataflow runner.
>> I am trying to create a pipeline where the data flows from the source to
>> staging and then to the target, such as:
>>
>> A (Source) -> B (Staging) -> C (Target)
>>
>> When I create a pipeline as below:
>>
>> PCollection<TableRow> table_A_records = p.apply(BigQueryIO.readTableRows()
>>     .from("project:dataset.table_A"));
>>
>> table_A_records.apply(BigQueryIO.writeTableRows()
>>     .to("project:dataset.table_B")
>>     .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_NEVER)
>>     .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_TRUNCATE));
>>
>> PCollection<TableRow> table_B_records = p.apply(BigQueryIO.readTableRows()
>>     .from("project:dataset.table_B"));
>>
>> table_B_records.apply(BigQueryIO.writeTableRows()
>>     .to("project:dataset.table_C")
>>     .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_NEVER)
>>     .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_TRUNCATE));
>>
>> p.run().waitUntilFinish();
>>
>> It basically creates two parallel job graphs in Dataflow instead of the
>> chained transformation I expected:
>>
>> A -> B
>> B -> C
>>
>> I need to create a data pipeline in which the data flows as a chain, like:
>>
>>        D
>>       /
>> A -> B -> C
>>       \
>>        E
>>
>> Is there a way to achieve this transformation between the source and
>> target tables?
>>
>> Thanks,
>> Ravi
>
> --
> Thanks,
> Ravi Kapoor
> +91-9818764564
> [email protected]
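Here is the rough sketch I mentioned above. It is untested; the project/dataset/table names are placeholders, and the MapElements step is only a stand-in for whatever intermediate logic you actually need between staging and target:

import com.google.api.services.bigquery.model.TableRow;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.TypeDescriptor;

public class ChainedPipeline {
  public static void main(String[] args) {
    Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).withValidation().create());

    // Read table_A once.
    PCollection<TableRow> table_A_records =
        p.apply("ReadA", BigQueryIO.readTableRows().from("project:dataset.table_A"));

    // Branch 1: persist the raw records to the staging table B.
    table_A_records.apply("WriteB",
        BigQueryIO.writeTableRows()
            .to("project:dataset.table_B")
            .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_NEVER)
            .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_TRUNCATE));

    // Branch 2: keep transforming the same PCollection instead of re-reading table_B.
    // The MapElements below is only a placeholder for the real staging -> target logic.
    PCollection<TableRow> table_C_records =
        table_A_records.apply("TransformForC",
            MapElements.into(TypeDescriptor.of(TableRow.class))
                .via((TableRow row) -> row.clone().set("processed", true)));

    table_C_records.apply("WriteC",
        BigQueryIO.writeTableRows()
            .to("project:dataset.table_C")
            .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_NEVER)
            .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_TRUNCATE));

    // More branches (D, E, ...) can hang off table_A_records or table_C_records the same way.
    p.run().waitUntilFinish();
  }
}

Running that with --runner=DataflowRunner should give you a single job whose read of table_A fans out into both writes, instead of two disconnected graphs.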
