Hello,

I'm writing a Beam pipeline that does some relatively expensive reads from
BigQuery. I want to be able to run the pipeline in a development loop
without racking up a huge bill.

I know BigQuery has support for query caching, but from the docs, that only
works if you don't specify a destination table.

For development purposes, I don't mind trading stale data (i.e. reusing an
existing destination table if one exists) for a smaller bill.
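To make it concrete, here's roughly what I have in mind -- just a sketch
using BigQueryIO plus the google-cloud-bigquery client, with placeholder
project/dataset/table names and a stand-in query:

  import com.google.api.services.bigquery.model.TableRow;
  import com.google.cloud.bigquery.BigQuery;
  import com.google.cloud.bigquery.BigQueryOptions;
  import com.google.cloud.bigquery.TableId;
  import org.apache.beam.sdk.Pipeline;
  import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
  import org.apache.beam.sdk.values.PCollection;

  public class DevCachedRead {
    // Placeholder names for the cached destination table.
    private static final String DATASET = "dev_cache";
    private static final String TABLE = "expensive_results";

    static PCollection<TableRow> read(Pipeline p) {
      // Check at pipeline-construction time whether a cached copy exists
      // (getTable returns null for a missing table).
      BigQuery bq = BigQueryOptions.getDefaultInstance().getService();
      boolean cached = bq.getTable(TableId.of(DATASET, TABLE)) != null;

      if (cached) {
        // Stale but free: read the existing destination table directly.
        return p.apply("ReadCached",
            BigQueryIO.readTableRows()
                .from("myproject:" + DATASET + "." + TABLE));
      }
      // No cache yet: pay for the expensive query (stand-in query shown).
      return p.apply("ReadFresh",
          BigQueryIO.readTableRows()
              .fromQuery("SELECT id, payload FROM `myproject.prod.big_table`")
              .usingStandardSql());
    }
  }

Doing the existence check at construction time keeps the pipeline graph
itself unchanged between runs, but it means hand-rolling this for every
expensive read, which is why I'm hoping there's built-in support.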

Is there any way to do this now, or are there any relevant open issues? I
did a quick pass through JIRA but couldn't find anything.

Thanks,
Matt
