On Wed, Feb 13, 2019 at 11:39 PM Steve Niemitz wrote:
>
> On Wed, Feb 13, 2019 at 5:01 PM Robert Bradshaw wrote:
>
>> On Wed, Feb 13, 2019 at 5:07 PM Steve Niemitz wrote:
>>
>>> Thanks again for the answers so far! I really appreciate it. As for my
>>> specific use-case, we're using Bigtable as the final sink, and I'd prefer
>>> to keep our writes fully idempotent for other reasons (i.e. no
>>> read-modify-write). We actually do track tentative vs. final values
>>> already, but checking t
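
The idempotency point above can be sketched with a toy model (this is not the real Bigtable client API; the table is just a dict keyed by cell coordinates, and the helper name is hypothetical). With a blind SetCell-style write at an explicit, deterministic timestamp, replaying the same mutation after a retry leaves the table unchanged, so no read-modify-write is needed:

```python
def apply_set_cell(table, row_key, column, timestamp_us, value):
    """Blind SetCell-style write: the last write to the same
    (row, column, timestamp) cell wins, so replays are no-ops."""
    table[(row_key, column, timestamp_us)] = value

table = {}
# Hypothetical row/column names, with a fixed timestamp chosen by the writer.
mutation = ("user#123", "stats:count", 1_550_000_000_000_000, b"42")

apply_set_cell(table, *mutation)
before = dict(table)
apply_set_cell(table, *mutation)  # a retried delivery of the same element

assert table == before  # replay changed nothing: the write is idempotent
```

The key design choice is that the writer, not the server, picks the cell timestamp; if the timestamp were assigned at write time, two deliveries of the same element would create two distinct cell versions.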
To question 1, I also would have expected the pipeline to fail in the case
of files failing to load; I'm not sure why it doesn't. I thought the BigQuery
API returns a 400-level response code when files fail to load, and that
would bubble up to a pipeline execution error, but I haven't dug through
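
The "bubble up" behavior being described would look roughly like the sketch below. The dict shape loosely follows the BigQuery `jobs.get` response (`status.errorResult`); treat that layout, and the `check_load_job` helper, as illustrative assumptions rather than a verified API contract:

```python
def check_load_job(job_status):
    """Raise if a load-job status payload reports a fatal error."""
    error = job_status.get("status", {}).get("errorResult")
    if error is not None:
        raise RuntimeError("load job failed: " + error.get("message", "unknown"))

# A successful job passes silently...
check_load_job({"status": {"state": "DONE"}})

# ...while a failed load should surface as a pipeline-visible exception.
bad = {"status": {"state": "DONE",
                  "errorResult": {"reason": "invalid",
                                  "message": "Could not parse file"}}}
try:
    check_load_job(bad)
    failed = False
except RuntimeError:
    failed = True
```

If the runner swallows that exception instead of failing the pipeline, files can be dropped without any visible error, which matches the behavior being questioned here.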
Hello,
I have a BigQuery table which ingests a Protobuf stream from Kafka with a
Beam pipeline. The Protobuf has a `log` map column, which
translates to a field "log" of type RECORD with unknown fields in BigQuery.
So I scanned my whole stream to know which schema fields to expect and
created an e
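
One common way to give such a map field a fixed schema, instead of a RECORD whose field names depend on the keys seen in the stream, is to flatten it into a REPEATED RECORD of (key, value) pairs. The helper below is a hedged sketch of that idea; the field names and dict-based schema layout are illustrative assumptions, not Beam's actual generated output:

```python
def map_field_schema(name, value_type="STRING"):
    """BigQuery-style schema for a proto map field, flattened to a
    REPEATED RECORD of (key, value) pairs so the schema is fixed
    regardless of which keys appear in the data."""
    return {
        "name": name,
        "type": "RECORD",
        "mode": "REPEATED",
        "fields": [
            {"name": "key", "type": "STRING", "mode": "REQUIRED"},
            {"name": "value", "type": value_type, "mode": "NULLABLE"},
        ],
    }

log_field = map_field_schema("log")
```

With this shape there is no need to pre-scan the whole stream for keys; the trade-off is that lookups by key in SQL require an UNNEST rather than a direct field reference.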
On Tue, Feb 12, 2019 at 7:38 PM Steve Niemitz wrote:
>
> wow, that's super unexpected and dangerous, thanks for clarifying! Time
> to go re-think how we do some of our writes w/ early firings then.
>
> Are there any workarounds to make things happen in-order in Dataflow? e.g.
> if the sink gets fused t