Hi Davor,

My team wants to test streaming pipelines on Dataflow (Beam
pipelines, of course), and I was wondering how this works in terms of
UnboundedSources - Kafka/Pubsub? We currently use Kafka, and I was
wondering if I could "record" a chunk of (public) data we use and import
it. Can I throttle the input? Loop-feed it (just to keep it going for a while)?
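To be concrete, here's roughly the kind of replay driver I have in mind - a pure sketch, where `publish` stands in for whatever Kafka/Pubsub producer call we'd actually use, and the rate/loop parameters are made up for illustration:

```python
import itertools
import time

def replay(records, publish, rate_per_sec=10.0, max_messages=None):
    """Loop-feed recorded messages to `publish`, throttled to rate_per_sec.

    `records` is the recorded chunk of data; `publish` is a stand-in for a
    real producer (e.g. a Kafka producer send or a Pubsub publish call).
    """
    interval = 1.0 / rate_per_sec
    for n, rec in enumerate(itertools.cycle(records)):
        if max_messages is not None and n >= max_messages:
            break
        publish(rec)
        time.sleep(interval)

# Example: replay 3 recorded messages, looped to 7 sends total.
sent = []
replay(["a", "b", "c"], sent.append, rate_per_sec=1000.0, max_messages=7)
# sent is now ["a", "b", "c", "a", "b", "c", "a"]
```

Basically: cycle over the recorded chunk forever (or up to a cap) and pace the sends, so the pipeline sees a steady unbounded stream for as long as the test needs.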

I thought I'd ask you for good pointers instead of digging around
myself (that's me being lazy whenever I can ;-) ).

On a side note, I followed up on the package-tracking link you sent me,
and it looks like the package went from Seattle to Seattle. It's been
showing that since about 3-4 days in; I've checked once a week to see if
it gets updated, but now I think something might be wrong.

Thanks,
Amit
