But how can I write the results when the last point in the pipeline is
the BigQuery sink? There is nothing I can do after that step.

On Sun, Sep 10, 2017 at 6:16 PM, Eugene Kirpichov
<[email protected]> wrote:
> The common pattern for this sort of thing is to have the pipeline write its
> results somewhere, e.g. to files, and have the client read them back. This
> also makes sense for pipelines that never finish.
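> A minimal sketch of that pattern with the Beam Java SDK (the
> PCollection name, the summary logic, and the output path below are
> all placeholders):
>
>   import org.apache.beam.sdk.io.TextIO;
>   import org.apache.beam.sdk.transforms.Count;
>   import org.apache.beam.sdk.transforms.MapElements;
>   import org.apache.beam.sdk.values.PCollection;
>   import org.apache.beam.sdk.values.TypeDescriptors;
>
>   // Branch off the same PCollection that already feeds the sink,
>   // reduce it to a summary value, and write that summary to a file
>   // the client can pick up later.
>   PCollection<Long> count = records.apply(Count.globally());
>   count
>       .apply(MapElements.into(TypeDescriptors.strings())
>           .via((Long c) -> "processed=" + c))
>       .apply(TextIO.write().to("/tmp/results/summary").withoutSharding());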
>
> On Sun, Sep 10, 2017, 3:56 AM Chaim Turkel <[email protected]> wrote:
>
>> Thanks for the reply.
>> I think it would have been nicer if I could get this information as a
>> combine at the end.
>> The current solution means that the client that deploys the job needs
>> to wait for the results (block), and if the client machine dies, then
>> I lose the information.
>> If I could add it to the collection pipeline as a combine at the end,
>> it would be better.
>>
>> chaim
>>
>> On Sun, Sep 10, 2017 at 9:03 AM, Jean-Baptiste Onofré <[email protected]>
>> wrote:
>> > Hi Chaim,
>> >
>> > The PipelineResult object gives you access to the metrics. You can
>> > periodically poll it (from a separate thread) to get the updated
>> > data.
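>> >
>> > A minimal sketch of that polling, assuming the Beam Java SDK and a
>> > counter named "recordsProcessed" in namespace "myNamespace" (both
>> > placeholder names):
>> >
>> >   import org.apache.beam.sdk.PipelineResult;
>> >   import org.apache.beam.sdk.metrics.MetricNameFilter;
>> >   import org.apache.beam.sdk.metrics.MetricQueryResults;
>> >   import org.apache.beam.sdk.metrics.MetricResult;
>> >   import org.apache.beam.sdk.metrics.MetricsFilter;
>> >
>> >   PipelineResult result = pipeline.run();
>> >   // Poll from a separate thread; Thread.sleep() throws
>> >   // InterruptedException, so declare or handle it in the method.
>> >   while (result.getState() == PipelineResult.State.RUNNING) {
>> >     MetricQueryResults metrics = result.metrics().queryMetrics(
>> >         MetricsFilter.builder()
>> >             .addNameFilter(
>> >                 MetricNameFilter.named("myNamespace", "recordsProcessed"))
>> >             .build());
>> >     for (MetricResult<Long> counter : metrics.getCounters()) {
>> >       System.out.println(counter.getName() + " = " + counter.getAttempted());
>> >     }
>> >     Thread.sleep(10_000);
>> >   }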
>> >
>> > Regards
>> > JB
>> >
>> >
>> > On 09/10/2017 07:45 AM, Chaim Turkel wrote:
>> >>
>> >> Hi,
>> >>    I am having trouble figuring out what to do with the results. I have
>> >> multiple collections running on the pipeline, and since the sink does
>> >> not give me the option to get the result, I need to wait for the
>> >> pipeline to finish and then poll the results.
>> >> From what I can see, my only option is to use the metrics. Is there
>> >> another way to pass information from the collections to the results?
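>> >> By "the metrics" I mean something like the following (a minimal
>> >> sketch; the namespace and counter names are placeholders):
>> >>
>> >>   import org.apache.beam.sdk.metrics.Counter;
>> >>   import org.apache.beam.sdk.metrics.Metrics;
>> >>   import org.apache.beam.sdk.transforms.DoFn;
>> >>
>> >>   static class CountingFn extends DoFn<String, String> {
>> >>     // Counter values become visible on the PipelineResult.
>> >>     private final Counter processed =
>> >>         Metrics.counter("myNamespace", "recordsProcessed");
>> >>
>> >>     @ProcessElement
>> >>     public void processElement(ProcessContext c) {
>> >>       processed.inc();
>> >>       c.output(c.element());
>> >>     }
>> >>   }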
>> >>
>> >> chaim
>> >>
>> >
>> > --
>> > Jean-Baptiste Onofré
>> > [email protected]
>> > http://blog.nanthrax.net
>> > Talend - http://www.talend.com
>>
