Cc: Chesnay Schepler, "user@flink.apache.org"
Subject: Re: stream of large objects
Hi Ajay,

when repartitioning the stream, the events need to be transferred between
TaskManagers (processes/nodes). Just passing a reference there won't work.
If it is serialization you are worried about [...]
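The point above, that an object reference cannot cross a TaskManager (process/node) boundary, can be illustrated with plain Java serialization. This is only a sketch: the `LargeMessage` fields are invented for illustration (the thread only shows the class name), and `java.io` serialization stands in for Flink's own serializer stack.

```java
import java.io.*;
import java.util.*;

// Hypothetical stand-in for the LargeMessage type discussed in the thread;
// the fields are invented for illustration.
class LargeMessage implements Serializable {
    String key;
    List<String> messages;
    LargeMessage(String key, List<String> messages) {
        this.key = key;
        this.messages = messages;
    }
}

public class ShipDemo {
    // Crossing a process/node boundary means turning the object into bytes
    // and rebuilding it on the other side; the reference itself cannot travel.
    static byte[] toBytes(LargeMessage m) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(m);
        }
        return bos.toByteArray();
    }

    static LargeMessage fromBytes(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return (LargeMessage) ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        LargeMessage original = new LargeMessage("k1", Arrays.asList("a", "b", "c"));
        LargeMessage shipped = fromBytes(toBytes(original)); // what a repartition implies
        System.out.println(shipped == original);   // false: a rebuilt copy, not a shared reference
        System.out.println(shipped.messages);      // [a, b, c]: same data, new object
    }
}
```

So even if the large message were kept "by reference" within one task, any redistribution across tasks pays the bytes-on-the-wire cost once per hop.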
> [...] keyed context, so sharing all of these across all downstream tasks
> does not seem efficient.
>
> From: Chesnay Schepler
> Date: Sunday, February 10, 2019 at 4:57 AM
> To: "Aggarwal, Ajay" , "user@flink.apache.org"
> Subject: Re: stream of large objects
>
> Whether a LargeMessage is serialized depends on how the job is structured.
> For example, if you were to only apply map/filter functions after the
> aggregation, it is likely they wouldn't be serialized.
> If you were to apply another keyBy, they will be serialized again.
>
> When you say "small size" messages [...]
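The distinction in the quoted message, chained map/filter versus another keyBy, comes down to whether records must be redistributed across parallel tasks. A rough sketch of the routing idea in plain Java; the key names, parallelism, and hash scheme are invented for illustration (Flink really hashes keys into key groups, which this does not reproduce):

```java
public class KeyRoutingDemo {
    // Toy stand-in for keyBy routing: each key is hashed to one of
    // `parallelism` downstream tasks.
    static int taskFor(String key, int parallelism) {
        return Math.floorMod(key.hashCode(), parallelism);
    }

    public static void main(String[] args) {
        int parallelism = 4;
        // After the first keyBy, every record for this key sits in one task,
        // so chained map/filter calls there see the object without a network hop.
        int firstTask = taskFor("user-7", parallelism);
        // A second keyBy routes by a *new* key; whenever the new key hashes to
        // a different task, the whole aggregated large message must be
        // serialized and shipped across the network again.
        int secondTask = taskFor("region-eu", parallelism);
        System.out.println("first keyBy -> task " + firstTask
                + ", second keyBy -> task " + secondTask);
    }
}
```

This is why the structure of the job, not the size of the objects alone, determines how often the large messages get serialized.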
>
>> In my use case my source stream contains small size messages, but as part
>> of Flink processing I will be aggregating them into large messages, and
>> further processing will happen on these large messages. The structure of
>> this large message will be something like this:
>>
>> class LargeMessage {