' section in the job detail. Does anyone
know how I might get the watermark information?
Thanks,
--
Gregory Fee
Engineer
On 23. Jul 2018, at 17:37, Gregory Fee wrote:
>
> Hi Aljoscha! I am not using any async i/o. I do use a trick similar to
> ContinuousFileReaderOperator
> <https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/sou
write to the output asynchronously though.
On Mon, Jul 23, 2018 at 2:30 AM, Aljoscha Krettek wrote:
> Hi Greg,
>
> just making sure but is there any asynchrony in your user functions? Any
> Async I/O operator maybe?
>
> Best,
> Aljoscha
>
> On 20. Jul 2
g about the cast exception, I haven't seen that before,
>>> sorry to be off topic.
>>>
>>> --
>>> *From:* Philip Doctor
>>> *Sent:* Thursday, July 19, 2018 9:27:15 PM
>>> *To:* Gregory Fee; user
--
*Gregory Fee*
Engineer
425.830.4734 <+14258304734>
[image: Lyft] <http://www.lyft.com>
orted state primitive would make
> that much easier.
>
> Best, Fabian
>
> 2018-06-28 6:41 GMT+02:00 Hequn Cheng :
>
>> Hi Gregory,
>>
>> What's the cause of your problem. It would be great if you can share your
>> experience which I think will definitely help others.
that ended up stripping off all watermarks.
On Wed, Jun 27, 2018 at 9:41 PM, Hequn Cheng wrote:
> Hi Gregory,
>
> What's the cause of your problem. It would be great if you can share your
> experience which I think will definitely help others.
>
>
> On Thu, Jun 28, 2018 at
t streams’ event times[1].
>
> Best, Hequn
>
> [1]: https://ci.apache.org/projects/flink/flink-docs-master/dev/event_time.html
>
> On Thu, Jun 28, 2018 at 1:58 AM, Gregory Fee wrote:
>
>> Thanks for your answers! Yes, it was based on watermarks.
>>
>> F
s correctly in the ITCase environment.
>> can you share more information? Does the same problem happen if you use
>> proctime?
>> I am guessing this could be highly correlated with how you set the
>> watermark strategy on your input streams of "user_things"
buffer there for a long
time (hours in some cases). Eventually something happens and the data
starts to flush through to the downstream operators. Can anyone help me
understand what is causing that behavior? I want the data to flow through
more consistently.
Thanks!
--
*Gregory Fee*
Engineer
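[Editor's note: the long buffering described above is typical when one input's watermark lags, because a multi-input operator's event-time clock advances only as fast as its slowest input. A minimal plain-Python sketch of that rule (this simulates the behavior; it is not Flink API code, and all names here are illustrative):]

```python
# Toy model of Flink's watermark propagation rule: a multi-input
# operator's event-time clock is the MINIMUM of its inputs' watermarks,
# so one slow or idle input holds back windows and timers downstream,
# which looks like records "buffering" for hours.

def operator_watermark(input_watermarks):
    """Combined watermark of a multi-input operator."""
    return min(input_watermarks)

def fired_windows(watermark, window_ends):
    """Event-time windows fire only once the watermark passes their end."""
    return [end for end in window_ends if end <= watermark]

window_ends = [100, 200, 300]

# One input is far behind (e.g. an idle source): nothing fires yet.
stalled = operator_watermark([10_000, 50])
print(fired_windows(stalled, window_ends))    # []

# Once the lagging input catches up, the buffered results flush at once.
caught_up = operator_watermark([10_000, 9_500])
print(fired_windows(caught_up, window_ends))  # [100, 200, 300]
```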
the RocksDBStateBackend for
best performance on S3? Or any tips with how to get checkpoints with large
amounts of state to succeed?
Thanks!
--
*Gregory Fee*
Engineer
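[Editor's note: for large state on S3, the usual starting point is the RocksDB backend with incremental checkpoints. A sketch of the relevant `flink-conf.yaml` keys, taken from the Flink configuration docs; the bucket path is a placeholder, and checkpoint intervals/timeouts would still be set on the job:]

```yaml
# Use RocksDB so state can spill to disk instead of living on the heap.
state.backend: rocksdb
# Upload only changed SST files per checkpoint instead of full snapshots.
state.backend.incremental: true
# Durable checkpoint storage (placeholder bucket).
state.checkpoints.dir: s3://my-bucket/flink-checkpoints
```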
to me because I do not think I'm doing
a non-windowed GroupBy anywhere. Can anyone help me?
--
Gregory Fee
Engineer
on the events in the retract stream?
--
*Gregory Fee*
Engineer
425.830.4734 <+14258304734>
called on operators that are shutdown gracefully even in a failure
condition. Is that how Flink is supposed to work? Am I missing something?
--
*Gregory Fee*
Engineer
425.830.4734 <+14258304734>
happens if I have a stateful program
using Flink SQL and I want to update my Flink binaries. If the query plan
ends up changing based on that upgrade does it mean that the load of the
save point is going to fail?
Thanks!
--
*Gregory Fee*
Engineer
425.830.4734 <+14258304734>
Hi group, I want to bootstrap some aggregates based on historic data in S3 and
then keep them updated based on a stream. To do this I was thinking of doing
something like processing all of the historic data, doing a save point, then
restoring my program from that save point but with a stream
source instead. Does this seem like a reasonable approach or is there a
better way to approach this functionality? There does not appear to be a
straightforward way of doing it the way I was thinking so
any advice would be appreciated.
--
*Gregory Fee*
Engineer
425.830.4734 <+14258304734>
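[Editor's note: the bootstrap-then-stream approach described above can be sketched as a plain-Python simulation. All names here are hypothetical; in Flink itself the "snapshot" step would be a savepoint and the "restore" step would be resubmitting the job from that savepoint with the stream source swapped in:]

```python
# Simulation of "bootstrap aggregates from historic data, then keep them
# updated from a live stream". This is NOT Flink API code; it only
# illustrates the state lifecycle the approach relies on.
import copy

def apply_event(state, event):
    """Keyed running aggregate: (count, sum) per key."""
    key, value = event
    count, total = state.get(key, (0, 0))
    state[key] = (count + 1, total + value)
    return state

# Phase 1: bootstrap state from bounded historic data (e.g. files in S3).
historic = [("a", 10), ("b", 5), ("a", 2)]
state = {}
for event in historic:
    apply_event(state, event)

# "Savepoint": persist the bootstrapped state.
snapshot = copy.deepcopy(state)

# Phase 2: restore from the snapshot, then keep applying the live stream.
restored = copy.deepcopy(snapshot)
stream = [("b", 1), ("c", 7)]
for event in stream:
    apply_event(restored, event)

print(restored)  # {'a': (2, 12), 'b': (2, 6), 'c': (1, 7)}
```

The subtlety this glosses over, and likely why the approach feels non-straightforward in practice, is that the savepoint records the source's position as well as the keyed state, so swapping the bounded source for a stream source requires the operator UIDs to line up for the state to be matched on restore.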