Ok great,
that's done. The PR is rebased and squashed on top of master and is running
through Travis
https://github.com/apache/flink/pull/9494
Dyana
On Tue, 20 Aug 2019 at 15:32, Tzu-Li (Gordon) Tai wrote:
> Hi Dyana,
>
> Regarding your question on the Chinese docs:
> Since the Chinese counte
> @Dyana would you still be interested in carrying the responsibility and
> forwarding the effort?
>
> Thanks,
> Bowen
>
> [1] https://issues.apache.org/jira/browse/FLINK-12847
> [2] https://github.com/awslabs/amazon-kinesis-producer/releases
> [3] https://github.com/awslabs
ne, it
> > is just excluded when creating the release. So what needs to be done is
> to
> > remove the -Pinclude-kinesis cruft and make it part of the default
> modules
> > instead.
> >
> > Thomas
> >
> >
> > On Fri, Jun 14, 2019 at 10:06 AM Dy
nd the KPL when it's available under the
> Apache
> >> license) will allow the Kinesis connectors to be distributed in the core
> >> build. (making my life easier)
> >>
> >> I haven't seen a Jira ticket specifically for an upgrade in major
> >> v
Dyana Rose created FLINK-12847:
--
Summary: Update Kinesis Connectors to latest Apache licensed
libraries
Key: FLINK-12847
URL: https://issues.apache.org/jira/browse/FLINK-12847
Project: Flink
The Kinesis Client Library v2.x and the AWS Java SDK v2.x are both now under
the Apache 2.0 license.
https://github.com/awslabs/amazon-kinesis-client/blob/master/LICENSE.txt
https://github.com/aws/aws-sdk-java-v2/blob/master/LICENSE.txt
There is a PR for the Kinesis Producer Library to update it t
Just wanted to give an update on this.
Our ops team and I independently came to the same conclusion: our ZooKeeper
quorum was having syncing issues.
After a bit more research, they updated the initLimit and syncLimit in the
quorum configs to:
initLimit=10
syncLimit=5
After this c
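For context on what those settings mean: ZooKeeper's initLimit and syncLimit are measured in ticks, so the effective timeouts depend on the quorum's tickTime. A minimal sketch of the arithmetic, assuming ZooKeeper's common default tickTime of 2000 ms (the actual tickTime in the quorum config above may differ):

```python
# Illustrative arithmetic only; tickTime=2000 is an assumed default,
# not taken from the quorum config discussed above.
TICK_TIME_MS = 2000   # assumed tickTime
INIT_LIMIT = 10       # from the updated quorum config
SYNC_LIMIT = 5        # from the updated quorum config

# Time a follower has to connect to the leader and complete initial sync.
init_timeout_ms = INIT_LIMIT * TICK_TIME_MS
# How far a follower may fall behind the leader before being dropped.
sync_timeout_ms = SYNC_LIMIT * TICK_TIME_MS

print(init_timeout_ms, sync_timeout_ms)
```

With these values a follower gets 20 s for initial sync and may lag up to 10 s, which is why raising the limits helps a quorum that is timing out while syncing.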
Like all the best problems, I can't get this to reproduce locally.
Everything has worked as expected. I started up a test job with 5 retained
checkpoints, let it run and watched the nodes in zookeeper.
Then shut down and restarted the Flink cluster.
The ephemeral lock nodes in the retained chec
s case JM2 would successfully remove the job from s3, but because
> > its lockNode is different from JM1 it cannot delete the lock file in the
> > jobgraph folder and so can’t remove the jobgraph. Then Flink restarts and
> > tries to process the JobGraph it has found, but the S3 files have been
> > deleted.
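The race described above can be sketched as a small model. This is hypothetical plain Python, not Flink's actual classes: the key behaviour is that a job-graph entry can only be removed by the JobManager holding the exact lock node under which it was created.

```python
# Hypothetical model of the lock-node behaviour described above
# (names are illustrative, not Flink's API).
class JobGraphStore:
    def __init__(self):
        self.entries = {}  # job_id -> lock node that guards the entry

    def add(self, job_id, lock_node):
        self.entries[job_id] = lock_node

    def remove(self, job_id, lock_node):
        # Removal succeeds only when the caller owns the guarding lock node.
        if self.entries.get(job_id) == lock_node:
            del self.entries[job_id]
            return True
        return False

store = JobGraphStore()
store.add("job-1", lock_node="jm1-lock")  # entry created under JM1's lock

# JM2 has already deleted the job's S3 state, but its lock node differs
# from JM1's, so the job-graph entry cannot be removed...
assert store.remove("job-1", lock_node="jm2-lock") is False
# ...and a restarting Flink finds the stale entry and tries to recover
# a job whose backing files are gone.
assert "job-1" in store.entries
```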
> >
> > Possible related closed issues (fixes went in v1.7.0):
> > https://issues.apache.org/jira/browse/FLINK-10184 and
> > https://issues.apache.org/jira/browse/FLINK-10255
> >
> > Thanks for any insight,
> > Dyana
> >
>
--
Dyana Rose
Software Engineer
W: www.salecycle.com
Flink v1.7.1
After a Flink reboot we've been seeing an unexpected issue: excess retained
checkpoints cannot be removed from ZooKeeper after a new checkpoint is
created.
I believe I've got my head around the role of ZK and lockNodes in Checkpointing
after going through the cod
Re: who's using the web ui
Though many mature setups, with a fair amount of time and resources available,
are likely running their own front ends, for teams like mine, which are smaller
and aren't focused solely on working with Flink, having the web UI available
removes a large barrier to getting
t of supported
>> Reporters.
>> I was wondering why is that. Was there no demand from the community? Is it
>> related to licensing issue with AWS? Was it a technical concern?
>>
>> Would you accept this contribution into Flink?
>>
>> Thanks,
>> Rafi
>>
Hello,
We've received notification from AWS that the Kinesis Producer Library
versions < 0.12.6 will stop working after the 12th of June (assuming the
date in the email is in US format...)
Flink v1.5.0 has the KPL version at 0.12.6 so it will be fine when it's
released. However, using the kinesis
The PR for this has a question that I'd like some feedback on.
https://github.com/apache/flink/pull/5295
The issue is that, in order to pass a typed event to the session gap
extractor, the assigner needs to be generic; and if the assigner is generic,
then the triggers need to be generic as well.
An
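The generics chain above can be sketched with a small model. This is hypothetical plain Python, not Flink's API: the point is that once the gap extractor is typed on the element, the window assignment it feeds must be parameterised over that element type as well.

```python
# Hypothetical model (names are illustrative, not Flink's classes) of
# per-element session gaps: the gap depends on the typed event itself.
from dataclasses import dataclass

@dataclass
class IoTEvent:
    device_type: str
    timestamp: int  # ms

# Illustrative per-device-type gaps; values are made up.
GAPS_MS = {"thermostat": 5000, "camera": 30000}
DEFAULT_GAP_MS = 10000

def extract_gap(event: IoTEvent) -> int:
    """Typed gap extractor: the gap is derived from the element."""
    return GAPS_MS.get(event.device_type, DEFAULT_GAP_MS)

def assign_window(event: IoTEvent):
    """Session window opened by the event, kept open for its own gap."""
    gap = extract_gap(event)
    return (event.timestamp, event.timestamp + gap)

assert assign_window(IoTEvent("thermostat", 1000)) == (1000, 6000)
assert assign_window(IoTEvent("camera", 1000)) == (1000, 31000)
```

Because `extract_gap` needs the concrete `IoTEvent`, anything that calls it (here `assign_window`, in Flink the assigner and its triggers) has to carry that type parameter through.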
Dyana Rose created FLINK-8439:
-
Summary: Document using a custom AWS Credentials Provider with
flink-s3-fs-hadoop
Key: FLINK-8439
URL: https://issues.apache.org/jira/browse/FLINK-8439
Project: Flink
Dyana Rose created FLINK-8384:
-
Summary: Session Window Assigner with Dynamic Gaps
Key: FLINK-8384
URL: https://issues.apache.org/jira/browse/FLINK-8384
Project: Flink
Issue Type: Improvement
sue?
>
> Btw, I would suggest to implement this as a new type of assigner,
> something like DynamicSessionWindows.
>
> Best,
> Aljoscha
>
> > On 29. Dec 2017, at 20:54, Dyana Rose wrote:
> >
> > I have a use case for non-static Session Window
I have a use case for non-static Session Window gaps.
For example, given a stream of IoT events, each device type could have a
different gap, and that gap could change while sessions are in flight.
I didn't want to have to run a stream processor for each potential gap
length, not to mention the h
Dyana Rose created FLINK-8267:
-
Summary: Kinesis Producer example setting Region key
Key: FLINK-8267
URL: https://issues.apache.org/jira/browse/FLINK-8267
Project: Flink
Issue Type: Bug
Dyana Rose created FLINK-4026:
-
Summary: Fix code, grammar, and link issues in the Streaming
documentation
Key: FLINK-4026
URL: https://issues.apache.org/jira/browse/FLINK-4026
Project: Flink
Dyana Rose created FLINK-3975:
-
Summary: docs build script isn't serving the preview on the
correct base url
Key: FLINK-3975
URL: https://issues.apache.org/jira/browse/FLINK-3975
Project: