Re: [ANNOUNCE] Apache Flink 1.13.2 released

2021-08-10 Thread Arvid Heise
Thank you!

On Tue, Aug 10, 2021 at 11:04 AM Jingsong Li  wrote:

> Thanks Yun Tang and everyone!
>
> Best,
> Jingsong
>
> On Tue, Aug 10, 2021 at 9:37 AM Xintong Song 
> wrote:
>
>> Thanks Yun and everyone~!
>>
>> Thank you~
>>
>> Xintong Song
>>
>>
>>
>> On Mon, Aug 9, 2021 at 10:14 PM Till Rohrmann 
>> wrote:
>>
>> > Thanks Yun Tang for being our release manager and the great work! Also
>> > thanks a lot to everyone who contributed to this release.
>> >
>> > Cheers,
>> > Till
>> >
>> > On Mon, Aug 9, 2021 at 9:48 AM Yu Li  wrote:
>> >
>> >> Thanks Yun Tang for being our release manager and everyone else who
>> made
>> >> the release possible!
>> >>
>> >> Best Regards,
>> >> Yu
>> >>
>> >>
>> >> On Fri, 6 Aug 2021 at 13:52, Yun Tang  wrote:
>> >>
>> >>>
>> >>> The Apache Flink community is very happy to announce the release of
>> >>> Apache Flink 1.13.2, which is the second bugfix release for the Apache
>> >>> Flink 1.13 series.
>> >>>
>> >>> Apache Flink® is an open-source stream processing framework for
>> >>> distributed, high-performing, always-available, and accurate data
>> streaming
>> >>> applications.
>> >>>
>> >>> The release is available for download at:
>> >>> https://flink.apache.org/downloads.html
>> >>>
>> >>> Please check out the release blog post for an overview of the
>> >>> improvements for this bugfix release:
>> >>> https://flink.apache.org/news/2021/08/06/release-1.13.2.html
>> >>>
>> >>> The full release notes are available in Jira:
>> >>>
>> >>>
>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12350218&styleName=&projectId=12315522
>> >>>
>> >>> We would like to thank all contributors of the Apache Flink community
>> >>> who made this release possible!
>> >>>
>> >>> Regards,
>> >>> Yun Tang
>> >>>
>> >>
>>
>
>
> --
> Best, Jingsong Lee
>


Re: [ANNOUNCE] Apache Flink Stateful Functions 3.1.0 released

2021-09-06 Thread Arvid Heise
Congratulations! New features look awesome.

On Wed, Sep 1, 2021 at 9:10 AM Till Rohrmann  wrote:

> Great news! Thanks a lot for all your work on the new release :-)
>
> Cheers,
> Till
>
> On Wed, Sep 1, 2021 at 9:07 AM Johannes Moser  wrote:
>
>> Congratulations, great job. 🎉
>>
>> On 31.08.2021, at 17:09, Igal Shilman  wrote:
>>
>> The Apache Flink community is very happy to announce the release of Apache
>> Flink Stateful Functions (StateFun) 3.1.0.
>>
>> StateFun is a cross-platform stack for building Stateful Serverless
>> applications, making it radically simpler to develop scalable, consistent,
>> and elastic distributed applications.
>>
>> Please check out the release blog post for an overview of the release:
>> https://flink.apache.org/news/2021/08/31/release-statefun-3.1.0.html
>>
>> The release is available for download at:
>> https://flink.apache.org/downloads.html
>>
>> Maven artifacts for StateFun can be found at:
>> https://search.maven.org/search?q=g:org.apache.flink%20statefun
>>
>> Python SDK for StateFun published to the PyPI index can be found at:
>> https://pypi.org/project/apache-flink-statefun/
>>
>> Official Docker images for StateFun are published to Docker Hub:
>> https://hub.docker.com/r/apache/flink-statefun
>>
>> The full release notes are available in Jira:
>>
>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12350038&projectId=12315522
>>
>> We would like to thank all contributors of the Apache Flink community who
>> made this release possible!
>>
>> Thanks,
>> Igal
>>
>>
>>


Re: Some questions about limit push down

2021-01-05 Thread Arvid Heise
This is most likely a bug; could you elaborate a bit on how it is invalid?
I'm also CCing Jark since he is one of the SQL experts.
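
In the meantime, a quick way to narrow it down is to print the optimized plans
and check whether the limit appears in the table source. A minimal sketch
(assuming the Hive table is already registered in the current catalog; the
table name and batch-mode setup are just illustrative):

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

TableEnvironment tEnv = TableEnvironment.create(
        EnvironmentSettings.newInstance().inBatchMode().build());

// If push-down works, the table source in the optimized plan should carry the
// limit (e.g. a "limit=[1]" attribute); compare the two plans.
System.out.println(tEnv.explainSql(
        "SELECT * FROM hivetable WHERE id = 1 LIMIT 1"));
System.out.println(tEnv.explainSql(
        "SELECT * FROM hivetable LIMIT 1"));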

On Mon, Dec 28, 2020 at 10:37 AM Jun Zhang 
wrote:

> When I query a Hive table with SQL like `select * from hivetable where
> id = 1 limit 1`, I find that the limit push-down does not take effect. Is it
> a bug, or was it designed like this?
>
> If the SQL is 'select * from hivetable limit 1', it works fine.
>
> Thanks
>


-- 

Arvid Heise | Senior Java Developer

<https://www.ververica.com/>

Follow us @VervericaData

--

Join Flink Forward <https://flink-forward.org/> - The Apache Flink
Conference

Stream Processing | Event Driven | Real Time

--

Ververica GmbH | Invalidenstrasse 115, 10115 Berlin, Germany

--
Ververica GmbH
Registered at Amtsgericht Charlottenburg: HRB 158244 B
Managing Directors: Timothy Alexander Steinert, Yip Park Tung Jason, Ji
(Toni) Cheng


Re: [Question] enable end2end Kafka exactly once processing

2020-03-02 Thread Arvid Heise
Hi Eleanore,

The Flink runner is maintained by the Beam developers, so it's best to ask
on their user list.

The documentation is, however, very clear. "Flink runner is one of the
runners whose checkpoint semantics are not compatible with current
implementation (hope to provide a solution in near future)."
So Beam uses a different approach to exactly-once semantics (EOS) than Flink,
and there is currently no way around it. Maybe you could use Flink's EOS Kafka
sink directly and use that in Beam somehow.
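
For reference, a minimal sketch of what using Flink's exactly-once Kafka sink
directly could look like (the topic, serialization schema, and stream are
placeholders):

import java.util.Properties;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;

Properties props = new Properties();
props.setProperty("bootstrap.servers", "broker:9092");

FlinkKafkaProducer<String> sink = new FlinkKafkaProducer<>(
        "output-topic",                  // placeholder target topic
        new MySerializationSchema(),     // placeholder KafkaSerializationSchema<String>
        props,
        FlinkKafkaProducer.Semantic.EXACTLY_ONCE);

// given some DataStream<String> stream:
stream.addSink(sink);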

I'm not aware of any work with the Beam devs to actually make it work.
Independently, we started to improve our interfaces for two-phase commit
sinks (which is our approach). It might coincidentally help Beam.

Best,

Arvid

On Sun, Mar 1, 2020 at 8:23 PM Jin Yi  wrote:

> Hi experts,
>
> My application is using Apache Beam with Flink as the runner. My source and
> sink are Kafka topics, and I am using the KafkaIO connector provided by
> Apache Beam to consume and publish.
>
> I am reading through Beam's java doc:
> https://beam.apache.org/releases/javadoc/2.16.0/org/apache/beam/sdk/io/kafka/KafkaIO.WriteRecords.html#withEOS-int-java.lang.String-
>
> It looks like Beam does not support the Flink runner for EOS. Can someone
> please shed some light on how to enable exactly-once processing with
> Apache Beam?
>
> Thanks a lot!
> Eleanore
>


Re: How to change the flink web-ui jobServer?

2020-03-10 Thread Arvid Heise
Hi LakeShen,

you can change the port with

conf.setInteger(RestOptions.PORT, 8082);

or, if you want to be on the safe side, specify a range:

conf.setString(RestOptions.BIND_PORT, "8081-8099");
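
Putting the two together, a minimal sketch for a local environment with the
web UI (requires flink-runtime-web on the classpath; the port values are just
examples):

import org.apache.flink.configuration.Configuration;
import org.apache.flink.configuration.RestOptions;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

Configuration conf = new Configuration();
conf.setInteger(RestOptions.PORT, 8082);
// or, to bind to the first free port in a range:
// conf.setString(RestOptions.BIND_PORT, "8081-8099");

StreamExecutionEnvironment env =
        StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(conf);

On a standalone or Kubernetes cluster, the same options correspond to
rest.port / rest.bind-port in flink-conf.yaml.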


On Mon, Mar 9, 2020 at 10:47 AM LakeShen  wrote:

> Hi community,
>    now I am moving my Flink job to Kubernetes, and I plan to use an ingress
> to expose the Flink web UI. The problem is that the Flink job server address
> isn't correct, so I want to change the Flink web UI job server, but I can't
> find any method to change it. Is there a way to do that?
>    Thanks for your reply.
>
> Best wishes,
> LakeShen
>


Re: Flink(1.8.3) UI exception is overwritten, and the cause of the failure is not displayed

2020-06-23 Thread Arvid Heise
Hi Andrew,

this looks like your Flink cluster has a flaky connection to the Kafka
cluster or your Kafka cluster was down.

Since the operator failed in the synchronous part of the snapshot, it resorted
to failing to avoid inconsistent operator state. If you configured restarts,
it just restarts from your last checkpoint 86 and recomputes the data.

What would be your expectation? That the checkpoint fails but the job
continues without restart?

In general, the issue with Kafka is that the transactions used for
exactly-once eventually time out. So if too many checkpoints cannot be taken,
you'd ultimately have incorrect data. Hence, failing and restarting is
cleaner, as it guarantees consistent data.
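
If checkpoints can occasionally take long, one knob is the producer's
transaction timeout. A hedged sketch (the value is just an example and must
stay below the broker's transaction.max.timeout.ms):

import java.util.Properties;

Properties producerProps = new Properties();
producerProps.setProperty("bootstrap.servers", "broker:9092");
// Give in-flight transactions more headroom than the longest expected
// checkpoint duration, so they don't expire before the checkpoint completes.
producerProps.setProperty("transaction.timeout.ms", "900000");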

On Mon, Jun 22, 2020 at 10:16 AM Andrew <874269...@qq.com> wrote:

> version: 1.8.3
> graph: source -> map -> sink
>
> Scenario:
>  a source subtask failure causes the graph to restart, but the exception
> displayed on the Flink UI is not the cause of the task failure
>
> displayed:
> JM log:
> 2020-06-22 14:29:01.087 INFO
>  org.apache.flink.runtime.executiongraph.ExecutionGraph  - Job
> baseInfoAdapter_20601 (20601159280210484110080369520601) switched from
> state RUNNING to FAILING.
> java.lang.Exception: Could not perform checkpoint 87 for operator Sink:
> adapterOutput (19/30).
> at
> org.apache.flink.streaming.runtime.tasks.StreamTask.triggerCheckpointOnBarrier(StreamTask.java:597)
> at
> org.apache.flink.streaming.runtime.io.BarrierTracker.notifyCheckpoint(BarrierTracker.java:270)
> at
> org.apache.flink.streaming.runtime.io.BarrierTracker.processBarrier(BarrierTracker.java:186)
> at
> org.apache.flink.streaming.runtime.io.BarrierTracker.getNextNonBlocked(BarrierTracker.java:105)
> at
> org.apache.flink.streaming.runtime.io.StreamInputProcessor.processInput(StreamInputProcessor.java:209)
> at
> org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.run(OneInputStreamTask.java:105)
> at
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:302)
> at org.apache.flink.runtime.taskmanager.Task.run(Task.java:769)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.Exception: Could not complete snapshot 87 for
> operator Sink: adapterOutput (19/30).
> at
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.snapshotState(AbstractStreamOperator.java:422)
> at
> org.apache.flink.streaming.runtime.tasks.StreamTask$CheckpointingOperation.checkpointStreamOperator(StreamTask.java:1115)
> at
> org.apache.flink.streaming.runtime.tasks.StreamTask$CheckpointingOperation.executeCheckpointing(StreamTask.java:1057)
> at
> org.apache.flink.streaming.runtime.tasks.StreamTask.checkpointState(StreamTask.java:731)
> at
> org.apache.flink.streaming.runtime.tasks.StreamTask.performCheckpoint(StreamTask.java:643)
> at
> org.apache.flink.streaming.runtime.tasks.StreamTask.triggerCheckpointOnBarrier(StreamTask.java:588)
> ... 8 common frames omitted
> Caused by: java.lang.Exception: Failed to send data to Kafka: The server
> disconnected before a response was received.
> at
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase.checkErroneous(FlinkKafkaProducerBase.java:375)
> at
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase.snapshotState(FlinkKafkaProducerBase.java:363)
> at
> org.apache.flink.streaming.util.functions.StreamingFunctionUtils.trySnapshotFunctionState(StreamingFunctionUtils.java:118)
> at
> org.apache.flink.streaming.util.functions.StreamingFunctionUtils.snapshotFunctionState(StreamingFunctionUtils.java:99)
> at
> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.snapshotState(AbstractUdfStreamOperator.java:90)
> at
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.snapshotState(AbstractStreamOperator.java:395)
> ... 13 common frames omitted
>
>
> TM log: RUNNING to CANCELING
> 2020-06-22 15:39:19.816 INFO  com.xxx.client.consumer.GroupConsumer  -
> consumer xxx to jmq1230:xxx,READ,xxx,NONE is stopped.
> 2020-06-22 15:39:19.816 INFO  org.apache.flink.runtime.taskmanager.Task  -
> Source: baseInfo (79/90) (4e62a84f251d9c68a54e464cff51171e) switched from
> RUNNING to CANCELING.
>
>
> Is this a known issue?
>

