+1 (non-binding)

- Build the Pravega Flink connector with the RC artifacts; all tests pass
- Start a cluster and run Pravega reader and writer applications on it successfully

Thanks,
Brian

-----Original Message-----
From: Leonard Xu <xbjt...@gmail.com> 
Sent: Thursday, April 29, 2021 16:53
To: dev
Subject: Re: [VOTE] Release 1.13.0, release candidate #2


+1 (non-binding)

- verified signatures and hashes (a sketch of this check follows below)
- built from source code with Scala 2.11 successfully
- started a cluster; the WebUI was accessible, ran some simple SQL jobs, no
suspicious log output
- tested time functions and time zone usage in SQL Client; the query results
are as expected
- the web PR looks good
- found one minor exception message typo, will improve it later
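
A minimal sketch (in Python) of how the signature and hash check can be done,
assuming the source tarball, its .asc signature, and its .sha512 file were
downloaded next to each other and gpg has the release KEYS imported; the file
name is illustrative:

import hashlib
import subprocess

artifact = "flink-1.13.0-src.tgz"

# sha512 check: compare against the published .sha512 file
h = hashlib.sha512()
with open(artifact, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)
with open(artifact + ".sha512") as f:
    expected = f.read().split()[0]  # first token is the hex digest
assert h.hexdigest() == expected, "hash mismatch!"

# signature check: delegates to gpg
subprocess.run(["gpg", "--verify", artifact + ".asc", artifact], check=True)
print("signature and hash OK")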

Best,
Leonard Xu

> On Apr 29, 2021, at 16:11, Xingbo Huang <hxbks...@gmail.com> wrote:
> 
> +1 (non-binding)
> 
> - verified checksum and signature
> - tested uploading `apache-flink` and `apache-flink-libraries` to test.pypi
> - pip installed `apache-flink-libraries` and `apache-flink` on macOS
> - started a cluster and ran a row-based operation test (sketched below)
> - started a cluster and tested Python general group window aggregation
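
For reference, a minimal sketch of the row-based operation check, assuming the
RC wheels were installed from test.pypi under the package names above; the job
itself is illustrative of the table.map operation added in 1.13:

# Install the RC wheels first, e.g.:
#   pip install -i https://test.pypi.org/simple/ apache-flink-libraries
#   pip install -i https://test.pypi.org/simple/ apache-flink
from pyflink.common import Row
from pyflink.table import DataTypes, EnvironmentSettings, TableEnvironment
from pyflink.table.udf import udf

t_env = TableEnvironment.create(
    EnvironmentSettings.new_instance().in_batch_mode().build())
table = t_env.from_elements([(1, "flink"), (2, "pyflink")], ["id", "name"])

# a row-based map operation: Row in, Row out
@udf(result_type=DataTypes.ROW(
    [DataTypes.FIELD("id", DataTypes.BIGINT()),
     DataTypes.FIELD("name", DataTypes.STRING())]))
def add_one(value: Row) -> Row:
    return Row(value[0] + 1, value[1].upper())

table.map(add_one).execute().print()  # prints the transformed rows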
> 
> Best,
> Xingbo
> 
> On Thu, Apr 29, 2021 at 4:05 PM Dian Fu <dian0511...@gmail.com> wrote:
> 
>> +1 (binding)
>> 
>> - Verified the signature and checksum
>> - Installed PyFlink successfully using the source package
>> - Ran a few PyFlink examples: Python UDF, Pandas UDF, Python DataStream
>> API with state access, Python DataStream API with batch execution mode
>> (a sketch of these last two follows below)
>> - Reviewed the website PR
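
A minimal sketch of those last two checks combined: a Python DataStream job
using keyed state, executed in batch mode. Names and data are illustrative,
and import paths are as in the 1.13 PyFlink API:

from pyflink.common.typeinfo import Types
from pyflink.datastream import RuntimeExecutionMode, StreamExecutionEnvironment
from pyflink.datastream.functions import KeyedProcessFunction, RuntimeContext
from pyflink.datastream.state import ValueStateDescriptor

class CountPerKey(KeyedProcessFunction):
    """Counts elements per key using a keyed ValueState."""

    def open(self, runtime_context: RuntimeContext):
        self.count = runtime_context.get_state(
            ValueStateDescriptor("count", Types.LONG()))

    def process_element(self, value, ctx):
        current = (self.count.value() or 0) + 1
        self.count.update(current)
        yield value[0], current

env = StreamExecutionEnvironment.get_execution_environment()
env.set_runtime_mode(RuntimeExecutionMode.BATCH)  # batch execution mode
ds = env.from_collection(
    [("a", 1), ("b", 1), ("a", 1)],
    type_info=Types.TUPLE([Types.STRING(), Types.INT()]))
ds.key_by(lambda v: v[0]) \
  .process(CountPerKey(),
           output_type=Types.TUPLE([Types.STRING(), Types.LONG()])) \
  .print()
env.execute("state_access_smoke_test")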
>> 
>> Regards,
>> Dian
>> 
>>> On Apr 29, 2021, at 3:11 PM, Jark Wu <imj...@gmail.com> wrote:
>>> 
>>> +1 (binding)
>>> 
>>> - checked/verified signatures and hashes
>>> - started a cluster and ran some e2e SQL queries using SQL Client;
>>> results are as expected:
>>> * read from kafka source, window aggregate, lookup mysql database,
>>> write into elasticsearch
>>> * window aggregate using both the legacy window syntax and the new
>>> window TVF (a sketch follows below)
>>> * verified web ui and log output
>>> - reviewed the release PR
>>> 
>>> I found that the log contains some verbose information when using window
>>> aggregates, but I don't think this blocks the release; I created
>>> FLINK-22522 to fix it.
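
For reference, a minimal sketch of the two window-aggregate flavors mentioned
above, expressed through PyFlink's SQL entry point against a datagen source;
table and field names are illustrative:

from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(
    EnvironmentSettings.new_instance().in_streaming_mode().build())
t_env.execute_sql("""
    CREATE TABLE clicks (
        user_name STRING,
        ts AS LOCALTIMESTAMP,
        WATERMARK FOR ts AS ts - INTERVAL '5' SECOND
    ) WITH ('connector' = 'datagen', 'rows-per-second' = '10')
""")

# legacy group window syntax
legacy = t_env.sql_query("""
    SELECT user_name,
           TUMBLE_START(ts, INTERVAL '10' SECOND) AS w_start,
           COUNT(*) AS cnt
    FROM clicks
    GROUP BY user_name, TUMBLE(ts, INTERVAL '10' SECOND)
""")

# new window TVF syntax (1.13)
tvf = t_env.sql_query("""
    SELECT user_name, window_start, window_end, COUNT(*) AS cnt
    FROM TABLE(TUMBLE(TABLE clicks, DESCRIPTOR(ts), INTERVAL '10' SECOND))
    GROUP BY user_name, window_start, window_end
""")

tvf.execute().print()  # runs until cancelled; both queries should agree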
>>> 
>>> Best,
>>> Jark
>>> 
>>> 
>>> On Thu, 29 Apr 2021 at 14:46, Dawid Wysakowicz 
>>> <dwysakow...@apache.org>
>>> wrote:
>>> 
>>>> Hey Matthias,
>>>> 
>>>> I'd like to double-confirm what Guowei said. The dependency is Apache 2
>>>> licensed and we do not bundle it in our jar (it is in the runtime
>>>> scope), thus we do not need to mention it in the NOTICE file. (BTW,
>>>> the best way to check what is bundled is to inspect the output of the
>>>> Maven Shade Plugin.) Thanks for checking it!
>>>> 
>>>> Best,
>>>> 
>>>> Dawid
>>>> 
>>>> On 29/04/2021 05:25, Guowei Ma wrote:
>>>>> Hi, Matthias
>>>>> 
>>>>> Thank you very much for your careful inspection.
>>>>> I checked the flink-python_2.11-1.13.0.jar and we do not bundle
>>>>> org.conscrypt:conscrypt-openjdk-uber:2.5.1 in it.
>>>>> So I think we may not need to add this to the NOTICE file. (BTW, the
>>>>> jar's scope is runtime.)
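
For what it's worth, a quick way to reproduce this kind of bundling check in
Python, listing any matching entries in the jar; the jar path is illustrative:

import zipfile

jar = "flink-python_2.11-1.13.0.jar"
with zipfile.ZipFile(jar) as zf:
    # any conscrypt classes or resources in the jar would show up here
    hits = [name for name in zf.namelist() if "conscrypt" in name.lower()]
print(hits if hits else "no conscrypt entries bundled")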
>>>>> 
>>>>> Best,
>>>>> Guowei
>>>>> 
>>>>> 
>>>>> On Thu, Apr 29, 2021 at 2:33 AM Matthias Pohl 
>>>>> <matth...@ververica.com>
>>>>> wrote:
>>>>> 
>>>>>> Thanks Dawid and Guowei for managing this release.
>>>>>> 
>>>>>> - downloaded the sources and binaries and checked the checksums
>>>>>> - built Flink from the downloaded sources
>>>>>> - executed example jobs with standalone deployments - I didn't 
>>>>>> find anything suspicious in the logs
>>>>>> - reviewed release announcement pull request
>>>>>> 
>>>>>> - I did a pass over dependency updates: git diff release-1.12.2
>>>>>> release-1.13.0-rc2 */*.xml
>>>>>> There's one thing someone should double-check, whether that's supposed
>>>>>> to be like that: we added org.conscrypt:conscrypt-openjdk-uber:2.5.1
>>>>>> as a dependency, but I don't see it being reflected in the NOTICE file
>>>>>> of the flink-python module. Or is this automatically added later on?
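
A small sketch of how such a dependency pass can be mechanized, wrapping the
same git diff in Python and printing the added artifactId lines; the filtering
heuristic is illustrative:

import subprocess

diff = subprocess.run(
    ["git", "diff", "release-1.12.2", "release-1.13.0-rc2", "--", "*/*.xml"],
    capture_output=True, text=True, check=True).stdout

# lines added by the diff that introduce an artifactId
added = [line[1:].strip() for line in diff.splitlines()
         if line.startswith("+") and "<artifactId>" in line]
for entry in sorted(set(added)):
    print(entry)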
>>>>>> 
>>>>>> +1 (non-binding; please see remark on dependency above)
>>>>>> 
>>>>>> Matthias
>>>>>> 
>>>>>> On Wed, Apr 28, 2021 at 1:52 PM Stephan Ewen <se...@apache.org> wrote:
>>>>>> 
>>>>>>> Glad to hear that outcome. And no worries about the false alarm.
>>>>>>> Thank you for doing thorough testing, this is very helpful!
>>>>>>> 
>>>>>>> On Wed, Apr 28, 2021 at 1:04 PM Caizhi Weng <tsreape...@gmail.com> wrote:
>>>>>>>> After the investigation, we found that this issue is caused by the
>>>>>>>> implementation of the connector, not by the Flink framework.
>>>>>>>> 
>>>>>>>> Sorry for the false alarm.
>>>>>>>> 
>>>>>>>> On Wed, Apr 28, 2021 at 3:23 PM Stephan Ewen <se...@apache.org> wrote:
>>>>>>>> 
>>>>>>>>> @Caizhi and @Becket - let me reach out to you to jointly debug this issue.
>>>>>>>>> I am wondering if there is some incorrect reporting of failed events?
>>>>>>>>> 
>>>>>>>>> On Wed, Apr 28, 2021 at 8:53 AM Caizhi Weng <tsreape...@gmail.com> wrote:
>>>>>>>>>> -1
>>>>>>>>>> 
>>>>>>>>>> We're testing this version on batch jobs with large (600~1000)
>>>>>>>>>> parallelisms, and the following exception messages appear with
>>>>>>>>>> high frequency:
>>>>>>>>>> 
>>>>>>>>>> 2021-04-27 21:27:26
>>>>>>>>>> org.apache.flink.util.FlinkException: An OperatorEvent from an
>>>>>>>>>> OperatorCoordinator to a task was lost. Triggering task failover to
>>>>>>>>>> ensure consistency. Event: '[NoMoreSplitEvent]', targetTask: <task name>
>>>>>>>>>> - execution #0
>>>>>>>>>> at org.apache.flink.runtime.operators.coordination.SubtaskGatewayImpl.lambda$sendEvent$0(SubtaskGatewayImpl.java:81)
>>>>>>>>>> at java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:822)
>>>>>>>>>> at java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:797)
>>>>>>>>>> at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
>>>>>>>>>> at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:440)
>>>>>>>>>> at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:208)
>>>>>>>>>> at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:77)
>>>>>>>>>> at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:158)
>>>>>>>>>> at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26)
>>>>>>>>>> at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21)
>>>>>>>>>> at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
>>>>>>>>>> at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21)
>>>>>>>>>> at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170)
>>>>>>>>>> at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
>>>>>>>>>> at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
>>>>>>>>>> at akka.actor.Actor$class.aroundReceive(Actor.scala:517)
>>>>>>>>>> at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225)
>>>>>>>>>> at akka.actor.ActorCell.receiveMessage(ActorCell.scala:592)
>>>>>>>>>> at akka.actor.ActorCell.invoke(ActorCell.scala:561)
>>>>>>>>>> at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)
>>>>>>>>>> at akka.dispatch.Mailbox.run(Mailbox.scala:225)
>>>>>>>>>> at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
>>>>>>>>>> at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>>>>>>>>>> at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>>>>>>>>>> at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>>>>>>>>>> at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
>>>>>>>>>> Becket Qin is investigating this issue.
>>>>>>>>>> 
