Re: [ANNOUNCE] New PMC member: Piotr Nowojski

2020-07-06 Thread aihua li
Congratulations Piotr~!

> On Jul 7, 2020, at 10:07 AM, Yun Gao wrote:
> 
> Congratulations Piotr !
> 
> Best,
> Yun
> 
> 
> --
> Sender:Austin Bennett
> Date:2020/07/07 10:06:15
> Recipient:
> Subject:Re: [ANNOUNCE] New PMC member: Piotr Nowojski
> 
> Thanks for what you do, Piotr!
> 
> 
> On Mon, Jul 6, 2020, 7:02 PM Matt Wang  wrote:
> 
>> Congratulations!
>> 
>> 
>> --
>> 
>> Best,
>> Matt Wang
>> 
>> 
>> On 07/7/2020 09:45,Xingbo Huang wrote:
>> Congratulations Piotr!
>> 
>> Best,
>> Xingbo
>> 
>> On Tue, Jul 7, 2020 at 9:43 AM, Dian Fu wrote:
>> 
>> Congrats, Piotr!
>> 
>> On Jul 7, 2020, at 7:06 AM, Jeff Zhang wrote:
>> 
>> Congratulations Piotr!
>> 
>> On Tue, Jul 7, 2020 at 4:45 AM, Thomas Weise wrote:
>> 
>> Congratulations!
>> 
>> 
>> On Mon, Jul 6, 2020 at 10:58 AM Seth Wiesman 
>> wrote:
>> 
>> Congratulations Piotr!
>> 
>> On Mon, Jul 6, 2020 at 12:53 PM Peter Huang <
>> huangzhenqiu0...@gmail.com>
>> wrote:
>> 
>> Congratulations!
>> 
>> On Mon, Jul 6, 2020 at 10:36 AM Arvid Heise 
>> wrote:
>> 
>> Congratulations!
>> 
>> On Mon, Jul 6, 2020 at 7:07 PM Stephan Ewen 
>> wrote:
>> 
>> Hi all!
>> 
>> It is my pleasure to announce that Piotr Nowojski joined the Flink
>> PMC.
>> 
>> Many of you may know Piotr from the work he does on the data processing
>> runtime and the network stack, from the mailing list, or from his release
>> manager work.
>> 
>> Congrats, Piotr!
>> 
>> Best,
>> Stephan
>> 
>> 
>> 
>> --
>> 
>> Arvid Heise | Senior Java Developer
>> 
>> 
>> 
>> Follow us @VervericaData
>> 
>> --
>> 
>> Join Flink Forward  - The Apache Flink
>> Conference
>> 
>> Stream Processing | Event Driven | Real Time
>> 
>> --
>> 
>> Ververica GmbH | Invalidenstrasse 115, 10115 Berlin, Germany
>> 
>> --
>> Ververica GmbH
>> Registered at Amtsgericht Charlottenburg: HRB 158244 B
>> Managing Directors: Timothy Alexander Steinert, Yip Park Tung Jason,
>> Ji
>> (Toni) Cheng
>> 
>> 
>> 
>> 
>> 
>> 
>> --
>> Best Regards
>> 
>> Jeff Zhang
>> 
>> 
>> 
> 



[jira] [Created] (FLINK-18433) 1.11 has a regression

2020-06-24 Thread Aihua Li (Jira)
Aihua Li created FLINK-18433:


 Summary: 1.11 has a regression
 Key: FLINK-18433
 URL: https://issues.apache.org/jira/browse/FLINK-18433
 Project: Flink
  Issue Type: Bug
  Components: API / Core, API / DataStream
Affects Versions: 1.11.1
 Environment: 3 machines

https://github.com/Li-Aihua/flink/blob/test_suite_for_basic_operations_1.11/flink-end-to-end-perf-tests/flink-basic-operations/src/main/java/org/apache/flink/basic/operations/PerformanceTestJob.java
Reporter: Aihua Li


 

I ran end-to-end performance tests comparing Release-1.10 and Release-1.11.
The results were as follows:
|scenarioName|release-1.10|release-1.11|change|
|OneInput_Broadcast_LazyFromSource_ExactlyOnce_10_rocksdb|46.175|43.8133|-5.11%|
|OneInput_Rescale_LazyFromSource_ExactlyOnce_100_heap|211.835|200.355|-5.42%|
|OneInput_Rebalance_LazyFromSource_ExactlyOnce_1024_rocksdb|1721.041667|1618.32|-5.97%|
|OneInput_KeyBy_LazyFromSource_ExactlyOnce_10_heap|46|43.615|-5.18%|
|OneInput_Broadcast_Eager_ExactlyOnce_100_rocksdb|212.105|199.688|-5.85%|
|OneInput_Rescale_Eager_ExactlyOnce_1024_heap|1754.64|1600.12|-8.81%|
|OneInput_Rebalance_Eager_ExactlyOnce_10_rocksdb|45.9167|43.0983|-6.14%|
|OneInput_KeyBy_Eager_ExactlyOnce_100_heap|212.0816667|200.727|-5.35%|
|OneInput_Broadcast_LazyFromSource_AtLeastOnce_1024_rocksdb|1718.245|1614.381667|-6.04%|
|OneInput_Rescale_LazyFromSource_AtLeastOnce_10_heap|46.12|43.5517|-5.57%|
|OneInput_Rebalance_LazyFromSource_AtLeastOnce_100_rocksdb|212.038|200.388|-5.49%|
|OneInput_KeyBy_LazyFromSource_AtLeastOnce_1024_heap|1762.048333|1606.408333|-8.83%|
|OneInput_Broadcast_Eager_AtLeastOnce_10_rocksdb|46.0583|43.4967|-5.56%|
|OneInput_Rescale_Eager_AtLeastOnce_100_heap|212.233|201.188|-5.20%|
|OneInput_Rebalance_Eager_AtLeastOnce_1024_rocksdb|1720.66|1616.85|-6.03%|
|OneInput_KeyBy_Eager_AtLeastOnce_10_heap|46.14|43.6233|-5.45%|
|TwoInputs_Broadcast_LazyFromSource_ExactlyOnce_100_rocksdb|156.918|152.957|-2.52%|
|TwoInputs_Rescale_LazyFromSource_ExactlyOnce_1024_heap|1415.511667|1300.1|-8.15%|
|TwoInputs_Rebalance_LazyFromSource_ExactlyOnce_10_rocksdb|34.2967|34.1667|-0.38%|
|TwoInputs_KeyBy_LazyFromSource_ExactlyOnce_100_heap|158.353|151.848|-4.11%|
|TwoInputs_Broadcast_Eager_ExactlyOnce_1024_rocksdb|1373.406667|1300.056667|-5.34%|
|TwoInputs_Rescale_Eager_ExactlyOnce_10_heap|34.5717|32.0967|-7.16%|
|TwoInputs_Rebalance_Eager_ExactlyOnce_100_rocksdb|158.655|147.44|-7.07%|
|TwoInputs_KeyBy_Eager_ExactlyOnce_1024_heap|1356.611667|1292.386667|-4.73%|
|TwoInputs_Broadcast_LazyFromSource_AtLeastOnce_10_rocksdb|34.01|33.205|-2.37%|
|TwoInputs_Rescale_LazyFromSource_AtLeastOnce_100_heap|149.588|145.997|-2.40%|
|TwoInputs_Rebalance_LazyFromSource_AtLeastOnce_1024_rocksdb|1359.74|1299.156667|-4.46%|
|TwoInputs_KeyBy_LazyFromSource_AtLeastOnce_10_heap|34.025|29.6833|-12.76%|
|TwoInputs_Broadcast_Eager_AtLeastOnce_100_rocksdb|157.303|151.4616667|-3.71%|
|TwoInputs_Rescale_Eager_AtLeastOnce_1024_heap|1368.74|1293.238333|-5.52%|
|TwoInputs_Rebalance_Eager_AtLeastOnce_10_rocksdb|34.325|33.285|-3.03%|
|TwoInputs_KeyBy_Eager_AtLeastOnce_100_heap|162.5116667|134.375|-17.31%|

It can be seen that 1.11 has a performance regression of roughly 5% in most
scenarios, with a maximum regression of about 17%. This needs to be investigated.

The test code:

flink-1.10.0: 
[https://github.com/Li-Aihua/flink/blob/test_suite_for_basic_operations/flink-end-to-end-perf-tests/flink-basic-operations/src/main/java/org/apache/flink/basic/operations/PerformanceTestJob.java]

flink-1.11.0: 
[https://github.com/Li-Aihua/flink/blob/test_suite_for_basic_operations_1.11/flink-end-to-end-perf-tests/flink-basic-operations/src/main/java/org/apache/flink/basic/operations/PerformanceTestJob.java]

The submit command looks like this:

bin/flink run -d -m 192.168.39.246:8081 -c 
org.apache.flink.basic.operations.PerformanceTestJob 
/home/admin/flink-basic-operations_2.11-1.10-SNAPSHOT.jar --topologyName 
OneInput --LogicalAttributesofEdges Broadcast --ScheduleMode LazyFromSource 
--CheckpointMode ExactlyOnce --recordSize 10 --stateBackend rocksdb

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18116) E2E performance test

2020-06-03 Thread Aihua Li (Jira)
Aihua Li created FLINK-18116:


 Summary: E2E performance test
 Key: FLINK-18116
 URL: https://issues.apache.org/jira/browse/FLINK-18116
 Project: Flink
  Issue Type: Test
  Components: API / Core, API / DataStream, API / State Processor, 
Build System, Client / Job Submission
Affects Versions: 1.11.0
Reporter: Aihua Li
 Fix For: 1.11.0


It mainly verifies that the performance is no worse than the 1.10 release by
checking the metrics of the end-to-end performance tests, such as QPS and
latency (see the sketch below).
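
As a rough sketch of how such a check could be expressed (hypothetical helper code, not the actual test harness; the 5% tolerance and the sample numbers are only examples):

// Hypothetical regression check, not the actual test harness: fail when throughput
// drops (or latency rises) by more than a tolerated fraction versus the 1.10 baseline.
public class RegressionCheck {

    static boolean withinTolerance(double baseline, double current, double tolerance, boolean higherIsBetter) {
        double change = (current - baseline) / baseline;
        return higherIsBetter ? change >= -tolerance : change <= tolerance;
    }

    public static void main(String[] args) {
        double baselineQps = 212.0;  // example: a 1.10 measurement
        double currentQps = 200.7;   // example: the corresponding 1.11 measurement
        // prints false: the drop is larger than the 5% tolerance
        System.out.println(withinTolerance(baselineQps, currentQps, 0.05, true));
    }
}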



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18115) StabilityTest

2020-06-03 Thread Aihua Li (Jira)
Aihua Li created FLINK-18115:


 Summary: StabilityTest
 Key: FLINK-18115
 URL: https://issues.apache.org/jira/browse/FLINK-18115
 Project: Flink
  Issue Type: Test
  Components: API / Core, API / State Processor, Build System, Client / 
Job Submission
Affects Versions: 1.10.0
Reporter: Aihua Li
 Fix For: 1.11.0


It mainly checks that the Flink job can recover from various abnormal
situations, including a full disk, network interruption, ZooKeeper being
unreachable, RPC message timeouts, etc.
If the job cannot be recovered, the test is considered failed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-17907) flink-table-api-java: Compilation failure

2020-05-24 Thread Aihua Li (Jira)
Aihua Li created FLINK-17907:


 Summary: flink-table-api-java: Compilation failure
 Key: FLINK-17907
 URL: https://issues.apache.org/jira/browse/FLINK-17907
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / API
Affects Versions: 1.11.0
 Environment: local env
Reporter: Aihua Li
 Fix For: 1.11.0


When I execute the command "mvn clean install -B -U -DskipTests
-Dcheckstyle.skip=true -Drat.ignoreErrors -Dmaven.javadoc.skip" on the
"master" and "release-1.11" branches to install Flink in my local environment,
I hit this failure:

[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.8.0:compile (default-compile) 
on project flink-table-api-java: Compilation failure
[ERROR] 
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/operations/utils/AggregateOperationFactory.java:[550,53]
 unreported exception X; must be caught or declared to be thrown
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR] mvn  -rf :flink-table-api-java
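
For context, this kind of javac error usually comes from code where a checked exception is carried by a generic type variable (reported here as X). Below is a minimal, hypothetical sketch of that error class, not Flink's actual code; whether the message names the type variable or the inferred exception type depends on the JDK version.

public class UnreportedExceptionDemo {

    interface ThrowingSupplier<T, X extends Throwable> {
        T get() throws X;
    }

    static <T, X extends Throwable> T call(ThrowingSupplier<T, X> supplier) throws X {
        return supplier.get();
    }

    public static void main(String[] args) {
        // Uncommenting the next line makes X resolve to the checked Exception thrown by
        // the lambda; since main() neither catches nor declares it, javac reports
        // "unreported exception ...; must be caught or declared to be thrown":
        // String broken = call(() -> { throw new Exception("boom"); });

        // Catching (or declaring) the exception keeps the code compiling:
        try {
            System.out.println(call(() -> "compiles fine"));
        } catch (Throwable t) {
            t.printStackTrace();
        }
    }
}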



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] Release 1.10.0, release candidate #3

2020-02-09 Thread aihua li
Yes, but the results you see in the performance code-speed center [3] are from
the runs that skip FLIP-49.
The results of the default configuration are overwritten by the latest results.
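
For reference, the two settings used to skip FLIP-49 in those runs (quoted again in my reply below) sit in flink-conf.yaml roughly like this:

# flink-conf.yaml (sketch; the option names and values are the ones quoted below)
taskmanager.memory.managed.fraction: 0
taskmanager.memory.flink.size: 1568m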

> On Feb 9, 2020, at 5:29 PM, Yu Li wrote:
> 
> Thanks for the efforts Aihua! These could definitely improve our RC test 
> coverage!
> 
> Just to confirm, that the stability tests were executed with the same test 
> suite for Alibaba production usage, and the e2e performance one was executed 
> with the test suite proposed in FLIP-83 [1] and FLINK-14917 [2], and the 
> result could also be observed from our performance code-speed center [3], 
> right?
> 
> Thanks.
> 
> Best Regards,
> Yu
> 
> [1] 
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-83%3A+Flink+End-to-end+Performance+Testing+Framework
> [2] https://issues.apache.org/jira/browse/FLINK-14917
> [3] https://s.apache.org/nglhm
> 
> On Sun, 9 Feb 2020 at 11:20, aihua li wrote:
> +1 (non-binding)
> 
> I ran stability tests and end-to-end performance tests on branch
> release-1.10.0-rc3; both of them passed.
> 
> Stability test: It mainly checks that the Flink job can recover from various
> abnormal situations, including a full disk,
> network interruption, ZooKeeper being unreachable, RPC message timeouts, etc.
> If the job cannot be recovered, the test is considered failed.
> The test passed after running for 5 hours.
> 
> End-to-end performance test: It contains the 32 test scenarios designed in
> FLIP-83.
> Test results: With the default parameters, performance regresses by about 3%
> compared to 1.9.1.
> 
> If FLIP-49 is skipped (by adding taskmanager.memory.managed.fraction: 0 and
> taskmanager.memory.flink.size: 1568m to flink-conf.yaml),
> the performance improves by about 5% compared to 1.9.1.
> 
> 
> I confirmed with @Xintong Song
> <https://cwiki.apache.org/confluence/display/~xintongsong> that the result
> makes sense.
> 
>> On Feb 8, 2020, at 5:54 AM, Gary Yao wrote:
>> 
>> Hi everyone,
>> Please review and vote on the release candidate #3 for the version 1.10.0,
>> as follows:
>> [ ] +1, Approve the release
>> [ ] -1, Do not approve the release (please provide specific comments)
>> 
>> 
>> The complete staging area is available for your review, which includes:
>> * JIRA release notes [1],
>> * the official Apache source release and binary convenience releases to be
>> deployed to dist.apache.org <http://dist.apache.org/> [2], which are signed 
>> with the key with
>> fingerprint BB137807CEFBE7DD2616556710B12A1F89C115E8 [3],
>> * all artifacts to be deployed to the Maven Central Repository [4],
>> * source code tag "release-1.10.0-rc3" [5],
>> * website pull request listing the new release and adding announcement blog
>> post [6][7].
>> 
>> The vote will be open for at least 72 hours. It is adopted by majority
>> approval, with at least 3 PMC affirmative votes.
>> 
>> Thanks,
>> Yu & Gary
>> 
>> [1]
>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12345845
>> [2] https://dist.apache.org/repos/dist/dev/flink/flink-1.10.0-rc3/
>> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
>> [4] https://repository.apache.org/content/repositories/orgapacheflink-1333
>> [5] https://github.com/apache/flink/releases/tag/release-1.10.0-rc3
>> [6] https://github.com/apache/flink-web/pull/302
>> [7] https://github.com/apache/flink-web/pull/301
> 



Re: [VOTE] Integrate Flink Docker image publication into Flink release process

2020-01-30 Thread aihua li
+1 (non-binding)

> On Jan 30, 2020, at 7:36 PM, Igal Shilman wrote:
> 
> +1 (non-binding)
> 
> On Thu, Jan 30, 2020 at 12:18 PM Yu Li  wrote:
> 
>> +1 (non-binding)
>> 
>> Best Regards,
>> Yu
>> 
>> 
>> On Thu, 30 Jan 2020 at 18:35, Arvid Heise  wrote:
>> 
>>> +1 (non-binding)
>>> 
>>> On Thu, Jan 30, 2020 at 11:10 AM Fabian Hueske 
>> wrote:
>>> 
 Hi Ismael,
 
> Just one question, we will be able to still be featured as an
>> official
 docker image in this case?
 
 Yes, that's the goal. We still want to publish official DockerHub
>> images
 for every Flink release.
 Since we're mainly migrating the docker-flink/docker-flink repo to
 apache/flink-docker, this should just work as before.
 
 Less important images (playgrounds, demos) would be published via ASF
>>> Infra
 under the Apache DockerHub user [1].
 
 Best,
 Fabian
 
 [1] https://hub.docker.com/u/apache
 
 On Thu, Jan 30, 2020 at 6:12 AM, Hequn Cheng wrote:
 
> +1
> 
> Even though I prefer to contribute the Dockerfiles into the Flink
>> main
> repo,
> but I think a dedicate repo is also a good idea.
> 
> Thanks a lot for driving this! @Ufuk Celebi
> 
> On Thu, Jan 30, 2020 at 12:02 PM Peter Huang <
>>> huangzhenqiu0...@gmail.com
> 
> wrote:
> 
>> +1 (non-binding)
>> 
>> 
>> 
>> On Wed, Jan 29, 2020 at 5:54 PM Yang Wang 
 wrote:
>> 
>>> +1 (non-binding)
>>> 
>>> 
>>> Best,
>>> Yang
>>> 
>>> On Thu, Jan 30, 2020 at 12:53 AM, Rong Rong wrote:
>>> 
 +1
 
 --
 Rong
 
 On Wed, Jan 29, 2020 at 8:51 AM Ismaël Mejía <
>> ieme...@gmail.com>
>> wrote:
 
> +1 (non-binding)
> 
> No more maintenance work for us Patrick! Just kidding :), it
>>> was
>> mostly
> done by Patrick, all kudos to him.
> Just one question, we will be able to still be featured as an
>> official
> docker image in this case?
> 
> Best,
> Ismaël
> 
> ps. Hope having an official Helm chart becomes also a future
> target.
> 
> 
> 
> On Tue, Jan 28, 2020 at 3:26 PM Fabian Hueske <
 fhue...@apache.org>
 wrote:
> 
>> +1
>> 
>> On Tue, Jan 28, 2020 at 3:23 PM, Yun Tang wrote:
>> 
>>> +1 (non-binding)
>>> 
>>> From: Stephan Ewen 
>>> Sent: Tuesday, January 28, 2020 21:36
>>> To: dev ; patr...@ververica.com <
>>> patr...@ververica.com>
>>> Subject: Re: [VOTE] Integrate Flink Docker image
>>> publication
> into
 Flink
>>> release process
>>> 
>>> +1
>>> 
>>> On Tue, Jan 28, 2020 at 2:20 PM Patrick Lucas <
>>> patr...@ververica.com
> 
>>> wrote:
>>> 
 Thanks for kicking this off, Ufuk.
 
 +1 (non-binding)
 
 --
 Patrick
 
 On Mon, Jan 27, 2020 at 5:50 PM Ufuk Celebi <
 u...@apache.org>
 wrote:
 
> Hey all,
> 
> there is a proposal to contribute the Dockerfiles and
> scripts
>>> of
> https://github.com/docker-flink/docker-flink to the
 Flink
 project.
>> The
> discussion corresponding to this vote outlines the
> reasoning
>>> for
> the
> proposal and can be found here: [1].
> 
> The proposal is as follows:
> * Request a new repository apache/flink-docker
> * Migrate all files from docker-flink/docker-flink to
>>> apache/flink-docker
> * Update the release documentation to describe how to
> update
> apache/flink-docker for new releases
> 
> Please review and vote on this proposal as follows:
> [ ] +1, Approve the proposal
> [ ] -1, Do not approve the proposal (please provide
> specific
>> comments)
> 
> The vote will be open for at least 3 days, ending the
>> earliest
 on:
 January
> 30th 2020, 17:00 UTC.
> 
> Cheers,
> 
> Ufuk
> 
> PS: I'm treating this proposal similar to a "Release
 Plan"
> as
>> mentioned
 in
> the project bylaws [2]. Please let me know if you
 consider
>>> this a
 different
> category.
> 
> [1]
> 
 
>>> 
>> 
> 
 
>>> 
>> 
> 
 
>>> 
>> http://apache-flink-mailing-list-archive

Re: [ANNOUNCE] Yu Li became a Flink committer

2020-01-28 Thread aihua li
Congratulations Yu Li, well deserved.

> On Jan 23, 2020, at 4:59 PM, Stephan Ewen wrote:
> 
> Hi all!
> 
> We are announcing that Yu Li has joined the rank of Flink committers.
> 
> Yu joined already in late December, but the announcement got lost because
> of the Christmas and New Years season, so here is a belated proper
> announcement.
> 
> Yu is one of the main contributors to the state backend components in the
> recent year, working on various improvements, for example the RocksDB
> memory management for 1.10.
> He has also been one of the release managers for the big 1.10 release.
> 
> Congrats for joining us, Yu!
> 
> Best,
> Stephan



Re: [ANNOUNCE] Dian Fu becomes a Flink committer

2020-01-16 Thread aihua li
Congratulations!  Dian Fu

> On Jan 16, 2020, at 6:22 PM, Shuo Cheng wrote:
> 
> Congratulations!  Dian Fu



Re: [ANNOUNCE] Zhu Zhu becomes a Flink committer

2019-12-15 Thread aihua li
Congratulations, zhuzhu!

> On Dec 16, 2019, at 10:04 AM, Jingsong Li wrote:
> 
> Congratulations Zhu Zhu!
> 
> Best,
> Jingsong Lee
> 
> On Mon, Dec 16, 2019 at 10:01 AM Yang Wang  wrote:
> 
>> Congratulations, Zhu Zhu!
>> 
>> On Mon, Dec 16, 2019 at 9:56 AM, wenlong.lwl wrote:
>> 
>>> Congratulations, Zhu Zhu!
>>> 
>>> On Mon, 16 Dec 2019 at 09:14, Leonard Xu  wrote:
>>> 
 Congratulations, Zhu Zhu ! !
 
 Best,
 Leonard Xu
 
> On Dec 16, 2019, at 07:53, Becket Qin  wrote:
> 
> Congrats, Zhu Zhu!
> 
> On Sun, Dec 15, 2019 at 10:26 PM Dian Fu 
>>> wrote:
> 
>> Congrats Zhu Zhu!
>> 
>>> On Dec 15, 2019, at 6:23 PM, Zhu Zhu wrote:
>>> 
>>> Thanks everyone for the warm welcome!
>>> It's my honor and pleasure to improve Flink with all of you in the
>>> community!
>>> 
>>> Thanks,
>>> Zhu Zhu
>>> 
>>> On Sun, Dec 15, 2019 at 3:54 PM, Benchao Li wrote:
>>> 
 Congratulations!:)
 
 On Sun, Dec 15, 2019 at 11:47 AM, Hequn Cheng wrote:
 
> Congrats, Zhu Zhu!
> 
> Best, Hequn
> 
> On Sun, Dec 15, 2019 at 6:11 AM Shuyi Chen 
 wrote:
> 
>> Congratulations!
>> 
>> On Sat, Dec 14, 2019 at 7:59 AM Rong Rong 
>> wrote:
>> 
>>> Congrats Zhu Zhu :-)
>>> 
>>> --
>>> Rong
>>> 
>>> On Sat, Dec 14, 2019 at 4:47 AM tison 
 wrote:
>>> 
 Congratulations!:)
 
 Best,
 tison.
 
 
 On Sat, Dec 14, 2019 at 7:34 PM, OpenInx wrote:
 
> Congrats Zhu Zhu!
> 
> On Sat, Dec 14, 2019 at 2:38 PM Jeff Zhang >> 
> wrote:
> 
>> Congrats, Zhu Zhu!
>> 
>> On Sat, Dec 14, 2019 at 10:29 AM, Paul Lam wrote:
>> 
>>> Congrats Zhu Zhu!
>>> 
>>> Best,
>>> Paul Lam
>>> 
>>> On Sat, Dec 14, 2019 at 10:22 AM, Kurt Young wrote:
>>> 
 Congratulations Zhu Zhu!
 
 Best,
 Kurt
 
 
 On Sat, Dec 14, 2019 at 10:04 AM jincheng sun <
>> sunjincheng...@gmail.com>
 wrote:
 
> Congrats ZhuZhu and welcome on board!
> 
> Best,
> Jincheng
> 
> 
> On Sat, Dec 14, 2019 at 9:55 AM, Jark Wu wrote:
> 
>> Congratulations, Zhu Zhu!
>> 
>> Best,
>> Jark
>> 
>> On Sat, 14 Dec 2019 at 08:20, Yangze Guo <
>> karma...@gmail.com
 
>> wrote:
>> 
>>> Congrats, ZhuZhu!
>>> 
>>> On Sat, Dec 14, 2019 at 5:37 AM, Bowen Li wrote:
>>> 
 Congrats!
 
 On Fri, Dec 13, 2019 at 10:42 AM Xuefu Z <
 usxu...@gmail.com>
 wrote:
 
> Congratulations, Zhu Zhu!
> 
> On Fri, Dec 13, 2019 at 10:37 AM Peter Huang <
>>> huangzhenqiu0...@gmail.com
> 
> wrote:
> 
>> Congratulations!:)
>> 
>> On Fri, Dec 13, 2019 at 9:45 AM Piotr Nowojski
 <
>> pi...@ververica.com>
>> wrote:
>> 
>>> Congratulations! :)
>>> 
 On 13 Dec 2019, at 18:05, Fabian Hueske <
>>> fhue...@gmail.com
> 
>>> wrote:
 
 Congrats Zhu Zhu and welcome on board!
 
 Best, Fabian
 
 On Fri, Dec 13, 2019 at 5:51 PM, Till Rohrmann wrote:
 
> Hi everyone,
> 
> I'm very happy to announce that Zhu Zhu
>> accepted
 the
>>> offer
> of
>>> the
>> Flink
>>> PMC
> to become a committer of the Flink
 project.
> 
> Zhu Zhu has been an active community
 member
>> for
 more
>>> than
 a
>> year
> now.
>>> Zhu
> Zhu played an essential role in the
>

Re: [DISCUSS] Remove old WebUI

2019-11-24 Thread aihua li
+1 to drop the old UI.

> On Nov 21, 2019, at 8:04 PM, Chesnay Schepler wrote:
> 
> Hello everyone,
> 
> Flink 1.9 shipped with a new UI, with the old one being kept around as a 
> backup in case something wasn't working as expected.
> 
> Currently there are no issues indicating any significant problems (exclusive 
> to the new UI), so I wanted to check what people think about dropping the old 
> UI for 1.10.
> 



Re: [DISCUSS] FLIP-83: Flink End-to-end Performance Testing Framework

2019-11-21 Thread aihua li
>>>>>> Thanks Yu for driving this.
>>>>>> Just curious whether we can collect metrics about job scheduling and
>>>>>> task launch; the speed of this part is also important.
>>>>>> We can add tests to watch it too.
>>>>>> 
>>>>>> Look forward to more batch test support.
>>>>>> 
>>>>>> Best,
>>>>>> Jingsong Lee
>>>>>> 
>>>>>> On Mon, Nov 4, 2019 at 10:00 AM OpenInx  wrote:
>>>>>> 
>>>>>>>> The test cases are written in java and scripts in python. We
>>>> propose
>>>>> a
>>>>>>> separate directory/module in parallel with flink-end-to-end-tests,
>>>> with
>>>>>> the
>>>>>>>> name of flink-end-to-end-perf-tests.
>>>>>>> 
>>>>>>> Glad to see that the newly introduced e2e test will be written in
>>>> Java.
>>>>>>> because I'm reworking the existing e2e test suites from Bash scripts
>>>>>>> to Java test cases so that we can support more external systems, such
>>>>>>> as running the testing job on yarn+flink, docker+flink,
>>>>>>> standalone+flink, a distributed Kafka cluster, etc.
>>>>>>> BTW, I think the perf e2e test suites will also need to be designed to
>>>>>>> support running in both standalone and distributed environments; that
>>>>>>> will be helpful for developing and evaluating the performance.
>>>>>>> Thanks.
>>>>>>> 
>>>>>>> On Mon, Nov 4, 2019 at 9:31 AM aihua li 
>>>> wrote:
>>>>>>> 
>>>>>>>> In stage1, checkpointing is not disabled, and heap is used as the
>>>>>>>> state backend.
>>>>>>>> I think there should be some dedicated scenarios to test checkpointing
>>>>>>>> and state backends, which will be discussed and added in release-1.11.
>>>>>>>> 
>>>>>>>>> On Nov 2, 2019, at 12:13 AM, Yun Tang wrote:
>>>>>>>>> 
>>>>>>>>> By the way, do you think it's worth adding a checkpoint mode that
>>>>>>>>> simply disables checkpointing for the end-to-end jobs? And when will
>>>>>>>>> stage2 and stage3 be discussed in more detail?
>>>>>>>> 
>>>>>>>> 
>>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> --
>>>>>> Best, Jingsong Lee
>>>>>> 
>>>>> 
>>>> 
>>> 
>> 



Re: [VOTE] FLIP-83: Flink End-to-end Performance Testing Framework

2019-11-19 Thread aihua li
Hi Becket,

Thanks for the comments!

> 1. How do the testing records look like? The size and key distributions.

Each record is a long string whose default size is 1 KB. The key is randomly
generated within a specified range and combined with a fixed string, which
ensures that the data is evenly distributed across the tasks (see the sketch
after these answers).

> 2. The resources for each task.

Each task uses the default resource settings.

> 3. The intended configuration for the jobs.

The parallelism of the test job is adjusted according to the available resources
to fill the cluster as much as possible. Other configurations are not supported
at this time.

> 4. What exact source and sink it would use.

To reduce external dependencies, the source data is generated randomly; the sink
either writes to HDFS or is omitted entirely.

We will add these details to the FLIP later.
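
As a rough illustration of answer 1 — a sketch, not the actual code in the test suite; the class name, key range, and padding scheme below are made up:

import java.util.Random;

// Hypothetical sketch of the record generation described above (not the real test code):
// each record is "randomKey + fixed padding string" and is roughly 1 KB by default.
public class RecordGenerator {

    private static final int RECORD_SIZE = 1024;  // default record size: ~1 KB

    private final Random random = new Random();
    private final int keyRange;   // keys are drawn uniformly from [0, keyRange)
    private final String payload; // fixed padding string that brings the record to the target size

    public RecordGenerator(int keyRange) {
        this.keyRange = keyRange;
        StringBuilder sb = new StringBuilder(RECORD_SIZE);
        while (sb.length() < RECORD_SIZE) {
            sb.append('a');
        }
        this.payload = sb.toString();
    }

    // Random keys within the range spread the records evenly across the downstream tasks.
    public String next() {
        return random.nextInt(keyRange) + "_" + payload;
    }

    public static void main(String[] args) {
        RecordGenerator generator = new RecordGenerator(1000);
        System.out.println(generator.next().length()); // roughly 1 KB plus the key prefix
    }
}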


> On Nov 18, 2019, at 7:59 PM, Becket Qin wrote:
> 
> +1 (binding) on having the test suite.
> 
> BTW, it would be good to have a few more details about the performance
> tests. For example:
> 1. How do the testing records look like? The size and key distributions.
> 2. The resources for each task.
> 3. The intended configuration for the jobs.
> 4. What exact source and sink it would use.
> 
> Thanks,
> 
> Jiangjie (Becket) Qin
> 
> On Mon, Nov 18, 2019 at 7:25 PM Zhijiang 
> wrote:
> 
>> +1 (binding)!
>> 
>> It is a good thing to enhance our testing work.
>> 
>> Best,
>> Zhijiang
>> 
>> 
>> --
>> From:Hequn Cheng 
>> Send Time:2019 Nov. 18 (Mon.) 18:22
>> To:dev 
>> Subject:Re: [VOTE] FLIP-83: Flink End-to-end Performance Testing Framework
>> 
>> +1 (binding)!
>> I think this would be very helpful to detect regression problems.
>> 
>> Best, Hequn
>> 
>> On Mon, Nov 18, 2019 at 4:28 PM vino yang  wrote:
>> 
>>> +1 (non-binding)
>>> 
>>> Best,
>>> Vino
>>> 
>>> On Mon, Nov 18, 2019 at 2:31 PM, jincheng sun wrote:
>>> 
>>>> +1  (binding)
>>>> 
>>>> On Mon, Nov 18, 2019 at 12:09 PM, OpenInx wrote:
>>>> 
>>>>> +1  (non-binding)
>>>>> 
>>>>> On Mon, Nov 18, 2019 at 11:54 AM aihua li 
>>> wrote:
>>>>> 
>>>>>> +1  (non-binding)
>>>>>> 
>>>>>> Thanks Yu Li for driving on this.
>>>>>> 
>>>>>>> On Nov 15, 2019, at 8:10 PM, Yu Li wrote:
>>>>>>> 
>>>>>>> Hi All,
>>>>>>> 
>>>>>>> I would like to start the vote for FLIP-83 [1] which is discussed
>>> and
>>>>>>> reached consensus in the discussion thread [2].
>>>>>>> 
>>>>>>> The vote will be open for at least 72 hours (excluding weekend).
>>> I'll
>>>>> try
>>>>>>> to close it by 2019-11-20 21:00 CST, unless there is an objection
>>> or
>>>>> not
>>>>>>> enough votes.
>>>>>>> 
>>>>>>> [1]
>>>>>>> 
>>>>>> 
>>>>> 
>>>> 
>>> 
>> https://cwiki.apache.org/confluence/display/FLINK/FLIP-83%3A+Flink+End-to-end+Performance+Testing+Framework
>>>>>>> [2] https://s.apache.org/7fqrz
>>>>>>> 
>>>>>>> Best Regards,
>>>>>>> Yu
>>>>>> 
>>>>>> 
>>>>> 
>>>> 
>>> 
>> 
>> 



Re: [VOTE] FLIP-83: Flink End-to-end Performance Testing Framework

2019-11-17 Thread aihua li
+1  (non-binding) 

Thanks Yu Li for driving on this.

> On Nov 15, 2019, at 8:10 PM, Yu Li wrote:
> 
> Hi All,
> 
> I would like to start the vote for FLIP-83 [1] which is discussed and
> reached consensus in the discussion thread [2].
> 
> The vote will be open for at least 72 hours (excluding weekend). I'll try
> to close it by 2019-11-20 21:00 CST, unless there is an objection or not
> enough votes.
> 
> [1]
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-83%3A+Flink+End-to-end+Performance+Testing+Framework
> [2] https://s.apache.org/7fqrz
> 
> Best Regards,
> Yu



Re: [DISCUSS] FLIP-83: Flink End-to-end Performance Testing Framework

2019-11-03 Thread aihua li
In stage1, checkpointing is not disabled, and heap is used as the state backend;
a minimal sketch of such a setup follows below.
I think there should be some dedicated scenarios to test checkpointing and
state backends, which will be discussed and added in release-1.11.
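
As a rough sketch of that stage1 setup (Flink 1.9-era DataStream API; the checkpoint interval and the checkpoint path below are placeholders, not values from the FLIP):

import org.apache.flink.runtime.state.filesystem.FsStateBackend;
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class Stage1CheckpointSetup {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Checkpointing stays enabled in stage1; the 60s interval is a placeholder.
        env.enableCheckpointing(60_000, CheckpointingMode.EXACTLY_ONCE);
        // Heap-based state backend: state lives on the TaskManager heap,
        // snapshots go to the configured filesystem path (placeholder URI).
        env.setStateBackend(new FsStateBackend("hdfs:///tmp/flink-checkpoints"));
        // ... build the test topology here and call env.execute("stage1-test") ...
    }
}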

> On Nov 2, 2019, at 12:13 AM, Yun Tang wrote:
> 
> By the way, do you think it's worth adding a checkpoint mode that just
> disables checkpointing for the end-to-end jobs? And when will stage2 and stage3
> be discussed in more detail?



Re: [ANNOUNCE] Kete Young is now part of the Flink PMC

2019-07-23 Thread aihua li
Congratulations Kurt, Well deserved.

> On Jul 23, 2019, at 5:24 PM, Robert Metzger wrote:
> 
> Hi all,
> 
> On behalf of the Flink PMC, I'm happy to announce that Kete Young is now
> part of the Apache Flink Project Management Committee (PMC).
> 
> Kete has been a committer since February 2017, working a lot on Table API /
> SQL. He's currently co-managing the 1.9 release! Thanks a lot for your work
> for Flink!
> 
> Congratulations & Welcome Kurt!
> 
> Best,
> Robert



Re: [ANNOUNCE] Zhijiang Wang has been added as a committer to the Flink project

2019-07-22 Thread aihua li
Congratulations Zhijiang!

> On Jul 22, 2019, at 10:11 PM, Robert Metzger wrote:
> 
> Hey all,
> 
> We've added another committer to the Flink project: Zhijiang Wang.
> 
> Congratulations Zhijiang!
> 
> Best,
> Robert
> (on behalf of the Flink PMC)



I want to contribute to Apache Flink

2019-04-29 Thread aihua li
Hi,

I want to contribute to Apache Flink.
Would you please give me the contributor permission?
My JIRA ID is aiwa.