Re: [DISCUSS] Drop Python 2 support for 1.10

2019-10-08 Thread Yu Li
Thanks for bringing this up, Dian.

Since Python 2.7 support was added in 1.9.0 and Python 2 will reach EOL near
the planned release time for 1.10, I can see a good reason to take option 1.

Please remember to add an explicit release note, and it would be better to
also send a notification to the user ML about the plan to drop it, just in
case some 1.9.0 users are already using Python 2.7 in their production env.

Best Regards,
Yu


On Wed, 9 Oct 2019 at 11:13, Jeff Zhang  wrote:

> +1
>
> Hequn Cheng wrote on Wed, Oct 9, 2019 at 11:07 AM:
>
> > Hi Dian,
> >
> > +1 to drop Python 2 directly.
> >
> > Just as @jincheng said, things would be more complicated if we are going
> > to support Python UDFs.
> > The Python UDFs will introduce a lot of Python dependencies that will
> > also drop Python 2 support themselves, such as Beam, pandas, pyarrow, etc.
> > Given this, and that Python 2 will reach EOL on Jan 1, 2020, I think we
> > can drop Python 2 in Flink as well.
> >
> > As for the two options, I think we can drop it directly in 1.10. As
> > flink-python was only introduced in 1.9, I think it's safe to drop it
> > now.
> > And we can also benefit from it when we add support for Python UDFs.
> >
> > Best, Hequn
> >
> >
> > On Wed, Oct 9, 2019 at 8:40 AM jincheng sun 
> > wrote:
> >
> > > Hi Dian,
> > >
> > > Thanks for bringing up this discussion!
> > >
> > > In Flink 1.9 we only added the Python Table API mapping to the Java
> > > Table API (without Python UDFs), so there were no special requirements
> > > on the Python version and we added Python 2.7 support. But in Flink
> > > 1.10 we add Python UDF support, i.e., users will put more Python code
> > > into their Flink jobs, with more requirements on the features of the
> > > Python language. So I think it's better to follow the rhythm of the
> > > official Python releases.
> > >
> > > Option 2 is the most conservative and correct approach, but in the
> > > current situation we cooperate with the Beam community and use Beam's
> > > portability framework for UDF support, so we prefer to adopt Option 1.
> > >
> > > Best,
> > > Jincheng
> > >
> > >
> > >
> > > Dian Fu wrote on Tue, Oct 8, 2019 at 10:34 PM:
> > >
> > > > Hi everyone,
> > > >
> > > > I would like to propose to drop Python 2 support (currently Python
> > > > 2.7, 3.5, 3.6 and 3.7 are all supported in Flink) as it reaches end
> > > > of life on Jan 1, 2020 [1]. A lot of projects [2][3][4] have already
> > > > stated or are planning to drop Python 2 support.
> > > >
> > > > The benefits of dropping Python 2 support are:
> > > > 1. Maintaining Python 2/3 compatibility is a burden and it makes the
> > > > code complicated, as Python 2 and Python 3 are not compatible.
> > > > 2. There are many features which are only available in Python 3.x,
> > > > such as type hints [5]. We can only make use of such features after
> > > > dropping Python 2 support.
> > > > 3. flink-python depends on third-party projects such as Apache Beam
> > > > (and may add more dependencies such as pandas, etc. in the near
> > > > future); it won't be possible to upgrade them to their latest
> > > > versions once they drop Python 2 support.
> > > >
> > > > Here are the options we have:
> > > > 1. Drop Python 2 support in 1.10:
> > > > As flink-python is a new module added in 1.9.0, dropping Python 2
> > > > support at this early stage seems a good choice for us.
> > > > 2. Deprecate Python 2 in 1.10 and drop its support in 1.11:
> > > > As 1.10 is planned to be released around the beginning of 2020, this
> > > > is also aligned with the official Python 2 support timeline.
> > > >
> > > > Personally I prefer option 1, as flink-python is a new module and
> > > > there are not many historical reasons to consider.
> > > >
> > > > Looking forward to your feedback!
> > > >
> > > > Regards,
> > > > Dian
> > > >
> > > > [1] https://pythonclock.org/
> > > > [2] https://python3statement.org/
> > > > [3] https://spark.apache.org/news/plan-for-dropping-python-2-support.html
> > > > [4] https://lists.apache.org/thread.html/eba6caa58ea79a7ecbc8560d1c680a366b44c531d96ce5c699d41535@%3Cdev.beam.apache.org%3E
> > > > [5] https://stackoverflow.com/questions/32557920/what-are-type-hints-in-python-3-5
> > >
> >
>
>
> --
> Best Regards
>
> Jeff Zhang
>


Re: [DISCUSS] Flink Python UDF Environment and Dependency Management

2019-10-08 Thread Wei Zhong
Hi Jincheng, Dian and Jeff,

Thank you for your replies and comments in the Google doc! I think we have come
to an agreement on the design doc with only minor changes as follows:
  - Using the API "set_python_executable" instead of "set_environment_variable"
to set the Python executable file path.
  - Making the argument "requirements_cached_dir" of the API
"set_python_requirements" optional, to support uploading only a requirements.txt
file.

I'm also glad to hear any other opinions!

Thanks,
Wei


> On Sep 26, 2019, at 15:23, Dian Fu wrote:
> 
> Hi Wei,
> 
> Thanks a lot for bringing up this discussion. Python dependency management is 
> very important for Python users. I have left a few comments on the design doc.
> 
> Thanks,
> Dian
> 
>> On Sep 26, 2019, at 12:23 PM, jincheng sun wrote:
>> 
>> Thanks for bringing up the discussion, Wei.
>> Overall the design doc looks good. I have left a few comments.
>> 
>> BTW: dependency management is very important for Python UDFs; everyone is
>> welcome to leave suggestions!
>> 
>> Best,
>> Jincheng
>> 
>> Wei Zhong wrote on Thu, Sep 26, 2019 at 11:59 AM:
>> 
>>> Hi everyone,
>>> 
>>> In FLIP-58 [1] we have a plan to support Python UDFs. As a critical part of
>>> Python UDF support, the environment and dependency management of users'
>>> Python code has not been fully discussed.
>>> 
>>> I'd like to start a discussion on "Flink Python UDF Environment and
>>> Dependency Management". Here is the design doc I drafted:
>>> 
>>> 
>>> https://docs.google.com/document/d/1vq5J3TSyhscQXbpRhz-Yd3KCX62PBJeC_a_h3amUvJ4/edit?usp=sharing
>>> 
>>> Please take a look, and feedbacks are welcome.
>>> 
>>> Thanks,
>>> Wei
>>> 
>>> [1]:
>>> https://cwiki.apache.org/confluence/display/FLINK/FLIP-58%3A+Flink+Python+User-Defined+Stateless+Function+for+Table
>>> 
>>> 
>>> 
> 



Re: [VOTE] Release 1.9.1, release candidate #1

2019-10-08 Thread Jark Wu
Thanks Jincheng and Till. Then let's keep verifying RC1.

Best,
Jark

On Wed, 9 Oct 2019 at 11:00, jincheng sun  wrote:

> I think we should only create a new RC if we find blocker issues.
> We can look forward to the other check results, and add the fix for
> FLINK-14315 into 1.9.1 only if we find blockers.
>
> Best,
> Jincheng
>
> Till Rohrmann wrote on Tue, Oct 8, 2019 at 8:20 PM:
>
>> FLINK-14315 has been merged into the release-1.9 branch. I've marked the
>> fix version of this ticket as 1.9.2. If we should create a new RC, then we
>> could include this fix. If this happens, then we need to update the fix
>> version to 1.9.1.
>>
>> Cheers,
>> Till
>>
>> On Tue, Oct 8, 2019 at 1:51 PM Till Rohrmann 
>> wrote:
>>
>> > If people already spent time on verifying the current RC I would also be
>> > fine to release the fix for FLINK-14315 with Flink 1.9.2.
>> >
>> > I will try to merge the PR as soon as possible. When I close the
>> ticket, I
>> > will update the fix version field to 1.9.2.
>> >
>> > Cheers,
>> > Till
>> >
>> > On Tue, Oct 8, 2019 at 4:43 AM Jark Wu  wrote:
>> >
>> >> Hi Zili,
>> >>
>> >> Thanks for reminding me of this. Because of the Chinese National Day
>> >> and Flink Forward Europe, we didn't receive any verification on the
>> >> 1.9.1 RC1, and I guess we have to extend the voting time until after
>> >> Flink Forward. So I'm fine with including FLINK-14315 and rebuilding
>> >> another RC. What do you think @Till @Jincheng?
>> >>
>> >> I guess FLINK-14315 will be merged soon, as it was approved 4 days ago?
>> >> Could you help to merge it once it has passed? @Zili Chen
>> >> 
>> >>
>> >> Best,
>> >> Jark
>> >>
>> >> On Tue, 8 Oct 2019 at 09:14, Zili Chen  wrote:
>> >>
>> >>> Hi Jark,
>> >>>
>> >>> I notice a critical bug [1] is marked resolved in 1.9.1, but given 1.9.1
>> >>> has been cut, I'd like to raise the issue here so that we're sure
>> >>> whether or not it is included in 1.9.1.
>> >>>
>> >>> Best,
>> >>> tison.
>> >>>
>> >>> [1] https://issues.apache.org/jira/browse/FLINK-14315
>> >>>
>> >>>
>> >>> Jark Wu  于2019年9月30日周一 下午3:25写道:
>> >>>
>>   Hi everyone,
>> 
>>  Please review and vote on the release candidate #1 for the version
>>  1.9.1,
>>  as follows:
>>  [ ] +1, Approve the release
>>  [ ] -1, Do not approve the release (please provide specific comments)
>> 
>> 
>>  The complete staging area is available for your review, which
>> includes:
>>  * JIRA release notes [1],
>>  * the official Apache source release and binary convenience releases
>> to
>>  be
>>  deployed to dist.apache.org [2], which are signed with the key with
>>  fingerprint E2C45417BED5C104154F341085BACB5AEFAE3202 [3],
>>  * all artifacts to be deployed to the Maven Central Repository [4],
>>  * source code tag "release-1.9.1-rc1" [5],
>>  * website pull request listing the new release and adding
>> announcement
>>  blog
>>  post [6].
>> 
>>  The vote will be open for at least 72 hours.
>>  Please cast your votes before *Oct. 3rd 2019, 08:00 UTC*.
>> 
>>  It is adopted by majority approval, with at least 3 PMC affirmative
>>  votes.
>> 
>>  Thanks,
>>  Jark
>> 
>>  [1]
>> 
>> 
>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12346003
>>  [2] https://dist.apache.org/repos/dist/dev/flink/flink-1.9.1-rc1/
>>  [3] https://dist.apache.org/repos/dist/release/flink/KEYS
>>  [4]
>> 
>> https://repository.apache.org/content/repositories/orgapacheflink-1272/
>>  [5]
>> 
>> 
>> https://github.com/apache/flink/commit/4d56de81cb692c68a7d1dbfff13087a5079a8252
>>  [6] https://github.com/apache/flink-web/pull/274
>> 
>> >>>
>>
>


Re: [DISCUSS] Drop Python 2 support for 1.10

2019-10-08 Thread Jeff Zhang
+1

Hequn Cheng wrote on Wed, Oct 9, 2019 at 11:07 AM:

> Hi Dian,
>
> +1 to drop Python 2 directly.
>
> Just as @jincheng said, things would be more complicated if we are going to
> support Python UDFs.
> The Python UDFs will introduce a lot of Python dependencies that will also
> drop Python 2 support themselves, such as Beam, pandas, pyarrow, etc.
> Given this, and that Python 2 will reach EOL on Jan 1, 2020, I think we can
> drop Python 2 in Flink as well.
>
> As for the two options, I think we can drop it directly in 1.10. As
> flink-python was only introduced in 1.9, I think it's safe to drop it
> now.
> And we can also benefit from it when we add support for Python UDFs.
>
> Best, Hequn
>
>
> On Wed, Oct 9, 2019 at 8:40 AM jincheng sun 
> wrote:
>
> > Hi Dian,
> >
> > Thanks for bringing up this discussion!
> >
> > In Flink 1.9 we only added the Python Table API mapping to the Java Table
> > API (without Python UDFs), so there were no special requirements on the
> > Python version and we added Python 2.7 support. But in Flink 1.10 we add
> > Python UDF support, i.e., users will put more Python code into their
> > Flink jobs, with more requirements on the features of the Python
> > language. So I think it's better to follow the rhythm of the official
> > Python releases.
> >
> > Option 2 is the most conservative and correct approach, but in the
> > current situation we cooperate with the Beam community and use Beam's
> > portability framework for UDF support, so we prefer to adopt Option 1.
> >
> > Best,
> > Jincheng
> >
> >
> >
> > Dian Fu wrote on Tue, Oct 8, 2019 at 10:34 PM:
> >
> > > Hi everyone,
> > >
> > > I would like to propose to drop Python 2 support (currently Python 2.7,
> > > 3.5, 3.6 and 3.7 are all supported in Flink) as it reaches end of life
> > > on Jan 1, 2020 [1]. A lot of projects [2][3][4] have already stated or
> > > are planning to drop Python 2 support.
> > >
> > > The benefits of dropping Python 2 support are:
> > > 1. Maintaining Python 2/3 compatibility is a burden and it makes the
> > > code complicated, as Python 2 and Python 3 are not compatible.
> > > 2. There are many features which are only available in Python 3.x,
> > > such as type hints [5]. We can only make use of such features after
> > > dropping Python 2 support.
> > > 3. flink-python depends on third-party projects such as Apache Beam
> > > (and may add more dependencies such as pandas, etc. in the near
> > > future); it won't be possible to upgrade them to their latest versions
> > > once they drop Python 2 support.
> > >
> > > Here are the options we have:
> > > 1. Drop Python 2 support in 1.10:
> > > As flink-python is a new module added in 1.9.0, dropping Python 2
> > > support at this early stage seems a good choice for us.
> > > 2. Deprecate Python 2 in 1.10 and drop its support in 1.11:
> > > As 1.10 is planned to be released around the beginning of 2020, this is
> > > also aligned with the official Python 2 support timeline.
> > >
> > > Personally I prefer option 1, as flink-python is a new module and there
> > > are not many historical reasons to consider.
> > >
> > > Looking forward to your feedback!
> > >
> > > Regards,
> > > Dian
> > >
> > > [1] https://pythonclock.org/
> > > [2] https://python3statement.org/
> > > [3] https://spark.apache.org/news/plan-for-dropping-python-2-support.html
> > > [4] https://lists.apache.org/thread.html/eba6caa58ea79a7ecbc8560d1c680a366b44c531d96ce5c699d41535@%3Cdev.beam.apache.org%3E
> > > [5] https://stackoverflow.com/questions/32557920/what-are-type-hints-in-python-3-5
> >
>


-- 
Best Regards

Jeff Zhang


[jira] [Created] (FLINK-14348) YarnFileStageTestS3ITCase.testRecursiveUploadForYarnS3a fails to delete files

2019-10-08 Thread Caizhi Weng (Jira)
Caizhi Weng created FLINK-14348:
---

 Summary: YarnFileStageTestS3ITCase.testRecursiveUploadForYarnS3a fails to delete files
 Key: FLINK-14348
 URL: https://issues.apache.org/jira/browse/FLINK-14348
 Project: Flink
  Issue Type: Bug
  Components: Tests
Affects Versions: 1.9.0
Reporter: Caizhi Weng


YarnFileStageTestS3ITCase.testRecursiveUploadForYarnS3a fails with the following exceptions:
{code:java}
15:25:07.359 [ERROR] testRecursiveUploadForYarnS3a(org.apache.flink.yarn.YarnFileStageTestS3ITCase)  Time elapsed: 10.808 s  <<< ERROR!
org.apache.hadoop.fs.s3a.AWSS3IOException: delete on s3a://[secure]/temp/tests-3565b11f-e9be-4213-a98d-0f0ecd123783/testYarn-s3a: com.amazonaws.services.s3.model.MultiObjectDeleteException: One or more objects could not be deleted (Service: null; Status Code: 200; Error Code: null; Request ID: 2D1AE3D999528C34; S3 Extended Request ID: zIX1QsAcsY1ZYSDOeCaYsGJ4bz0NJTy2kw0EYmlJr8Kb7pM8OPmhAKO5XHI26xiOi2tIkTIoBwg=), S3 Extended Request ID: zIX1QsAcsY1ZYSDOeCaYsGJ4bz0NJTy2kw0EYmlJr8Kb7pM8OPmhAKO5XHI26xiOi2tIkTIoBwg=: One or more objects could not be deleted (Service: null; Status Code: 200; Error Code: null; Request ID: 2D1AE3D999528C34; S3 Extended Request ID: zIX1QsAcsY1ZYSDOeCaYsGJ4bz0NJTy2kw0EYmlJr8Kb7pM8OPmhAKO5XHI26xiOi2tIkTIoBwg=)
	at org.apache.flink.yarn.YarnFileStageTestS3ITCase.testRecursiveUploadForYarn(YarnFileStageTestS3ITCase.java:159)
	at org.apache.flink.yarn.YarnFileStageTestS3ITCase.testRecursiveUploadForYarnS3a(YarnFileStageTestS3ITCase.java:190)
Caused by: com.amazonaws.services.s3.model.MultiObjectDeleteException: One or more objects could not be deleted (Service: null; Status Code: 200; Error Code: null; Request ID: 2D1AE3D999528C34; S3 Extended Request ID: zIX1QsAcsY1ZYSDOeCaYsGJ4bz0NJTy2kw0EYmlJr8Kb7pM8OPmhAKO5XHI26xiOi2tIkTIoBwg=)
	at org.apache.flink.yarn.YarnFileStageTestS3ITCase.testRecursiveUploadForYarn(YarnFileStageTestS3ITCase.java:159)
	at org.apache.flink.yarn.YarnFileStageTestS3ITCase.testRecursiveUploadForYarnS3a(YarnFileStageTestS3ITCase.java:190)
{code}
Travis log: [https://travis-ci.org/apache/flink/jobs/595082651]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-14347) YARNSessionFIFOITCase.checkForProhibitedLogContents found a log with prohibited string

2019-10-08 Thread Caizhi Weng (Jira)
Caizhi Weng created FLINK-14347:
---

 Summary: YARNSessionFIFOITCase.checkForProhibitedLogContents found 
a log with prohibited string
 Key: FLINK-14347
 URL: https://issues.apache.org/jira/browse/FLINK-14347
 Project: Flink
  Issue Type: Bug
  Components: Tests
Affects Versions: 1.8.2
Reporter: Caizhi Weng


YARNSessionFIFOITCase.checkForProhibitedLogContents fails with the following 
exception:
{code:java}
14:55:27.643 [ERROR] YARNSessionFIFOITCase.checkForProhibitedLogContents:77->YarnTestBase.ensureNoProhibitedStringInLogFiles:461
Found a file /home/travis/build/apache/flink/flink-yarn-tests/target/flink-yarn-tests-fifo/flink-yarn-tests-fifo-logDir-nm-1_0/application_1570546069180_0001/container_1570546069180_0001_01_01/jobmanager.log with a prohibited string (one of [Exception, Started SelectChannelConnector@0.0.0.0:8081]). Excerpts:
{code}
Travis log link: [https://travis-ci.org/apache/flink/jobs/595082243]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] Drop Python 2 support for 1.10

2019-10-08 Thread Hequn Cheng
Hi Dian,

+1 to drop Python 2 directly.

Just as @jincheng said, things would be more complicated if we are going to
support Python UDFs.
The Python UDFs will introduce a lot of Python dependencies that will also
drop Python 2 support themselves, such as Beam, pandas, pyarrow, etc.
Given this, and that Python 2 will reach EOL on Jan 1, 2020, I think we can
drop Python 2 in Flink as well.

As for the two options, I think we can drop it directly in 1.10. As
flink-python was only introduced in 1.9, I think it's safe to drop it
now.
And we can also benefit from it when we add support for Python UDFs.

Best, Hequn


On Wed, Oct 9, 2019 at 8:40 AM jincheng sun 
wrote:

> Hi Dian,
>
> Thanks for bringing up this discussion!
>
> In Flink 1.9 we only added the Python Table API mapping to the Java Table
> API (without Python UDFs), so there were no special requirements on the
> Python version and we added Python 2.7 support. But in Flink 1.10 we add
> Python UDF support, i.e., users will put more Python code into their Flink
> jobs, with more requirements on the features of the Python language. So I
> think it's better to follow the rhythm of the official Python releases.
>
> Option 2 is the most conservative and correct approach, but in the current
> situation we cooperate with the Beam community and use Beam's portability
> framework for UDF support, so we prefer to adopt Option 1.
>
> Best,
> Jincheng
>
>
>
> Dian Fu wrote on Tue, Oct 8, 2019 at 10:34 PM:
>
> > Hi everyone,
> >
> > I would like to propose to drop Python 2 support (currently Python 2.7,
> > 3.5, 3.6 and 3.7 are all supported in Flink) as it reaches end of life on
> > Jan 1, 2020 [1]. A lot of projects [2][3][4] have already stated or are
> > planning to drop Python 2 support.
> >
> > The benefits of dropping Python 2 support are:
> > 1. Maintaining Python 2/3 compatibility is a burden and it makes the code
> > complicated, as Python 2 and Python 3 are not compatible.
> > 2. There are many features which are only available in Python 3.x, such
> > as type hints [5]. We can only make use of such features after dropping
> > Python 2 support.
> > 3. flink-python depends on third-party projects such as Apache Beam (and
> > may add more dependencies such as pandas, etc. in the near future); it
> > won't be possible to upgrade them to their latest versions once they drop
> > Python 2 support.
> >
> > Here are the options we have:
> > 1. Drop Python 2 support in 1.10:
> > As flink-python is a new module added in 1.9.0, dropping Python 2 support
> > at this early stage seems a good choice for us.
> > 2. Deprecate Python 2 in 1.10 and drop its support in 1.11:
> > As 1.10 is planned to be released around the beginning of 2020, this is
> > also aligned with the official Python 2 support timeline.
> >
> > Personally I prefer option 1, as flink-python is a new module and there
> > are not many historical reasons to consider.
> >
> > Looking forward to your feedback!
> >
> > Regards,
> > Dian
> >
> > [1] https://pythonclock.org/
> > [2] https://python3statement.org/
> > [3] https://spark.apache.org/news/plan-for-dropping-python-2-support.html
> > [4] https://lists.apache.org/thread.html/eba6caa58ea79a7ecbc8560d1c680a366b44c531d96ce5c699d41535@%3Cdev.beam.apache.org%3E
> > [5] https://stackoverflow.com/questions/32557920/what-are-type-hints-in-python-3-5
>


Re: [VOTE] Release 1.9.1, release candidate #1

2019-10-08 Thread jincheng sun
I think we should only create a new RC if we find blocker issues.
We can look forward to the other check results, and add the fix for
FLINK-14315 into 1.9.1 only if we find blockers.

Best,
Jincheng

Till Rohrmann wrote on Tue, Oct 8, 2019 at 8:20 PM:

> FLINK-14315 has been merged into the release-1.9 branch. I've marked the
> fix version of this ticket as 1.9.2. If we should create a new RC, then we
> could include this fix. If this happens, then we need to update the fix
> version to 1.9.1.
>
> Cheers,
> Till
>
> On Tue, Oct 8, 2019 at 1:51 PM Till Rohrmann  wrote:
>
> > If people already spent time on verifying the current RC I would also be
> > fine to release the fix for FLINK-14315 with Flink 1.9.2.
> >
> > I will try to merge the PR as soon as possible. When I close the ticket,
> I
> > will update the fix version field to 1.9.2.
> >
> > Cheers,
> > Till
> >
> > On Tue, Oct 8, 2019 at 4:43 AM Jark Wu  wrote:
> >
> >> Hi Zili,
> >>
> >> Thanks for reminding me of this. Because of the Chinese National Day and
> >> Flink Forward Europe, we didn't receive any verification on the 1.9.1
> >> RC1, and I guess we have to extend the voting time until after Flink
> >> Forward. So I'm fine with including FLINK-14315 and rebuilding another
> >> RC. What do you think @Till @Jincheng?
> >>
> >> I guess FLINK-14315 will be merged soon, as it was approved 4 days ago?
> >> Could you help to merge it once it has passed? @Zili Chen
> >> 
> >>
> >> Best,
> >> Jark
> >>
> >> On Tue, 8 Oct 2019 at 09:14, Zili Chen  wrote:
> >>
> >>> Hi Jark,
> >>>
> >>> I notice a critical bug [1] is marked resolved in 1.9.1, but given 1.9.1
> >>> has been cut, I'd like to raise the issue here so that we're sure
> >>> whether or not it is included in 1.9.1.
> >>>
> >>> Best,
> >>> tison.
> >>>
> >>> [1] https://issues.apache.org/jira/browse/FLINK-14315
> >>>
> >>>
> >>> Jark Wu wrote on Mon, Sep 30, 2019 at 3:25 PM:
> >>>
>   Hi everyone,
> 
>  Please review and vote on the release candidate #1 for the version
>  1.9.1,
>  as follows:
>  [ ] +1, Approve the release
>  [ ] -1, Do not approve the release (please provide specific comments)
> 
> 
>  The complete staging area is available for your review, which
> includes:
>  * JIRA release notes [1],
>  * the official Apache source release and binary convenience releases
> to
>  be
>  deployed to dist.apache.org [2], which are signed with the key with
>  fingerprint E2C45417BED5C104154F341085BACB5AEFAE3202 [3],
>  * all artifacts to be deployed to the Maven Central Repository [4],
>  * source code tag "release-1.9.1-rc1" [5],
>  * website pull request listing the new release and adding announcement
>  blog
>  post [6].
> 
>  The vote will be open for at least 72 hours.
>  Please cast your votes before *Oct. 3rd 2019, 08:00 UTC*.
> 
>  It is adopted by majority approval, with at least 3 PMC affirmative
>  votes.
> 
>  Thanks,
>  Jark
> 
>  [1]
> 
> 
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12346003
>  [2] https://dist.apache.org/repos/dist/dev/flink/flink-1.9.1-rc1/
>  [3] https://dist.apache.org/repos/dist/release/flink/KEYS
>  [4]
> 
> https://repository.apache.org/content/repositories/orgapacheflink-1272/
>  [5]
> 
> 
> https://github.com/apache/flink/commit/4d56de81cb692c68a7d1dbfff13087a5079a8252
>  [6] https://github.com/apache/flink-web/pull/274
> 
> >>>
>


Re: Re: [DISCUSS] FLIP-65: New type inference for Table API UDFs

2019-10-08 Thread Jingsong Li
Thanks Timo for your very nice proposal, big +1 to the FLIP. I left some
minor comments.

A minor concern about the flink-planner: precision-related features may not
be supported there.

Best,
Jingsong Lee

On Tue, Oct 8, 2019 at 5:58 PM zha...@lenovocloud.com <
zha...@lenovocloud.com> wrote:

> unsubscribe
>
>
>
> zha...@lenovocloud.com
>
> From: Jark Wu
> Date: 2019-10-08 17:29
> To: dev
> Subject: Re: [DISCUSS] FLIP-65: New type inference for Table API UDFs
> Hi Timo,
>
> Thanks for the proposal, a big +1 to the FLIP, especially as this enables
> the unified `TableEnvironment.registerFunction()`.
>
> I think the design documentation is already pretty good; I only left some
> minor comments there.
>
> Best,
> Jark
>
> On Fri, 4 Oct 2019 at 23:54, Timo Walther  wrote:
>
> > Hi everyone,
> >
> > I would like to propose FLIP-65 that describes how we want to deal with
> > data types and their inference/extraction in the Table API in the
> > future. I have collected many comments, shortcomings and issues from users
> > and trainings in recent years that went into the design. It completes the
> > work started in FLIP-37 by upgrading the type system to DataTypes and
> > allowing new use cases. Some key features of this FLIP are:
> >
> > - Type extraction closely coupled to the SQL standard (with
> > precision/scale specification and complex data types)
> >
> > - Simple stuff is simple, missing information can be provided with
> > annotations without the need to specify everything from scratch.
> >
> > - Full access to the planner's type inference, which makes it possible to
> > create UDFs as powerful as built-in system functions
> >
> > - Unification of Scala and Java API to enable the unified
> > TableEnvironment.registerFunction()
> >
> >
> > The design document can be found here:
> >
> >
> >
> https://docs.google.com/document/d/1Zf8-okGvCiTTRaTN0IqtTGrrjXYNGFJnhUQyinF4xcU/edit#
> >
> > I will convert it to a wiki page after the first review comments.
> >
> > Happy to hear your thoughts.
> >
> > Thanks,
> >
> > Timo
> >
> > [1]
> >
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-37%3A+Rework+of+the+Table+API+Type+System
> >
> >
>


-- 
Best, Jingsong Lee


Re: [DISCUSS] Add ARM CI build to Flink (information-only)

2019-10-08 Thread Xiyuan Wang
Hi, Flink Team,

According to the discussion, I assume we now agree on running a cron job
for ARM at this moment. I have run the POC e2e tests in OpenLab for
some days [1]. They include:

flink-end-to-end-test-part1: split_checkpoints.sh and split_sticky.sh
flink-end-to-end-test-part2: split_heavy.sh and split_ha.sh
flink-end-to-end-test-part3: split_misc.sh and split_misc_hadoopfree.sh

part1 and part2 run well. part3 is not stable; I need to take more time to
fix it. The container part is not included because of problem 5 mentioned
below (I'll add it once it's solved).

Meanwhile, I did some hacks to make sure the jobs pass. They include:

1. Frocksdb ARM package: https://issues.apache.org/jira/browse/FLINK-13598
(Not solved)
2. PrometheusReporterEndToEndITCase doesn't support ARM arch:
https://issues.apache.org/jira/browse/FLINK-14086 (PR for fix:
https://github.com/apache/flink/pull/9768)
3. Elasticsearch X-Pack machine learning doesn't support ARM:
https://issues.apache.org/jira/browse/FLINK-14126 (PR for fix:
https://github.com/apache/flink/pull/9765)
4. maven-shade-plugin 3.2.1 doesn't work on ARM for Flink (fixed, thanks @Dian Fu)
5. flink e2e container test doesn't support ARM:
https://issues.apache.org/jira/browse/FLINK-14241 (PR for fix:
https://github.com/apache/flink/pull/9782)

Please help review these PRs. Thanks very much.

And I added a PR [2] to make Flink run the cron jobs officially. Once it's
merged, the jobs will be run once a day at 20:00 UTC. The results can be
sent to bui...@flink.apache.org if the Flink team can grant permission for
i...@openlabtesting.org to send mail.


[1]: http://status.openlabtesting.org/builds?project=apache%2Fflink
[2]: https://github.com/apache/flink/pull/9416


Thanks.

Xiyuan Wang  于2019年9月25日周三 下午5:33写道:

> Hi Till
> Thanks for your response. All ARM related work is triggered here:
> https://issues.apache.org/jira/browse/FLINK-13448 and I have created some
> PRs already.
>
> After doing some hacking locally, the E2E tests run well now. I have added
> them into OpenLab already. The POC log:
> http://status.openlabtesting.org/builds?project=apache%2Fflink=periodic-20
> It runs at 20:00 UTC every day. Following the POC, I have created the
> official PR for the cron job as well, which contains the core/test related
> module tests and e2e tests (except container ones):
> https://github.com/apache/flink/pull/9416
>
> Once it's merged, I can configure it on the OpenLab side to send the test
> results every day to bu...@flink.apache.org.
>
> Thanks.
>
>
>
>
>
> Till Rohrmann wrote on Mon, Sep 23, 2019 at 8:40 PM:
>
>> This sounds good Xiyuan. I'd also be in favour of running the ARM builds
>> regularly as cron jobs and once we see that they are stable we could run
>> them for every master commit. Hence, I'd say let's fix the above mentioned
>> problems and then set the nightly cron job up.
>>
>> Cheers,
>> Till
>>
>> On Fri, Sep 20, 2019 at 8:57 AM Xiyuan Wang 
>> wrote:
>>
>> > Sure, we can run the daily ARM job like the Travis CI nightly jobs first.
>> > Once it's stable enough, we can consider adding it to per-PR builds.
>> >
>> > BTW, I tested flink-end-to-end-test on ARM in the last few days. Keeping
>> > the same setup as Travis, all 7 scenarios were tested:
>> >
>> > 1. split_checkpoints.sh
>> > 2. split_sticky.sh
>> > 3. split_ha.sh
>> > 4. split_heavy.sh
>> > 5. split_misc_hadoopfree.sh
>> > 6. split_misc.sh
>> > 7. split_container.sh
>> >
>> > The 1st-6th scenarios work well with some hacking and bug fixing
>> > locally:
>> > 1. frocksdb doesn't have an official ARM release, so I built and
>> > installed it locally for ARM.
>> >   https://issues.apache.org/jira/browse/FLINK-13598
>> > 2. Prometheus has an ARM release but the test always downloads the x86
>> > version. Downloading the correct version fixes the issue.
>> >   https://issues.apache.org/jira/browse/FLINK-14086
>> > 3. Elasticsearch 6.0+ enables the X-Pack machine learning feature by
>> > default, but this feature doesn't support ARM, so Elasticsearch 6.0+
>> > fails to start on ARM. Setting `xpack.ml.enabled: false` fixes this
>> > issue.
>> >   https://issues.apache.org/jira/browse/FLINK-14126
>> >
>> > The 7th scenario, for containers, failed because:
>> > 1. docker-compose doesn't have an official ARM package. Using `apt
>> > install docker-compose` can solve the problem.
>> > 2. minikube doesn't support the ARM arch. Using kubeadm for the K8s
>> > installation can solve the problem.
>> >
>> > Fixing the problems mentioned above is not hard. So I think we can add
>> > the Flink build, unit tests and e2e tests as nightly jobs now.
>> >
>> > Any ideas?
>> >
>> > Thanks.
>> >
>> > Stephan Ewen wrote on Thu, Sep 19, 2019 at 5:44 PM:
>> >
>> > > My gut feeling is that having a CI that only runs on a specific
>> command
>> > > will not help too much.
>> > >
>> > > What about going with nightly builds then? We could set up the ARM CI
>> the
>> > > same way as the Travis CI nightly builds (cron builds). 

Re: [DISCUSS] Drop Python 2 support for 1.10

2019-10-08 Thread jincheng sun
Hi Dian,

Thanks for bringing up this discussion!

In Flink 1.9 we only added the Python Table API mapping to the Java Table API
(without Python UDFs), so there were no special requirements on the Python
version and we added Python 2.7 support. But in Flink 1.10 we add Python UDF
support, i.e., users will put more Python code into their Flink jobs, with
more requirements on the features of the Python language. So I think it's
better to follow the rhythm of the official Python releases.

Option 2 is the most conservative and correct approach, but in the current
situation we cooperate with the Beam community and use Beam's portability
framework for UDF support, so we prefer to adopt Option 1.

Best,
Jincheng



Dian Fu wrote on Tue, Oct 8, 2019 at 10:34 PM:

> Hi everyone,
>
> I would like to propose to drop Python 2 support (currently Python 2.7,
> 3.5, 3.6 and 3.7 are all supported in Flink) as it reaches end of life on
> Jan 1, 2020 [1]. A lot of projects [2][3][4] have already stated or are
> planning to drop Python 2 support.
>
> The benefits of dropping Python 2 support are:
> 1. Maintaining Python 2/3 compatibility is a burden and it makes the code
> complicated, as Python 2 and Python 3 are not compatible.
> 2. There are many features which are only available in Python 3.x, such as
> type hints [5]. We can only make use of such features after dropping
> Python 2 support.
> 3. flink-python depends on third-party projects such as Apache Beam (and
> may add more dependencies such as pandas, etc. in the near future); it
> won't be possible to upgrade them to their latest versions once they drop
> Python 2 support.
>
> Here are the options we have:
> 1. Drop Python 2 support in 1.10:
> As flink-python is a new module added in 1.9.0, dropping Python 2 support
> at this early stage seems a good choice for us.
> 2. Deprecate Python 2 in 1.10 and drop its support in 1.11:
> As 1.10 is planned to be released around the beginning of 2020, this is
> also aligned with the official Python 2 support timeline.
>
> Personally I prefer option 1, as flink-python is a new module and there
> are not many historical reasons to consider.
>
> Looking forward to your feedback!
>
> Regards,
> Dian
>
> [1] https://pythonclock.org/
> [2] https://python3statement.org/
> [3] https://spark.apache.org/news/plan-for-dropping-python-2-support.html
> [4] https://lists.apache.org/thread.html/eba6caa58ea79a7ecbc8560d1c680a366b44c531d96ce5c699d41535@%3Cdev.beam.apache.org%3E
> [5] https://stackoverflow.com/questions/32557920/what-are-type-hints-in-python-3-5


Re: [VOTE] FLIP-57: Rework FunctionCatalog, latest updated

2019-10-08 Thread Xuefu Z
+1

On Tue, Oct 8, 2019 at 7:00 AM Aljoscha Krettek  wrote:

> +1
>
> > On 8. Oct 2019, at 15:35, Timo Walther  wrote:
> >
> > +1
> >
> > Thanks for driving these efforts,
> > Timo
> >
> > On 07.10.19 10:10, Dawid Wysakowicz wrote:
> >> +1 for the FLIP.
> >>
> >> Best,
> >>
> >> Dawid
> >>
> >> On 07/10/2019 08:45, Bowen Li wrote:
> >>> Hi all,
> >>>
> >>> I'd like to start a new voting thread for FLIP-57 [1] on its latest
> status
> >>> despite [2], and we've reached consensus in [2] and [3].
> >>>
> >>> This voting will be open for minimum 3 days till 6:45am UTC, Oct 10.
> >>>
> >>> Thanks,
> >>> Bowen
> >>>
> >>> [1]
> >>>
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-57%3A+Rework+FunctionCatalog
> >>> [2] https://www.mail-archive.com/dev@flink.apache.org/msg30180.html
> >>> [3]
> >>>
> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-FLIP-57-Rework-FunctionCatalog-td32291.html#a32613
> >>>
> >
>
>

-- 
Xuefu Zhang

"In Honey We Trust!"


[DISCUSS] Drop Python 2 support for 1.10

2019-10-08 Thread Dian Fu
Hi everyone,

I would like to propose to drop Python 2 support (currently Python 2.7, 3.5,
3.6 and 3.7 are all supported in Flink) as it reaches end of life on Jan 1, 2020
[1]. A lot of projects [2][3][4] have already stated or are planning to drop
Python 2 support.

The benefits of dropping Python 2 support are:
1. Maintaining Python 2/3 compatibility is a burden and it makes the code
complicated, as Python 2 and Python 3 are not compatible.
2. There are many features which are only available in Python 3.x, such as type
hints [5]. We can only make use of such features after dropping Python 2
support.
3. flink-python depends on third-party projects such as Apache Beam (and may add
more dependencies such as pandas, etc. in the near future); it won't be possible
to upgrade them to their latest versions once they drop Python 2 support.

Here are the options we have:
1. Drop Python 2 support in 1.10:
As flink-python is a new module added in 1.9.0, dropping Python 2 support at
this early stage seems a good choice for us.
2. Deprecate Python 2 in 1.10 and drop its support in 1.11:
As 1.10 is planned to be released around the beginning of 2020, this is also
aligned with the official Python 2 support timeline.

Personally I prefer option 1, as flink-python is a new module and there are not
many historical reasons to consider.

Looking forward to your feedback!

Regards,
Dian

[1] https://pythonclock.org/
[2] https://python3statement.org/
[3] https://spark.apache.org/news/plan-for-dropping-python-2-support.html
[4] https://lists.apache.org/thread.html/eba6caa58ea79a7ecbc8560d1c680a366b44c531d96ce5c699d41535@%3Cdev.beam.apache.org%3E
[5] https://stackoverflow.com/questions/32557920/what-are-type-hints-in-python-3-5

Re: [VOTE] FLIP-57: Rework FunctionCatalog, latest updated

2019-10-08 Thread Aljoscha Krettek
+1

> On 8. Oct 2019, at 15:35, Timo Walther  wrote:
> 
> +1
> 
> Thanks for driving these efforts,
> Timo
> 
> On 07.10.19 10:10, Dawid Wysakowicz wrote:
>> +1 for the FLIP.
>> 
>> Best,
>> 
>> Dawid
>> 
>> On 07/10/2019 08:45, Bowen Li wrote:
>>> Hi all,
>>> 
>>> I'd like to start a new voting thread for FLIP-57 [1] on its latest status
>>> despite [2], and we've reached consensus in [2] and [3].
>>> 
>>> This voting will be open for minimum 3 days till 6:45am UTC, Oct 10.
>>> 
>>> Thanks,
>>> Bowen
>>> 
>>> [1]
>>> https://cwiki.apache.org/confluence/display/FLINK/FLIP-57%3A+Rework+FunctionCatalog
>>> [2] https://www.mail-archive.com/dev@flink.apache.org/msg30180.html
>>> [3]
>>> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-FLIP-57-Rework-FunctionCatalog-td32291.html#a32613
>>> 
> 



Re: [VOTE] FLIP-57: Rework FunctionCatalog, latest updated

2019-10-08 Thread Timo Walther

+1

Thanks for driving these efforts,
Timo

On 07.10.19 10:10, Dawid Wysakowicz wrote:

+1 for the FLIP.

Best,

Dawid

On 07/10/2019 08:45, Bowen Li wrote:

Hi all,

I'd like to start a new voting thread for FLIP-57 [1] on its latest status
despite [2], and we've reached consensus in [2] and [3].

This voting will be open for minimum 3 days till 6:45am UTC, Oct 10.

Thanks,
Bowen

[1]
https://cwiki.apache.org/confluence/display/FLINK/FLIP-57%3A+Rework+FunctionCatalog
[2] https://www.mail-archive.com/dev@flink.apache.org/msg30180.html
[3]
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-FLIP-57-Rework-FunctionCatalog-td32291.html#a32613





Re: [VOTE] Release 1.9.1, release candidate #1

2019-10-08 Thread Till Rohrmann
FLINK-14315 has been merged into the release-1.9 branch. I've marked the
fix version of this ticket as 1.9.2. If we should create a new RC, then we
could include this fix. If this happens, then we need to update the fix
version to 1.9.1.

Cheers,
Till

On Tue, Oct 8, 2019 at 1:51 PM Till Rohrmann  wrote:

> If people already spent time on verifying the current RC I would also be
> fine to release the fix for FLINK-14315 with Flink 1.9.2.
>
> I will try to merge the PR as soon as possible. When I close the ticket, I
> will update the fix version field to 1.9.2.
>
> Cheers,
> Till
>
> On Tue, Oct 8, 2019 at 4:43 AM Jark Wu  wrote:
>
>> Hi Zili,
>>
>> Thanks for reminding me of this. Because of the Chinese National Day and
>> Flink Forward Europe, we didn't receive any verification on the 1.9.1
>> RC1, and I guess we have to extend the voting time until after Flink
>> Forward. So I'm fine with including FLINK-14315 and rebuilding another
>> RC. What do you think @Till @Jincheng?
>>
>> I guess FLINK-14315 will be merged soon, as it was approved 4 days ago?
>> Could you help to merge it once it has passed? @Zili Chen
>> 
>>
>> Best,
>> Jark
>>
>> On Tue, 8 Oct 2019 at 09:14, Zili Chen  wrote:
>>
>>> Hi Jark,
>>>
>>> I notice a critical bug [1] is marked resolved in 1.9.1, but given 1.9.1
>>> has been cut, I'd like to raise the issue here so that we're sure
>>> whether or not it is included in 1.9.1.
>>>
>>> Best,
>>> tison.
>>>
>>> [1] https://issues.apache.org/jira/browse/FLINK-14315
>>>
>>>
>>> Jark Wu wrote on Mon, Sep 30, 2019 at 3:25 PM:
>>>
  Hi everyone,

 Please review and vote on the release candidate #1 for the version
 1.9.1,
 as follows:
 [ ] +1, Approve the release
 [ ] -1, Do not approve the release (please provide specific comments)


 The complete staging area is available for your review, which includes:
 * JIRA release notes [1],
 * the official Apache source release and binary convenience releases to
 be
 deployed to dist.apache.org [2], which are signed with the key with
 fingerprint E2C45417BED5C104154F341085BACB5AEFAE3202 [3],
 * all artifacts to be deployed to the Maven Central Repository [4],
 * source code tag "release-1.9.1-rc1" [5],
 * website pull request listing the new release and adding announcement
 blog
 post [6].

 The vote will be open for at least 72 hours.
 Please cast your votes before *Oct. 3rd 2019, 08:00 UTC*.

 It is adopted by majority approval, with at least 3 PMC affirmative
 votes.

 Thanks,
 Jark

 [1]

 https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12346003
 [2] https://dist.apache.org/repos/dist/dev/flink/flink-1.9.1-rc1/
 [3] https://dist.apache.org/repos/dist/release/flink/KEYS
 [4]
 https://repository.apache.org/content/repositories/orgapacheflink-1272/
 [5]

 https://github.com/apache/flink/commit/4d56de81cb692c68a7d1dbfff13087a5079a8252
 [6] https://github.com/apache/flink-web/pull/274

>>>


[jira] [Created] (FLINK-14346) Performance issue with StringSerializer

2019-10-08 Thread Roman Grebennikov (Jira)
Roman Grebennikov created FLINK-14346:
-

 Summary: Performance issue with StringSerializer
 Key: FLINK-14346
 URL: https://issues.apache.org/jira/browse/FLINK-14346
 Project: Flink
  Issue Type: Improvement
  Components: API / Type Serialization System, Benchmarks
Affects Versions: 1.9.0
 Environment: Tested on Flink 1.9.0, adoptopenjdk 8u222.
Reporter: Roman Grebennikov


While doing performance profiling for our state-heavy Flink streaming job, we
found that quite a significant amount of CPU time is spent inside
StringSerializer writing data to the underlying byte buffer. The hottest part
of the code is the StringValue.writeString function. Replacing the default
StringSerializer with a custom one (just to establish a baseline) that simply
calls DataOutput.writeUTF/readUTF surprisingly yielded an almost 2x speedup
for string serialization.

As writeUTF and writeString have incompatible wire formats, replacing the
latter with the former is not a good idea in general, as it may break
checkpoint/savepoint compatibility.

We also did an early analysis of the root cause of this performance issue: the
main reason JDK's writeUTF is faster is that its code does not write to the
output stream byte by byte, but instead fills an underlying temporary byte
buffer first. This lets HotSpot almost perfectly unroll the main loop, which
results in much better data parallelism.
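
For illustration, a minimal sketch of that buffering idea (the class and
method names plus the length prefix are made up for this example; it is not
the actual Flink patch and does not match Flink's wire format):
{code:java}
import java.io.DataOutput;
import java.io.IOException;

public final class BufferedAsciiWrite {

    // Encode the whole string into a temporary byte array first and emit it
    // with a single write() call, instead of pushing bytes to the stream one
    // by one. The tight, stream-free loop is what HotSpot can unroll.
    public static void writeAscii(String value, DataOutput out) throws IOException {
        final int len = value.length();
        final byte[] buf = new byte[len];
        for (int i = 0; i < len; i++) {
            buf[i] = (byte) value.charAt(i); // ASCII-only for brevity
        }
        out.writeInt(len); // made-up length prefix, not Flink's format
        out.write(buf);
    }
}
{code}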

I've tried to port the ideas from the JVM's implementation of writeUTF back to
StringValue.writeString, and the current result looks nice, with quite a
significant speedup compared to the current implementation:

{code}
Benchmark                             Mode  Cnt    Score   Error  Units
StringSerializerBenchmark.measureJDK  avgt   30   82.871 ± 1.293  ns/op
StringSerializerBenchmark.measureNew  avgt   30   94.004 ± 1.491  ns/op
StringSerializerBenchmark.measureOld  avgt   30  156.905 ± 3.596  ns/op
{code}

 

Here measureJDK is the JDK's writeUTF as a baseline, measureOld is the
current upstream implementation in Flink, and measureNew is the improved
one.

 

The code for the benchmark (and the improved version of the serializer) is
here: [https://github.com/shuttie/flink-string-serializer]

 

Next steps:
 # More benchmarks for non-ASCII strings.
 # Benchmarks for long strings.
 # Benchmarks for deserialization.
 # Tests for old-new wire format compatibility.
 # PR to the Flink codebase.

Is there interest in this kind of performance improvement?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] Release 1.9.1, release candidate #1

2019-10-08 Thread Till Rohrmann
If people already spent time on verifying the current RC I would also be
fine to release the fix for FLINK-14315 with Flink 1.9.2.

I will try to merge the PR as soon as possible. When I close the ticket, I
will update the fix version field to 1.9.2.

Cheers,
Till

On Tue, Oct 8, 2019 at 4:43 AM Jark Wu  wrote:

> Hi Zili,
>
> Thanks for reminding me of this. Because of the Chinese National Day and
> Flink Forward Europe, we didn't receive any verification on the 1.9.1
> RC1, and I guess we have to extend the voting time until after Flink
> Forward. So I'm fine with including FLINK-14315 and rebuilding another
> RC. What do you think @Till @Jincheng?
>
> I guess FLINK-14315 will be merged soon, as it was approved 4 days ago?
> Could you help to merge it once it has passed? @Zili Chen
> 
>
> Best,
> Jark
>
> On Tue, 8 Oct 2019 at 09:14, Zili Chen  wrote:
>
>> Hi Jark,
>>
>> I notice a critical bug [1] is marked resolved in 1.9.1, but given 1.9.1
>> has been cut, I'd like to raise the issue here so that we're sure
>> whether or not it is included in 1.9.1.
>>
>> Best,
>> tison.
>>
>> [1] https://issues.apache.org/jira/browse/FLINK-14315
>>
>>
>> Jark Wu wrote on Mon, Sep 30, 2019 at 3:25 PM:
>>
>>>  Hi everyone,
>>>
>>> Please review and vote on the release candidate #1 for the version 1.9.1,
>>> as follows:
>>> [ ] +1, Approve the release
>>> [ ] -1, Do not approve the release (please provide specific comments)
>>>
>>>
>>> The complete staging area is available for your review, which includes:
>>> * JIRA release notes [1],
>>> * the official Apache source release and binary convenience releases to
>>> be
>>> deployed to dist.apache.org [2], which are signed with the key with
>>> fingerprint E2C45417BED5C104154F341085BACB5AEFAE3202 [3],
>>> * all artifacts to be deployed to the Maven Central Repository [4],
>>> * source code tag "release-1.9.1-rc1" [5],
>>> * website pull request listing the new release and adding announcement
>>> blog
>>> post [6].
>>>
>>> The vote will be open for at least 72 hours.
>>> Please cast your votes before *Oct. 3rd 2019, 08:00 UTC*.
>>>
>>> It is adopted by majority approval, with at least 3 PMC affirmative
>>> votes.
>>>
>>> Thanks,
>>> Jark
>>>
>>> [1]
>>>
>>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12346003
>>> [2] https://dist.apache.org/repos/dist/dev/flink/flink-1.9.1-rc1/
>>> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
>>> [4]
>>> https://repository.apache.org/content/repositories/orgapacheflink-1272/
>>> [5]
>>>
>>> https://github.com/apache/flink/commit/4d56de81cb692c68a7d1dbfff13087a5079a8252
>>> [6] https://github.com/apache/flink-web/pull/274
>>>
>>


Re: Re: [DISCUSS] FLIP-65: New type inference for Table API UDFs

2019-10-08 Thread zha...@lenovocloud.com
unsubscribe



zha...@lenovocloud.com
 
From: Jark Wu
Date: 2019-10-08 17:29
To: dev
Subject: Re: [DISCUSS] FLIP-65: New type inference for Table API UDFs
Hi Timo,
 
Thanks for the proposal, a big +1 to the FLIP, especially as this enables the
unified `TableEnvironment.registerFunction()`.

I think the design documentation is already pretty good; I only left some
minor comments there.
 
Best,
Jark
 
On Fri, 4 Oct 2019 at 23:54, Timo Walther  wrote:
 
> Hi everyone,
>
> I would like to propose FLIP-65 that describes how we want to deal with
> data types and their inference/extraction in the Table API in the
> future. I have collected many comments, shortcomings and issues from users
> and trainings in recent years that went into the design. It completes the
> work started in FLIP-37 by upgrading the type system to DataTypes and
> allowing new use cases. Some key features of this FLIP are:
>
> - Type extraction closely coupled to the SQL standard (with
> precision/scale specification and complex data types)
>
> - Simple stuff is simple, missing information can be provided with
> annotations without the need to specify everything from scratch.
>
> - Full access to the planner's type inference, which makes it possible to
> create UDFs as powerful as built-in system functions
>
> - Unification of Scala and Java API to enable the unified
> TableEnvironment.registerFunction()
>
>
> The design document can be found here:
>
>
> https://docs.google.com/document/d/1Zf8-okGvCiTTRaTN0IqtTGrrjXYNGFJnhUQyinF4xcU/edit#
>
> I will convert it to a wiki page after the first review comments.
>
> Happy to hear your thoughts.
>
> Thanks,
>
> Timo
>
> [1]
>
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-37%3A+Rework+of+the+Table+API+Type+System
>
>


Re: [DISCUSS] FLIP-65: New type inference for Table API UDFs

2019-10-08 Thread Jark Wu
Hi Timo,

Thanks for the proposal, a big +1 to the FLIP, especially as this enables the
unified `TableEnvironment.registerFunction()`.

I think the design documentation is already pretty good; I only left some
minor comments there.
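
For readers skimming the thread, here is a hedged sketch of what the
hint-based extraction from the design doc could look like (the annotation
and class names follow the draft proposal and may still change):

import java.math.BigDecimal;
import java.math.RoundingMode;
import org.apache.flink.table.annotation.DataTypeHint;
import org.apache.flink.table.functions.ScalarFunction;

// A simple eval() signature would be extracted automatically; here the
// missing DECIMAL precision/scale is supplied via annotations instead of
// a hand-written type information class.
public class TruncateToCents extends ScalarFunction {
    public @DataTypeHint("DECIMAL(12, 2)") BigDecimal eval(
            @DataTypeHint("DECIMAL(12, 3)") BigDecimal amount) {
        return amount.setScale(2, RoundingMode.DOWN);
    }
}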

Best,
Jark

On Fri, 4 Oct 2019 at 23:54, Timo Walther  wrote:

> Hi everyone,
>
> I would like to propose FLIP-65 that describes how we want to deal with
> data types and their inference/extraction in the Table API in the
> future. I have collected many comments, shortcomings and issues from users
> and trainings in recent years that went into the design. It completes the
> work started in FLIP-37 by upgrading the type system to DataTypes and
> allowing new use cases. Some key features of this FLIP are:
>
> - Type extraction closely coupled to the SQL standard (with
> precision/scale specification and complex data types)
>
> - Simple stuff is simple, missing information can be provided with
> annotations without the need to specify everything from scratch.
>
> - Full access to the planner's type inference, which makes it possible to
> create UDFs as powerful as built-in system functions
>
> - Unification of Scala and Java API to enable the unified
> TableEnvironment.registerFunction()
>
>
> The design document can be found here:
>
>
> https://docs.google.com/document/d/1Zf8-okGvCiTTRaTN0IqtTGrrjXYNGFJnhUQyinF4xcU/edit#
>
> I will convert it to a wiki page after the first review comments.
>
> Happy to hear your thoughts.
>
> Thanks,
>
> Timo
>
> [1]
>
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-37%3A+Rework+of+the+Table+API+Type+System
>
>


[jira] [Created] (FLINK-14345) Snapshot deployments may fail due to MapR HTTPS issue

2019-10-08 Thread Chesnay Schepler (Jira)
Chesnay Schepler created FLINK-14345:


 Summary: Snapshot deployments may fail due to MapR HTTPS issue
 Key: FLINK-14345
 URL: https://issues.apache.org/jira/browse/FLINK-14345
 Project: Flink
  Issue Type: Bug
  Components: Release System
Affects Versions: 1.9.0, 1.8.2, 1.7.2, 1.10.0
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
 Fix For: 1.7.3, 1.10.0, 1.9.1, 1.8.3


Snapshot deployments occasionally fail since the MapR HTTPS repository cannot 
be verified in some environments.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-14344) Snapshot master hook state asynchronously

2019-10-08 Thread Biao Liu (Jira)
Biao Liu created FLINK-14344:


 Summary: Snapshot master hook state asynchronously
 Key: FLINK-14344
 URL: https://issues.apache.org/jira/browse/FLINK-14344
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Checkpointing
Reporter: Biao Liu
 Fix For: 1.10.0


Currently we snapshot the master hook state synchronously. As part of
reworking the threading model of {{CheckpointCoordinator}}, we have to make
this non-blocking to satisfy the requirement of running in the main thread.

The behavior of snapshotting master hook state should be similar to task state
snapshotting. It should be launched after the {{PendingCheckpoint}} is created,
and it could complete or fail the {{PendingCheckpoint}} like task state
snapshotting does.
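
A minimal sketch of the intended shape (illustrative names only, not the
actual {{CheckpointCoordinator}} API):
{code:java}
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executor;
import java.util.function.Supplier;

public final class AsyncHookSnapshotSketch {

    // Run the potentially blocking hook snapshot on an I/O executor and hand
    // the outcome back to the coordinator's main thread, so the main thread
    // never blocks. The returned future then completes or fails the pending
    // checkpoint, just like a task state acknowledgement would.
    public static <T> CompletableFuture<T> triggerHookSnapshot(
            Supplier<T> hookSnapshot,      // the master hook's state capture
            Executor ioExecutor,           // pool for blocking work
            Executor mainThreadExecutor) { // coordinator main thread
        CompletableFuture<T> result = new CompletableFuture<>();
        CompletableFuture
                .supplyAsync(hookSnapshot, ioExecutor)
                .whenCompleteAsync((state, error) -> {
                    if (error != null) {
                        result.completeExceptionally(error);
                    } else {
                        result.complete(state);
                    }
                }, mainThreadExecutor);
        return result;
    }
}
{code}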



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-14343) Remove uncompleted YARNHighAvailabilityService

2019-10-08 Thread Zili Chen (Jira)
Zili Chen created FLINK-14343:
-

 Summary: Remove uncompleted YARNHighAvailabilityService
 Key: FLINK-14343
 URL: https://issues.apache.org/jira/browse/FLINK-14343
 Project: Flink
  Issue Type: Task
  Components: Runtime / Coordination
Reporter: Zili Chen
Assignee: Zili Chen
 Fix For: 1.10.0


Corresponding mailing list 
[thread|https://lists.apache.org/x/thread.html/6022f2124be91e3f4667d61a977ea0639e2c19286560d6d1cb874792@%3Cdev.flink.apache.org%3E].

Having noticed that there are several stale & uncompleted high-availability
service implementations, I started this thread in order to see whether or not
we can remove them for a cleaner codebase.

Below are all of the classes I noticed.

- YarnHighAvailabilityServices
- AbstractYarnNonHaServices
- YarnIntraNonHaMasterServices
- YarnPreConfiguredMasterNonHaServices
- SingleLeaderElectionService
- FsNegativeRunningJobsRegistry
(as well as their dedicated tests)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)