Re: [VOTE] Release 1.5.1, release candidate #3

2018-07-10 Thread Yaz Sh
-1

./examples/streaming folder is missing in binary packages


Cheers,
Yazdan

> On Jul 10, 2018, at 9:57 PM, vino yang  wrote:
> 
> +1
> reviewed [1], [4] and [6]
> 
> 2018-07-11 3:10 GMT+08:00 Chesnay Schepler :
> 
>> Hi everyone,
>> Please review and vote on the release candidate #3 for the version 1.5.1,
>> as follows:
>> [ ] +1, Approve the release
>> [ ] -1, Do not approve the release (please provide specific comments)
>> 
>> 
>> The complete staging area is available for your review, which includes:
>> * JIRA release notes [1],
>> * the official Apache source release and binary convenience releases to be
>> deployed to dist.apache.org [2], which are signed with the key with
>> fingerprint 11D464BA [3],
>> * all artifacts to be deployed to the Maven Central Repository [4],
>> * source code tag "release-1.5.1-rc3" [5],
>> * website pull request listing the new release and adding announcement
>> blog post [6].
>> 
>> This RC is a slightly modified version of the previous RC, with most
>> release testing being applicable to both release candidates. The minimum
>> voting duration will hence be reduced to 24 hours. It is adopted by
>> majority approval, with at least 3 PMC affirmative votes.
>> 
>> Thanks,
>> Chesnay
>> 
>> [1] https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12343053
>> [2] https://dist.apache.org/repos/dist/dev/flink/1.5.1/
>> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
>> [4] https://repository.apache.org/content/repositories/orgapacheflink-1171
>> [5] https://git-wip-us.apache.org/repos/asf?p=flink.git;a=tag;h=refs/tags/release-1.5.1-rc3
>> [6] https://github.com/apache/flink-web/pull/112
>> 
>> 
>> 
>> 
>> 



Re: [VOTE] Release 1.5.1, release candidate #3

2018-07-10 Thread vino yang
+1
reviewed [1], [4] and [6]

2018-07-11 3:10 GMT+08:00 Chesnay Schepler :

> Hi everyone,
> Please review and vote on the release candidate #3 for the version 1.5.1,
> as follows:
> [ ] +1, Approve the release
> [ ] -1, Do not approve the release (please provide specific comments)
>
>
> The complete staging area is available for your review, which includes:
> * JIRA release notes [1],
> * the official Apache source release and binary convenience releases to be
> deployed to dist.apache.org [2], which are signed with the key with
> fingerprint 11D464BA [3],
> * all artifacts to be deployed to the Maven Central Repository [4],
> * source code tag "release-1.5.1-rc3" [5],
> * website pull request listing the new release and adding announcement
> blog post [6].
>
> This RC is a slightly modified version of the previous RC, with most
> release testing being applicable to both release candidates. The minimum
> voting duration will hence be reduced to 24 hours. It is adopted by
> majority approval, with at least 3 PMC affirmative votes.
>
> Thanks,
> Chesnay
>
> [1] https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12343053
> [2] https://dist.apache.org/repos/dist/dev/flink/1.5.1/
> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> [4] https://repository.apache.org/content/repositories/orgapacheflink-1171
> [5] https://git-wip-us.apache.org/repos/asf?p=flink.git;a=tag;h=refs/tags/release-1.5.1-rc3
> [6] https://github.com/apache/flink-web/pull/112
>
>
>
>
>


RE: 'mvn verify' fails at flink-hadoop-fs

2018-07-10 Thread NEKRASSOV, ALEXEI
I now see that the test that fails for me already has this check. But I don't 
think it is done correctly.

https://github.com/apache/flink/blob/master/flink-filesystems/flink-hadoop-fs/src/test/java/org/apache/flink/runtime/fs/hdfs/HdfsBehaviorTest.java
 has verifyOS() but a failed assumption in that method doesn't stop us from 
executing createHDFS() where there is no check, and thus the test fails.

I think we need to remove verifyOS(), and move the check directly to 
createHDFS() instead.


Similar problem may exist in:
flink-connectors\flink-connector-filesystem\src\test\java\org\apache\flink\streaming\connectors\fs\bucketing\BucketingSinkMigrationTest.java
flink-fs-tests\src\test\java\org\apache\flink\hdfstests\ContinuousFileProcessingMigrationTest.java
flink-fs-tests\src\test\java\org\apache\flink\hdfstests\HDFSTest.java
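The proposed fix can be sketched in plain Java (illustrative names only, not the actual Flink test code; the real fix would call JUnit's Assume.assumeTrue at the top of the @BeforeClass createHDFS() method):

```java
// Plain-Java sketch of moving the OS check into the setup method itself.
// A failed assumption in a *different* method does not stop JUnit from
// running @BeforeClass setup, so the check must come first in createHDFS(),
// before the MiniDFSCluster is started. All names here are illustrative.
public class SetupOrderSketch {
    static boolean hdfsStarted = false;

    static boolean osSupported() {
        return !System.getProperty("os.name").toLowerCase().startsWith("windows");
    }

    // Stand-in for the @BeforeClass createHDFS(): check first, then set up.
    static void createHDFS() {
        if (!osSupported()) {
            return; // in JUnit: Assume.assumeTrue(osSupported());
        }
        hdfsStarted = true; // stand-in for new MiniDFSCluster.Builder(...).build()
    }

    public static void main(String[] args) {
        createHDFS();
        System.out.println("hdfsStarted = " + hdfsStarted);
    }
}
```

With the check inlined, the expensive cluster setup is only reached on supported operating systems.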

-Original Message-
From: Chesnay Schepler [mailto:ches...@apache.org] 
Sent: Tuesday, July 10, 2018 3:10 PM
To: dev@flink.apache.org; NEKRASSOV, ALEXEI 
Subject: Re: 'mvn verify' fails at flink-hadoop-fs

That flat-out disables all tests in the module, even those that could run on 
Windows.

We commonly add an OS check to the respective tests that skips them, with an 
"Assume.assumeTrue(os != windows)" statement in a "@BeforeClass" 
method.

On 10.07.2018 21:00, NEKRASSOV, ALEXEI wrote:
> I added lines below to flink-hadoop-fs/pom.xml, and that allowed me to turn 
> off the tests that were failing for me.
> Do we want to add this change to master?
> If so, do I need to document this new switch somewhere?
>
> (
> the build then hangs for me at flink-runtime, but that's a different issue:
> Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed:
> 25.969 sec - in 
> org.apache.flink.runtime.taskmanager.TaskManagerRegistrationTest
> )
>
> <build>
>   <plugins>
>     <plugin>
>       <groupId>org.apache.maven.plugins</groupId>
>       <artifactId>maven-surefire-plugin</artifactId>
>       <configuration>
>         <skipTests>${skipHdfsTests}</skipTests>
>       </configuration>
>     </plugin>
>   </plugins>
> </build>
>
> -Original Message-
> From: Chesnay Schepler [mailto:ches...@apache.org]
> Sent: Tuesday, July 10, 2018 10:36 AM
> To: dev@flink.apache.org; NEKRASSOV, ALEXEI 
> Subject: Re: 'mvn verify' fails at flink-hadoop-fs
>
> There's currently no workaround except going in and manually disabling them.
>
> On 10.07.2018 16:32, Chesnay Schepler wrote:
>> Generally, any test that uses HDFS will fail on Windows. We've 
>> disabled most of them, but some slip through from time to time.
>>
>> Note that we do not provide any guarantees for all tests passing on 
>> Windows.
>>
>> On 10.07.2018 16:28, NEKRASSOV, ALEXEI wrote:
>>> I'm running 'mvn clean verify' on Windows with no Hadoop libraries 
>>> installed, and the build fails (see below).
>>> What's the solution? Is there a switch to skip Hadoop-related tests?
>>> Or do I need to install Hadoop libraries?
>>>
>>> Thanks,
>>> Alex
>>>
>>>
>>> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
>>> 1.726 sec <<< FAILURE! - in 
>>> org.apache.flink.runtime.fs.hdfs.HdfsBehaviorTest
>>> org.apache.flink.runtime.fs.hdfs.HdfsBehaviorTest  Time elapsed:
>>> 1.726 sec  <<< ERROR!
>>> java.lang.UnsatisfiedLinkError:
>>> org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
>>>   at
>>> org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method)
>>>   at
>>> org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:570)
>>>   at org.apache.hadoop.fs.FileUtil.canWrite(FileUtil.java:996)
>>>   at
>>> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:484)
>>>   at
>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:293)
>>>   at
>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:202)
>>>   at
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:891)
>>>   at
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:638)
>>>   at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:503)
>>>   at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:559)
>>>   at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:724)
>>>   at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:708)
>>>   at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1358)
>>>   at
>>> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:996)
>>>   at
>>> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:867)
>>>   at
>>> 

[VOTE] Release 1.5.1, release candidate #3

2018-07-10 Thread Chesnay Schepler

Hi everyone,
Please review and vote on the release candidate #3 for the version 
1.5.1, as follows:

[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific comments)


The complete staging area is available for your review, which includes:
* JIRA release notes [1],
* the official Apache source release and binary convenience releases to 
be deployed to dist.apache.org [2], which are signed with the key with 
fingerprint 11D464BA [3],

* all artifacts to be deployed to the Maven Central Repository [4],
* source code tag "release-1.5.1-rc3" [5],
* website pull request listing the new release and adding announcement 
blog post [6].


This RC is a slightly modified version of the previous RC, with most 
release testing being applicable to both release candidates. The minimum 
voting duration will hence be reduced to 24 hours. It is adopted by 
majority approval, with at least 3 PMC affirmative votes.


Thanks,
Chesnay

[1] https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12343053

[2] https://dist.apache.org/repos/dist/dev/flink/1.5.1/
[3] https://dist.apache.org/repos/dist/release/flink/KEYS
[4] https://repository.apache.org/content/repositories/orgapacheflink-1171
[5] https://git-wip-us.apache.org/repos/asf?p=flink.git;a=tag;h=refs/tags/release-1.5.1-rc3

[6] https://github.com/apache/flink-web/pull/112






Re: 'mvn verify' fails at flink-hadoop-fs

2018-07-10 Thread Chesnay Schepler
That flat-out disables all tests in the module, even those that could 
run on Windows.


We commonly add an OS check to the respective tests that skips them, 
with an "Assume.assumeTrue(os != windows)" statement in a "@BeforeClass" 
method.
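As a plain-Java illustration (not the actual Flink test code), the predicate behind such an assumption can be written as:

```java
// Plain-Java sketch of the predicate behind Assume.assumeTrue(os != windows).
// In a real JUnit test this boolean would be passed to Assume.assumeTrue()
// inside a @BeforeClass method, which skips the whole class on Windows
// instead of failing it.
public class OsCheck {
    public static boolean isWindows() {
        return System.getProperty("os.name").toLowerCase().startsWith("windows");
    }

    public static void main(String[] args) {
        System.out.println("windows: " + isWindows());
    }
}
```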


On 10.07.2018 21:00, NEKRASSOV, ALEXEI wrote:

I added lines below to flink-hadoop-fs/pom.xml, and that allowed me to turn off 
the tests that were failing for me.
Do we want to add this change to master?
If so, do I need to document this new switch somewhere?

(
the build then hangs for me at flink-runtime, but that's a different issue:
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.969 sec - in 
org.apache.flink.runtime.taskmanager.TaskManagerRegistrationTest
)

<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-surefire-plugin</artifactId>
      <configuration>
        <skipTests>${skipHdfsTests}</skipTests>
      </configuration>
    </plugin>
  </plugins>
</build>

-Original Message-
From: Chesnay Schepler [mailto:ches...@apache.org]
Sent: Tuesday, July 10, 2018 10:36 AM
To: dev@flink.apache.org; NEKRASSOV, ALEXEI 
Subject: Re: 'mvn verify' fails at flink-hadoop-fs

There's currently no workaround except going in and manually disabling them.

On 10.07.2018 16:32, Chesnay Schepler wrote:

Generally, any test that uses HDFS will fail on Windows. We've
disabled most of them, but some slip through from time to time.

Note that we do not provide any guarantees for all tests passing on
Windows.

On 10.07.2018 16:28, NEKRASSOV, ALEXEI wrote:

I'm running 'mvn clean verify' on Windows with no Hadoop libraries
installed, and the build fails (see below).
What's the solution? Is there a switch to skip Hadoop-related tests?
Or do I need to install Hadoop libraries?

Thanks,
Alex


Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 1.726
sec <<< FAILURE! - in
org.apache.flink.runtime.fs.hdfs.HdfsBehaviorTest
org.apache.flink.runtime.fs.hdfs.HdfsBehaviorTest  Time elapsed:
1.726 sec  <<< ERROR!
java.lang.UnsatisfiedLinkError:
org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
  at
org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method)
  at
org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:570)
  at org.apache.hadoop.fs.FileUtil.canWrite(FileUtil.java:996)
  at
org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:484)
  at
org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:293)
  at
org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:202)
  at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:891)
  at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:638)
  at
org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:503)
  at
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:559)
  at
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:724)
  at
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:708)
  at
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1358)
  at
org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:996)
  at
org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:867)
  at
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:702)
  at
org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:374)
  at
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:355)
  at
org.apache.flink.runtime.fs.hdfs.HdfsBehaviorTest.createHDFS(HdfsBeha
viorTest.java:65)

Running org.apache.flink.runtime.fs.hdfs.HdfsKindTest
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.081
sec - in org.apache.flink.runtime.fs.hdfs.HdfsKindTest
Running
org.apache.flink.runtime.fs.hdfs.LimitedConnectionsConfigurationTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.017
sec - in
org.apache.flink.runtime.fs.hdfs.LimitedConnectionsConfigurationTest

Results :

Tests in error:
HdfsBehaviorTest.createHDFS:65 » UnsatisfiedLink
org.apache.hadoop.io.nativeio...

Tests run: 24, Failures: 0, Errors: 1, Skipped: 1

[INFO]
-
---
[INFO] Reactor Summary:
[INFO]
[INFO] force-shading .. SUCCESS [
2.335 s] [INFO] flink ..
SUCCESS [
29.794 s]
[INFO] flink-annotations .. SUCCESS [
2.198 s] [INFO] flink-shaded-hadoop 
SUCCESS [  0.226 s] [INFO] 

RE: 'mvn verify' fails at flink-hadoop-fs

2018-07-10 Thread NEKRASSOV, ALEXEI
I added lines below to flink-hadoop-fs/pom.xml, and that allowed me to turn off 
the tests that were failing for me. 
Do we want to add this change to master?
If so, do I need to document this new switch somewhere?

(
the build then hangs for me at flink-runtime, but that's a different issue:
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.969 sec - in 
org.apache.flink.runtime.taskmanager.TaskManagerRegistrationTest
)

<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-surefire-plugin</artifactId>
      <configuration>
        <skipTests>${skipHdfsTests}</skipTests>
      </configuration>
    </plugin>
  </plugins>
</build>

-Original Message-
From: Chesnay Schepler [mailto:ches...@apache.org] 
Sent: Tuesday, July 10, 2018 10:36 AM
To: dev@flink.apache.org; NEKRASSOV, ALEXEI 
Subject: Re: 'mvn verify' fails at flink-hadoop-fs

There's currently no workaround except going in and manually disabling them.

On 10.07.2018 16:32, Chesnay Schepler wrote:
> Generally, any test that uses HDFS will fail on Windows. We've 
> disabled most of them, but some slip through from time to time.
>
> Note that we do not provide any guarantees for all tests passing on 
> Windows.
>
> On 10.07.2018 16:28, NEKRASSOV, ALEXEI wrote:
>> I'm running 'mvn clean verify' on Windows with no Hadoop libraries 
>> installed, and the build fails (see below).
>> What's the solution? Is there a switch to skip Hadoop-related tests?
>> Or do I need to install Hadoop libraries?
>>
>> Thanks,
>> Alex
>>
>>
>> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 1.726 
>> sec <<< FAILURE! - in 
>> org.apache.flink.runtime.fs.hdfs.HdfsBehaviorTest
>> org.apache.flink.runtime.fs.hdfs.HdfsBehaviorTest  Time elapsed: 
>> 1.726 sec  <<< ERROR!
>> java.lang.UnsatisfiedLinkError: 
>> org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
>>  at
>> org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method)
>>  at
>> org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:570)
>>  at org.apache.hadoop.fs.FileUtil.canWrite(FileUtil.java:996)
>>  at
>> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:484)
>>  at
>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:293)
>>  at
>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:202)
>>  at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:891)
>>  at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:638)
>>  at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:503)
>>  at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:559)
>>  at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:724)
>>  at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:708)
>>  at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1358)
>>  at
>> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:996)
>>  at
>> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:867)
>>  at
>> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:702)
>>  at
>> org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:374)
>>  at
>> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:355)
>>  at
>> org.apache.flink.runtime.fs.hdfs.HdfsBehaviorTest.createHDFS(HdfsBeha
>> viorTest.java:65)
>>
>> Running org.apache.flink.runtime.fs.hdfs.HdfsKindTest
>> Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.081 
>> sec - in org.apache.flink.runtime.fs.hdfs.HdfsKindTest
>> Running
>> org.apache.flink.runtime.fs.hdfs.LimitedConnectionsConfigurationTest
>> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.017 
>> sec - in 
>> org.apache.flink.runtime.fs.hdfs.LimitedConnectionsConfigurationTest
>>
>> Results :
>>
>> Tests in error:
>>    HdfsBehaviorTest.createHDFS:65 » UnsatisfiedLink 
>> org.apache.hadoop.io.nativeio...
>>
>> Tests run: 24, Failures: 0, Errors: 1, Skipped: 1
>>
>> [INFO]
>> -
>> ---
>> [INFO] Reactor Summary:
>> [INFO]
>> [INFO] force-shading .. SUCCESS [  
>> 2.335 s] [INFO] flink .. 
>> SUCCESS [
>> 29.794 s]
>> [INFO] flink-annotations .. SUCCESS [  
>> 2.198 s] [INFO] flink-shaded-hadoop  
>> SUCCESS [  0.226 s] [INFO] flink-shaded-hadoop2 
>> 

[jira] [Created] (FLINK-9796) Add failure handling to release guide

2018-07-10 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-9796:
---

 Summary: Add failure handling to release guide
 Key: FLINK-9796
 URL: https://issues.apache.org/jira/browse/FLINK-9796
 Project: Flink
  Issue Type: Improvement
  Components: Release System
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler


The release guide only contains instructions for the happy path, but no 
information on canceling a release.
Non-exhaustive list of things to include:
* remove uploaded source/binary release from dist.apache.org
* drop repository in nexus manager



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9795) Update Mesos documentation for flip6

2018-07-10 Thread Leonid Ishimnikov (JIRA)
Leonid Ishimnikov created FLINK-9795:


 Summary: Update Mesos documentation for flip6
 Key: FLINK-9795
 URL: https://issues.apache.org/jira/browse/FLINK-9795
 Project: Flink
  Issue Type: Improvement
  Components: Documentation
Affects Versions: 1.5.0, 1.6.0
Reporter: Leonid Ishimnikov


Mesos documentation would benefit from an overhaul after flip6 became the 
default cluster management model.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9794) JDBCOutputFormat does not consider idle connection and multithreads synchronization

2018-07-10 Thread wangsan (JIRA)
wangsan created FLINK-9794:
--

 Summary: JDBCOutputFormat does not consider idle connection and 
multithreads synchronization
 Key: FLINK-9794
 URL: https://issues.apache.org/jira/browse/FLINK-9794
 Project: Flink
  Issue Type: Bug
  Components: Streaming Connectors
Affects Versions: 1.5.0, 1.4.0
Reporter: wangsan


The current implementation of JDBCOutputFormat has two potential problems:

1. The Connection is established when JDBCOutputFormat is opened and is then used 
for the lifetime of the sink. If this connection lies idle for a long time, the 
database may forcibly close it, and subsequent writes will fail with errors.
2. The flush() method is called when batchCount exceeds the threshold, but it 
is also called while snapshotting state. Two threads may therefore modify upload 
and batchCount without synchronization.

We need to fix these two problems to make JDBCOutputFormat more reliable.
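The second problem can be sketched as follows (illustrative Java only, not the actual JDBCOutputFormat implementation; a List stands in for the JDBC batch and a counter for executeBatch()):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only: guard the shared batch with one lock so the task
// thread (write path) and the checkpointing thread (snapshot path) cannot
// mutate the batch or the counter concurrently.
public class GuardedBatchWriter {
    private final Object lock = new Object();
    private final List<String> batch = new ArrayList<>();
    private final int batchInterval = 2; // stand-in for the batchCount threshold
    private int flushed = 0;

    public void write(String row) {
        synchronized (lock) {
            batch.add(row);
            if (batch.size() >= batchInterval) {
                flushLocked();
            }
        }
    }

    // Called from both the write path and the snapshot path.
    public void flush() {
        synchronized (lock) {
            flushLocked();
        }
    }

    private void flushLocked() {
        flushed += batch.size(); // stand-in for upload.executeBatch()
        batch.clear();
    }

    public int flushedCount() {
        synchronized (lock) {
            return flushed;
        }
    }

    public static void main(String[] args) {
        GuardedBatchWriter w = new GuardedBatchWriter();
        w.write("a");
        w.write("b");
        w.write("c");
        w.flush();
        System.out.println("rows flushed: " + w.flushedCount());
    }
}
```

Holding a single lock across both callers keeps the batch and its counter consistent without changing the flush semantics.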



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Confusions About JDBCOutputFormat

2018-07-10 Thread wangsan
Hi Hequn,

Establishing a connection for each batch write may also run into the idle-connection 
problem, since we cannot be sure when the connection will be closed. We call the 
flush() method when a batch is finished or when we snapshot state, but what if 
snapshotting is not enabled and the batch size is not reached before the connection 
is closed?

Maybe we could use a Timer to test the connection periodically and keep it 
alive. What do you think?
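The Timer idea can be sketched like this (a hypothetical illustration, not Flink code; a countdown stands in for the real validity probe, which would call java.sql.Connection#isValid(int) and re-open the connection on failure):

```java
import java.util.Timer;
import java.util.TimerTask;
import java.util.concurrent.CountDownLatch;

// Sketch of a Timer-based keep-alive: a daemon Timer fires periodically and
// "probes" the connection. Here the probe just counts down a latch so the
// example is deterministic and self-contained.
public class KeepAliveSketch {
    // Runs n keep-alive probes at the given period and returns the count.
    static int runProbes(int n, long periodMillis) {
        CountDownLatch latch = new CountDownLatch(n);
        Timer timer = new Timer(true); // daemon: does not block JVM exit
        timer.scheduleAtFixedRate(new TimerTask() {
            @Override
            public void run() {
                // stand-in for: if (!connection.isValid(1)) reconnect();
                latch.countDown();
            }
        }, 0, periodMillis);
        try {
            latch.await(); // wait until the task has fired n times
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        timer.cancel();
        return n;
    }

    public static void main(String[] args) {
        System.out.println("probes: " + runProbes(3, 20));
    }
}
```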

I will open a jira and try to work on that issue.

Best, 
wangsan



> On Jul 10, 2018, at 8:38 PM, Hequn Cheng  wrote:
> 
> Hi wangsan,
> 
> I agree with you. It would be kind of you to open a jira to check the problem.
> 
> For the first problem, I think we need to establish a connection each time we 
> execute a batch write. And it is better to get the connection from a 
> connection pool.
> For the second problem, to avoid multithreading issues, I think we should 
> synchronize on the batch object in the flush() method.
> 
> What do you think?
> 
> Best, Hequn
> 
> 
> 
> On Tue, Jul 10, 2018 at 2:36 PM, wangsan wrote:
> Hi all,
> 
> I'm going to use JDBCAppendTableSink and JDBCOutputFormat in my Flink 
> application. But I am confused with the implementation of JDBCOutputFormat.
> 
> 1. The Connection was established when JDBCOutputFormat is opened, and will 
> be used all the time. But if this connection lies idle for a long time, the 
> database will force close the connection, thus errors may occur.
> 2. The flush() method is called when batchCount exceeds the threshold, but it 
> is also called while snapshotting state. So two threads may modify upload and 
> batchCount, but without synchronization.
> 
> Please correct me if I am wrong.
> 
> ——
> wangsan
> 



Re: 'mvn verify' fails at flink-hadoop-fs

2018-07-10 Thread Chesnay Schepler

There's currently no workaround except going in and manually disabling them.

On 10.07.2018 16:32, Chesnay Schepler wrote:
Generally, any test that uses HDFS will fail on Windows. We've 
disabled most of them, but some slip through from time to time.


Note that we do not provide any guarantees for all tests passing on 
Windows.


On 10.07.2018 16:28, NEKRASSOV, ALEXEI wrote:
I'm running 'mvn clean verify' on Windows with no Hadoop libraries 
installed, and the build fails (see below).

What's the solution? Is there a switch to skip Hadoop-related tests?
Or do I need to install Hadoop libraries?

Thanks,
Alex


Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 1.726 
sec <<< FAILURE! - in org.apache.flink.runtime.fs.hdfs.HdfsBehaviorTest
org.apache.flink.runtime.fs.hdfs.HdfsBehaviorTest  Time elapsed: 
1.726 sec  <<< ERROR!
java.lang.UnsatisfiedLinkError: 
org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
 at 
org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method)
 at 
org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:570)

 at org.apache.hadoop.fs.FileUtil.canWrite(FileUtil.java:996)
 at 
org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:484)
 at 
org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:293)
 at 
org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:202)
 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:891)
 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:638)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:503)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:559)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:724)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:708)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1358)
 at 
org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:996)
 at 
org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:867)
 at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:702)
 at 
org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:374)
 at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:355)
 at 
org.apache.flink.runtime.fs.hdfs.HdfsBehaviorTest.createHDFS(HdfsBehaviorTest.java:65)


Running org.apache.flink.runtime.fs.hdfs.HdfsKindTest
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.081 
sec - in org.apache.flink.runtime.fs.hdfs.HdfsKindTest
Running 
org.apache.flink.runtime.fs.hdfs.LimitedConnectionsConfigurationTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.017 
sec - in 
org.apache.flink.runtime.fs.hdfs.LimitedConnectionsConfigurationTest


Results :

Tests in error:
   HdfsBehaviorTest.createHDFS:65 » UnsatisfiedLink 
org.apache.hadoop.io.nativeio...


Tests run: 24, Failures: 0, Errors: 1, Skipped: 1

[INFO] 


[INFO] Reactor Summary:
[INFO]
[INFO] force-shading .. SUCCESS 
[  2.335 s]
[INFO] flink .. SUCCESS [ 
29.794 s]
[INFO] flink-annotations .. SUCCESS 
[  2.198 s]
[INFO] flink-shaded-hadoop  SUCCESS 
[  0.226 s]
[INFO] flink-shaded-hadoop2 ... SUCCESS [ 
11.015 s]
[INFO] flink-shaded-hadoop2-uber .. SUCCESS [ 
16.343 s]
[INFO] flink-shaded-yarn-tests  SUCCESS [ 
13.653 s]
[INFO] flink-shaded-curator ... SUCCESS 
[  1.386 s]
[INFO] flink-test-utils-parent  SUCCESS 
[  0.191 s]
[INFO] flink-test-utils-junit . SUCCESS 
[  3.318 s]
[INFO] flink-metrics .. SUCCESS 
[  0.212 s]
[INFO] flink-metrics-core . SUCCESS 
[  3.502 s]
[INFO] flink-core . SUCCESS 
[01:30 min]
[INFO] flink-java . SUCCESS 
[01:31 min]
[INFO] flink-queryable-state .. SUCCESS 
[  0.186 s]
[INFO] flink-queryable-state-client-java .. SUCCESS 
[  4.099 s]
[INFO] flink-filesystems .. SUCCESS 
[  0.198 s]
[INFO] flink-hadoop-fs  FAILURE 
[  8.672 s]












Re: 'mvn verify' fails at flink-hadoop-fs

2018-07-10 Thread Chesnay Schepler
Generally, any test that uses HDFS will fail on Windows. We've disabled 
most of them, but some slip through from time to time.


Note that we do not provide any guarantees for all tests passing on Windows.

On 10.07.2018 16:28, NEKRASSOV, ALEXEI wrote:

I'm running 'mvn clean verify' on Windows with no Hadoop libraries installed, 
and the build fails (see below).
What's the solution? Is there a switch to skip Hadoop-related tests?
Or do I need to install Hadoop libraries?

Thanks,
Alex


Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 1.726 sec <<< 
FAILURE! - in org.apache.flink.runtime.fs.hdfs.HdfsBehaviorTest
org.apache.flink.runtime.fs.hdfs.HdfsBehaviorTest  Time elapsed: 1.726 sec  <<< 
ERROR!
java.lang.UnsatisfiedLinkError: 
org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
 at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native 
Method)
 at 
org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:570)
 at org.apache.hadoop.fs.FileUtil.canWrite(FileUtil.java:996)
 at 
org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:484)
 at 
org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:293)
 at 
org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:202)
 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:891)
 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:638)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:503)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:559)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:724)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:708)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1358)
 at 
org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:996)
 at 
org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:867)
 at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:702)
 at 
org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:374)
 at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:355)
 at 
org.apache.flink.runtime.fs.hdfs.HdfsBehaviorTest.createHDFS(HdfsBehaviorTest.java:65)

Running org.apache.flink.runtime.fs.hdfs.HdfsKindTest
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.081 sec - in 
org.apache.flink.runtime.fs.hdfs.HdfsKindTest
Running org.apache.flink.runtime.fs.hdfs.LimitedConnectionsConfigurationTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.017 sec - in 
org.apache.flink.runtime.fs.hdfs.LimitedConnectionsConfigurationTest

Results :

Tests in error:
   HdfsBehaviorTest.createHDFS:65 » UnsatisfiedLink 
org.apache.hadoop.io.nativeio...

Tests run: 24, Failures: 0, Errors: 1, Skipped: 1

[INFO] 
[INFO] Reactor Summary:
[INFO]
[INFO] force-shading .. SUCCESS [  2.335 s]
[INFO] flink .. SUCCESS [ 29.794 s]
[INFO] flink-annotations .. SUCCESS [  2.198 s]
[INFO] flink-shaded-hadoop  SUCCESS [  0.226 s]
[INFO] flink-shaded-hadoop2 ... SUCCESS [ 11.015 s]
[INFO] flink-shaded-hadoop2-uber .. SUCCESS [ 16.343 s]
[INFO] flink-shaded-yarn-tests  SUCCESS [ 13.653 s]
[INFO] flink-shaded-curator ... SUCCESS [  1.386 s]
[INFO] flink-test-utils-parent  SUCCESS [  0.191 s]
[INFO] flink-test-utils-junit . SUCCESS [  3.318 s]
[INFO] flink-metrics .. SUCCESS [  0.212 s]
[INFO] flink-metrics-core . SUCCESS [  3.502 s]
[INFO] flink-core . SUCCESS [01:30 min]
[INFO] flink-java . SUCCESS [01:31 min]
[INFO] flink-queryable-state .. SUCCESS [  0.186 s]
[INFO] flink-queryable-state-client-java .. SUCCESS [  4.099 s]
[INFO] flink-filesystems .. SUCCESS [  0.198 s]
[INFO] flink-hadoop-fs  FAILURE [  8.672 s]








'mvn verify' fails at flink-hadoop-fs

2018-07-10 Thread NEKRASSOV, ALEXEI
I'm running 'mvn clean verify' on Windows with no Hadoop libraries installed, 
and the build fails (see below).
What's the solution? Is there a switch to skip Hadoop-related tests?
Or do I need to install Hadoop libraries?

Thanks,
Alex


Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 1.726 sec <<< 
FAILURE! - in org.apache.flink.runtime.fs.hdfs.HdfsBehaviorTest
org.apache.flink.runtime.fs.hdfs.HdfsBehaviorTest  Time elapsed: 1.726 sec  <<< 
ERROR!
java.lang.UnsatisfiedLinkError: 
org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method)
at 
org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:570)
at org.apache.hadoop.fs.FileUtil.canWrite(FileUtil.java:996)
at 
org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:484)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:293)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:202)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:891)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:638)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:503)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:559)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:724)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:708)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1358)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:996)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:867)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:702)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:374)
at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:355)
at 
org.apache.flink.runtime.fs.hdfs.HdfsBehaviorTest.createHDFS(HdfsBehaviorTest.java:65)

Running org.apache.flink.runtime.fs.hdfs.HdfsKindTest
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.081 sec - in 
org.apache.flink.runtime.fs.hdfs.HdfsKindTest
Running org.apache.flink.runtime.fs.hdfs.LimitedConnectionsConfigurationTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.017 sec - in 
org.apache.flink.runtime.fs.hdfs.LimitedConnectionsConfigurationTest

Results :

Tests in error:
  HdfsBehaviorTest.createHDFS:65 » UnsatisfiedLink 
org.apache.hadoop.io.nativeio...

Tests run: 24, Failures: 0, Errors: 1, Skipped: 1

[INFO] 
[INFO] Reactor Summary:
[INFO]
[INFO] force-shading .. SUCCESS [  2.335 s]
[INFO] flink .. SUCCESS [ 29.794 s]
[INFO] flink-annotations .. SUCCESS [  2.198 s]
[INFO] flink-shaded-hadoop  SUCCESS [  0.226 s]
[INFO] flink-shaded-hadoop2 ... SUCCESS [ 11.015 s]
[INFO] flink-shaded-hadoop2-uber .. SUCCESS [ 16.343 s]
[INFO] flink-shaded-yarn-tests  SUCCESS [ 13.653 s]
[INFO] flink-shaded-curator ... SUCCESS [  1.386 s]
[INFO] flink-test-utils-parent  SUCCESS [  0.191 s]
[INFO] flink-test-utils-junit . SUCCESS [  3.318 s]
[INFO] flink-metrics .. SUCCESS [  0.212 s]
[INFO] flink-metrics-core . SUCCESS [  3.502 s]
[INFO] flink-core . SUCCESS [01:30 min]
[INFO] flink-java . SUCCESS [01:31 min]
[INFO] flink-queryable-state .. SUCCESS [  0.186 s]
[INFO] flink-queryable-state-client-java .. SUCCESS [  4.099 s]
[INFO] flink-filesystems .. SUCCESS [  0.198 s]
[INFO] flink-hadoop-fs  FAILURE [  8.672 s]
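On the switch Alex asks about: there is no Hadoop-specific flag I am aware of, but two generic Maven options usually help in this situation (a sketch, untested on this tree; the module name is taken from the reactor output above):

```sh
# Skip test execution entirely (compilation and packaging still run)
mvn clean verify -DskipTests

# Or exclude the failing module from the reactor (Maven >= 3.2.1)
mvn clean verify -pl '!flink-hadoop-fs'
```

Alternatively, installing winutils.exe and hadoop.dll for the matching Hadoop version and setting HADOOP_HOME is the usual way to make the NativeIO calls resolve on Windows.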





[jira] [Created] (FLINK-9793) flink on yarn: flink-dist*.jar is uploaded repeatedly when submitting in yarn-cluster mode

2018-07-10 Thread linzhongjun (JIRA)
linzhongjun created FLINK-9793:
--

 Summary: flink on yarn: flink-dist*.jar is uploaded repeatedly when submitting in yarn-cluster mode
 Key: FLINK-9793
 URL: https://issues.apache.org/jira/browse/FLINK-9793
 Project: Flink
  Issue Type: Improvement
Affects Versions: 1.4.2
Reporter: linzhongjun






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


RE: 'mvn verify' fails on rat plugin

2018-07-10 Thread NEKRASSOV, ALEXEI
HeartSaVioR, thanks for the helpful pointer!
PR created: https://github.com/apache/flink/pull/6295


-Original Message-
From: Jungtaek Lim [mailto:kabh...@gmail.com] 
Sent: Monday, July 09, 2018 11:43 PM
To: dev@flink.apache.org
Cc: Chesnay Schepler 
Subject: Re: 'mvn verify' fails on rat plugin

Alex,

I've already submitted similar kind of patch without explicit JIRA issue (
https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_apache_flink_pull_6256=DwIFaQ=LFYZ-o9_HUMeMTSQicvjIg=aQRKg6d5fsz42zXkyiSdqg=tTDLBdeCUA6c3QaIR_k3b3fV8-z82q1gcI1bA2HHna4=gnWwo2FrEO0aXWkIffbOFdob5aE3Vnst-7Fr-JAC26M=)
 so I guess you don't need to file one.

Thanks,
Jungtaek Lim (HeartSaVioR)

On Tue, Jul 10, 2018 at 8:57 AM, NEKRASSOV, ALEXEI wrote:

> Does such a change require JIRA?
>
> https://urldefense.proofpoint.com/v2/url?u=https-3A__flink.apache.org_
> contribute-2Dcode.html=DwIFaQ=LFYZ-o9_HUMeMTSQicvjIg=aQRKg6d5fsz
> 42zXkyiSdqg=tTDLBdeCUA6c3QaIR_k3b3fV8-z82q1gcI1bA2HHna4=oyBnxCDK_Gcgia7TqF56V6en_K41aaN-0lxXk2nv6Ak=
>  says: "... with an exception for trivial hot fixes..." but when I go to 
> create a pull request, I see: "Exceptions are made for typos in JavaDoc or 
> documentation".
> I see inconsistency here... :)
>
>
> -Original Message-
> From: Chesnay Schepler [mailto:ches...@apache.org]
> Sent: Monday, July 09, 2018 4:24 PM
> To: dev@flink.apache.org; NEKRASSOV, ALEXEI 
> Subject: Re: 'mvn verify' fails on rat plugin
>
> These files are automatically detected as binary files on unix and 
> thus automatically ignored by the RAT plugin, but not on Windows.
> Most Flink devs run unix and don't run into this problem, and thus 
> don't define a proper exclusion.
> This happens regularly.
>
> It would be great if you could open a PR to add the missing exclusions.
>
> On 09.07.2018 21:47, NEKRASSOV, ALEXEI wrote:
> >
> https://urldefense.proofpoint.com/v2/url?u=https-3A__flink.apache.org_
> contribute-2Dcode.html=DwIC3g=LFYZ-o9_HUMeMTSQicvjIg=aQRKg6d5fsz
> 42zXkyiSdqg=BtUtXJtmUmFVBy9uXGsQs21fwy5dSvOHpeiXxLefQd8=_8QQZVgu0H
> FV2KxPaD6g8q4DHpRdXdvbjmV4u_STieg=
> recommends running 'mvn clean verify', but when I do (on master), the 
> build
> fails:
> >
> > ...
> > [INFO] Parsing exclusions from C:\usr\git\flink\.gitignore ...
> > [INFO] Rat check: Summary over all files. Unapproved: 2, unknown: 2,
> generated: 0, approved: 8064 licenses.
> > ...
> > [INFO] flink .. FAILURE [ 20.726 s]
> >
> > Files with unapproved licenses:
> >
> >
> > flink-runtime/src/test/resources/heap_keyed_statebackend_1_2.snapshot
> >
> > flink-runtime/src/test/resources/heap_keyed_statebackend_1_5_map.snapshot
> >
> > Do we need to update .gitignore? For these two files only, or for some
> > new pattern?
> >
> > Thanks,
> > Alex
> >
>
>
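The missing exclusion Chesnay asks for would go into the apache-rat-plugin configuration of the root pom.xml. A sketch of the shape such an entry takes (the plugin's existing configuration in Flink's pom and the exact exclude pattern are assumptions):

```xml
<plugin>
  <groupId>org.apache.rat</groupId>
  <artifactId>apache-rat-plugin</artifactId>
  <configuration>
    <excludes>
      <!-- Binary snapshot files; on Windows, RAT does not auto-detect
           them as binary, so they must be excluded explicitly. -->
      <exclude>flink-runtime/src/test/resources/*.snapshot</exclude>
    </excludes>
  </configuration>
</plugin>
```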


[jira] [Created] (FLINK-9792) Cannot add html tags in options description

2018-07-10 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-9792:
---

 Summary: Cannot add html tags in options description
 Key: FLINK-9792
 URL: https://issues.apache.org/jira/browse/FLINK-9792
 Project: Flink
  Issue Type: Bug
  Components: Documentation
Affects Versions: 1.5.1, 1.6.0
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz


Right now it is impossible to add any html tags in options description, because 
all "<" and ">" are escaped. Therefore some links there do not work.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[CANCEL][VOTE] Release 1.5.1, release candidate #2

2018-07-10 Thread Chesnay Schepler
I'm canceling the RC to include a fix for "FLINK-9789 - Watermark 
metrics for an operator shadow each other".


On 06.07.2018 17:09, Chesnay Schepler wrote:

Hi everyone,
Please review and vote on the release candidate #2 for the version 
1.5.1, as follows:

[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific comments)


The complete staging area is available for your review, which includes:
* JIRA release notes [1],
* the official Apache source release and binary convenience releases 
to be deployed to dist.apache.org [2], which are signed with the key 
with fingerprint 11D464BA [3],

* all artifacts to be deployed to the Maven Central Repository [4],
* source code tag "release-1.5.1-rc2" [5],
* website pull request listing the new release and adding announcement 
blog post [6].


The vote will be open for at least 72 hours. It is adopted by majority 
approval, with at least 3 PMC affirmative votes.


Thanks,
Chesnay

[1] 
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12343053

[2] https://dist.apache.org/repos/dist/dev/flink/1.5.1/
[3] https://dist.apache.org/repos/dist/release/flink/KEYS
[4] 
https://repository.apache.org/content/repositories/orgapacheflink-1170
[5] 
https://git-wip-us.apache.org/repos/asf?p=flink.git;a=tag;h=refs/tags/release-1.5.1-rc2

[6] https://github.com/apache/flink-web/pull/112








[jira] [Created] (FLINK-9791) Outdated savepoint compatibility table

2018-07-10 Thread Dawid Wysakowicz (JIRA)
Dawid Wysakowicz created FLINK-9791:
---

 Summary: Outdated savepoint compatibility table
 Key: FLINK-9791
 URL: https://issues.apache.org/jira/browse/FLINK-9791
 Project: Flink
  Issue Type: Bug
  Components: Documentation
Affects Versions: 1.4.2, 1.5.1, 1.6.0
Reporter: Dawid Wysakowicz


The savepoint compatibility table is outdated; it covers neither 1.4.x nor 1.5.x. We 
should either update it or remove it, as I think we agreed to support only two 
versions of backward compatibility, which makes such a table unnecessary.

 

You can check the table in version 1.5.x here: 
https://ci.apache.org/projects/flink/flink-docs-release-1.5/ops/upgrading.html#compatibility-table



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9790) Add documentation for UDF in SQL Client

2018-07-10 Thread Timo Walther (JIRA)
Timo Walther created FLINK-9790:
---

 Summary: Add documentation for UDF in SQL Client
 Key: FLINK-9790
 URL: https://issues.apache.org/jira/browse/FLINK-9790
 Project: Flink
  Issue Type: Improvement
  Components: Table API & SQL
Reporter: Timo Walther






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Confusions About JDBCOutputFormat

2018-07-10 Thread Hequn Cheng
Hi wangsan,

I agree with you. It would be kind of you to open a jira to check the
problem.

For the first problem, I think we need to establish the connection each time
we execute a batch write. Better still, get the connection from a
connection pool.
For the second problem, to avoid multithreading issues, I think we should
synchronize on the batch object in the flush() method.

What do you think?

Best, Hequn



On Tue, Jul 10, 2018 at 2:36 PM, wangsan  wrote:

> Hi all,
>
> I'm going to use JDBCAppendTableSink and JDBCOutputFormat in my Flink
> application. But I am confused with the implementation of JDBCOutputFormat.
>
> 1. The Connection is established when JDBCOutputFormat is opened, and
> will be used all the time. But if this connection lies idle for a long time,
> the database will force-close the connection, and errors may occur.
> 2. The flush() method is called when batchCount exceeds the threshold, but
> it is also called while snapshotting state. So two threads may modify
> upload and batchCount, but without synchronization.
>
> Please correct me if I am wrong.
>
> ——
> wangsan
>
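The race Hequn and wangsan discuss — flush() being reachable from both the batching path and the snapshot path — can be sketched as follows. This is a minimal illustration of the proposed synchronization, not Flink's actual JDBCOutputFormat; class and method names are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch: guard the shared batch with the object's monitor so that
// writeRecord() (record-processing thread) and flush() (also called while
// snapshotting state) cannot modify the batch concurrently.
public class BatchedJdbcWriter {
    private final List<String> batch = new ArrayList<>();
    private final int batchSize;
    private int flushCount = 0;

    public BatchedJdbcWriter(int batchSize) {
        this.batchSize = batchSize;
    }

    public synchronized void writeRecord(String row) {
        batch.add(row);
        if (batch.size() >= batchSize) {
            flush();
        }
    }

    // Also invoked at snapshot time; synchronized so both callers see a
    // consistent batch and counter. A real implementation would execute
    // the JDBC batch here, re-validating (or pooling) idle connections.
    public synchronized void flush() {
        if (batch.isEmpty()) {
            return;
        }
        batch.clear();
        flushCount++;
    }

    public synchronized int getFlushCount() {
        return flushCount;
    }

    public static void main(String[] args) {
        BatchedJdbcWriter w = new BatchedJdbcWriter(2);
        w.writeRecord("a");
        w.writeRecord("b"); // reaches the threshold, triggers a flush
        w.flush();          // snapshot-time flush; batch is empty, no-op
        System.out.println("flushes: " + w.getFlushCount()); // prints "flushes: 1"
    }
}
```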


Re: [VOTE] Release 1.5.1, release candidate #2

2018-07-10 Thread Till Rohrmann
+1 to Chesnay's proposal to create a new RC with a shortened voting period.

On Tue, Jul 10, 2018 at 2:02 PM Chesnay Schepler  wrote:

> I've opened a PR for the metric issue:
> https://github.com/apache/flink/pull/6292
>
> Given that we've already got the required votes and still got 24h left,
> I would like to cancel this vote, create a new RC this evening with a
> shortened
> voting period (24h).
>
> Virtually all checks made so far (excluding the signing) would still apply
> to the new RC.
>
> On 10.07.2018 13:12, Chesnay Schepler wrote:
> > +1
> >
> > * start local cluster using start-cluster.bat and checked log files
> > * uploaded and submitted multiple jobs through WebUI
> >
> > On 10.07.2018 12:47, Chesnay Schepler wrote:
> >> This issue has already affected 1.5.0 (in other places).
> >>
> >> I rescind my -1 and will continue testing.
> >>
> >> On 10.07.2018 12:45, Chesnay Schepler wrote:
> >>> I've linked the wrong jira:
> >>> https://issues.apache.org/jira/browse/FLINK-9789
> >>>
> >>> On 10.07.2018 12:42, Chesnay Schepler wrote:
>  -1
> 
>  I found an issue where watermark metrics override each other, which
>  would be a regression to 1.5.0:
> 
>  https://issues.apache.org/jira/browse/FLINK-8731
> 
>  On 10.07.2018 11:12, Gary Yao wrote:
> > +1 (non-binding)
> >
> > Ran Jepsen tests [1] on EC2 for around 8 hours without issues.
> >
> > [1] https://github.com/apache/flink/pull/6240
> >
> > On Tue, Jul 10, 2018 at 11:00 AM, Aljoscha Krettek
> > 
> > wrote:
> >
> >> +1 (binding)
> >>
> >> - verified signatures and checksums
> >> - built successfully from source
> >>
> >>> On 10. Jul 2018, at 09:48, Jeff Zhang  wrote:
> >>>
> >>> +1.
> >>>
> >>> Build from source successfully.
> >>>
> >>> Tested scala-shell in local and yarn mode, works well.
> >>>
> >>>
> >>>
> >>> Till Rohrmann wrote on Tue, Jul 10, 2018 at 3:37 PM:
> >>>
>  +1 (binding)
> 
>  - Verified that no new dependencies were added for which the
>  LICENSE and
>  NOTICE files need to be adapted.
>  - Build 1.5.1 from the source artifact
>  - Run flink-end-to-end tests for 12 hours for the 1.5.1 Hadoop 2.7
> >> binary
>  artifact
>  - Run Jepsen tests for 12 hours for the 1.5.1 Hadoop 2.8 binary
>  artifact
> 
>  Cheers,
>  Till
> 
>  On Sat, Jul 7, 2018 at 10:13 AM vino yang 
> >> wrote:
> > +1, but announcement blog should change the release date.
> >
> > 2018-07-07 7:31 GMT+08:00 Ted Yu :
> >
> >> +1
> >>
> >> Checked signatures of artifacts
> >>
> >> Ran test suite
> >>
> >> On Fri, Jul 6, 2018 at 11:42 AM Yaz Sh 
> >> wrote:
> >>
> >>> +1
> >>>
> >>> Tests have been done on OS X
> >>>
> >>> - Ran in cluster mode ./bin/start-cluster.sh
> >>> - Checked that *.out files are empty
> >>> - Stopped cluster ./bin/stop-cluster.sh
> >>> - Checked for No Exceptions on log output
> >>>
> >>> - Test Examples via WebUI
> >>> - Test Example via CLI with different flags (-d, -c, -q, -p)
> >>>
> >>> - Added more task Managers via flink-config.yml and re-ran the
>  Examples
> >>> - Added more task Manager via ./bin/taskmanager.sh and
> >>> re-ran the
> >> Examples
> >>> - Checked “checksum” for all packages
> >>> - Checked GPG signature for all packages
> >>> - Checked the README.md
> >>>
> >>>
> >>> Cheers,
> >>> Yazdan
> >>>
> >>>
>  On Jul 6, 2018, at 11:09 AM, Chesnay Schepler
>  
> >>> wrote:
>  Hi everyone,
>  Please review and vote on the release candidate #2 for the
>  version
> >>> 1.5.1, as follows:
>  [ ] +1, Approve the release
>  [ ] -1, Do not approve the release (please provide specific
>  comments)
> 
>  The complete staging area is available for your review, which
> > includes:
>  * JIRA release notes [1],
>  * the official Apache source release and binary convenience
>  releases
> > to
> >>> be deployed to dist.apache.org [2], which are signed with
> >>> the key
>  with
> >>> fingerprint 11D464BA [3],
>  * all artifacts to be deployed to the Maven Central
>  Repository [4],
>  * source code tag "release-1.5.1-rc2" [5],
>  * website pull request listing the new release and adding
> > announcement
> >>> blog post [6].
>  The vote will be open for at least 72 hours. It is adopted by
> 

Re: [VOTE] Release 1.5.1, release candidate #2

2018-07-10 Thread Chesnay Schepler
I've opened a PR for the metric issue: 
https://github.com/apache/flink/pull/6292


Given that we've already got the required votes and still got 24h left,
I would like to cancel this vote, create a new RC this evening with a 
shortened

voting period (24h).

Virtually all checks made so far (excluding the signing) would still apply
to the new RC.

On 10.07.2018 13:12, Chesnay Schepler wrote:

+1

* start local cluster using start-cluster.bat and checked log files
* uploaded and submitted multiple jobs through WebUI

On 10.07.2018 12:47, Chesnay Schepler wrote:

This issue has already affected 1.5.0 (in other places).

I rescind my -1 and will continue testing.

On 10.07.2018 12:45, Chesnay Schepler wrote:
I've linked the wrong jira: 
https://issues.apache.org/jira/browse/FLINK-9789


On 10.07.2018 12:42, Chesnay Schepler wrote:

-1

I found an issue where watermark metrics override each other, which 
would be a regression to 1.5.0:


https://issues.apache.org/jira/browse/FLINK-8731

On 10.07.2018 11:12, Gary Yao wrote:

+1 (non-binding)

Ran Jepsen tests [1] on EC2 for around 8 hours without issues.

[1] https://github.com/apache/flink/pull/6240

On Tue, Jul 10, 2018 at 11:00 AM, Aljoscha Krettek 


wrote:


+1 (binding)

- verified signatures and checksums
- built successfully from source


On 10. Jul 2018, at 09:48, Jeff Zhang  wrote:

+1.

Build from source successfully.

Tested scala-shell in local and yarn mode, works well.



Till Rohrmann wrote on Tue, Jul 10, 2018 at 3:37 PM:



+1 (binding)

- Verified that no new dependencies were added for which the 
LICENSE and

NOTICE files need to be adapted.
- Build 1.5.1 from the source artifact
- Run flink-end-to-end tests for 12 hours for the 1.5.1 Hadoop 2.7

binary

artifact
- Run Jepsen tests for 12 hours for the 1.5.1 Hadoop 2.8 binary 
artifact


Cheers,
Till

On Sat, Jul 7, 2018 at 10:13 AM vino yang 

wrote:

+1, but announcement blog should change the release date.

2018-07-07 7:31 GMT+08:00 Ted Yu :


+1

Checked signatures of artifacts

Ran test suite

On Fri, Jul 6, 2018 at 11:42 AM Yaz Sh  
wrote:



+1

Tests have been done on OS X

- Ran in cluster mode ./bin/start-cluster.sh
- Checked that *.out files are empty
- Stopped cluster ./bin/stop-cluster.sh
- Checked for No Exceptions on log output

- Test Examples via WebUI
- Test Example via CLI with different flags (-d, -c, -q, -p)

- Added more task Managers via flink-config.yml and re-ran the

Examples
- Added more task Manager via ./bin/taskmanager.sh and 
re-ran the

Examples

- Checked “checksum” for all packages
- Checked GPG signature for all packages
- Checked the README.md


Cheers,
Yazdan


On Jul 6, 2018, at 11:09 AM, Chesnay Schepler 


wrote:

Hi everyone,
Please review and vote on the release candidate #2 for the 
version

1.5.1, as follows:

[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific

comments)


The complete staging area is available for your review, which

includes:

* JIRA release notes [1],
* the official Apache source release and binary convenience

releases

to
be deployed to dist.apache.org [2], which are signed with 
the key

with

fingerprint 11D464BA [3],
* all artifacts to be deployed to the Maven Central 
Repository [4],

* source code tag "release-1.5.1-rc2" [5],
* website pull request listing the new release and adding

announcement

blog post [6].

The vote will be open for at least 72 hours. It is adopted by

majority

approval, with at least 3 PMC affirmative votes.

Thanks,
Chesnay

[1]

https://issues.apache.org/jira/secure/ReleaseNote.jspa?

projectId=12315522&version=12343053

[2] https://dist.apache.org/repos/dist/dev/flink/1.5.1/
[3] https://dist.apache.org/repos/dist/release/flink/KEYS
[4]
https://repository.apache.org/content/repositories/orgapacheflink-1170 


[5]

https://git-wip-us.apache.org/repos/asf?p=flink.git;a=tag;h=

refs/tags/release-1.5.1-rc2

[6] https://github.com/apache/flink-web/pull/112























Re: [VOTE] Release 1.5.1, release candidate #2

2018-07-10 Thread Chesnay Schepler

+1

* start local cluster using start-cluster.bat and checked log files
* uploaded and submitted multiple jobs through WebUI

On 10.07.2018 12:47, Chesnay Schepler wrote:

This issue has already affected 1.5.0 (in other places).

I rescind my -1 and will continue testing.

On 10.07.2018 12:45, Chesnay Schepler wrote:
I've linked the wrong jira: 
https://issues.apache.org/jira/browse/FLINK-9789


On 10.07.2018 12:42, Chesnay Schepler wrote:

-1

I found an issue where watermark metrics override each other, which 
would be a regression to 1.5.0:


https://issues.apache.org/jira/browse/FLINK-8731

On 10.07.2018 11:12, Gary Yao wrote:

+1 (non-binding)

Ran Jepsen tests [1] on EC2 for around 8 hours without issues.

[1] https://github.com/apache/flink/pull/6240

On Tue, Jul 10, 2018 at 11:00 AM, Aljoscha Krettek 


wrote:


+1 (binding)

- verified signatures and checksums
- built successfully from source


On 10. Jul 2018, at 09:48, Jeff Zhang  wrote:

+1.

Build from source successfully.

Tested scala-shell in local and yarn mode, works well.



Till Rohrmann wrote on Tue, Jul 10, 2018 at 3:37 PM:



+1 (binding)

- Verified that no new dependencies were added for which the 
LICENSE and

NOTICE files need to be adapted.
- Build 1.5.1 from the source artifact
- Run flink-end-to-end tests for 12 hours for the 1.5.1 Hadoop 2.7

binary

artifact
- Run Jepsen tests for 12 hours for the 1.5.1 Hadoop 2.8 binary 
artifact


Cheers,
Till

On Sat, Jul 7, 2018 at 10:13 AM vino yang 

wrote:

+1, but announcement blog should change the release date.

2018-07-07 7:31 GMT+08:00 Ted Yu :


+1

Checked signatures of artifacts

Ran test suite

On Fri, Jul 6, 2018 at 11:42 AM Yaz Sh  
wrote:



+1

Tests have been done on OS X

- Ran in cluster mode ./bin/start-cluster.sh
- Checked that *.out files are empty
- Stopped cluster ./bin/stop-cluster.sh
- Checked for No Exceptions on log output

- Test Examples via WebUI
- Test Example via CLI with different flags (-d, -c, -q, -p)

- Added more task Managers via flink-config.yml and re-ran the

Examples
- Added more task Manager via ./bin/taskmanager.sh and re-ran 
the

Examples

- Checked “checksum” for all packages
- Checked GPG signature for all packages
- Checked the README.md


Cheers,
Yazdan


On Jul 6, 2018, at 11:09 AM, Chesnay Schepler 


wrote:

Hi everyone,
Please review and vote on the release candidate #2 for the 
version

1.5.1, as follows:

[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific

comments)


The complete staging area is available for your review, which

includes:

* JIRA release notes [1],
* the official Apache source release and binary convenience

releases

to
be deployed to dist.apache.org [2], which are signed with the 
key

with

fingerprint 11D464BA [3],
* all artifacts to be deployed to the Maven Central 
Repository [4],

* source code tag "release-1.5.1-rc2" [5],
* website pull request listing the new release and adding

announcement

blog post [6].

The vote will be open for at least 72 hours. It is adopted by

majority

approval, with at least 3 PMC affirmative votes.

Thanks,
Chesnay

[1]

https://issues.apache.org/jira/secure/ReleaseNote.jspa?

projectId=12315522&version=12343053

[2] https://dist.apache.org/repos/dist/dev/flink/1.5.1/
[3] https://dist.apache.org/repos/dist/release/flink/KEYS
[4]
https://repository.apache.org/content/repositories/orgapacheflink-1170 


[5]

https://git-wip-us.apache.org/repos/asf?p=flink.git;a=tag;h=

refs/tags/release-1.5.1-rc2

[6] https://github.com/apache/flink-web/pull/112




















Re: [VOTE] Release 1.5.1, release candidate #2

2018-07-10 Thread Chesnay Schepler

This issue has already affected 1.5.0 (in other places).

I rescind my -1 and will continue testing.

On 10.07.2018 12:45, Chesnay Schepler wrote:
I've linked the wrong jira: 
https://issues.apache.org/jira/browse/FLINK-9789


On 10.07.2018 12:42, Chesnay Schepler wrote:

-1

I found an issue where watermark metrics override each other, which 
would be a regression to 1.5.0:


https://issues.apache.org/jira/browse/FLINK-8731

On 10.07.2018 11:12, Gary Yao wrote:

+1 (non-binding)

Ran Jepsen tests [1] on EC2 for around 8 hours without issues.

[1] https://github.com/apache/flink/pull/6240

On Tue, Jul 10, 2018 at 11:00 AM, Aljoscha Krettek 


wrote:


+1 (binding)

- verified signatures and checksums
- built successfully from source


On 10. Jul 2018, at 09:48, Jeff Zhang  wrote:

+1.

Build from source successfully.

Tested scala-shell in local and yarn mode, works well.



Till Rohrmann wrote on Tue, Jul 10, 2018 at 3:37 PM:




+1 (binding)

- Verified that no new dependencies were added for which the 
LICENSE and

NOTICE files need to be adapted.
- Build 1.5.1 from the source artifact
- Run flink-end-to-end tests for 12 hours for the 1.5.1 Hadoop 2.7

binary

artifact
- Run Jepsen tests for 12 hours for the 1.5.1 Hadoop 2.8 binary 
artifact


Cheers,
Till

On Sat, Jul 7, 2018 at 10:13 AM vino yang 

wrote:

+1, but announcement blog should change the release date.

2018-07-07 7:31 GMT+08:00 Ted Yu :


+1

Checked signatures of artifacts

Ran test suite

On Fri, Jul 6, 2018 at 11:42 AM Yaz Sh  wrote:


+1

Tests have been done on OS X

- Ran in cluster mode ./bin/start-cluster.sh
- Checked that *.out files are empty
- Stopped cluster ./bin/stop-cluster.sh
- Checked for No Exceptions on log output

- Test Examples via WebUI
- Test Example via CLI with different flags (-d, -c, -q, -p)

- Added more task Managers via flink-config.yml and re-ran the

Examples

- Added more task Manager via ./bin/taskmanager.sh and re-ran the

Examples

- Checked “checksum” for all packages
- Checked GPG signature for all packages
- Checked the README.md


Cheers,
Yazdan


On Jul 6, 2018, at 11:09 AM, Chesnay Schepler 


wrote:

Hi everyone,
Please review and vote on the release candidate #2 for the 
version

1.5.1, as follows:

[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific

comments)


The complete staging area is available for your review, which

includes:

* JIRA release notes [1],
* the official Apache source release and binary convenience

releases

to

be deployed to dist.apache.org [2], which are signed with the key

with

fingerprint 11D464BA [3],
* all artifacts to be deployed to the Maven Central 
Repository [4],

* source code tag "release-1.5.1-rc2" [5],
* website pull request listing the new release and adding

announcement

blog post [6].

The vote will be open for at least 72 hours. It is adopted by

majority

approval, with at least 3 PMC affirmative votes.

Thanks,
Chesnay

[1]

https://issues.apache.org/jira/secure/ReleaseNote.jspa?

projectId=12315522&version=12343053

[2] https://dist.apache.org/repos/dist/dev/flink/1.5.1/
[3] https://dist.apache.org/repos/dist/release/flink/KEYS
[4]
https://repository.apache.org/content/repositories/orgapacheflink-1170 


[5]

https://git-wip-us.apache.org/repos/asf?p=flink.git;a=tag;h=

refs/tags/release-1.5.1-rc2

[6] https://github.com/apache/flink-web/pull/112

















Re: [VOTE] Release 1.5.1, release candidate #2

2018-07-10 Thread Chesnay Schepler

I've linked the wrong jira: https://issues.apache.org/jira/browse/FLINK-9789

On 10.07.2018 12:42, Chesnay Schepler wrote:

-1

I found an issue where watermark metrics override each other, which 
would be a regression to 1.5.0:


https://issues.apache.org/jira/browse/FLINK-8731

On 10.07.2018 11:12, Gary Yao wrote:

+1 (non-binding)

Ran Jepsen tests [1] on EC2 for around 8 hours without issues.

[1] https://github.com/apache/flink/pull/6240

On Tue, Jul 10, 2018 at 11:00 AM, Aljoscha Krettek 
wrote:


+1 (binding)

- verified signatures and checksums
- built successfully from source


On 10. Jul 2018, at 09:48, Jeff Zhang  wrote:

+1.

Build from source successfully.

Tested scala-shell in local and yarn mode, works well.



Till Rohrmann wrote on Tue, Jul 10, 2018 at 3:37 PM:


+1 (binding)

- Verified that no new dependencies were added for which the 
LICENSE and

NOTICE files need to be adapted.
- Build 1.5.1 from the source artifact
- Run flink-end-to-end tests for 12 hours for the 1.5.1 Hadoop 2.7

binary

artifact
- Run Jepsen tests for 12 hours for the 1.5.1 Hadoop 2.8 binary 
artifact


Cheers,
Till

On Sat, Jul 7, 2018 at 10:13 AM vino yang 

wrote:

+1, but announcement blog should change the release date.

2018-07-07 7:31 GMT+08:00 Ted Yu :


+1

Checked signatures of artifacts

Ran test suite

On Fri, Jul 6, 2018 at 11:42 AM Yaz Sh  wrote:


+1

Tests have been done on OS X

- Ran in cluster mode ./bin/start-cluster.sh
- Checked that *.out files are empty
- Stopped cluster ./bin/stop-cluster.sh
- Checked for No Exceptions on log output

- Test Examples via WebUI
- Test Example via CLI with different flags (-d, -c, -q, -p)

- Added more task Managers via flink-config.yml and re-ran the

Examples

- Added more task Manager via ./bin/taskmanager.sh and re-ran the

Examples

- Checked “checksum” for all packages
- Checked GPG signature for all packages
- Checked the README.md


Cheers,
Yazdan


On Jul 6, 2018, at 11:09 AM, Chesnay Schepler 


wrote:

Hi everyone,
Please review and vote on the release candidate #2 for the 
version

1.5.1, as follows:

[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific

comments)


The complete staging area is available for your review, which

includes:

* JIRA release notes [1],
* the official Apache source release and binary convenience

releases

to

be deployed to dist.apache.org [2], which are signed with the key

with

fingerprint 11D464BA [3],
* all artifacts to be deployed to the Maven Central Repository 
[4],

* source code tag "release-1.5.1-rc2" [5],
* website pull request listing the new release and adding

announcement

blog post [6].

The vote will be open for at least 72 hours. It is adopted by

majority

approval, with at least 3 PMC affirmative votes.

Thanks,
Chesnay

[1]

https://issues.apache.org/jira/secure/ReleaseNote.jspa?

projectId=12315522&version=12343053

[2] https://dist.apache.org/repos/dist/dev/flink/1.5.1/
[3] https://dist.apache.org/repos/dist/release/flink/KEYS
[4]
https://repository.apache.org/content/repositories/orgapacheflink-1170 


[5]

https://git-wip-us.apache.org/repos/asf?p=flink.git;a=tag;h=

refs/tags/release-1.5.1-rc2

[6] https://github.com/apache/flink-web/pull/112














[jira] [Created] (FLINK-9789) Watermark metrics for an operator shadow each other

2018-07-10 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-9789:
---

 Summary: Watermark metrics for an operator shadow each other
 Key: FLINK-9789
 URL: https://issues.apache.org/jira/browse/FLINK-9789
 Project: Flink
  Issue Type: Bug
  Components: Metrics
Affects Versions: 1.5.1, 1.6.0
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler


In FLINK-4812 we reworked the watermark metrics to be exposed for each operator.
In FLINK-9467 we made further modifications to also expose these metrics again 
for tasks.

This works by re-using the input watermark gauge for the first operator in the 
task chain, and the output watermark gauge for the last operator in the task 
chain. This means that a single metric is registered multiple times.

Unfortunately, the metric system assumes metric objects to be unique, as can be 
seen in virtually all reporter implementations as well as the 
MetricQueryService.

As a result the watermark metrics override each other in the reporter, causing 
only one to be reported, whichever was registered last.

FLINK-4812 was implemented for 1.5.1, so this would be a regression.
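The shadowing described above — one gauge object registered under two names, with reporters assuming metric objects are unique — can be illustrated with a few lines. This is a conceptual sketch, not Flink's actual reporter code; the map-keyed-by-metric assumption mirrors the description in the issue, and all names below are illustrative.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Sketch of the FLINK-9789 problem: a reporter keeps a Map keyed by the
// metric object itself, so registering the *same* gauge instance under a
// task-level and an operator-level name leaves only the last name visible.
public class MetricShadowing {
    static Map<Supplier<Long>, String> reporter = new HashMap<>();

    static void register(Supplier<Long> gauge, String name) {
        reporter.put(gauge, name); // same key ==> the previous name is lost
    }

    public static void main(String[] args) {
        // Task and first operator in the chain re-use one watermark gauge.
        Supplier<Long> sharedWatermarkGauge = () -> 42L;
        register(sharedWatermarkGauge, "task.currentInputWatermark");
        register(sharedWatermarkGauge, "operator.currentInputWatermark");
        System.out.println("metrics visible: " + reporter.size()); // prints "metrics visible: 1"
    }
}
```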






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] Release 1.5.1, release candidate #2

2018-07-10 Thread Chesnay Schepler

-1

I found an issue where watermark metrics override each other, which 
would be a regression to 1.5.0:


https://issues.apache.org/jira/browse/FLINK-8731

On 10.07.2018 11:12, Gary Yao wrote:

+1 (non-binding)

Ran Jepsen tests [1] on EC2 for around 8 hours without issues.

[1] https://github.com/apache/flink/pull/6240

On Tue, Jul 10, 2018 at 11:00 AM, Aljoscha Krettek 
wrote:


+1 (binding)

- verified signatures and checksums
- built successfully from source


On 10. Jul 2018, at 09:48, Jeff Zhang  wrote:

+1.

Build from source successfully.

Tested scala-shell in local and yarn mode, works well.



Till Rohrmann wrote on Tue, Jul 10, 2018 at 3:37 PM:


+1 (binding)

- Verified that no new dependencies were added for which the LICENSE and
NOTICE files need to be adapted.
- Build 1.5.1 from the source artifact
- Run flink-end-to-end tests for 12 hours for the 1.5.1 Hadoop 2.7

binary

artifact
- Run Jepsen tests for 12 hours for the 1.5.1 Hadoop 2.8 binary artifact

Cheers,
Till

On Sat, Jul 7, 2018 at 10:13 AM vino yang 

wrote:

+1, but announcement blog should change the release date.

2018-07-07 7:31 GMT+08:00 Ted Yu :


+1

Checked signatures of artifacts

Ran test suite

On Fri, Jul 6, 2018 at 11:42 AM Yaz Sh  wrote:


+1

Tests have been done on OS X

- Ran in cluster mode ./bin/start-cluster.sh
- Checked that *.out files are empty
- Stopped cluster ./bin/stop-cluster.sh
- Checked for No Exceptions on log output

- Test Examples via WebUI
- Test Example via CLI with different flags (-d, -c, -q, -p)

- Added more task Managers via flink-config.yml and re-ran the

Examples

- Added more task Manager via ./bin/taskmanager.sh and re-ran the

Examples

- Checked “checksum” for all packages
- Checked GPG signature for all packages
- Checked the README.md


Cheers,
Yazdan



On Jul 6, 2018, at 11:09 AM, Chesnay Schepler 

wrote:

Hi everyone,
Please review and vote on the release candidate #2 for the version

1.5.1, as follows:

[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific

comments)


The complete staging area is available for your review, which

includes:

* JIRA release notes [1],
* the official Apache source release and binary convenience

releases

to

be deployed to dist.apache.org [2], which are signed with the key

with

fingerprint 11D464BA [3],

* all artifacts to be deployed to the Maven Central Repository [4],
* source code tag "release-1.5.1-rc2" [5],
* website pull request listing the new release and adding

announcement

blog post [6].

The vote will be open for at least 72 hours. It is adopted by

majority

approval, with at least 3 PMC affirmative votes.

Thanks,
Chesnay

[1]

https://issues.apache.org/jira/secure/ReleaseNote.jspa?

projectId=12315522&version=12343053

[2] https://dist.apache.org/repos/dist/dev/flink/1.5.1/
[3] https://dist.apache.org/repos/dist/release/flink/KEYS
[4]

https://repository.apache.org/content/repositories/orgapacheflink-1170

[5]

https://git-wip-us.apache.org/repos/asf?p=flink.git;a=tag;h=

refs/tags/release-1.5.1-rc2

[6] https://github.com/apache/flink-web/pull/112











[jira] [Created] (FLINK-9788) ExecutionGraph Inconsistency prevents Job from recovering

2018-07-10 Thread Gary Yao (JIRA)
Gary Yao created FLINK-9788:
---

 Summary: ExecutionGraph Inconsistency prevents Job from recovering
 Key: FLINK-9788
 URL: https://issues.apache.org/jira/browse/FLINK-9788
 Project: Flink
  Issue Type: Bug
  Components: Core
Affects Versions: 1.6.0
 Environment: Rev: 4a06160
Hadoop 2.8.3
Reporter: Gary Yao
 Attachments: jobmanager_5000.log

Deployment mode: YARN job mode with HA

After killing many TaskManagers in succession, the ExecutionGraph ended up in 
an inconsistent state, which prevented job recovery. The following stack trace 
was logged in the JobManager log several hundred times per second:
{noformat}
2018-07-08 16:47:18,855 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph   
 - Job General purpose test job (37a794195840700b98feb23e99f7ea24) switched 
from state RESTARTING to RESTARTING.
2018-07-08 16:47:18,856 INFO  
org.apache.flink.runtime.executiongraph.ExecutionGraph- Restarting the 
job General purpose test job (37a794195840700b98feb23e99f7ea24).
2018-07-08 16:47:18,857 DEBUG 
org.apache.flink.runtime.executiongraph.ExecutionGraph- Resetting 
execution vertex Source: Custom Source -> Timestamps/Watermarks (1/10) for new 
execution.
2018-07-08 16:47:18,857 WARN  
org.apache.flink.runtime.executiongraph.ExecutionGraph- Failed to 
restart the job.
java.lang.IllegalStateException: Cannot reset a vertex that is in non-terminal 
state CREATED
at 
org.apache.flink.runtime.executiongraph.ExecutionVertex.resetForNewExecution(ExecutionVertex.java:610)
at 
org.apache.flink.runtime.executiongraph.ExecutionJobVertex.resetForNewExecution(ExecutionJobVertex.java:573)
at 
org.apache.flink.runtime.executiongraph.ExecutionGraph.restart(ExecutionGraph.java:1251)
at 
org.apache.flink.runtime.executiongraph.restart.ExecutionGraphRestartCallback.triggerFullRecovery(ExecutionGraphRestartCallback.java:59)
at 
org.apache.flink.runtime.executiongraph.restart.FixedDelayRestartStrategy$1.run(FixedDelayRestartStrategy.java:68)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
{noformat}

The resulting jobmanager log file was 4.7 GB in size. Attached are the first 
5000 lines of the log file. 
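To make the failure mode above more concrete, here is a hypothetical, heavily simplified sketch of the kind of guard that produces the "Cannot reset a vertex that is in non-terminal state" error (not Flink's actual ExecutionVertex; class and method names are illustrative):

```java
// Hypothetical, simplified sketch of the reset guard behind the
// IllegalStateException in the stack trace above.
public class VertexResetSketch {
    enum ExecutionState {
        CREATED, RUNNING, FINISHED, CANCELED, FAILED;

        boolean isTerminal() {
            return this == FINISHED || this == CANCELED || this == FAILED;
        }
    }

    private ExecutionState state;

    VertexResetSketch(ExecutionState state) {
        this.state = state;
    }

    void resetForNewExecution() {
        // Recovery assumes every vertex reached a terminal state before the
        // restart; a vertex stuck in CREATED violates that invariant.
        if (!state.isTerminal()) {
            throw new IllegalStateException(
                "Cannot reset a vertex that is in non-terminal state " + state);
        }
        state = ExecutionState.CREATED;
    }

    public static void main(String[] args) {
        new VertexResetSketch(ExecutionState.FAILED).resetForNewExecution(); // ok
        try {
            new VertexResetSketch(ExecutionState.CREATED).resetForNewExecution();
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

Under this reading, the restart loop keeps failing because a vertex that never left CREATED is repeatedly handed to the reset path.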





Re: [VOTE] Release 1.5.1, release candidate #2

2018-07-10 Thread Gary Yao
+1 (non-binding)

Ran Jepsen tests [1] on EC2 for around 8 hours without issues.

[1] https://github.com/apache/flink/pull/6240

On Tue, Jul 10, 2018 at 11:00 AM, Aljoscha Krettek 
wrote:

> +1 (binding)
>
> - verified signatures and checksums
> - built successfully from source
>
> > On 10. Jul 2018, at 09:48, Jeff Zhang  wrote:
> >
> > +1.
> >
> > Built from source successfully.
> >
> > Tested scala-shell in local and yarn mode, works well.
> >
> >
> >
> > Till Rohrmann 于2018年7月10日周二 下午3:37写道:
> >
> >> +1 (binding)
> >>
> >> - Verified that no new dependencies were added for which the LICENSE and
> >> NOTICE files need to be adapted.
> >> - Build 1.5.1 from the source artifact
> >> - Run flink-end-to-end tests for 12 hours for the 1.5.1 Hadoop 2.7
> binary
> >> artifact
> >> - Run Jepsen tests for 12 hours for the 1.5.1 Hadoop 2.8 binary artifact
> >>
> >> Cheers,
> >> Till
> >>
> >> On Sat, Jul 7, 2018 at 10:13 AM vino yang 
> wrote:
> >>
> >>> +1, but announcement blog should change the release date.
> >>>
> >>> 2018-07-07 7:31 GMT+08:00 Ted Yu :
> >>>
>  +1
> 
>  Checked signatures of artifacts
> 
>  Ran test suite
> 
>  On Fri, Jul 6, 2018 at 11:42 AM Yaz Sh  wrote:
> 
> > +1
> >
> > Tests have been done on OS X
> >
> > - Ran in cluster mode ./bin/start-cluster.sh
> > - Checked that *.out files are empty
> > - Stopped cluster ./bin/stop-cluster.sh
> > - Checked for No Exceptions on log output
> >
> > - Test Examples via WebUI
> > - Test Example via CLI with different flags (-d, -c, -q, -p)
> >
> > - Added more task Managers via flink-config.yml and re-ran the
> >> Examples
> > - Added more task Manager via ./bin/taskmanager.sh and re-ran the
>  Examples
> >
> > - Checked “checksum” for all packages
> > - Checked GPG signature for all packages
> > - Checked the README.md
> >
> >
> > Cheers,
> > Yazdan
> >
> >
> >> On Jul 6, 2018, at 11:09 AM, Chesnay Schepler 
> > wrote:
> >>
> >> Hi everyone,
> >> Please review and vote on the release candidate #2 for the version
> > 1.5.1, as follows:
> >> [ ] +1, Approve the release
> >> [ ] -1, Do not approve the release (please provide specific
> >> comments)
> >>
> >>
> >> The complete staging area is available for your review, which
> >>> includes:
> >> * JIRA release notes [1],
> >> * the official Apache source release and binary convenience
> >> releases
> >>> to
> > be deployed to dist.apache.org [2], which are signed with the key
> >> with
> > fingerprint 11D464BA [3],
> >> * all artifacts to be deployed to the Maven Central Repository [4],
> >> * source code tag "release-1.5.1-rc2" [5],
> >> * website pull request listing the new release and adding
> >>> announcement
> > blog post [6].
> >>
> >> The vote will be open for at least 72 hours. It is adopted by
> >>> majority
> > approval, with at least 3 PMC affirmative votes.
> >>
> >> Thanks,
> >> Chesnay
> >>
> >> [1]
> > https://issues.apache.org/jira/secure/ReleaseNote.jspa?
>  projectId=12315522&version=12343053
> >> [2] https://dist.apache.org/repos/dist/dev/flink/1.5.1/
> >> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> >> [4]
> >
> >> https://repository.apache.org/content/repositories/orgapacheflink-1170
> >> [5]
> > https://git-wip-us.apache.org/repos/asf?p=flink.git;a=tag;h=
>  refs/tags/release-1.5.1-rc2
> >> [6] https://github.com/apache/flink-web/pull/112
> >>
> >>
> >>
> >
> >
> 
> >>>
> >>
>
>


Re: [VOTE] Release 1.5.1, release candidate #2

2018-07-10 Thread Aljoscha Krettek
+1 (binding)

- verified signatures and checksums
- built successfully from source

> On 10. Jul 2018, at 09:48, Jeff Zhang  wrote:
> 
> +1.
> 
> Built from source successfully.
> 
> Tested scala-shell in local and yarn mode, works well.
> 
> 
> 
> Till Rohrmann 于2018年7月10日周二 下午3:37写道:
> 
>> +1 (binding)
>> 
>> - Verified that no new dependencies were added for which the LICENSE and
>> NOTICE files need to be adapted.
>> - Build 1.5.1 from the source artifact
>> - Run flink-end-to-end tests for 12 hours for the 1.5.1 Hadoop 2.7 binary
>> artifact
>> - Run Jepsen tests for 12 hours for the 1.5.1 Hadoop 2.8 binary artifact
>> 
>> Cheers,
>> Till
>> 
>> On Sat, Jul 7, 2018 at 10:13 AM vino yang  wrote:
>> 
>>> +1, but announcement blog should change the release date.
>>> 
>>> 2018-07-07 7:31 GMT+08:00 Ted Yu :
>>> 
 +1
 
 Checked signatures of artifacts
 
 Ran test suite
 
 On Fri, Jul 6, 2018 at 11:42 AM Yaz Sh  wrote:
 
> +1
> 
> Tests have been done on OS X
> 
> - Ran in cluster mode ./bin/start-cluster.sh
> - Checked that *.out files are empty
> - Stopped cluster ./bin/stop-cluster.sh
> - Checked for No Exceptions on log output
> 
> - Test Examples via WebUI
> - Test Example via CLI with different flags (-d, -c, -q, -p)
> 
> - Added more task Managers via flink-config.yml and re-ran the
>> Examples
> - Added more task Manager via ./bin/taskmanager.sh and re-ran the
 Examples
> 
> - Checked “checksum” for all packages
> - Checked GPG signature for all packages
> - Checked the README.md
> 
> 
> Cheers,
> Yazdan
> 
> 
>> On Jul 6, 2018, at 11:09 AM, Chesnay Schepler 
> wrote:
>> 
>> Hi everyone,
>> Please review and vote on the release candidate #2 for the version
> 1.5.1, as follows:
>> [ ] +1, Approve the release
>> [ ] -1, Do not approve the release (please provide specific
>> comments)
>> 
>> 
>> The complete staging area is available for your review, which
>>> includes:
>> * JIRA release notes [1],
>> * the official Apache source release and binary convenience
>> releases
>>> to
> be deployed to dist.apache.org [2], which are signed with the key
>> with
> fingerprint 11D464BA [3],
>> * all artifacts to be deployed to the Maven Central Repository [4],
>> * source code tag "release-1.5.1-rc2" [5],
>> * website pull request listing the new release and adding
>>> announcement
> blog post [6].
>> 
>> The vote will be open for at least 72 hours. It is adopted by
>>> majority
> approval, with at least 3 PMC affirmative votes.
>> 
>> Thanks,
>> Chesnay
>> 
>> [1]
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?
 projectId=12315522&version=12343053
>> [2] https://dist.apache.org/repos/dist/dev/flink/1.5.1/
>> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
>> [4]
> 
>> https://repository.apache.org/content/repositories/orgapacheflink-1170
>> [5]
> https://git-wip-us.apache.org/repos/asf?p=flink.git;a=tag;h=
 refs/tags/release-1.5.1-rc2
>> [6] https://github.com/apache/flink-web/pull/112
>> 
>> 
>> 
> 
> 
 
>>> 
>> 



[jira] [Created] (FLINK-9787) Change ExecutionConfig#getGlobalJobParameters to return an instance of GlobalJobParameters instead of null if no custom globalJobParameters are set yet

2018-07-10 Thread Florian Schmidt (JIRA)
Florian Schmidt created FLINK-9787:
--

 Summary: Change ExecutionConfig#getGlobalJobParameters to return 
an instance of GlobalJobParameters instead of null if no custom 
globalJobParameters are set yet
 Key: FLINK-9787
 URL: https://issues.apache.org/jira/browse/FLINK-9787
 Project: Flink
  Issue Type: Improvement
Reporter: Florian Schmidt


Currently, accessing ExecutionConfig#getGlobalJobParameters will return 
`null` if no globalJobParameters are set. This can easily lead to 
NullPointerExceptions when used as getGlobalJobParameters().toMap().

An easy improvement would be to return a new instance of GlobalJobParameters 
with an empty map as the parameters if none is set.
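The suggested improvement is essentially the null-object pattern. A hedged sketch of the idea, using simplified stand-in classes rather than Flink's actual ExecutionConfig and GlobalJobParameters:

```java
import java.util.Collections;
import java.util.Map;

// Simplified illustration of the proposed fix: return an empty
// parameters object instead of null, so callers can safely chain toMap().
public class ExecutionConfigSketch {
    public static class GlobalJobParameters {
        public Map<String, String> toMap() {
            return Collections.emptyMap();
        }
    }

    private GlobalJobParameters globalJobParameters;

    public GlobalJobParameters getGlobalJobParameters() {
        // Before: return globalJobParameters;  // may be null -> NPE on toMap()
        // After: fall back to an empty instance.
        return globalJobParameters != null
                ? globalJobParameters
                : new GlobalJobParameters();
    }

    public static void main(String[] args) {
        ExecutionConfigSketch config = new ExecutionConfigSketch();
        // No NullPointerException even though nothing was set:
        System.out.println(config.getGlobalJobParameters().toMap().size()); // prints 0
    }
}
```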





Re: [VOTE] Release 1.5.1, release candidate #2

2018-07-10 Thread Jeff Zhang
+1.

Built from source successfully.

Tested scala-shell in local and yarn mode, works well.



Till Rohrmann 于2018年7月10日周二 下午3:37写道:

> +1 (binding)
>
> - Verified that no new dependencies were added for which the LICENSE and
> NOTICE files need to be adapted.
> - Build 1.5.1 from the source artifact
> - Run flink-end-to-end tests for 12 hours for the 1.5.1 Hadoop 2.7 binary
> artifact
> - Run Jepsen tests for 12 hours for the 1.5.1 Hadoop 2.8 binary artifact
>
> Cheers,
> Till
>
> On Sat, Jul 7, 2018 at 10:13 AM vino yang  wrote:
>
> > +1, but announcement blog should change the release date.
> >
> > 2018-07-07 7:31 GMT+08:00 Ted Yu :
> >
> > > +1
> > >
> > > Checked signatures of artifacts
> > >
> > > Ran test suite
> > >
> > > On Fri, Jul 6, 2018 at 11:42 AM Yaz Sh  wrote:
> > >
> > > > +1
> > > >
> > > > Tests have been done on OS X
> > > >
> > > > - Ran in cluster mode ./bin/start-cluster.sh
> > > > - Checked that *.out files are empty
> > > > - Stopped cluster ./bin/stop-cluster.sh
> > > > - Checked for No Exceptions on log output
> > > >
> > > > - Test Examples via WebUI
> > > > - Test Example via CLI with different flags (-d, -c, -q, -p)
> > > >
> > > > - Added more task Managers via flink-config.yml and re-ran the
> Examples
> > > > - Added more task Manager via ./bin/taskmanager.sh and re-ran the
> > > Examples
> > > >
> > > > - Checked “checksum” for all packages
> > > > - Checked GPG signature for all packages
> > > > - Checked the README.md
> > > >
> > > >
> > > > Cheers,
> > > > Yazdan
> > > >
> > > >
> > > > > On Jul 6, 2018, at 11:09 AM, Chesnay Schepler 
> > > > wrote:
> > > > >
> > > > > Hi everyone,
> > > > > Please review and vote on the release candidate #2 for the version
> > > > 1.5.1, as follows:
> > > > > [ ] +1, Approve the release
> > > > > [ ] -1, Do not approve the release (please provide specific
> comments)
> > > > >
> > > > >
> > > > > The complete staging area is available for your review, which
> > includes:
> > > > > * JIRA release notes [1],
> > > > > * the official Apache source release and binary convenience
> releases
> > to
> > > > be deployed to dist.apache.org [2], which are signed with the key
> with
> > > > fingerprint 11D464BA [3],
> > > > > * all artifacts to be deployed to the Maven Central Repository [4],
> > > > > * source code tag "release-1.5.1-rc2" [5],
> > > > > * website pull request listing the new release and adding
> > announcement
> > > > blog post [6].
> > > > >
> > > > > The vote will be open for at least 72 hours. It is adopted by
> > majority
> > > > approval, with at least 3 PMC affirmative votes.
> > > > >
> > > > > Thanks,
> > > > > Chesnay
> > > > >
> > > > > [1]
> > > > https://issues.apache.org/jira/secure/ReleaseNote.jspa?
> > > projectId=12315522&version=12343053
> > > > > [2] https://dist.apache.org/repos/dist/dev/flink/1.5.1/
> > > > > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> > > > > [4]
> > > >
> https://repository.apache.org/content/repositories/orgapacheflink-1170
> > > > > [5]
> > > > https://git-wip-us.apache.org/repos/asf?p=flink.git;a=tag;h=
> > > refs/tags/release-1.5.1-rc2
> > > > > [6] https://github.com/apache/flink-web/pull/112
> > > > >
> > > > >
> > > > >
> > > >
> > > >
> > >
> >
>


Re: [VOTE] Release 1.5.1, release candidate #2

2018-07-10 Thread Till Rohrmann
+1 (binding)

- Verified that no new dependencies were added for which the LICENSE and
NOTICE files need to be adapted.
- Build 1.5.1 from the source artifact
- Run flink-end-to-end tests for 12 hours for the 1.5.1 Hadoop 2.7 binary
artifact
- Run Jepsen tests for 12 hours for the 1.5.1 Hadoop 2.8 binary artifact

Cheers,
Till

On Sat, Jul 7, 2018 at 10:13 AM vino yang  wrote:

> +1, but announcement blog should change the release date.
>
> 2018-07-07 7:31 GMT+08:00 Ted Yu :
>
> > +1
> >
> > Checked signatures of artifacts
> >
> > Ran test suite
> >
> > On Fri, Jul 6, 2018 at 11:42 AM Yaz Sh  wrote:
> >
> > > +1
> > >
> > > Tests have been done on OS X
> > >
> > > - Ran in cluster mode ./bin/start-cluster.sh
> > > - Checked that *.out files are empty
> > > - Stopped cluster ./bin/stop-cluster.sh
> > > - Checked for No Exceptions on log output
> > >
> > > - Test Examples via WebUI
> > > - Test Example via CLI with different flags (-d, -c, -q, -p)
> > >
> > > - Added more task Managers via flink-config.yml and re-ran the Examples
> > > - Added more task Manager via ./bin/taskmanager.sh and re-ran the
> > Examples
> > >
> > > - Checked “checksum” for all packages
> > > - Checked GPG signature for all packages
> > > - Checked the README.md
> > >
> > >
> > > Cheers,
> > > Yazdan
> > >
> > >
> > > > On Jul 6, 2018, at 11:09 AM, Chesnay Schepler 
> > > wrote:
> > > >
> > > > Hi everyone,
> > > > Please review and vote on the release candidate #2 for the version
> > > 1.5.1, as follows:
> > > > [ ] +1, Approve the release
> > > > [ ] -1, Do not approve the release (please provide specific comments)
> > > >
> > > >
> > > > The complete staging area is available for your review, which
> includes:
> > > > * JIRA release notes [1],
> > > > * the official Apache source release and binary convenience releases
> to
> > > be deployed to dist.apache.org [2], which are signed with the key with
> > > fingerprint 11D464BA [3],
> > > > * all artifacts to be deployed to the Maven Central Repository [4],
> > > > * source code tag "release-1.5.1-rc2" [5],
> > > > * website pull request listing the new release and adding
> announcement
> > > blog post [6].
> > > >
> > > > The vote will be open for at least 72 hours. It is adopted by
> majority
> > > approval, with at least 3 PMC affirmative votes.
> > > >
> > > > Thanks,
> > > > Chesnay
> > > >
> > > > [1]
> > > https://issues.apache.org/jira/secure/ReleaseNote.jspa?
> > projectId=12315522&version=12343053
> > > > [2] https://dist.apache.org/repos/dist/dev/flink/1.5.1/
> > > > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> > > > [4]
> > > https://repository.apache.org/content/repositories/orgapacheflink-1170
> > > > [5]
> > > https://git-wip-us.apache.org/repos/asf?p=flink.git;a=tag;h=
> > refs/tags/release-1.5.1-rc2
> > > > [6] https://github.com/apache/flink-web/pull/112
> > > >
> > > >
> > > >
> > >
> > >
> >
>


Re: [jira] [Created] (FLINK-9784) Inconsistent use of 'static' in AsyncIOExample.java

2018-07-10 Thread Till Rohrmann
I've assigned you.

On Mon, Jul 9, 2018 at 5:53 PM NEKRASSOV, ALEXEI  wrote:

> In JIRA I don't see an option to assign this issue to myself. Can someone
> please assign?
>
> Thanks,
> Alex
>
> -Original Message-
> From: Alexei Nekrassov (JIRA) [mailto:j...@apache.org]
> Sent: Monday, July 09, 2018 10:07 AM
> To: dev@flink.apache.org
> Subject: [jira] [Created] (FLINK-9784) Inconsistent use of 'static' in
> AsyncIOExample.java
>
> Alexei Nekrassov created FLINK-9784:
> ---
>
>  Summary: Inconsistent use of 'static' in AsyncIOExample.java
>  Key: FLINK-9784
>  URL:
> https://urldefense.proofpoint.com/v2/url?u=https-3A__issues.apache.org_jira_browse_FLINK-2D9784=DwICaQ=LFYZ-o9_HUMeMTSQicvjIg=aQRKg6d5fsz42zXkyiSdqg=Pbmeb7Zyk2Wn-mZsyPmOXap0vmNDg6VmS0WFzbSACsw=FP7Qbc18SSgRqKF72mtLVzT4jUVeEsBMc2BVAnhTCAA=
>  Project: Flink
>   Issue Type: Bug
>   Components: Examples
> Affects Versions: 1.4.2, 1.5.0
> Reporter: Alexei Nekrassov
>
>
> In SampleAsyncFunction, having _executorService_ and _random_ static but
> _counter_ instance-specific will not work when someone creates a second
> instance of SampleAsyncFunction. In the second SampleAsyncFunction,
> _counter_ will be 0, so the static _executorService_ and _random_ will be
> re-initialized, interfering with the first SampleAsyncFunction object.
>
> To fix this example and make it clearer, make _executorService_ and
> _random_ instance-specific. That will make _counter_ redundant; also,
> synchronization on the class will no longer be needed.
>
>
>
> --
> This message was sent by Atlassian JIRA
> (v7.6.3#76005)
>
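The interference described in FLINK-9784 can be reduced to a few lines. A hypothetical sketch (not the actual AsyncIOExample code; class and field names are illustrative) showing how a second instance with its own zeroed counter replaces the shared static executor:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical reduction of the bug: mixing a static executorService with an
// instance-level counter means every new instance (whose counter is 0)
// re-initializes the shared executor out from under earlier instances.
public class StaticMixSketch {
    static ExecutorService executorService;
    private int counter;

    public void open() {
        synchronized (StaticMixSketch.class) {
            if (counter == 0) {
                // Runs again for EVERY new instance, replacing the executor
                // the first instance may still be using.
                executorService = Executors.newFixedThreadPool(4);
            }
            counter++;
        }
    }

    public static void main(String[] args) {
        StaticMixSketch first = new StaticMixSketch();
        first.open();
        ExecutorService firstExecutor = executorService;

        StaticMixSketch second = new StaticMixSketch();
        second.open(); // second's counter is 0 -> shared executor replaced

        System.out.println(firstExecutor == executorService); // prints false
        executorService.shutdown();
        firstExecutor.shutdown();
    }
}
```

Making both the executor and the counter instance fields (or both static with one-time initialization) removes this interaction, which is what the ticket proposes.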


Confusions About JDBCOutputFormat

2018-07-10 Thread wangsan
Hi all,

I'm going to use JDBCAppendTableSink and JDBCOutputFormat in my Flink 
application, but I am confused by the implementation of JDBCOutputFormat.

1. The Connection is established when JDBCOutputFormat is opened and is then 
used for the lifetime of the sink. But if this connection lies idle for a long 
time, the database may force-close it, and errors will occur.
2. The flush() method is called when batchCount exceeds the threshold, but it 
is also called while snapshotting state. So two threads may modify upload and 
batchCount without synchronization.

Please correct me if I am wrong.

——
wangsan
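The second concern (concurrent flush from the task thread and the checkpoint thread) can be sketched with a lock guarding the shared batch state. This is a hypothetical illustration of the synchronization idea, not the actual JDBCOutputFormat code; names like writeRecord/flush mirror the description above:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: writeRecord() (task thread) and flush() (checkpoint
// thread) both touch the batch buffer, so both must hold the same lock to
// avoid losing or double-counting records.
public class SynchronizedBatchSketch {
    private final List<String> batch = new ArrayList<>();
    private final int batchSize = 2;
    private int flushedRows = 0;

    public synchronized void writeRecord(String row) {
        batch.add(row);
        if (batch.size() >= batchSize) {
            flush(); // reentrant: same lock already held
        }
    }

    // Also called from the snapshot path; 'synchronized' makes the two
    // call sites safe against each other.
    public synchronized void flush() {
        flushedRows += batch.size(); // stand-in for PreparedStatement.executeBatch()
        batch.clear();
    }

    public synchronized int getFlushedRows() {
        return flushedRows;
    }

    public static void main(String[] args) {
        SynchronizedBatchSketch sink = new SynchronizedBatchSketch();
        sink.writeRecord("a");
        sink.writeRecord("b"); // batch threshold reached -> flush
        sink.writeRecord("c");
        sink.flush();          // checkpoint-triggered flush
        System.out.println(sink.getFlushedRows()); // prints 3
    }
}
```

For the first concern, one conventional option (an assumption, not something JDBCOutputFormat is known to do) is to check `Connection.isValid(timeout)` before executing a batch and reconnect if the check fails.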