Re: Jobmanager drops upon submitting a jar

2017-04-13 Thread amir bahmanyari
It turned out to be a release gap between the build-environment libraries and the
runtime, nothing else. I am updating everything to the latest and greatest. With the
latest Flink and the current (old) code, Maven reports:

[ERROR]   symbol:   class FlinkKafkaConsumer08

meaning the class needs to be replaced with the latest consumer.
Any suggestions on modernizing the FlinkKafkaConsumer implementation?
Thanks + regards
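For what it's worth, a hedged sketch of the migration (artifact names below follow the Flink 1.2 Kafka connector naming; adjust to your broker and Scala version): FlinkKafkaConsumer08 is replaced by FlinkKafkaConsumer09 or FlinkKafkaConsumer010 from the matching connector artifact, keeping the same (topic, deserialization schema, properties) constructor shape. Assuming a Kafka 0.10 broker and Scala 2.11:

```xml
<!-- Replaces the old flink-connector-kafka-0.8_2.11 dependency.
     Pick the artifact matching your broker: a 0.9 broker would use
     flink-connector-kafka-0.9_2.11 and FlinkKafkaConsumer09 instead. -->
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-connector-kafka-0.10_2.11</artifactId>
  <version>1.2.0</version>
</dependency>
```

In the job code the change is then typically just the class name: new FlinkKafkaConsumer010<>(topic, schema, props) in place of new FlinkKafkaConsumer08<>(topic, schema, props).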

  From: wenlong.lwl 
 To: dev@flink.apache.org; amir bahmanyari  
 Sent: Wednesday, April 12, 2017 8:01 PM
 Subject: Re: Jobmanager drops upon submitting a jar
   
Hi Amir, I think you should first check the JobManager log to make sure that
the JobManager [192.168.56.101:6123] is running well; you may find what is
wrong in the log.

On 13 April 2017 at 08:54, amir bahmanyari 
wrote:

>
>
>
> Hi colleagues, I have a simple test job; when I submit it to the Flink
> cluster, the JobManager seems to disconnect. It's a one-node cluster implemented in a
> VirtualBox CentOS 7 VM. Flink starts fine and everything else looks fine.
> The stack trace follows. I'd appreciate any feedback. Cheers
>
> 17/04/12 15:53:04 INFO node.Node: Connected to Node 192.168.56.101
> 17/04/12 15:53:04 INFO config.ConfigurationProvider: Opened bucket default
> 17/04/12 15:53:04 INFO config.ConfigurationProvider: Closed bucket default
> 17/04/12 15:53:04 INFO node.Node: Disconnected from Node 192.168.56.101
> 17/04/12 15:53:07 INFO kafka.FlinkKafkaConsumerBase: Trying to get topic
> metadata from broker localhost:9092 in try 0/3
> 17/04/12 15:53:07 INFO kafka.FlinkKafkaConsumerBase: Consumer is going to
> read the following topics (with number of partitions): abc_pharma_qa (2),
> 17/04/12 15:53:07 INFO environment.RemoteStreamEnvironment: Running
> remotely at 192.168.56.101:6123
> 17/04/12 15:53:07 INFO program.StandaloneClusterClient: Submitting job
> with JobID: c9c717d6a6d0d5ce9a8758b0fb7dae7c. Waiting for job completion.
> Submitting job with JobID: c9c717d6a6d0d5ce9a8758b0fb7dae7c. Waiting for
> job completion.
> 17/04/12 15:53:07 INFO program.StandaloneClusterClient: Starting client
> actor system.
> 17/04/12 15:53:08 INFO slf4j.Slf4jLogger: Slf4jLogger started
> 17/04/12 15:53:08 INFO Remoting: Starting remoting
> 17/04/12 15:53:08 INFO Remoting: Remoting started; listening on addresses
> :[akka.tcp://flink@127.0.0.1:32776]
> 17/04/12 15:53:08 INFO client.JobClientActor: Received job test (
> c9c717d6a6d0d5ce9a8758b0fb7dae7c).
> 17/04/12 15:53:08 INFO client.JobClientActor: Could not submit job test (
> c9c717d6a6d0d5ce9a8758b0fb7dae7c), because there is no connection to a
> JobManager.
> 17/04/12 15:53:08 INFO client.JobClientActor: Disconnect from JobManager
> null.
>
> 17/04/12 15:53:08 WARN remote.ReliableDeliverySupervisor: Association
> with remote system [akka.tcp://flink@192.168.56.101:6123] has failed,
> address is now gated for [5000] ms. Reason is: [Disassociated].
> 17/04/12 15:54:08 INFO client.JobClientActor: Terminate JobClientActor.
> 17/04/12 15:54:08 INFO client.JobClientActor: Disconnect from JobManager
> null.
> 17/04/12 15:54:08 INFO remote.RemoteActorRefProvider$RemotingTerminator:
> Shutting down remote daemon.
> 17/04/12 15:54:08 INFO remote.RemoteActorRefProvider$RemotingTerminator:
> Remote daemon shut down; proceeding with flushing remote transports.
> 17/04/12 15:54:08 INFO remote.RemoteActorRefProvider$RemotingTerminator:
> Remoting shut down.
> Exception in thread "main" 
> org.apache.flink.client.program.ProgramInvocationException:
> The program execution failed: Communication with JobManager failed: Lost
> connection to the JobManager.
>
>        at org.apache.flink.client.program.ClusterClient.run(
> ClusterClient.java:413)
>        at org.apache.flink.client.program.StandaloneClusterClient.
> submitJob(StandaloneClusterClient.java:92)
>        at org.apache.flink.client.program.ClusterClient.run(
> ClusterClient.java:389)
>        at org.apache.flink.client.program.ClusterClient.run(
> ClusterClient.java:381)
>        at org.apache.flink.streaming.api.environment.
> RemoteStreamEnvironment.executeRemotely(RemoteStreamEnvironment.java:209)
>        at org.apache.flink.streaming.api.environment.
> RemoteStreamEnvironment.execute(RemoteStreamEnvironment.java:173)
>        at com.rfxcel.rts.operations.EventProcessorDriver.start(
> EventProcessorDriver.java:103)
>        at com.rfxcel.rts.operations.EventProcessorDriver.main(
> EventProcessorDriver.java:109)
> Caused by: org.apache.flink.runtime.client.JobExecutionException:
> Communication with JobManager failed: Lost connection to the JobManager.
>        at org.apache.flink.runtime.client.JobClient.
> submitJobAndWait(JobClient.java:137)
>        at org.apache.flink.client.program.ClusterClient.run(
> ClusterClient.java:409)
>        ... 7 more
> Caused by: 
> org.apache.flink.runtime.client.JobClientActorConnectionTimeoutException:
> Lost connection to the 

Re: Jobmanager drops upon submitting a jar

2017-04-13 Thread amir bahmanyari
Thanks so much for your help. Below is what's in the JM logs. I appreciate your feedback.
2017-04-12 15:51:01,723 WARN  org.apache.hadoop.util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2017-04-12 15:51:01,836 INFO  org.apache.flink.runtime.jobmanager.JobManager -
2017-04-12 15:51:01,836 INFO  org.apache.flink.runtime.jobmanager.JobManager -  Starting JobManager (Version: 1.2.0, Rev:1c659cf, Date:29.01.2017 @ 21:19:15 UTC)
2017-04-12 15:51:01,836 INFO  org.apache.flink.runtime.jobmanager.JobManager -  Current user: root
2017-04-12 15:51:01,836 INFO  org.apache.flink.runtime.jobmanager.JobManager -  JVM: Java HotSpot(TM) 64-Bit Server VM - Oracle Corporation - 1.8/25.121-b13
2017-04-12 15:51:01,836 INFO  org.apache.flink.runtime.jobmanager.JobManager -  Maximum heap size: 245 MiBytes
2017-04-12 15:51:01,836 INFO  org.apache.flink.runtime.jobmanager.JobManager -  JAVA_HOME: /opt/software/jdk1.8.0_121
2017-04-12 15:51:01,840 INFO  org.apache.flink.runtime.jobmanager.JobManager -  Hadoop version: 2.7.2
2017-04-12 15:51:01,840 INFO  org.apache.flink.runtime.jobmanager.JobManager -  JVM Options:
2017-04-12 15:51:01,840 INFO  org.apache.flink.runtime.jobmanager.JobManager -     -Xms256m
2017-04-12 15:51:01,841 INFO  org.apache.flink.runtime.jobmanager.JobManager -     -Xmx256m
2017-04-12 15:51:01,841 INFO  org.apache.flink.runtime.jobmanager.JobManager -     -Dlog.file=/opt/software/flink-1.2.0/log/flink-root-jobmanager-0-localhost.localdomain.log
2017-04-12 15:51:01,841 INFO  org.apache.flink.runtime.jobmanager.JobManager -     -Dlog4j.configuration=file:/opt/software/flink-1.2.0/conf/log4j.properties
2017-04-12 15:51:01,841 INFO  org.apache.flink.runtime.jobmanager.JobManager -     -Dlogback.configurationFile=file:/opt/software/flink-1.2.0/conf/logback.xml
2017-04-12 15:51:01,841 INFO  org.apache.flink.runtime.jobmanager.JobManager -  Program Arguments:
2017-04-12 15:51:01,841 INFO  org.apache.flink.runtime.jobmanager.JobManager -     --configDir
2017-04-12 15:51:01,841 INFO  org.apache.flink.runtime.jobmanager.JobManager -     /opt/software/flink-1.2.0/conf
2017-04-12 15:51:01,841 INFO  org.apache.flink.runtime.jobmanager.JobManager -     --executionMode
2017-04-12 15:51:01,841 INFO  org.apache.flink.runtime.jobmanager.JobManager -     cluster
2017-04-12 15:51:01,841 INFO  org.apache.flink.runtime.jobmanager.JobManager -  Classpath: /opt/software/flink-1.2.0/lib/log4j-1.2.17.jar:/opt/software/flink-1.2.0/lib/flink-python_2.11-1.2.0.jar:/opt/software/flink-1.2.0/lib/flink-dist_2.11-1.2.0.jar:/opt/software/flink-1.2.0/lib/slf4j-log4j12-1.7.7.jar:::
2017-04-12 15:51:01,841 INFO  org.apache.flink.runtime.jobmanager.JobManager -
2017-04-12 15:51:01,842 INFO  org.apache.flink.runtime.jobmanager.JobManager - Registered UNIX signal handlers for [TERM, HUP, INT]
2017-04-12 15:51:02,025 INFO  org.apache.flink.runtime.jobmanager.JobManager - Loading configuration from /opt/software/flink-1.2.0/conf
2017-04-12 15:51:02,031 INFO  org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.rpc.address, localhost
2017-04-12 15:51:02,031 INFO  org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.rpc.port, 6123
2017-04-12 15:51:02,031 INFO  org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.heap.mb, 256
2017-04-12 15:51:02,032 INFO  org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: taskmanager.heap.mb, 512
2017-04-12 15:51:02,032 INFO  org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: taskmanager.numberOfTaskSlots, 1
2017-04-12 15:51:02,032 INFO  org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: taskmanager.memory.preallocate, false
2017-04-12 15:51:02,032 INFO  org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: parallelism.default, 1
2017-04-12 15:51:02,032 INFO  org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.web.port, 8081
2017-04-12 15:51:02,043 INFO  org.apache.flink.runtime.jobmanager.JobManager - Starting JobManager without high-availability
2017-04-12 15:51:02,070 INFO
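One detail worth cross-checking against the client-side trace in the earlier message (a hedged suggestion, not a diagnosis confirmed in this thread): the JobManager loads jobmanager.rpc.address, localhost, while the client submits to 192.168.56.101:6123. If the JobManager actor only registers under the localhost name, remote submissions can be dropped; the corresponding flink-conf.yaml change would look like:

```yaml
# flink-conf.yaml -- hypothetical fix; 192.168.56.101 is assumed to be the
# VM address the client uses, as seen earlier in this thread
jobmanager.rpc.address: 192.168.56.101   # was: localhost
jobmanager.rpc.port: 6123
```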

[jira] [Created] (FLINK-6305) flink-ml tests are executed in all flink-fast-test profiles

2017-04-13 Thread Nico Kruber (JIRA)
Nico Kruber created FLINK-6305:
--

 Summary: flink-ml tests are executed in all flink-fast-test 
profiles
 Key: FLINK-6305
 URL: https://issues.apache.org/jira/browse/FLINK-6305
 Project: Flink
  Issue Type: Improvement
  Components: Tests
Affects Versions: 1.3.0
Reporter: Nico Kruber
Priority: Minor


The {{flink-fast-tests-\*}} profiles partition the unit tests based on their
starting letter. However, this does not affect the Scala tests run via the
ScalaTest plugin, and therefore the {{flink-ml}} tests are executed in all three
currently existing profiles. While this may not be that serious, these tests run for
about 2.5 minutes on Travis CI, which could be saved in 2/3 of the profiles there.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


RE: Sliding Window - Weird behaviour

2017-04-13 Thread Radu Tudoran
Hi,

You need to implement your own trigger. You do this when you create your window,
by assigning the trigger. In your custom trigger you would need to implement the
desired logic in the onElement method.
You can keep a counter that you increment for each element, and FIRE only when
it reaches the number of elements at which you want the window to be evaluated.

You can take a look at the existing triggers:
https://github.com/apache/flink/tree/1875cac03042dad4a4c47b0de8364f02fbe457c6/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/windowing/triggers
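As a rough, non-Flink illustration of that counting logic (plain Python standing in for a custom trigger's onElement; the function name and structure here are illustrative, not the actual Trigger API):

```python
from collections import deque

def count_window_fire_when_full(stream, size, slide):
    """Hold fire until the buffer first reaches `size` elements, then
    fire every `slide` elements -- suppressing the partial first windows."""
    buf = deque(maxlen=size)  # window contents, capped at `size`
    out = []
    for count, x in enumerate(stream, start=1):
        buf.append(x)
        # the onElement-style check: only fire once the counter has
        # reached the threshold, and then on every slide boundary
        if count >= size and (count - size) % slide == 0:
            out.append(list(buf))
    return out

print(count_window_fire_when_full([1, 2, 3, 4, 5], size=2, slide=1))
# -> [[1, 2], [2, 3], [3, 4], [4, 5]]
```

This produces the output harish asked for (1,2 / 2,3 / 3,4 / 4,5) without the leading partial window.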
 


-Original Message-
From: madhairsilence [mailto:harish.kum...@tcs.com] 
Sent: Thursday, April 13, 2017 12:25 PM
To: dev@flink.apache.org
Subject: Re: Sliding Window - Weird behaviour

Hi Xingcan

Thanks for the answer. But to my understanding,

countWindow(4,2) should wait for 4 elements (or a window of not more than 4
elements) and, once the window is ready, slide by two items.

Now, if I stop asking "why" questions and focus on my current problem: how do
I achieve this expected output?

Stream : 1,2,3,4,5,6,7,8...

Output:
1,2
2,3
3,4
4,5...



--
View this message in context:
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/Sliding-Window-Weird-behaviour-tp17013p17019.html
Sent from the Apache Flink Mailing List archive at Nabble.com.


Re: Sliding Window - Weird behaviour

2017-04-13 Thread madhairsilence
Hi Xingcan

Thanks for the answer. But to my understanding,

countWindow(4,2) should wait for 4 elements (or a window of not more than 4
elements) and, once the window is ready, slide by two items.

Now, if I stop asking "why" questions and focus on my current problem: how do
I achieve this expected output?

Stream : 1,2,3,4,5,6,7,8...

Output:
1,2
2,3
3,4
4,5...



--
View this message in context:
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/Sliding-Window-Weird-behaviour-tp17013p17019.html
Sent from the Apache Flink Mailing List archive at Nabble.com.


Re: Sliding Window - Weird behaviour

2017-04-13 Thread Xingcan Cui
Hi harish,

I will not argue about the correctness of the results, but will just tell you
why this happens.

The countWindow(2, 1) can be regarded as two separate processes: 1)
maintain a window whose size does *not exceed* 2, and 2) trigger window
evaluation for every single record.

In Flink, the two processes actually execute independently, and that is why
the first record, 1, triggered a window evaluation in your example.

Hope this helps,
Xingcan
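The two independent processes described above can be sketched outside of Flink (plain Python, only mimicking countWindow's observable behaviour, not the actual runtime):

```python
from collections import deque

def count_window(stream, size, slide):
    """Mimic countWindow(size, slide): the buffer never exceeds `size`
    (process 1), and a trigger fires every `slide` records (process 2) --
    independently of whether the buffer is full yet."""
    buf = deque(maxlen=size)
    out = []
    for i, x in enumerate(stream, start=1):
        buf.append(x)
        if i % slide == 0:  # trigger fires regardless of buffer fill level
            out.append(list(buf))
    return out

print(count_window([1, 2, 3, 4, 5], size=2, slide=1))
# -> [[1], [1, 2], [2, 3], [3, 4], [4, 5]]
```

The leading partial window [1] reproduces exactly the behaviour harish observed.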

On Thu, Apr 13, 2017 at 4:43 PM, madhairsilence 
wrote:

> I have a datastream
> 1,2,3,4,5,6,7
>
> I applied a sliding countWindow as
> inputStream.keyBy("num").countWindow(2,1)
>
> I expect an output as
> 1,2
> 2,3
> 3,4
>
> But am getting an output as
> 1
> 1,2
> 2,3
> 3,4
>
> Why does the data slide first and then accumulate the window size
>
>
>
> --
> View this message in context:
> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/Sliding-Window-Weird-behaviour-tp17013.html
> Sent from the Apache Flink Mailing List archive at Nabble.com.
>


[jira] [Created] (FLINK-6304) Clear a lot of useless import

2017-04-13 Thread sunjincheng (JIRA)
sunjincheng created FLINK-6304:
--

 Summary: Clear a lot of useless import
 Key: FLINK-6304
 URL: https://issues.apache.org/jira/browse/FLINK-6304
 Project: Flink
  Issue Type: Improvement
  Components: Table API & SQL
Reporter: sunjincheng
Assignee: sunjincheng


There are some classes (listed below) that have unused imports. I want to clean
them up before the 1.3 release.
{code}
DataSetSlideTimeWindowAggFlatMapFunction
CommonScan
FlinkRel
StreamTableSourceScanRule
DataStreamOverAggregateRule
DataStreamAggregateRule
{code}





Re: [VOTE] Release Apache Flink 1.2.1 (RC2)

2017-04-13 Thread Gyula Fóra
Hi,

Unfortunately I cannot test-run the RC, as I am on vacation. But we have been
running pretty much the same build (+1-2 commits) in production for some time
now.

+1 from me

Gyula

On Thu, Apr 13, 2017, 08:27 Andrew Psaltis  wrote:

> +1 -- checked out all code, built with all tests, ran local cluster,
> deployed example streaming jobs
>
> On Thu, Apr 13, 2017 at 2:26 AM, Andrew Psaltis 
> wrote:
>
> > Ted -- I did not see those errors. My environment is:
> > Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5;
> > 2015-11-10T11:41:47-05:00)
> > Maven home: /usr/local/Cellar/maven/3.3.9/libexec
> > Java version: 1.8.0_121, vendor: Oracle Corporation
> > Java home: /Library/Java/JavaVirtualMachines/jdk1.8.0_
> > 121.jdk/Contents/Home/jre
> > Default locale: en_US, platform encoding: UTF-8
> > OS name: "mac os x", version: "10.12.3", arch: "x86_64", family: "mac"
> >
> >
> >
> >
> > On Thu, Apr 13, 2017 at 12:36 AM, Ted Yu  wrote:
> >
> >> I ran test suite where the following failed:
> >>
> >> Failed tests:
> >>   StreamExecutionEnvironmentTest.testDefaultParallelismIsDefault:143
> >> expected:<-1> but was:<24>
> >>
> >> StreamExecutionEnvironmentTest.testMaxParallelismMustBeBigge
> >> rEqualParallelism
> >> Expected test to throw an instance of java.lang.IllegalArgumentException
> >>
> >> StreamExecutionEnvironmentTest.testParallelismMustBeSmallerE
> >> qualMaxParallelism
> >> Expected test to throw an instance of java.lang.IllegalArgumentException
> >>
> >> This is what I used:
> >>
> >> Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5;
> >> 2015-11-10T16:41:47+00:00)
> >> Java version: 1.8.0_101, vendor: Oracle Corporation
> >>
> >> Have anyone else seen the above failures ?
> >>
> >> Cheers
> >>
> >> On Wed, Apr 12, 2017 at 4:06 PM, Robert Metzger 
> >> wrote:
> >>
> >> > Dear Flink community,
> >> >
> >> > Please vote on releasing the following candidate as Apache Flink
> version
> >> > 1.2
> >> > .1.
> >> >
> >> > The commit to be voted on:
> >> > 76eba4e0
> >> > (http://git-wip-us.apache.org/repos/asf/flink/commit/76eba4e0)
> >> >
> >> > Branch:
> >> > release-1.2.1-rc2
> >> >
> >> > The release artifacts to be voted on can be found at:
> >> > http://people.apache.org/~rmetzger/flink-1.2.1-rc2/
> >> >
> >> >
> >> > The release artifacts are signed with the key with fingerprint
> D9839159:
> >> > http://www.apache.org/dist/flink/KEYS
> >> >
> >> > The staging repository for this release can be found at:
> >> > https://repository.apache.org/content/repositories/orgapacheflink-1117
> >> >
> >> > -
> >> >
> >> >
> >> > The vote ends on Tuesday, 1pm CET.
> >> >
> >> > [ ] +1 Release this package as Apache Flink 1.2.1
> >> > [ ] -1 Do not release this package, because ...
> >> >
> >>
> >
> >
> >
> > --
> > Thanks,
> > Andrew
> >
> > Subscribe to my book: Streaming Data 
> > 
> > twitter: @itmdata 
> >
>
>
>
> --
> Thanks,
> Andrew
>
> Subscribe to my book: Streaming Data 
> 
> twitter: @itmdata 
>


[jira] [Created] (FLINK-6303) Documentation support build in docker on OSX

2017-04-13 Thread Tao Meng (JIRA)
Tao Meng created FLINK-6303:
---

 Summary: Documentation support build in docker on OSX
 Key: FLINK-6303
 URL: https://issues.apache.org/jira/browse/FLINK-6303
 Project: Flink
  Issue Type: Bug
  Components: Documentation
Reporter: Tao Meng
Assignee: Tao Meng
Priority: Trivial


Currently the Docker-based docs build only supports Linux, because {{ENV HOME /home/${USER_NAME}}} only 
works on Linux. We need to change {{/home/${USER_NAME}}} to {{${HOME}}}.





[jira] [Created] (FLINK-6302) Documentation build error on ruby 2.4

2017-04-13 Thread Tao Meng (JIRA)
Tao Meng created FLINK-6302:
---

 Summary: Documentation build error on ruby 2.4
 Key: FLINK-6302
 URL: https://issues.apache.org/jira/browse/FLINK-6302
 Project: Flink
  Issue Type: Bug
  Components: Documentation
Reporter: Tao Meng
Assignee: Tao Meng
Priority: Trivial


{code}
/usr/local/Cellar/ruby/2.4.1_1/include/ruby-2.4.0/ruby/ruby.h:981:28: note: expanded from macro 'RSTRING_LEN'
 RSTRING(str)->as.heap.len)
 ~~^~~
yajl_ext.c:881:22: error: use of undeclared identifier 'rb_cFixnum'
rb_define_method(rb_cFixnum, "to_json", rb_yajl_json_ext_fixnum_to_json, -1);
 ^
17 warnings and 1 error generated.
make: *** [yajl_ext.o] Error 1

make failed, exit code 2
{code}

We should update Gemfile.lock.





Sliding Window - Weird behaviour

2017-04-13 Thread madhairsilence
I have a datastream
1,2,3,4,5,6,7

I applied a sliding countWindow as
inputStream.keyBy("num").countWindow(2,1)

I expect an output as
1,2
2,3
3,4

But I am getting this output:
1
1,2
2,3
3,4

Why does the data slide first and then accumulate up to the window size?



--
View this message in context:
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/Sliding-Window-Weird-behaviour-tp17013.html
Sent from the Apache Flink Mailing List archive at Nabble.com.


Re: [VOTE] Release Apache Flink 1.2.1 (RC2)

2017-04-13 Thread Andrew Psaltis
+1 -- checked out all code, built with all tests, ran local cluster,
deployed example streaming jobs

On Thu, Apr 13, 2017 at 2:26 AM, Andrew Psaltis 
wrote:

> Ted -- I did not see those errors. My environment is:
> Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5;
> 2015-11-10T11:41:47-05:00)
> Maven home: /usr/local/Cellar/maven/3.3.9/libexec
> Java version: 1.8.0_121, vendor: Oracle Corporation
> Java home: /Library/Java/JavaVirtualMachines/jdk1.8.0_
> 121.jdk/Contents/Home/jre
> Default locale: en_US, platform encoding: UTF-8
> OS name: "mac os x", version: "10.12.3", arch: "x86_64", family: "mac"
>
>
>
>
> On Thu, Apr 13, 2017 at 12:36 AM, Ted Yu  wrote:
>
>> I ran test suite where the following failed:
>>
>> Failed tests:
>>   StreamExecutionEnvironmentTest.testDefaultParallelismIsDefault:143
>> expected:<-1> but was:<24>
>>
>> StreamExecutionEnvironmentTest.testMaxParallelismMustBeBigge
>> rEqualParallelism
>> Expected test to throw an instance of java.lang.IllegalArgumentException
>>
>> StreamExecutionEnvironmentTest.testParallelismMustBeSmallerE
>> qualMaxParallelism
>> Expected test to throw an instance of java.lang.IllegalArgumentException
>>
>> This is what I used:
>>
>> Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5;
>> 2015-11-10T16:41:47+00:00)
>> Java version: 1.8.0_101, vendor: Oracle Corporation
>>
>> Have anyone else seen the above failures ?
>>
>> Cheers
>>
>> On Wed, Apr 12, 2017 at 4:06 PM, Robert Metzger 
>> wrote:
>>
>> > Dear Flink community,
>> >
>> > Please vote on releasing the following candidate as Apache Flink version
>> > 1.2
>> > .1.
>> >
>> > The commit to be voted on:
>> > 76eba4e0
>> > (http://git-wip-us.apache.org/repos/asf/flink/commit/76eba4e0)
>> >
>> > Branch:
>> > release-1.2.1-rc2
>> >
>> > The release artifacts to be voted on can be found at:
>> > http://people.apache.org/~rmetzger/flink-1.2.1-rc2/
>> >
>> >
>> > The release artifacts are signed with the key with fingerprint D9839159:
>> > http://www.apache.org/dist/flink/KEYS
>> >
>> > The staging repository for this release can be found at:
>> > https://repository.apache.org/content/repositories/orgapacheflink-1117
>> >
>> > -
>> >
>> >
>> > The vote ends on Tuesday, 1pm CET.
>> >
>> > [ ] +1 Release this package as Apache Flink 1.2.1
>> > [ ] -1 Do not release this package, because ...
>> >
>>
>
>
>
> --
> Thanks,
> Andrew
>
> Subscribe to my book: Streaming Data 
> 
> twitter: @itmdata 
>



-- 
Thanks,
Andrew

Subscribe to my book: Streaming Data 

twitter: @itmdata 


Re: [VOTE] Release Apache Flink 1.2.1 (RC2)

2017-04-13 Thread Andrew Psaltis
Ted -- I did not see those errors. My environment is:
Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5;
2015-11-10T11:41:47-05:00)
Maven home: /usr/local/Cellar/maven/3.3.9/libexec
Java version: 1.8.0_121, vendor: Oracle Corporation
Java home:
/Library/Java/JavaVirtualMachines/jdk1.8.0_121.jdk/Contents/Home/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "mac os x", version: "10.12.3", arch: "x86_64", family: "mac"




On Thu, Apr 13, 2017 at 12:36 AM, Ted Yu  wrote:

> I ran test suite where the following failed:
>
> Failed tests:
>   StreamExecutionEnvironmentTest.testDefaultParallelismIsDefault:143
> expected:<-1> but was:<24>
>
> StreamExecutionEnvironmentTest.testMaxParallelismMustBeBigger
> EqualParallelism
> Expected test to throw an instance of java.lang.IllegalArgumentException
>
> StreamExecutionEnvironmentTest.testParallelismMustBeSmallerEq
> ualMaxParallelism
> Expected test to throw an instance of java.lang.IllegalArgumentException
>
> This is what I used:
>
> Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5;
> 2015-11-10T16:41:47+00:00)
> Java version: 1.8.0_101, vendor: Oracle Corporation
>
> Have anyone else seen the above failures ?
>
> Cheers
>
> On Wed, Apr 12, 2017 at 4:06 PM, Robert Metzger 
> wrote:
>
> > Dear Flink community,
> >
> > Please vote on releasing the following candidate as Apache Flink version
> > 1.2
> > .1.
> >
> > The commit to be voted on:
> > 76eba4e0
> > (http://git-wip-us.apache.org/repos/asf/flink/commit/76eba4e0)
> >
> > Branch:
> > release-1.2.1-rc2
> >
> > The release artifacts to be voted on can be found at:
> > http://people.apache.org/~rmetzger/flink-1.2.1-rc2/
> >
> >
> > The release artifacts are signed with the key with fingerprint D9839159:
> > http://www.apache.org/dist/flink/KEYS
> >
> > The staging repository for this release can be found at:
> > https://repository.apache.org/content/repositories/orgapacheflink-1117
> >
> > -
> >
> >
> > The vote ends on Tuesday, 1pm CET.
> >
> > [ ] +1 Release this package as Apache Flink 1.2.1
> > [ ] -1 Do not release this package, because ...
> >
>



-- 
Thanks,
Andrew

Subscribe to my book: Streaming Data 

twitter: @itmdata