[jira] [Created] (FLINK-8336) YarnFileStageTestS3ITCase.testRecursiveUploadForYarnS3 test instability

2018-01-01 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-8336:


 Summary: YarnFileStageTestS3ITCase.testRecursiveUploadForYarnS3 
test instability
 Key: FLINK-8336
 URL: https://issues.apache.org/jira/browse/FLINK-8336
 Project: Flink
  Issue Type: Bug
  Components: FileSystem, Tests, YARN
Affects Versions: 1.5.0
Reporter: Till Rohrmann
Priority: Critical
 Fix For: 1.5.0


The {{YarnFileStageTestS3ITCase.testRecursiveUploadForYarnS3}} test fails on 
Travis. I suspect that this has something to do with the consistency 
guarantees S3 gives us.

https://travis-ci.org/tillrohrmann/flink/jobs/323930297



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-8317) Enable Triggering of Savepoints via RestfulGateway

2018-01-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307518#comment-16307518
 ] 

ASF GitHub Bot commented on FLINK-8317:
---

GitHub user GJL opened a pull request:

https://github.com/apache/flink/pull/5223

[FLINK-8317][flip6] Implement Triggering of Savepoints

## What is the purpose of the change

*Implement triggering of savepoints through HTTP and through command line 
in FLIP-6 mode. This PR is based on #5207.*

CC: @tillrohrmann 

## Brief change log

- *Allow triggering of savepoints through RestfulGateway.*
- *Implement REST handlers to trigger and query the status of savepoints.*
- *Implement savepoint command in RestClusterClient.*


## Verifying this change

This change added tests and can be verified as follows:

  - *Added unit tests for the REST handlers and `RestClusterClient`*
  - *Manually deployed the `SocketWindowWordCount` job and triggered a 
savepoint using Flink's command line client and `curl`*

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (yes / **no**)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
  - The serializers: (yes / **no** / don't know)
  - The runtime per-record code paths (performance sensitive): (yes / 
**no** / don't know)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
  - The S3 file system connector: (yes / **no** / don't know)

## Documentation

  - Does this pull request introduce a new feature? (yes / **no**)
  - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/GJL/flink FLINK-8317-2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5223.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5223


commit e91f15fcbbe52d6d47cc1ba3d35ae4768fc6309d
Author: gyao 
Date:   2017-12-19T17:58:53Z

[FLINK-8234][flip6] Cache JobExecutionResult in Dispatcher

- Introduce a new JobExecutionResult used by the JobMaster to forward the 
  information in the already existing JobExecutionResult.
- Always cache a JobExecutionResult, even in case of job failures; in that 
  case, the serialized exception is stored additionally.
- Introduce new methods in RestfulGateway to allow retrieval of cached 
  JobExecutionResults.
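A minimal sketch of the caching idea described in this commit message; the class and member names below are illustrative stand-ins, not Flink's actual Dispatcher internals:

{code:java}
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative only: a per-job result cache that always stores an entry,
// keeping the serialized exception when the job failed.
public class JobResultCache {

    public static final class CachedResult {
        final Object result;               // stand-in for the job's result payload
        final byte[] serializedException;  // non-null only for failed jobs

        CachedResult(Object result, byte[] serializedException) {
            this.result = result;
            this.serializedException = serializedException;
        }

        public boolean isFailure() {
            return serializedException != null;
        }
    }

    private final Map<String, CachedResult> cache = new ConcurrentHashMap<>();

    /** Cache the outcome of a terminated job, successful or not. */
    public void jobTerminated(String jobId, Object result, byte[] serializedException) {
        cache.put(jobId, new CachedResult(result, serializedException));
    }

    /** Retrieval path that a RestfulGateway-style method could delegate to. */
    public Optional<CachedResult> get(String jobId) {
        return Optional.ofNullable(cache.get(jobId));
    }
}
{code}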

commit 748745ac3521a20040cbda4056dfd9c53bc24a82
Author: gyao 
Date:   2017-12-20T13:44:03Z

[FLINK-8233][flip6] Add JobExecutionResultHandler

- Allow retrieval of the JobExecutionResult cached in Dispatcher.
- Implement serializer and deserializer for JobExecutionResult.

commit adf091a2770f42d6f8a0c19ab88cc7a208943a32
Author: gyao 
Date:   2017-12-20T13:44:26Z

[hotfix] Clean up ExecutionGraph

- Remove unnecessary throws clause.
- Format whitespace.

commit f5c28527b3a1a0c8ec52f2a5616ebb634397b69c
Author: gyao 
Date:   2017-12-22T23:02:10Z

[FLINK-8299][flip6] Retrieve JobExecutionResult after job submission

commit 55d920f628d7ef3f5b0db7fd843dfdd2d96a3917
Author: gyao 
Date:   2018-01-01T17:59:42Z

[FLINK-8317][flip6] Implement savepoints in RestClusterClient

Allow triggering of savepoints through RestfulGateway. Implement REST 
handlers
to trigger and query the status of savepoints. Implement
savepoint command in RestClusterClient.




> Enable Triggering of Savepoints via RestfulGateway
> --
>
> Key: FLINK-8317
> URL: https://issues.apache.org/jira/browse/FLINK-8317
> Project: Flink
>  Issue Type: New Feature
>  Components: Distributed Coordination, REST
>Affects Versions: 1.5.0
>Reporter: Gary Yao
>Assignee: Gary Yao
>  Labels: flip-6
> Fix For: 1.5.0
>
>
> Enable triggering of savepoints in FLIP-6 mode via RestfulGateway:
> * Add a {{CompletableFuture triggerSavepoint(long timestamp, String targetDirectory)}} method to the interface
> * Implement the method in {{Dispatcher}} and {{JobMaster}}
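A minimal Java sketch of the interface addition described above, reduced to just this method; assuming the returned future completes with the savepoint path, and omitting the many other methods of the actual interface:

{code:java}
import java.util.concurrent.CompletableFuture;

// Sketch only: the real RestfulGateway declares many more methods.
public interface RestfulGateway {

    /**
     * Triggers a savepoint for the job and eventually completes with the
     * path the savepoint was written to (or exceptionally on failure).
     */
    CompletableFuture<String> triggerSavepoint(long timestamp, String targetDirectory);
}
{code}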



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #5223: [FLINK-8317][flip6] Implement Triggering of Savepo...

2018-01-01 Thread GJL
GitHub user GJL opened a pull request:

https://github.com/apache/flink/pull/5223

[FLINK-8317][flip6] Implement Triggering of Savepoints

## What is the purpose of the change

*Implement triggering of savepoints through HTTP and through command line 
in FLIP-6 mode. This PR is based on #5207.*

CC: @tillrohrmann 

## Brief change log

- *Allow triggering of savepoints through RestfulGateway.*
- *Implement REST handlers to trigger and query the status of savepoints.*
- *Implement savepoint command in RestClusterClient.*


## Verifying this change

This change added tests and can be verified as follows:

  - *Added unit tests for the REST handlers and `RestClusterClient`*
  - *Manually deployed the `SocketWindowWordCount` job and triggered a 
savepoint using Flink's command line client and `curl`*

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (yes / **no**)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
  - The serializers: (yes / **no** / don't know)
  - The runtime per-record code paths (performance sensitive): (yes / 
**no** / don't know)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
  - The S3 file system connector: (yes / **no** / don't know)

## Documentation

  - Does this pull request introduce a new feature? (yes / **no**)
  - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/GJL/flink FLINK-8317-2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5223.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5223


commit e91f15fcbbe52d6d47cc1ba3d35ae4768fc6309d
Author: gyao 
Date:   2017-12-19T17:58:53Z

[FLINK-8234][flip6] Cache JobExecutionResult in Dispatcher

- Introduce a new JobExecutionResult used by the JobMaster to forward the 
  information in the already existing JobExecutionResult.
- Always cache a JobExecutionResult, even in case of job failures; in that 
  case, the serialized exception is stored additionally.
- Introduce new methods in RestfulGateway to allow retrieval of cached 
  JobExecutionResults.

commit 748745ac3521a20040cbda4056dfd9c53bc24a82
Author: gyao 
Date:   2017-12-20T13:44:03Z

[FLINK-8233][flip6] Add JobExecutionResultHandler

- Allow retrieval of the JobExecutionResult cached in Dispatcher.
- Implement serializer and deserializer for JobExecutionResult.

commit adf091a2770f42d6f8a0c19ab88cc7a208943a32
Author: gyao 
Date:   2017-12-20T13:44:26Z

[hotfix] Clean up ExecutionGraph

- Remove unnecessary throws clause.
- Format whitespace.

commit f5c28527b3a1a0c8ec52f2a5616ebb634397b69c
Author: gyao 
Date:   2017-12-22T23:02:10Z

[FLINK-8299][flip6] Retrieve JobExecutionResult after job submission

commit 55d920f628d7ef3f5b0db7fd843dfdd2d96a3917
Author: gyao 
Date:   2018-01-01T17:59:42Z

[FLINK-8317][flip6] Implement savepoints in RestClusterClient

Allow triggering of savepoints through RestfulGateway. Implement REST 
handlers
to trigger and query the status of savepoints. Implement
savepoint command in RestClusterClient.




---


[jira] [Updated] (FLINK-8335) Upgrade hbase connector dependency to 1.4.0

2018-01-01 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated FLINK-8335:
--
Description: 
HBase 1.4.0 has been released.

1.4.0 shows speed improvements over previous 1.x releases.

http://search-hadoop.com/m/HBase/YGbbBxedD1Mnm8t?subj=Re+VOTE+The+second+HBase+1+4+0+release+candidate+RC1+is+available

This issue is to upgrade the dependency to 1.4.0.

  was:
hbase 1.4.0 has been released.

1.4.0 shows speed improvement over previous 1.x releases.

This issue is to upgrade the dependency to 1.4.0


> Upgrade hbase connector dependency to 1.4.0
> ---
>
> Key: FLINK-8335
> URL: https://issues.apache.org/jira/browse/FLINK-8335
> Project: Flink
>  Issue Type: Improvement
>Reporter: Ted Yu
>Priority: Minor
>
> HBase 1.4.0 has been released.
> 1.4.0 shows speed improvements over previous 1.x releases.
> http://search-hadoop.com/m/HBase/YGbbBxedD1Mnm8t?subj=Re+VOTE+The+second+HBase+1+4+0+release+candidate+RC1+is+available
> This issue is to upgrade the dependency to 1.4.0.
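For illustration, the upgrade amounts to bumping the HBase client version used by the flink-hbase connector; the property and artifact layout below is an assumption about Flink's build, not verified against its poms:

{code:xml}
<!-- Illustrative sketch only: bump the HBase dependency to 1.4.0 -->
<properties>
  <hbase.version>1.4.0</hbase.version>
</properties>

<dependency>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase-client</artifactId>
  <version>${hbase.version}</version>
</dependency>
{code}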



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-8335) Upgrade hbase connector dependency to 1.4.0

2018-01-01 Thread Ted Yu (JIRA)
Ted Yu created FLINK-8335:
-

 Summary: Upgrade hbase connector dependency to 1.4.0
 Key: FLINK-8335
 URL: https://issues.apache.org/jira/browse/FLINK-8335
 Project: Flink
  Issue Type: Improvement
Reporter: Ted Yu
Priority: Minor


HBase 1.4.0 has been released.

1.4.0 shows speed improvements over previous 1.x releases.

This issue is to upgrade the dependency to 1.4.0.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-8334) Elasticsearch Connector throwing java.lang.ClassNotFoundException: org.jboss.netty.channel.socket.nio.SocketSendBufferPool$GatheringSendBuffer

2018-01-01 Thread Bhaskar Divya (JIRA)
Bhaskar Divya created FLINK-8334:


 Summary: Elasticsearch Connector throwing 
java.lang.ClassNotFoundException: 
org.jboss.netty.channel.socket.nio.SocketSendBufferPool$GatheringSendBuffer
 Key: FLINK-8334
 URL: https://issues.apache.org/jira/browse/FLINK-8334
 Project: Flink
  Issue Type: Bug
  Components: ElasticSearch Connector
Affects Versions: 1.4.0
 Environment: Elasticsearch 5.1.2 running in a Docker container; Flink is 
deployed in a separate Docker container
Reporter: Bhaskar Divya


I have an Elasticsearch sink configured. When a job is submitted, it goes into 
a failed status within a few seconds. 
The following exception is shown on the job screen:

{code:java}
java.lang.RuntimeException: Elasticsearch client is not connected to any 
Elasticsearch nodes!
at 
org.apache.flink.streaming.connectors.elasticsearch5.Elasticsearch5ApiCallBridge.createClient(Elasticsearch5ApiCallBridge.java:80)
at 
org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkBase.open(ElasticsearchSinkBase.java:281)
at 
org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:36)
at 
org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:102)
at 
org.apache.flink.streaming.api.operators.StreamSink.open(StreamSink.java:48)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.openAllOperators(StreamTask.java:393)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:254)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:718)
at java.lang.Thread.run(Thread.java:748)
{code}

In the logs, the following stack trace is shown:

{code}
2018-01-01 12:15:14,432 INFO  
org.elasticsearch.client.transport.TransportClientNodesService  - failed to get 
node info for 
{#transport#-1}{8IZTMPcSRCyKRynhfyN2fA}{192.168.99.100}{192.168.99.100:9300}, 
disconnecting...
NodeDisconnectedException[[][192.168.99.100:9300][cluster:monitor/nodes/liveness]
 disconnected]
2018-01-01 12:15:19,433 ERROR org.elasticsearch.transport.netty3.Netty3Utils
- fatal error on the network layer
at 
org.elasticsearch.transport.netty3.Netty3Utils.maybeDie(Netty3Utils.java:195)
at 
org.elasticsearch.transport.netty3.Netty3MessageChannelHandler.exceptionCaught(Netty3MessageChannelHandler.java:82)
at 
org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:112)
at 
org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at 
org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at 
org.jboss.netty.handler.codec.frame.FrameDecoder.exceptionCaught(FrameDecoder.java:377)
at 
org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:112)
at 
org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at 
org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
at 
org.jboss.netty.channel.Channels.fireExceptionCaught(Channels.java:525)
at 
org.jboss.netty.channel.socket.nio.AbstractNioWorker.write0(AbstractNioWorker.java:291)
at 
org.jboss.netty.channel.socket.nio.AbstractNioWorker.writeFromTaskLoop(AbstractNioWorker.java:151)
at 
org.jboss.netty.channel.socket.nio.AbstractNioChannel$WriteTask.run(AbstractNioChannel.java:292)
at 
org.jboss.netty.channel.socket.nio.AbstractNioSelector.processTaskQueue(AbstractNioSelector.java:391)
at 
org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:315)
at 
org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at 
org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at 
org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2018-01-01 12:15:19,448 WARN  
org.elasticsearch.transport.netty3.Netty3Transport- exception 
caught on transport layer [[id: 0xef889995, /172.17.0.4:48450 => 
/192.168.99.100:9300]], closing connection
ElasticsearchException[java.lang.NoClassDefFoundError: 
org/jboss/netty/channel/socket/nio/SocketSendBufferPool$GatheringSendBuffer]; 
nested: 
NoClassDefFoundError[org/jboss/netty/channel/socket/nio/SocketSendBufferPool$GatheringSendBuffer];
 nested: 
ClassNotFoundExce

[jira] [Commented] (FLINK-8268) Test instability for 'TwoPhaseCommitSinkFunctionTest'

2018-01-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307426#comment-16307426
 ] 

ASF GitHub Bot commented on FLINK-8268:
---

Github user GJL commented on a diff in the pull request:

https://github.com/apache/flink/pull/5193#discussion_r159153854
  
--- Diff: 
flink-streaming-java/src/test/java/org/apache/flink/streaming/api/functions/sink/ContentDump.java
 ---
@@ -0,0 +1,134 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.api.functions.sink;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+import static org.apache.flink.util.Preconditions.checkArgument;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+import static org.apache.flink.util.Preconditions.checkState;
+
+/**
+ * Utility class to simulate in memory file like writes, flushes and 
closing.
+ */
+public class ContentDump {
+	private boolean writable = true;
+	private Map<String, List<String>> filesContent = new HashMap<>();
+
+	public Set<String> listFiles() {
+		return filesContent.keySet();
--- End diff --

Maybe return a copy here because the key set will reflect changes in the 
map.
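A small self-contained sketch of the suggested change (class and method names below are stand-ins, not the actual test class):

{code:java}
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

class KeySetCopyExample {
    private final Map<String, List<String>> filesContent = new HashMap<>();

    /** Returning the key set directly exposes a live view that changes with the map. */
    Set<String> listFilesLiveView() {
        return filesContent.keySet();
    }

    /** Suggested variant: return an independent snapshot of the current file names. */
    Set<String> listFilesCopy() {
        return new HashSet<>(filesContent.keySet());
    }
}
{code}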


> Test instability for 'TwoPhaseCommitSinkFunctionTest'
> -
>
> Key: FLINK-8268
> URL: https://issues.apache.org/jira/browse/FLINK-8268
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.5.0
>Reporter: Stephan Ewen
>Assignee: Piotr Nowojski
>Priority: Critical
>  Labels: test-stability
>
> The following exception / failure message occurs.
> {code}
> Tests run: 5, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 1.824 sec <<< 
> FAILURE! - in 
> org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunctionTest
> testIgnoreCommitExceptionDuringRecovery(org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunctionTest)
>   Time elapsed: 0.068 sec  <<< ERROR!
> java.lang.Exception: Could not complete snapshot 0 for operator MockTask 
> (1/1).
>   at java.io.FileOutputStream.writeBytes(Native Method)
>   at java.io.FileOutputStream.write(FileOutputStream.java:326)
>   at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221)
>   at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:291)
>   at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:295)
>   at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:141)
>   at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:229)
>   at java.io.BufferedWriter.flush(BufferedWriter.java:254)
>   at 
> org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunctionTest$FileBasedSinkFunction.preCommit(TwoPhaseCommitSinkFunctionTest.java:313)
>   at 
> org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunctionTest$FileBasedSinkFunction.preCommit(TwoPhaseCommitSinkFunctionTest.java:288)
>   at 
> org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction.snapshotState(TwoPhaseCommitSinkFunction.java:290)
>   at 
> org.apache.flink.streaming.util.functions.StreamingFunctionUtils.trySnapshotFunctionState(StreamingFunctionUtils.java:118)
>   at 
> org.apache.flink.streaming.util.functions.StreamingFunctionUtils.snapshotFunctionState(StreamingFunctionUtils.java:99)
>   at 
> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.snapshotState(AbstractUdfStreamOperator.java:90)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.snapshotState(AbstractStreamOperator.java:357)
>   at 
> org.apache.flink.streaming.util.AbstractStreamOperatorTestHarness.snapshot(AbstractStreamOperatorTestHarness.java:459)
>   at 
> org.apache.flink.streaming.api.functions.sink.T

[jira] [Commented] (FLINK-8268) Test instability for 'TwoPhaseCommitSinkFunctionTest'

2018-01-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307425#comment-16307425
 ] 

ASF GitHub Bot commented on FLINK-8268:
---

Github user GJL commented on a diff in the pull request:

https://github.com/apache/flink/pull/5193#discussion_r159154132
  
--- Diff: 
flink-streaming-java/src/test/java/org/apache/flink/streaming/util/AbstractStreamOperatorTestHarness.java
 ---
@@ -492,6 +503,10 @@ public void close() throws Exception {
processingTimeService.shutdownService();
}
setupCalled = false;
+
+   if (internalEnvironment.isPresent()) {
--- End diff --

I think that to enable this, `Environment` must implement `AutoCloseable` as 
well. Maybe add an empty default `close()` method? 
If you decide to stick with `Optional`, maybe change this line to: 
`internalEnvironment.ifPresent(MockEnvironment::close);`
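A rough sketch of the two options from this comment, using simplified stand-ins for the Flink types involved:

{code:java}
import java.util.Optional;

// Stand-in for Flink's Environment, extended with an empty default close().
interface Environment extends AutoCloseable {
    @Override
    default void close() {}
}

// Stand-in for MockEnvironment; overrides close() to release mock resources.
class MockEnvironment implements Environment {
    @Override
    public void close() {
        // release mock resources here
    }
}

class HarnessSketch {
    private Optional<MockEnvironment> internalEnvironment = Optional.empty();

    void close() {
        // Replaces the explicit isPresent() check from the diff above.
        internalEnvironment.ifPresent(MockEnvironment::close);
    }
}
{code}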


> Test instability for 'TwoPhaseCommitSinkFunctionTest'
> -
>
> Key: FLINK-8268
> URL: https://issues.apache.org/jira/browse/FLINK-8268
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.5.0
>Reporter: Stephan Ewen
>Assignee: Piotr Nowojski
>Priority: Critical
>  Labels: test-stability
>
> The following exception / failure message occurs.
> {code}
> Tests run: 5, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 1.824 sec <<< 
> FAILURE! - in 
> org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunctionTest
> testIgnoreCommitExceptionDuringRecovery(org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunctionTest)
>   Time elapsed: 0.068 sec  <<< ERROR!
> java.lang.Exception: Could not complete snapshot 0 for operator MockTask 
> (1/1).
>   at java.io.FileOutputStream.writeBytes(Native Method)
>   at java.io.FileOutputStream.write(FileOutputStream.java:326)
>   at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221)
>   at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:291)
>   at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:295)
>   at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:141)
>   at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:229)
>   at java.io.BufferedWriter.flush(BufferedWriter.java:254)
>   at 
> org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunctionTest$FileBasedSinkFunction.preCommit(TwoPhaseCommitSinkFunctionTest.java:313)
>   at 
> org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunctionTest$FileBasedSinkFunction.preCommit(TwoPhaseCommitSinkFunctionTest.java:288)
>   at 
> org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction.snapshotState(TwoPhaseCommitSinkFunction.java:290)
>   at 
> org.apache.flink.streaming.util.functions.StreamingFunctionUtils.trySnapshotFunctionState(StreamingFunctionUtils.java:118)
>   at 
> org.apache.flink.streaming.util.functions.StreamingFunctionUtils.snapshotFunctionState(StreamingFunctionUtils.java:99)
>   at 
> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.snapshotState(AbstractUdfStreamOperator.java:90)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.snapshotState(AbstractStreamOperator.java:357)
>   at 
> org.apache.flink.streaming.util.AbstractStreamOperatorTestHarness.snapshot(AbstractStreamOperatorTestHarness.java:459)
>   at 
> org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunctionTest.testIgnoreCommitExceptionDuringRecovery(TwoPhaseCommitSinkFunctionTest.java:208)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #5193: [FLINK-8268][tests] Improve tests stability

2018-01-01 Thread GJL
Github user GJL commented on a diff in the pull request:

https://github.com/apache/flink/pull/5193#discussion_r159153854
  
--- Diff: 
flink-streaming-java/src/test/java/org/apache/flink/streaming/api/functions/sink/ContentDump.java
 ---
@@ -0,0 +1,134 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.api.functions.sink;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+import static org.apache.flink.util.Preconditions.checkArgument;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+import static org.apache.flink.util.Preconditions.checkState;
+
+/**
+ * Utility class to simulate in memory file like writes, flushes and 
closing.
+ */
+public class ContentDump {
+	private boolean writable = true;
+	private Map<String, List<String>> filesContent = new HashMap<>();
+
+	public Set<String> listFiles() {
+		return filesContent.keySet();
--- End diff --

Maybe return a copy here because the key set will reflect changes in the 
map.


---


[GitHub] flink pull request #5193: [FLINK-8268][tests] Improve tests stability

2018-01-01 Thread GJL
Github user GJL commented on a diff in the pull request:

https://github.com/apache/flink/pull/5193#discussion_r159154132
  
--- Diff: 
flink-streaming-java/src/test/java/org/apache/flink/streaming/util/AbstractStreamOperatorTestHarness.java
 ---
@@ -492,6 +503,10 @@ public void close() throws Exception {
processingTimeService.shutdownService();
}
setupCalled = false;
+
+   if (internalEnvironment.isPresent()) {
--- End diff --

I think that to enable this, `Environment` must implement `AutoCloseable` as 
well. Maybe add an empty default `close()` method? 
If you decide to stick with `Optional`, maybe change this line to: 
`internalEnvironment.ifPresent(MockEnvironment::close);`


---


[jira] [Commented] (FLINK-8302) Support shift_left and shift_right in TableAPI

2018-01-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307394#comment-16307394
 ] 

ASF GitHub Bot commented on FLINK-8302:
---

Github user dubin555 commented on the issue:

https://github.com/apache/flink/pull/5202
  
Hi @sunjincheng121. Thanks for the review.
I have finished the code cleanup and removed the duplicate test case based on 
your comments.

There are similar implementations in other databases, for example:
MySQL: https://dev.mysql.com/doc/refman/5.7/en/bit-functions.html
Oracle: https://docs.oracle.com/cd/E22583_01/DR/help/BitShift.html
Apache Spark also has a similar implementation: 
https://issues.apache.org/jira/browse/SPARK-8223

Thanks,
Du Bin


> Support shift_left and shift_right in TableAPI
> --
>
> Key: FLINK-8302
> URL: https://issues.apache.org/jira/browse/FLINK-8302
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: DuBin
>  Labels: features
> Fix For: 1.5.0
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> Add shift_left and shift_right support in the Table API: shift_left(input, n) acts 
> as input << n, and shift_right(input, n) acts as input >> n.
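For reference, a tiny Java check of the stated semantics, using the same values as the pull request's test cases (this is plain bit-shift arithmetic, not Table API code):

{code:java}
public class ShiftSemantics {
    public static void main(String[] args) {
        // shift_left(input, n) behaves like input << n
        System.out.println(1 << 1);   // 2
        System.out.println(21 << 1);  // 42
        // shift_right(input, n) behaves like input >> n
        System.out.println(42 >> 1);  // 21
    }
}
{code}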



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #5202: [FLINK-8302][table]Add SHIFT_LEFT and SHIFT_RIGHT support...

2018-01-01 Thread dubin555
Github user dubin555 commented on the issue:

https://github.com/apache/flink/pull/5202
  
Hi @sunjincheng121. Thanks for the review.
I have finished the code cleanup and removed the duplicate test case based on 
your comments.

There are similar implementations in other databases, for example:
MySQL: https://dev.mysql.com/doc/refman/5.7/en/bit-functions.html
Oracle: https://docs.oracle.com/cd/E22583_01/DR/help/BitShift.html
Apache Spark also has a similar implementation: 
https://issues.apache.org/jira/browse/SPARK-8223

Thanks,
Du Bin


---


[GitHub] flink pull request #5202: [FLINK-8302][table]Add SHIFT_LEFT and SHIFT_RIGHT ...

2018-01-01 Thread dubin555
Github user dubin555 commented on a diff in the pull request:

https://github.com/apache/flink/pull/5202#discussion_r159151505
  
--- Diff: 
flink-libraries/flink-table/src/test/scala/org/apache/flink/table/expressions/ScalarFunctionsTest.scala
 ---
@@ -1216,6 +1216,69 @@ class ScalarFunctionsTest extends 
ScalarTypesTestBase {
 )
   }
 
+  @Test
+  def testShiftLeft(): Unit = {
+testSqlApi(
+  "SHIFT_LEFT(1,1)",
+  "2"
+)
+
+testSqlApi(
+  "SHIFT_LEFT(21,1)",
+  "42"
+)
+
+testSqlApi(
+  "SHIFT_LEFT(21,1)",
+  "42"
+)
+
--- End diff --

removed


---


[jira] [Commented] (FLINK-8302) Support shift_left and shift_right in TableAPI

2018-01-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307391#comment-16307391
 ] 

ASF GitHub Bot commented on FLINK-8302:
---

Github user dubin555 commented on a diff in the pull request:

https://github.com/apache/flink/pull/5202#discussion_r159151505
  
--- Diff: 
flink-libraries/flink-table/src/test/scala/org/apache/flink/table/expressions/ScalarFunctionsTest.scala
 ---
@@ -1216,6 +1216,69 @@ class ScalarFunctionsTest extends 
ScalarTypesTestBase {
 )
   }
 
+  @Test
+  def testShiftLeft(): Unit = {
+testSqlApi(
+  "SHIFT_LEFT(1,1)",
+  "2"
+)
+
+testSqlApi(
+  "SHIFT_LEFT(21,1)",
+  "42"
+)
+
+testSqlApi(
+  "SHIFT_LEFT(21,1)",
+  "42"
+)
+
--- End diff --

removed


> Support shift_left and shift_right in TableAPI
> --
>
> Key: FLINK-8302
> URL: https://issues.apache.org/jira/browse/FLINK-8302
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: DuBin
>  Labels: features
> Fix For: 1.5.0
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> Add shift_left and shift_right support in the Table API: shift_left(input, n) acts 
> as input << n, and shift_right(input, n) acts as input >> n.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)