[jira] [Assigned] (FLINK-11616) Flink official document has an error

2019-02-20 Thread TANG Wen-hui (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

TANG Wen-hui reassigned FLINK-11616:


Assignee: xulinjie  (was: TANG Wen-hui)

> Flink official document has an error
> 
>
> Key: FLINK-11616
> URL: https://issues.apache.org/jira/browse/FLINK-11616
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Reporter: xulinjie
>Assignee: xulinjie
>Priority: Major
> Attachments: wx20190214-214...@2x.png
>
>
> The page url is 
> [https://ci.apache.org/projects/flink/flink-docs-master/tutorials/flink_on_windows.html]
> The mistake is in the paragraph “Installing Flink from Git”.
> “The solution is to adjust the Cygwin settings to deal with the correct line
> endings by following these three steps:”.
> The sequence of steps you wrote is "1, 2, 1", but I think you meant to
> write "1, 2, 3".





[jira] [Assigned] (FLINK-11616) Flink official document has an error

2019-02-20 Thread TANG Wen-hui (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

TANG Wen-hui reassigned FLINK-11616:


Assignee: TANG Wen-hui  (was: xulinjie)

> Flink official document has an error
> 
>
> Key: FLINK-11616
> URL: https://issues.apache.org/jira/browse/FLINK-11616
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Reporter: xulinjie
>Assignee: TANG Wen-hui
>Priority: Major
> Attachments: wx20190214-214...@2x.png
>
>
> The page url is 
> [https://ci.apache.org/projects/flink/flink-docs-master/tutorials/flink_on_windows.html]
> The mistake is in the paragraph “Installing Flink from Git”.
> “The solution is to adjust the Cygwin settings to deal with the correct line
> endings by following these three steps:”.
> The sequence of steps you wrote is "1, 2, 1", but I think you meant to
> write "1, 2, 3".





[GitHub] alexeyt820 commented on a change in pull request #6615: [FLINK-8354] [flink-connectors] Add ability to access and provider Kafka headers

2019-02-20 Thread GitBox
alexeyt820 commented on a change in pull request #6615: [FLINK-8354] 
[flink-connectors] Add ability to access and provider Kafka headers
URL: https://github.com/apache/flink/pull/6615#discussion_r258370493
 
 

 ##
 File path: 
flink-connectors/flink-connector-kafka-base/src/main/java/org/apache/flink/streaming/util/serialization/KeyedSerializationSchema.java
 ##
 @@ -55,4 +57,13 @@
 * @return null or the target topic
 */
String getTargetTopic(T element);
+
+   /**
+*
+* @param element The incoming element to be serialized
+* @return collection of headers (maybe empty)
+*/
+   default Iterable<Map.Entry<String, byte[]>> headers(T element) {
 
 Review comment:
   It is quick enough. Keep in mind that it also affects all constructors etc.,
so it is more than a 2-file change. And there were conflicting changes in
`KafkaConsumerBase` and `KafkaConsumerBaseTest`, so I had to merge upstream master
into the PR branch.
   Hopefully 7e6753b will be OK.
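   For illustration, a hedged sketch of how a user schema might override the proposed `headers()` default; the `Map.Entry<String, byte[]>`-based signature is inferred from this PR's discussion and may differ from the final API:

```java
import org.apache.flink.streaming.util.serialization.KeyedSerializationSchema;

import java.nio.charset.StandardCharsets;
import java.util.AbstractMap.SimpleImmutableEntry;
import java.util.Collections;
import java.util.Map;

// Hypothetical example schema attaching one Kafka header per record.
class TracingSerializationSchema implements KeyedSerializationSchema<String> {

    @Override
    public byte[] serializeKey(String element) {
        return null; // no key for this example
    }

    @Override
    public byte[] serializeValue(String element) {
        return element.getBytes(StandardCharsets.UTF_8);
    }

    @Override
    public String getTargetTopic(String element) {
        return null; // fall back to the producer's default topic
    }

    @Override
    public Iterable<Map.Entry<String, byte[]>> headers(String element) {
        // Attach a single header; real code would carry e.g. a trace id.
        Map.Entry<String, byte[]> header =
            new SimpleImmutableEntry<>("source", "flink".getBytes(StandardCharsets.UTF_8));
        return Collections.singletonList(header);
    }
}
```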
   




[jira] [Created] (FLINK-11667) Add Synchronous Checkpoint handling in StreamTask

2019-02-20 Thread Kostas Kloudas (JIRA)
Kostas Kloudas created FLINK-11667:
--

 Summary: Add Synchronous Checkpoint handling in StreamTask
 Key: FLINK-11667
 URL: https://issues.apache.org/jira/browse/FLINK-11667
 Project: Flink
  Issue Type: Sub-task
  Components: State Backends, Checkpointing
Reporter: Kostas Kloudas
Assignee: Kostas Kloudas


This is the basic building block for the SUSPEND/TERMINATE functionality.

In the case of a synchronous checkpoint barrier, the checkpointing thread will
block (without holding the checkpoint lock) until {{notifyCheckpointComplete}}
has executed successfully. As a result, the checkpoint is considered successful
only after {{notifyCheckpointComplete}} has also completed successfully.
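A minimal sketch of the intended blocking behaviour, assuming a hypothetical {{SyncCheckpointLatch}} helper (not the actual implementation):

{code:java}
import java.util.concurrent.CountDownLatch;

final class SyncCheckpointLatch {
    private final CountDownLatch completed = new CountDownLatch(1);

    // Called by the checkpointing thread after a synchronous barrier is
    // processed; blocks without holding the checkpoint lock.
    void awaitCompletion() throws InterruptedException {
        completed.await();
    }

    // Called from notifyCheckpointComplete once the checkpoint is confirmed,
    // releasing the blocked checkpointing thread.
    void markCompleted() {
        completed.countDown();
    }
}
{code}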





[jira] [Created] (FLINK-11668) Allow sources to advance time to max watermark on checkpoint.

2019-02-20 Thread Kostas Kloudas (JIRA)
Kostas Kloudas created FLINK-11668:
--

 Summary: Allow sources to advance time to max watermark on 
checkpoint.
 Key: FLINK-11668
 URL: https://issues.apache.org/jira/browse/FLINK-11668
 Project: Flink
  Issue Type: Sub-task
  Components: State Backends, Checkpointing, Streaming
Reporter: Kostas Kloudas
Assignee: Kostas Kloudas


This is needed for the TERMINATE case. It will allow the sources to inject the 
{{MAX_WATERMARK}} before the barrier that will trigger the last savepoint. This 
will fire any registered event-time timers and flush any state associated with 
these timers, e.g. windows.
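A minimal sketch of the idea, using Flink's public source API (the surrounding termination logic is hypothetical):

{code:java}
import org.apache.flink.streaming.api.functions.source.SourceFunction.SourceContext;
import org.apache.flink.streaming.api.watermark.Watermark;

final class MaxWatermarkSketch {
    // Emitting Long.MAX_VALUE as a watermark before the final barrier fires
    // all registered event-time timers and flushes the state they hold
    // (e.g. windows).
    static void advanceToMaxWatermark(SourceContext<?> ctx) {
        ctx.emitWatermark(new Watermark(Long.MAX_VALUE));
    }
}
{code}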





[jira] [Created] (FLINK-11669) Add Synchronous Checkpoint Triggering RPCs.

2019-02-20 Thread Kostas Kloudas (JIRA)
Kostas Kloudas created FLINK-11669:
--

 Summary: Add Synchronous Checkpoint Triggering RPCs.
 Key: FLINK-11669
 URL: https://issues.apache.org/jira/browse/FLINK-11669
 Project: Flink
  Issue Type: Sub-task
  Components: JobManager, TaskManager
Reporter: Kostas Kloudas
Assignee: Kostas Kloudas


Wires the triggering of the Synchronous {{Checkpoint/Savepoint}} from the 
{{JobMaster}} to the {{TaskExecutor}}.





[jira] [Created] (FLINK-11670) Add SUSPEND/TERMINATE calls to REST API

2019-02-20 Thread Kostas Kloudas (JIRA)
Kostas Kloudas created FLINK-11670:
--

 Summary: Add SUSPEND/TERMINATE calls to REST API
 Key: FLINK-11670
 URL: https://issues.apache.org/jira/browse/FLINK-11670
 Project: Flink
  Issue Type: Sub-task
  Components: REST
Reporter: Kostas Kloudas
Assignee: Kostas Kloudas


Exposes the SUSPEND/TERMINATE functionality to the user through the REST API.





[jira] [Created] (FLINK-11671) Expose SUSPEND/TERMINATE to CLI

2019-02-20 Thread Kostas Kloudas (JIRA)
Kostas Kloudas created FLINK-11671:
--

 Summary: Expose SUSPEND/TERMINATE to CLI
 Key: FLINK-11671
 URL: https://issues.apache.org/jira/browse/FLINK-11671
 Project: Flink
  Issue Type: Sub-task
  Components: Client
Reporter: Kostas Kloudas
Assignee: Kostas Kloudas


Expose the SUSPEND/TERMINATE functionality to the user through the command line.





[GitHub] twalthr commented on a change in pull request #7664: [FLINK-11449][table] Uncouple the Expression class from RexNodes.

2019-02-20 Thread GitBox
twalthr commented on a change in pull request #7664: [FLINK-11449][table] 
Uncouple the Expression class from RexNodes.
URL: https://github.com/apache/flink/pull/7664#discussion_r258376018
 
 

 ##
 File path: 
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/expressions/TableFunctionCall.java
 ##
 @@ -0,0 +1,49 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.expressions;
+
+import java.util.List;
+import java.util.Optional;
+
+/**
+ * The table function call.
+ */
+public class TableFunctionCall extends Call {
+
+   private Optional alias = Optional.empty();
+
+   public TableFunctionCall(TableFunctionDefinition func, List 
args) {
+   super(func, args);
+   }
+
+   public TableFunctionCall alias(String[] alias) {
 
 Review comment:
   If it simplifies this PR, feel free to apply this change here. Otherwise we 
can also do it in a separate PR.




[jira] [Commented] (FLINK-11421) Add compilation options to allow compiling generated code with JDK compiler

2019-02-20 Thread Liya Fan (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16772760#comment-16772760
 ] 

Liya Fan commented on FLINK-11421:
--

[~ykt836] I see. Thanks a lot for the comments. I will work on this Jira after 
Blink is merged. 

> Add compilation options to allow compiling generated code with JDK compiler 
> 
>
> Key: FLINK-11421
> URL: https://issues.apache.org/jira/browse/FLINK-11421
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Liya Fan
>Assignee: Liya Fan
>Priority: Major
>  Labels: pull-request-available
>   Original Estimate: 240h
>  Time Spent: 10m
>  Remaining Estimate: 239h 50m
>
> Flink supports some operators (like Calc, Hash Agg, Hash Join, etc.) by code
> generation. That is, Flink generates their source code dynamically and then
> compiles it into Java bytecode, which is loaded and executed at runtime.
>  
> By default, Flink compiles the generated source code with Janino. This is fast,
> as the compilation often finishes in hundreds of milliseconds. The generated
> bytecode, however, is of poor quality. To illustrate, we used the Java
> Compiler API (JCA) to compile the generated code. Experiments on TPC-H (1 TB)
> queries show that the E2E time can be more than 10% shorter when operators
> are compiled by JCA, even though it takes more time (a few seconds) to
> compile with JCA.
>  
> Therefore, we believe it is beneficial to compile generated code with JCA in
> the following scenarios: 1) For batch jobs, the E2E time is relatively long,
> so it is worth spending more time compiling to produce high-quality
> bytecode. 2) For repeatedly run stream jobs, the generated code is compiled
> once and run many times, so it pays to spend more time compiling the first
> time and benefit from the higher bytecode quality on later runs.
>  
> Based on the above observations, we want to provide a compilation option
> (Janino, JCA, or dynamic) for Flink, so that users can choose the one
> suitable for their specific scenario and obtain better performance whenever
> possible.
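For reference, a minimal sketch of compiling dynamically generated source with the JDK's Java Compiler API ({{javax.tools}}), the JCA alternative discussed above; the generated class and file handling are illustrative only:

{code:java}
import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class JcaCompileSketch {
    public static void main(String[] args) throws Exception {
        // A stand-in for dynamically generated operator code.
        String source = "public class GeneratedCalc { public int calc(int x) { return x * 2; } }";
        Path dir = Files.createTempDirectory("codegen");
        Path file = dir.resolve("GeneratedCalc.java");
        Files.write(file, source.getBytes(StandardCharsets.UTF_8));

        // Requires a JDK (returns null on a plain JRE). Slower than Janino,
        // but typically yields better-optimized bytecode.
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
        int result = compiler.run(null, null, null, file.toString());
        System.out.println(result == 0 ? "compiled" : "failed");
    }
}
{code}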





[GitHub] wenhuitang opened a new pull request #7761: [hotfix][docs] Fix typo in sourceSinks.md.

2019-02-20 Thread GitBox
wenhuitang opened a new pull request #7761: [hotfix][docs] Fix typo in 
sourceSinks.md.
URL: https://github.com/apache/flink/pull/7761
 
 
   [hotfix][docs] Fix typo in sourceSinks.md.




[GitHub] flinkbot commented on issue #7761: [hotfix][docs] Fix typo in sourceSinks.md.

2019-02-20 Thread GitBox
flinkbot commented on issue #7761: [hotfix][docs] Fix typo in sourceSinks.md.
URL: https://github.com/apache/flink/pull/7761#issuecomment-465479794
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❌ 1. The [description] looks good.
   * ❌ 2. There is [consensus] that the contribution should go into Flink.
   * ❔ 3. Needs [attention] from.
   * ❌ 4. The change fits into the overall [architecture].
   * ❌ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) for a full explanation of 
the review process.
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve the 1st aspect (similarly, it 
also supports the `consensus`, `architecture` and `quality` keywords)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval
   




[jira] [Closed] (FLINK-11666) Add Amazon Kinesis Data Analytics to poweredby.zh.md for Chinese

2019-02-20 Thread Jark Wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu closed FLINK-11666.
---
Resolution: Fixed

Fixed in flink-web: 2f9522f104d1261cf6b30d033ede001a6ab349b1

> Add Amazon Kinesis Data Analytics to poweredby.zh.md for Chinese
> 
>
> Key: FLINK-11666
> URL: https://issues.apache.org/jira/browse/FLINK-11666
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Project Website
>Reporter: Jark Wu
>Assignee: Forward Xu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The English content was added in this commit:
> [https://github.com/apache/flink-web/commit/2917889b22c68b51579cc3b323ebab2c0dd23aed]
> Please add the corresponding Chinese content to poweredby.zh.md.





[jira] [Closed] (FLINK-11559) Translate "FAQ" page into Chinese

2019-02-20 Thread Jark Wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu closed FLINK-11559.
---
Resolution: Fixed

Fixed in flink-web: baab1cafa73b395f5180034026b27c7deb9e89c1

> Translate "FAQ" page into Chinese
> -
>
> Key: FLINK-11559
> URL: https://issues.apache.org/jira/browse/FLINK-11559
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Project Website
>Reporter: Jark Wu
>Assignee: Kaibo Zhou
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Translate "FAQ" page into Chinese
> The markdown file is located in: flink-web/faq.zh.md
> The url link is: https://flink.apache.org/zh/faq.html
> Please adjust the links in the page to Chinese pages when translating. 





[GitHub] StefanRRichter commented on a change in pull request #7674: [FLINK-10043] [State Backends] Refactor RocksDBKeyedStateBackend object construction/initialization/restore code

2019-02-20 Thread GitBox
StefanRRichter commented on a change in pull request #7674: [FLINK-10043] 
[State Backends] Refactor RocksDBKeyedStateBackend object 
construction/initialization/restore code
URL: https://github.com/apache/flink/pull/7674#discussion_r258382597
 
 

 ##
 File path: 
flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/restore/AbstractRocksDBRestoreOperation.java
 ##
 @@ -0,0 +1,193 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.contrib.streaming.state.restore;
+
+import org.apache.flink.api.common.typeutils.TypeSerializerSchemaCompatibility;
+import org.apache.flink.contrib.streaming.state.RocksDBNativeMetricMonitor;
+import org.apache.flink.contrib.streaming.state.RocksDBNativeMetricOptions;
+import org.apache.flink.contrib.streaming.state.RocksDBOperationUtils;
+import org.apache.flink.contrib.streaming.state.StateColumnFamilyHandle;
+import org.apache.flink.core.fs.CloseableRegistry;
+import org.apache.flink.core.memory.DataInputView;
+import org.apache.flink.metrics.MetricGroup;
+import org.apache.flink.runtime.state.KeyGroupRange;
+import org.apache.flink.runtime.state.KeyedBackendSerializationProxy;
+import org.apache.flink.runtime.state.KeyedStateHandle;
+import org.apache.flink.runtime.state.RegisteredStateMetaInfoBase;
+import org.apache.flink.runtime.state.StateSerializerProvider;
+import org.apache.flink.runtime.state.metainfo.StateMetaInfoSnapshot;
+import org.apache.flink.util.IOUtils;
+import org.apache.flink.util.StateMigrationException;
+
+import org.rocksdb.ColumnFamilyDescriptor;
+import org.rocksdb.ColumnFamilyHandle;
+import org.rocksdb.ColumnFamilyOptions;
+import org.rocksdb.DBOptions;
+import org.rocksdb.RocksDB;
+import org.rocksdb.RocksDBException;
+
+import javax.annotation.Nonnull;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+
+/**
+ * Base implementation of RocksDB restore operation.
+ *
+ * @param <K> The data type that the serializer serializes.
+ */
+public abstract class AbstractRocksDBRestoreOperation<K> implements RocksDBRestoreOperation, AutoCloseable {
+   protected final KeyGroupRange keyGroupRange;
+   protected final int keyGroupPrefixBytes;
+   protected final int numberOfTransferringThreads;
+   protected final CloseableRegistry cancelStreamRegistry;
+   protected final ClassLoader userCodeClassLoader;
+   protected final ColumnFamilyOptions columnOptions;
+   protected final DBOptions dbOptions;
+   protected final Map<String, StateColumnFamilyHandle> kvStateInformation;
+   protected final File instanceBasePath;
+   protected final File instanceRocksDBPath;
+   protected final String dbPath;
+   protected List<ColumnFamilyHandle> columnFamilyHandles;
+   protected List<ColumnFamilyDescriptor> columnFamilyDescriptors;
+   protected final StateSerializerProvider<K> keySerializerProvider;
+   protected final RocksDBNativeMetricOptions nativeMetricOptions;
+   protected final MetricGroup metricGroup;
+   protected final Collection<KeyedStateHandle> restoreStateHandles;
+
+   protected RocksDB db;
+   protected ColumnFamilyHandle defaultColumnFamilyHandle;
+   protected RocksDBNativeMetricMonitor nativeMetricMonitor;
+   protected boolean isKeySerializerCompatibilityChecked;
+
+   protected AbstractRocksDBRestoreOperation(
+   KeyGroupRange keyGroupRange,
+   int keyGroupPrefixBytes,
+   int numberOfTransferringThreads,
+   CloseableRegistry cancelStreamRegistry,
+   ClassLoader userCodeClassLoader,
+   Map<String, StateColumnFamilyHandle> kvStateInformation,
+   StateSerializerProvider<K> keySerializerProvider,
+   File instanceBasePath,
+   File instanceRocksDBPath,
+   DBOptions dbOptions,
+   ColumnFamilyOptions columnOptions,
+   RocksDBNativeMetricOptions nativeMetricOptions,
+   MetricGroup metricGroup,
+   @Nonnull Collection<KeyedStateHandle> stateHandles) {
+   this.keyGroupRange = keyGroupRange;
+   this.keyGroupPrefixBytes = keyGroupPrefixBytes;

[jira] [Closed] (FLINK-11564) Translate "How To Contribute" page into Chinese

2019-02-20 Thread Jark Wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu closed FLINK-11564.
---
Resolution: Fixed

Fixed in flink-web: bf1374cf5cf7ebe6c5bebc5b79d4208a74e0b409

> Translate "How To Contribute" page into Chinese
> ---
>
> Key: FLINK-11564
> URL: https://issues.apache.org/jira/browse/FLINK-11564
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Project Website
>Reporter: Jark Wu
>Assignee: Forward Xu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Translate "How To Contribute" page into Chinese.
> The markdown file is located in: flink-web/how-to-contribute.zh.md
> The url link is: https://flink.apache.org/zh/how-to-contribute.html
> Please adjust the links in the page to Chinese pages when translating. 





[GitHub] rmetzger commented on a change in pull request #6594: [FLINK-9311] [pubsub] Added PubSub source connector with support for checkpointing (ATLEAST_ONCE)

2019-02-20 Thread GitBox
rmetzger commented on a change in pull request #6594: [FLINK-9311] [pubsub] 
Added PubSub source connector with support for checkpointing (ATLEAST_ONCE)
URL: https://github.com/apache/flink/pull/6594#discussion_r258382648
 
 

 ##
 File path: 
flink-connectors/flink-connector-pubsub/src/main/java/org/apache/flink/streaming/connectors/pubsub/DefaultPubSubSubscriberFactory.java
 ##
 @@ -0,0 +1,67 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.connectors.pubsub;
+
+import 
org.apache.flink.streaming.connectors.pubsub.common.PubSubSubscriberFactory;
+
+import com.google.api.gax.batching.FlowControlSettings;
+import com.google.api.gax.batching.FlowController;
+import com.google.api.gax.core.FixedCredentialsProvider;
+import com.google.api.gax.grpc.GrpcTransportChannel;
+import com.google.api.gax.rpc.FixedTransportChannelProvider;
+import com.google.api.gax.rpc.TransportChannel;
+import com.google.auth.Credentials;
+import com.google.cloud.pubsub.v1.MessageReceiver;
+import com.google.cloud.pubsub.v1.Subscriber;
+import com.google.pubsub.v1.ProjectSubscriptionName;
+import io.grpc.ManagedChannel;
+import io.grpc.ManagedChannelBuilder;
+
+final class DefaultPubSubSubscriberFactory implements PubSubSubscriberFactory {
+   private String hostAndPort;
+
+   void withHostAndPort(String hostAndPort) {
+   this.hostAndPort = hostAndPort;
+   }
+
+   @Override
+   public Subscriber getSubscriber(Credentials credentials, 
ProjectSubscriptionName projectSubscriptionName, MessageReceiver 
messageReceiver) {
+   FlowControlSettings flowControlSettings = 
FlowControlSettings.newBuilder()
+   .setMaxOutstandingElementCount(1L)
+   .setMaxOutstandingRequestBytes(10L)
 
 Review comment:
   Yes, I think that would be a better separation.




[GitHub] rmetzger commented on a change in pull request #6594: [FLINK-9311] [pubsub] Added PubSub source connector with support for checkpointing (ATLEAST_ONCE)

2019-02-20 Thread GitBox
rmetzger commented on a change in pull request #6594: [FLINK-9311] [pubsub] 
Added PubSub source connector with support for checkpointing (ATLEAST_ONCE)
URL: https://github.com/apache/flink/pull/6594#discussion_r258383131
 
 

 ##
 File path: 
flink-connectors/flink-connector-pubsub/src/main/java/org/apache/flink/streaming/connectors/pubsub/DefaultPubSubSubscriberFactory.java
 ##
 @@ -0,0 +1,67 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.connectors.pubsub;
+
+import 
org.apache.flink.streaming.connectors.pubsub.common.PubSubSubscriberFactory;
+
+import com.google.api.gax.batching.FlowControlSettings;
+import com.google.api.gax.batching.FlowController;
+import com.google.api.gax.core.FixedCredentialsProvider;
+import com.google.api.gax.grpc.GrpcTransportChannel;
+import com.google.api.gax.rpc.FixedTransportChannelProvider;
+import com.google.api.gax.rpc.TransportChannel;
+import com.google.auth.Credentials;
+import com.google.cloud.pubsub.v1.MessageReceiver;
+import com.google.cloud.pubsub.v1.Subscriber;
+import com.google.pubsub.v1.ProjectSubscriptionName;
+import io.grpc.ManagedChannel;
+import io.grpc.ManagedChannelBuilder;
+
+final class DefaultPubSubSubscriberFactory implements PubSubSubscriberFactory {
+   private String hostAndPort;
+
+   void withHostAndPort(String hostAndPort) {
+   this.hostAndPort = hostAndPort;
+   }
+
+   @Override
+   public Subscriber getSubscriber(Credentials credentials, 
ProjectSubscriptionName projectSubscriptionName, MessageReceiver 
messageReceiver) {
+   FlowControlSettings flowControlSettings = 
FlowControlSettings.newBuilder()
+   .setMaxOutstandingElementCount(1L)
+   .setMaxOutstandingRequestBytes(10L)
+   
.setLimitExceededBehavior(FlowController.LimitExceededBehavior.Block)
+   .build();
+
+   Subscriber.Builder builder = Subscriber
+   
.newBuilder(ProjectSubscriptionName.of(projectSubscriptionName.getProject(), 
projectSubscriptionName.getSubscription()), messageReceiver)
+   .setFlowControlSettings(flowControlSettings)
+   
.setCredentialsProvider(FixedCredentialsProvider.create(credentials));
+
+   if (hostAndPort != null) {
+   ManagedChannel managedChannel = ManagedChannelBuilder
+   .forTarget(hostAndPort)
+   .usePlaintext() // This is 'Ok' because this is 
ONLY used for testing.
 
 Review comment:
   In my experience, some users will eventually need to have everything
configurable :) There are many esoteric scenarios people are working in :)
   But I'm also okay with leaving it as-is for now and addressing this in a
follow-up pull request; a hedged sketch of such a follow-up appears below.
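   For illustration, a sketch of what that follow-up could look like: the endpoint and transport security become constructor-injected instead of hard-coded. `ConfigurablePubSubSubscriberFactory` is a hypothetical name, not part of this PR; the gRPC/gax calls are the ones already used in the quoted code.

```java
import com.google.api.gax.grpc.GrpcTransportChannel;
import com.google.api.gax.rpc.FixedTransportChannelProvider;
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

final class ConfigurablePubSubSubscriberFactory {
    private final String hostAndPort;
    private final boolean usePlaintext;

    ConfigurablePubSubSubscriberFactory(String hostAndPort, boolean usePlaintext) {
        this.hostAndPort = hostAndPort;
        this.usePlaintext = usePlaintext;
    }

    FixedTransportChannelProvider channelProvider() {
        ManagedChannelBuilder<?> builder = ManagedChannelBuilder.forTarget(hostAndPort);
        if (usePlaintext) {
            builder.usePlaintext(); // only safe for emulators / tests
        }
        ManagedChannel channel = builder.build();
        return FixedTransportChannelProvider.create(GrpcTransportChannel.create(channel));
    }
}
```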




[GitHub] carp84 commented on a change in pull request #7674: [FLINK-10043] [State Backends] Refactor RocksDBKeyedStateBackend object construction/initialization/restore code

2019-02-20 Thread GitBox
carp84 commented on a change in pull request #7674: [FLINK-10043] [State 
Backends] Refactor RocksDBKeyedStateBackend object 
construction/initialization/restore code
URL: https://github.com/apache/flink/pull/7674#discussion_r258383219
 
 

 ##
 File path: 
flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/restore/AbstractRocksDBRestoreOperation.java
 ##
 @@ -0,0 +1,193 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.contrib.streaming.state.restore;
+
+import org.apache.flink.api.common.typeutils.TypeSerializerSchemaCompatibility;
+import org.apache.flink.contrib.streaming.state.RocksDBNativeMetricMonitor;
+import org.apache.flink.contrib.streaming.state.RocksDBNativeMetricOptions;
+import org.apache.flink.contrib.streaming.state.RocksDBOperationUtils;
+import org.apache.flink.contrib.streaming.state.StateColumnFamilyHandle;
+import org.apache.flink.core.fs.CloseableRegistry;
+import org.apache.flink.core.memory.DataInputView;
+import org.apache.flink.metrics.MetricGroup;
+import org.apache.flink.runtime.state.KeyGroupRange;
+import org.apache.flink.runtime.state.KeyedBackendSerializationProxy;
+import org.apache.flink.runtime.state.KeyedStateHandle;
+import org.apache.flink.runtime.state.RegisteredStateMetaInfoBase;
+import org.apache.flink.runtime.state.StateSerializerProvider;
+import org.apache.flink.runtime.state.metainfo.StateMetaInfoSnapshot;
+import org.apache.flink.util.IOUtils;
+import org.apache.flink.util.StateMigrationException;
+
+import org.rocksdb.ColumnFamilyDescriptor;
+import org.rocksdb.ColumnFamilyHandle;
+import org.rocksdb.ColumnFamilyOptions;
+import org.rocksdb.DBOptions;
+import org.rocksdb.RocksDB;
+import org.rocksdb.RocksDBException;
+
+import javax.annotation.Nonnull;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+
+/**
+ * Base implementation of RocksDB restore operation.
+ *
+ * @param <K> The data type that the serializer serializes.
+ */
+public abstract class AbstractRocksDBRestoreOperation<K> implements RocksDBRestoreOperation, AutoCloseable {
+   protected final KeyGroupRange keyGroupRange;
+   protected final int keyGroupPrefixBytes;
+   protected final int numberOfTransferringThreads;
+   protected final CloseableRegistry cancelStreamRegistry;
+   protected final ClassLoader userCodeClassLoader;
+   protected final ColumnFamilyOptions columnOptions;
+   protected final DBOptions dbOptions;
+   protected final Map<String, StateColumnFamilyHandle> kvStateInformation;
+   protected final File instanceBasePath;
+   protected final File instanceRocksDBPath;
+   protected final String dbPath;
+   protected List<ColumnFamilyHandle> columnFamilyHandles;
+   protected List<ColumnFamilyDescriptor> columnFamilyDescriptors;
+   protected final StateSerializerProvider<K> keySerializerProvider;
+   protected final RocksDBNativeMetricOptions nativeMetricOptions;
+   protected final MetricGroup metricGroup;
+   protected final Collection<KeyedStateHandle> restoreStateHandles;
+
+   protected RocksDB db;
+   protected ColumnFamilyHandle defaultColumnFamilyHandle;
+   protected RocksDBNativeMetricMonitor nativeMetricMonitor;
+   protected boolean isKeySerializerCompatibilityChecked;
+
+   protected AbstractRocksDBRestoreOperation(
+   KeyGroupRange keyGroupRange,
+   int keyGroupPrefixBytes,
+   int numberOfTransferringThreads,
+   CloseableRegistry cancelStreamRegistry,
+   ClassLoader userCodeClassLoader,
+   Map<String, StateColumnFamilyHandle> kvStateInformation,
+   StateSerializerProvider<K> keySerializerProvider,
+   File instanceBasePath,
+   File instanceRocksDBPath,
+   DBOptions dbOptions,
+   ColumnFamilyOptions columnOptions,
+   RocksDBNativeMetricOptions nativeMetricOptions,
+   MetricGroup metricGroup,
+   @Nonnull Collection<KeyedStateHandle> stateHandles) {
+   this.keyGroupRange = keyGroupRange;
+   this.keyGroupPrefixBytes = keyGroupPrefixBytes;
+   

[GitHub] rmetzger commented on a change in pull request #6594: [FLINK-9311] [pubsub] Added PubSub source connector with support for checkpointing (ATLEAST_ONCE)

2019-02-20 Thread GitBox
rmetzger commented on a change in pull request #6594: [FLINK-9311] [pubsub] 
Added PubSub source connector with support for checkpointing (ATLEAST_ONCE)
URL: https://github.com/apache/flink/pull/6594#discussion_r258385662
 
 

 ##
 File path: 
flink-connectors/flink-connector-pubsub/src/main/java/org/apache/flink/streaming/connectors/pubsub/PubSubSource.java
 ##
 @@ -0,0 +1,294 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.connectors.pubsub;
+
+import org.apache.flink.api.common.functions.RuntimeContext;
+import org.apache.flink.api.common.functions.StoppableFunction;
+import org.apache.flink.api.common.serialization.DeserializationSchema;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.java.tuple.Tuple2;
+import org.apache.flink.api.java.typeutils.ResultTypeQueryable;
+import org.apache.flink.configuration.Configuration;
+import 
org.apache.flink.streaming.api.functions.source.MultipleIdsMessageAcknowledgingSourceBase;
+import org.apache.flink.streaming.api.functions.source.ParallelSourceFunction;
+import org.apache.flink.streaming.api.operators.StreamingRuntimeContext;
+import 
org.apache.flink.streaming.connectors.pubsub.common.PubSubSubscriberFactory;
+
+import com.google.auth.Credentials;
+import com.google.cloud.pubsub.v1.AckReplyConsumer;
+import com.google.cloud.pubsub.v1.Subscriber;
+import com.google.pubsub.v1.ProjectSubscriptionName;
+import com.google.pubsub.v1.PubsubMessage;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.List;
+import java.util.Optional;
+
+import static 
com.google.cloud.pubsub.v1.SubscriptionAdminSettings.defaultCredentialsProviderBuilder;
+
+/**
+ * PubSub Source, this Source will consume PubSub messages from a subscription 
and Acknowledge them on the next checkpoint.
+ * This ensures every message will get acknowledged at least once.
+ */
+public class PubSubSource<OUT> extends MultipleIdsMessageAcknowledgingSourceBase<OUT, String, AckReplyConsumer>
+   implements ResultTypeQueryable<OUT>, ParallelSourceFunction<OUT>, StoppableFunction {
+   private static final Logger LOG = LoggerFactory.getLogger(PubSubSource.class);
+   protected DeserializationSchema<OUT> deserializationSchema;
+   protected SubscriberWrapper subscriberWrapper;
+
+   protected boolean running = true;
+   protected transient volatile SourceContext<OUT> sourceContext = null;
+
+   protected PubSubSource() {
+   super(String.class);
+   }
+
+   protected void setDeserializationSchema(DeserializationSchema<OUT> deserializationSchema) {
+   this.deserializationSchema = deserializationSchema;
+   }
+
+   protected void setSubscriberWrapper(SubscriberWrapper 
subscriberWrapper) {
+   this.subscriberWrapper = subscriberWrapper;
+   }
+
+   @Override
+   public void open(Configuration configuration) throws Exception {
+   super.open(configuration);
+   subscriberWrapper.initialize();
+   if (hasNoCheckpointingEnabled(getRuntimeContext())) {
+   throw new IllegalArgumentException("The PubSubSource 
REQUIRES Checkpointing to be enabled and " +
+   "the checkpointing frequency must be MUCH lower 
than the PubSub timeout for it to retry a message.");
+   }
+
+   
getRuntimeContext().getMetricGroup().gauge("PubSubMessagesProcessedNotAcked", 
this::getOutstandingMessagesToAck);
+   
getRuntimeContext().getMetricGroup().gauge("PubSubMessagesReceivedNotProcessed",
 subscriberWrapper::amountOfMessagesInBuffer);
+   }
+
+   private boolean hasNoCheckpointingEnabled(RuntimeContext 
runtimeContext) {
+   return !(runtimeContext instanceof StreamingRuntimeContext && 
((StreamingRuntimeContext) runtimeContext).isCheckpointingEnabled());
+   }
+
+   @Override
+   protected void acknowledgeSessionIDs(List<AckReplyConsumer> ackReplyConsumers) {
+   ackReplyConsumers.forEach(AckReplyConsumer::ack);
+   }
+
+   @Override
+   public void run(SourceContext<OUT> sourceContext) throws Exception {
+   this.sourceContext = sourceContext;

[GitHub] TisonKun opened a new pull request #7762: [FLINK-11146] Remove invalid codes in ClusterClient

2019-02-20 Thread GitBox
TisonKun opened a new pull request #7762: [FLINK-11146] Remove invalid codes in 
ClusterClient
URL: https://github.com/apache/flink/pull/7762
 
 
   ## What is the purpose of the change
   
   **DO NOT MERGE (SEE SECTION 3)**
   
   Remove invalid codes in `ClusterClient`.
   
   This PR is aimed at dropping actor/message dependencies in `ClusterClient`.
Mainly, it does the following:
   
   1. Delete the implementations of the following methods and make them `abstract`,
because they are implemented by `MiniClusterClient` and `RestClusterClient`,
and the implementation in `ClusterClient` depends on legacy logic and is thus
invalid (a sketch of the resulting shape appears after this description). The
methods include:
 - getJobStatus(jobID)
 - cancel(jobID)
 - cancelWithSavepoint(jobID, savepointDir)
 - stop(jobID)
 - triggerSavepoint(jobID, savepointDir)
 - disposeSavepoint(savepointPath)
 - listJobs()
 - getAccumulators(jobID, ClassLoader) 
   
   2. Delete several legacy abstract methods, since their implementations
simply return a literal None value. Also delete their use points, which include:
 - logAndSysout(String message)
 - waitForClusterToBeReady() 
 - getClusterStatus()
 - getNewMessages()
 - getMaxSlots()
 - hasUserJarsInClassPath(List userJarFiles)
   
   3. There is still one method that depends on the legacy mode, but I am not sure
whether we can remove it directly or whether it should be ported: `endSession(JobID
jobID)`. As far as I can see, nobody processes the message
(`JobManagerMessages.RemoveCachedJob`) it sends, so I suspect we can remove this
method directly, along with its use points in `ContextEnvironment` and
`RemoteExecutor`. But I think it is necessary to involve a committer who knows
how this logic works to confirm that.
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   
   cc @tillrohrmann @mxm 
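   As referenced in section 1, a hedged sketch of the intended end state; signatures are simplified for illustration and placeholder types stand in for Flink's `JobID`/`JobStatus`, so this is not the exact `ClusterClient` API.

```java
import java.util.concurrent.CompletableFuture;

// Placeholder stand-ins so the sketch is self-contained; Flink's real types
// are org.apache.flink.api.common.JobID and the runtime's JobStatus.
final class JobId { }
enum JobStatusSketch { CREATED, RUNNING, FINISHED, CANCELED }

// After this PR, ClusterClient keeps no legacy bodies for these methods; they
// become abstract and are provided by MiniClusterClient / RestClusterClient.
abstract class ClusterClientSketch {
    abstract CompletableFuture<JobStatusSketch> getJobStatus(JobId jobId);
    abstract void cancel(JobId jobId) throws Exception;
    abstract String cancelWithSavepoint(JobId jobId, String savepointDirectory) throws Exception;
    abstract void stop(JobId jobId) throws Exception;
    abstract CompletableFuture<String> triggerSavepoint(JobId jobId, String savepointDirectory);
    abstract CompletableFuture<Void> disposeSavepoint(String savepointPath);
}
```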
   




[GitHub] StefanRRichter commented on a change in pull request #7674: [FLINK-10043] [State Backends] Refactor RocksDBKeyedStateBackend object construction/initialization/restore code

2019-02-20 Thread GitBox
StefanRRichter commented on a change in pull request #7674: [FLINK-10043] 
[State Backends] Refactor RocksDBKeyedStateBackend object 
construction/initialization/restore code
URL: https://github.com/apache/flink/pull/7674#discussion_r258385521
 
 

 ##
 File path: 
flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/restore/RocksDBIncrementalRestoreOperation.java
 ##
 @@ -0,0 +1,507 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.contrib.streaming.state.restore;
+
+import org.apache.flink.configuration.ConfigConstants;
+import 
org.apache.flink.contrib.streaming.state.RocksDBIncrementalCheckpointUtils;
+import org.apache.flink.contrib.streaming.state.RocksDBKeySerializationUtils;
+import org.apache.flink.contrib.streaming.state.RocksDBNativeMetricOptions;
+import org.apache.flink.contrib.streaming.state.RocksDBOperationUtils;
+import org.apache.flink.contrib.streaming.state.RocksDBStateDownloader;
+import org.apache.flink.contrib.streaming.state.RocksDBWriteBatchWrapper;
+import org.apache.flink.contrib.streaming.state.RocksIteratorWrapper;
+import org.apache.flink.contrib.streaming.state.StateColumnFamilyHandle;
+import org.apache.flink.core.fs.CloseableRegistry;
+import org.apache.flink.core.fs.FSDataInputStream;
+import org.apache.flink.core.fs.FileStatus;
+import org.apache.flink.core.fs.FileSystem;
+import org.apache.flink.core.fs.Path;
+import org.apache.flink.core.memory.DataInputView;
+import org.apache.flink.core.memory.DataInputViewStreamWrapper;
+import org.apache.flink.metrics.MetricGroup;
+import org.apache.flink.runtime.state.BackendBuildingException;
+import org.apache.flink.runtime.state.DirectoryStateHandle;
+import org.apache.flink.runtime.state.IncrementalKeyedStateHandle;
+import org.apache.flink.runtime.state.IncrementalLocalKeyedStateHandle;
+import org.apache.flink.runtime.state.KeyGroupRange;
+import org.apache.flink.runtime.state.KeyedBackendSerializationProxy;
+import org.apache.flink.runtime.state.KeyedStateHandle;
+import org.apache.flink.runtime.state.StateHandleID;
+import org.apache.flink.runtime.state.StateSerializerProvider;
+import org.apache.flink.runtime.state.StreamStateHandle;
+import org.apache.flink.runtime.state.metainfo.StateMetaInfoSnapshot;
+import org.apache.flink.util.IOUtils;
+
+import org.rocksdb.ColumnFamilyDescriptor;
+import org.rocksdb.ColumnFamilyHandle;
+import org.rocksdb.ColumnFamilyOptions;
+import org.rocksdb.DBOptions;
+import org.rocksdb.RocksDB;
+import org.rocksdb.RocksDBException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.annotation.Nonnull;
+
+import java.io.File;
+import java.io.IOException;
+import java.nio.file.Files;
+import java.nio.file.StandardCopyOption;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+import java.util.Map;
+import java.util.Objects;
+import java.util.Set;
+import java.util.SortedMap;
+import java.util.TreeMap;
+import java.util.UUID;
+
+import static 
org.apache.flink.contrib.streaming.state.snapshot.RocksSnapshotUtil.SST_FILE_SUFFIX;
+
+/**
+ * Encapsulates the process of restoring a RocksDB instance from an incremental snapshot.
+ */
+public class RocksDBIncrementalRestoreOperation<K> extends AbstractRocksDBRestoreOperation<K> {
+   private static final Logger LOG = LoggerFactory.getLogger(RocksDBIncrementalRestoreOperation.class);
+
+   private final String operatorIdentifier;
+   protected final SortedMap<Long, Set<StateHandleID>> restoredSstFiles;
+   protected long lastCompletedCheckpointId;
+   protected UUID backendUID;
+   private boolean isKeySerializerCompatibilityChecked;
+
+   public RocksDBIncrementalRestoreOperation(
+   String operatorIdentifier,
+   KeyGroupRange keyGroupRange,
+   int keyGroupPrefixBytes,
+   int numberOfTransferringThreads,
+   CloseableRegistry cancelStreamRegistry,
+   ClassLoader userCodeClassLoader,
+   Map<String, StateColumnFamilyHandle> kvStateInformation,
+   StateSerializerProvider<K> keySerializerProvider,
+   File instanceBasePath,
+   File instanceRocksDBPath,
+

[GitHub] flinkbot commented on issue #7762: [FLINK-11146] Remove invalid codes in ClusterClient

2019-02-20 Thread GitBox
flinkbot commented on issue #7762: [FLINK-11146] Remove invalid codes in 
ClusterClient
URL: https://github.com/apache/flink/pull/7762#issuecomment-465485078
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❌ 1. The [description] looks good.
   * ❌ 2. There is [consensus] that the contribution should go into Flink.
   * ❔ 3. Needs [attention] from.
   * ❌ 4. The change fits into the overall [architecture].
   * ❌ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) for a full explanation of 
the review process.
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve the 1st aspect (similarly, it 
also supports the `consensus`, `architecture` and `quality` keywords)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval
   




[jira] [Updated] (FLINK-11146) Get rid of legacy codes from ClusterClient

2019-02-20 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-11146:
---
Labels: pull-request-available  (was: )

> Get rid of legacy codes from ClusterClient
> --
>
> Key: FLINK-11146
> URL: https://issues.apache.org/jira/browse/FLINK-11146
> Project: Flink
>  Issue Type: Sub-task
>  Components: Client, Tests
>Affects Versions: 1.8.0
>Reporter: TisonKun
>Assignee: TisonKun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.8.0
>
>
> As [~StephanEwen] mentioned on the ML
> https://lists.apache.org/thread.html/cc46fde6f8b31d4e833b01e5814a9547c8a67ea3e08a31ec5d71145e@%3Cdev.flink.apache.org%3E
> , the client needs a big refactoring / cleanup. It should use a proper HTTP
> client library to help with future authentication mechanisms.
> After an investigation I noticed that the only valid cluster clients are
> {{MiniClusterClient}} and {{RestClusterClient}}. The legacy clients,
> {{StandaloneClusterClient}} and {{YarnClusterClient}}, as well as the pre-FLIP-6
> code inside {{ClusterClient}}, should be removed as part of FLINK-10392.
> With this removal we arrive at a clean state where we can think about how to
> implement a proper HTTP client more comfortably.
> 1. {{StandaloneClusterClient}} is currently depended on by
> {{LegacyStandaloneClusterDescriptor}} (whose removal is tracked by FLINK-10700)
> and {{FlinkClient}} (part of flink-storm, whose removal was decided in
> FLINK-10571). The relevant tests also need to be ported (or directly removed).
> 2. The removal of {{YarnClusterClient}} should go along with FLINK-11106,
> "Remove legacy flink-yarn component".
> 3. Test classes inheriting from {{ClusterClient}} need to be ported (or
> directly removed).
> 4. Get rid of the legacy code inside {{ClusterClient}} itself, such as
> {{#run(JobGraph, ClassLoader)}}.
> Besides, what is {{JobClient}} used for? I cannot find valid usages of it.
> (Till mentioned it on the ML
> https://lists.apache.org/thread.html/ce99cba4a10b9dc40eb729d39910f315ae41d80ec74f09a356c73938@%3Cdev.flink.apache.org%3E)
> cc [~mxm] [~till.rohrmann]





[GitHub] StefanRRichter commented on a change in pull request #7674: [FLINK-10043] [State Backends] Refactor RocksDBKeyedStateBackend object construction/initialization/restore code

2019-02-20 Thread GitBox
StefanRRichter commented on a change in pull request #7674: [FLINK-10043] 
[State Backends] Refactor RocksDBKeyedStateBackend object 
construction/initialization/restore code
URL: https://github.com/apache/flink/pull/7674#discussion_r258386352
 
 

 ##
 File path: 
flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/restore/RocksDBIncrementalRestoreOperation.java
 ##
 @@ -0,0 +1,507 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.contrib.streaming.state.restore;
+
+import org.apache.flink.configuration.ConfigConstants;
+import 
org.apache.flink.contrib.streaming.state.RocksDBIncrementalCheckpointUtils;
+import org.apache.flink.contrib.streaming.state.RocksDBKeySerializationUtils;
+import org.apache.flink.contrib.streaming.state.RocksDBNativeMetricOptions;
+import org.apache.flink.contrib.streaming.state.RocksDBOperationUtils;
+import org.apache.flink.contrib.streaming.state.RocksDBStateDownloader;
+import org.apache.flink.contrib.streaming.state.RocksDBWriteBatchWrapper;
+import org.apache.flink.contrib.streaming.state.RocksIteratorWrapper;
+import org.apache.flink.contrib.streaming.state.StateColumnFamilyHandle;
+import org.apache.flink.core.fs.CloseableRegistry;
+import org.apache.flink.core.fs.FSDataInputStream;
+import org.apache.flink.core.fs.FileStatus;
+import org.apache.flink.core.fs.FileSystem;
+import org.apache.flink.core.fs.Path;
+import org.apache.flink.core.memory.DataInputView;
+import org.apache.flink.core.memory.DataInputViewStreamWrapper;
+import org.apache.flink.metrics.MetricGroup;
+import org.apache.flink.runtime.state.BackendBuildingException;
+import org.apache.flink.runtime.state.DirectoryStateHandle;
+import org.apache.flink.runtime.state.IncrementalKeyedStateHandle;
+import org.apache.flink.runtime.state.IncrementalLocalKeyedStateHandle;
+import org.apache.flink.runtime.state.KeyGroupRange;
+import org.apache.flink.runtime.state.KeyedBackendSerializationProxy;
+import org.apache.flink.runtime.state.KeyedStateHandle;
+import org.apache.flink.runtime.state.StateHandleID;
+import org.apache.flink.runtime.state.StateSerializerProvider;
+import org.apache.flink.runtime.state.StreamStateHandle;
+import org.apache.flink.runtime.state.metainfo.StateMetaInfoSnapshot;
+import org.apache.flink.util.IOUtils;
+
+import org.rocksdb.ColumnFamilyDescriptor;
+import org.rocksdb.ColumnFamilyHandle;
+import org.rocksdb.ColumnFamilyOptions;
+import org.rocksdb.DBOptions;
+import org.rocksdb.RocksDB;
+import org.rocksdb.RocksDBException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.annotation.Nonnull;
+
+import java.io.File;
+import java.io.IOException;
+import java.nio.file.Files;
+import java.nio.file.StandardCopyOption;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+import java.util.Map;
+import java.util.Objects;
+import java.util.Set;
+import java.util.SortedMap;
+import java.util.TreeMap;
+import java.util.UUID;
+
+import static 
org.apache.flink.contrib.streaming.state.snapshot.RocksSnapshotUtil.SST_FILE_SUFFIX;
+
+/**
+ * Encapsulates the process of restoring a RocksDB instance from an incremental snapshot.
+ */
+public class RocksDBIncrementalRestoreOperation<K> extends AbstractRocksDBRestoreOperation<K> {
+   private static final Logger LOG = LoggerFactory.getLogger(RocksDBIncrementalRestoreOperation.class);
+
+   private final String operatorIdentifier;
+   protected final SortedMap<Long, Set<StateHandleID>> restoredSstFiles;
+   protected long lastCompletedCheckpointId;
+   protected UUID backendUID;
+   private boolean isKeySerializerCompatibilityChecked;
+
+   public RocksDBIncrementalRestoreOperation(
+   String operatorIdentifier,
+   KeyGroupRange keyGroupRange,
+   int keyGroupPrefixBytes,
+   int numberOfTransferringThreads,
+   CloseableRegistry cancelStreamRegistry,
+   ClassLoader userCodeClassLoader,
+   Map<String, StateColumnFamilyHandle> kvStateInformation,
+   StateSerializerProvider<K> keySerializerProvider,
+   File instanceBasePath,
+   File instanceRocksDBPath,
+

[GitHub] dawidwys closed pull request #7758: blink first commit

2019-02-20 Thread GitBox
dawidwys closed pull request #7758: blink first commit
URL: https://github.com/apache/flink/pull/7758
 
 
   




[GitHub] flinkbot commented on issue #7763: [hotfix][tests] Fix typo in PackagedProgramTest.java.

2019-02-20 Thread GitBox
flinkbot commented on issue #7763: [hotfix][tests] Fix typo in 
PackagedProgramTest.java.
URL: https://github.com/apache/flink/pull/7763#issuecomment-465486021
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❌ 1. The [description] looks good.
   * ❌ 2. There is [consensus] that the contribution should go into Flink.
   * ❔ 3. Needs [attention] from.
   * ❌ 4. The change fits into the overall [architecture].
   * ❌ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) for a full explanation of 
the review process.
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve the 1st aspect (similarly, it 
also supports the `consensus`, `architecture` and `quality` keywords)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval
   




[GitHub] wenhuitang opened a new pull request #7763: [hotfix][tests] Fix typo in PackagedProgramTest.java.

2019-02-20 Thread GitBox
wenhuitang opened a new pull request #7763: [hotfix][tests] Fix typo in 
PackagedProgramTest.java.
URL: https://github.com/apache/flink/pull/7763
 
 
   [hotfix][tests] Fix typo in PackagedProgramTest.java.




[GitHub] StefanRRichter commented on issue #7674: [FLINK-10043] [State Backends] Refactor RocksDBKeyedStateBackend object construction/initialization/restore code

2019-02-20 Thread GitBox
StefanRRichter commented on issue #7674: [FLINK-10043] [State Backends] 
Refactor RocksDBKeyedStateBackend object construction/initialization/restore 
code
URL: https://github.com/apache/flink/pull/7674#issuecomment-465486853
 
 
   Latest changes all look very good; I just had a few minor comments about exception handling and resource closing.




[GitHub] aljoscha commented on issue #7732: [FLINK-9803] Drop canEqual() from TypeSerializer

2019-02-20 Thread GitBox
aljoscha commented on issue #7732: [FLINK-9803] Drop canEqual() from 
TypeSerializer
URL: https://github.com/apache/flink/pull/7732#issuecomment-465487028
 
 
   Merged. Thanks for the reviews!




[GitHub] aljoscha closed pull request #7732: [FLINK-9803] Drop canEqual() from TypeSerializer

2019-02-20 Thread GitBox
aljoscha closed pull request #7732: [FLINK-9803] Drop canEqual() from 
TypeSerializer
URL: https://github.com/apache/flink/pull/7732
 
 
   




[jira] [Closed] (FLINK-9803) Drop canEqual() from TypeSerializer

2019-02-20 Thread Aljoscha Krettek (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek closed FLINK-9803.
---
Resolution: Fixed

Fixed on master in
09bb7bbc0f2535dab90c59a3362dfe53a70055ef

> Drop canEqual() from TypeSerializer
> ---
>
> Key: FLINK-9803
> URL: https://issues.apache.org/jira/browse/FLINK-9803
> Project: Flink
>  Issue Type: Improvement
>  Components: Core, Type Serialization System
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.8.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> See discussion from 
> https://lists.apache.org/thread.html/7cc6cfd66e96e8d33c768629b55481b6c951c68128f10256abb328fe@%3Cdev.flink.apache.org%3E
> {quote}
> Hi all!
> As part of an attempt to simplify some code in the TypeInfo and
> TypeSerializer area, I would like to drop the "canEqual" methods for the
> following reason:
> "canEqual()" is necessary to make proper equality checks across hierarchies
> of types. This is for example useful in a collection API, stating for
> example whether a List can be equal to a Collection if they have the same
> contents. We don't have that here.
> A certain type information (and serializer) is equal to another one if they
> describe the same type, strictly. There is no necessity for cross hierarchy
> checks.
> This has also led to the situation that most type infos and serializers
> implement just a dummy/default version of "canEqual". Many "equals()"
> methods do not even call the other object's "canEqual", etc.
> As a first step, we could simply deprecate the method and implement an
> empty default, and remove all calls to that method.
> Best,
> Stephan
> {quote}
> This is a reduced version of FLINK-9798, we can't modify {{TypeInformation}} 
> because it is {{@Public}}. We should change {{TypeSerializer}} now because 
> we're already breaking it as part of FLINK-9376.
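
For illustration, a minimal sketch (hypothetical class, not Flink's actual code) of the strict, non-hierarchical equals() style that makes canEqual() unnecessary:

```java
// Hypothetical serializer-like class: equality is strict class identity plus
// configuration, so no cross-hierarchy canEqual() handshake is needed.
final class MySerializer {
	private final boolean compressed; // example configuration field

	MySerializer(boolean compressed) {
		this.compressed = compressed;
	}

	@Override
	public boolean equals(Object obj) {
		if (this == obj) {
			return true;
		}
		// equal only to instances of exactly the same class ...
		if (obj == null || obj.getClass() != getClass()) {
			return false;
		}
		// ... that describe the same configuration
		return compressed == ((MySerializer) obj).compressed;
	}

	@Override
	public int hashCode() {
		return Boolean.hashCode(compressed);
	}
}
```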





[GitHub] yanghua opened a new pull request #7764: [FLINK-11546] Add option to manually set job ID in CLI

2019-02-20 Thread GitBox
yanghua opened a new pull request #7764: [FLINK-11546] Add option to manually 
set job ID in CLI
URL: https://github.com/apache/flink/pull/7764
 
 
   ## What is the purpose of the change
   
   *This pull request adds an option to manually set job ID in CLI*
   
   ## Brief change log
   
 - *Add option (`-jid`) to manually set job ID in CLI*
 - *Add test for the option*
   
   ## Verifying this change
   
   This change is already covered by existing tests, such as *`ClientTest`* and 
*`CliFrontendRunTest`* .
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ **not documented**)
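   
   ## Example
   
   For context, a Flink job ID is a 32-character hex string (16 bytes). A minimal sketch of turning a user-supplied ID into a `JobID` with the existing `JobID.fromHexString` API (the `-jid` flag parsing added by this PR is not shown):
   
```java
import org.apache.flink.api.common.JobID;

public class JobIdExample {
	public static void main(String[] args) {
		// A fixed, user-supplied job ID: 32 hex characters.
		JobID jobId = JobID.fromHexString("00000000000000000000000000000001");
		System.out.println("Submitting with job ID: " + jobId);
	}
}
```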
   




[GitHub] flinkbot commented on issue #7764: [FLINK-11546] Add option to manually set job ID in CLI

2019-02-20 Thread GitBox
flinkbot commented on issue #7764: [FLINK-11546] Add option to manually set job 
ID in CLI
URL: https://github.com/apache/flink/pull/7764#issuecomment-465488570
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❌ 1. The [description] looks good.
   * ❌ 2. There is [consensus] that the contribution should go into Flink.
   * ❔ 3. Needs [attention] from.
   * ❌ 4. The change fits into the overall [architecture].
   * ❌ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) for a full explanation of 
the review process.
 Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve the 1st aspect (similarly, it 
also supports the `consensus`, `architecture` and `quality` keywords)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval
   




[jira] [Updated] (FLINK-11546) Add option to manually set job ID in CLI

2019-02-20 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-11546:
---
Labels: pull-request-available  (was: )

> Add option to manually set job ID in CLI
> 
>
> Key: FLINK-11546
> URL: https://issues.apache.org/jira/browse/FLINK-11546
> Project: Flink
>  Issue Type: Sub-task
>  Components: Client
>Affects Versions: 1.7.0
>Reporter: Ufuk Celebi
>Assignee: vinoyang
>Priority: Minor
>  Labels: pull-request-available
>
> Add an option to specify the job ID during job submissions via the CLI.





[GitHub] dawidwys commented on issue #7742: [hotfix] Remove redundant keyword public

2019-02-20 Thread GitBox
dawidwys commented on issue #7742: [hotfix] Remove redundant keyword public
URL: https://github.com/apache/flink/pull/7742#issuecomment-465489364
 
 
   Hi @leesf, I see no benefit in introducing this change on its own; therefore I would vote for closing this PR.




[jira] [Created] (FLINK-11672) Add example for streaming operator connect

2019-02-20 Thread shengjk1 (JIRA)
shengjk1 created FLINK-11672:


 Summary: Add example for streaming operator connect  
 Key: FLINK-11672
 URL: https://issues.apache.org/jira/browse/FLINK-11672
 Project: Flink
  Issue Type: Improvement
Reporter: shengjk1
Assignee: shengjk1


add example for streaming operator in code





[jira] [Created] (FLINK-11673) add example for streaming operators' broadcast

2019-02-20 Thread shengjk1 (JIRA)
shengjk1 created FLINK-11673:


 Summary: add example for streaming operators' broadcast
 Key: FLINK-11673
 URL: https://issues.apache.org/jira/browse/FLINK-11673
 Project: Flink
  Issue Type: Improvement
  Components: Examples
Reporter: shengjk1
Assignee: shengjk1


add example for streaming operators' broadcast in code
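
A minimal sketch of the kind of example this issue asks for (illustrative only; element values and the job name are arbitrary):

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class BroadcastExample {
	public static void main(String[] args) throws Exception {
		StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

		// broadcast() replicates every element to every parallel instance of
		// the downstream operator instead of partitioning the stream.
		env.fromElements(1, 2, 3)
			.broadcast()
			.map(value -> "got " + value)
			.print();

		env.execute("broadcast example");
	}
}
```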





[GitHub] tillrohrmann commented on issue #7755: [FLINK-11663] Remove control flow break point from Execution#releaseAssignedResource

2019-02-20 Thread GitBox
tillrohrmann commented on issue #7755: [FLINK-11663] Remove control flow break 
point from Execution#releaseAssignedResource
URL: https://github.com/apache/flink/pull/7755#issuecomment-465490443
 
 
   Thanks for the review @StefanRRichter. Merging this PR.




[jira] [Updated] (FLINK-11672) Add example for streaming operator connect

2019-02-20 Thread shengjk1 (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shengjk1 updated FLINK-11672:
-
Component/s: Examples

> Add example for streaming operator connect  
> 
>
> Key: FLINK-11672
> URL: https://issues.apache.org/jira/browse/FLINK-11672
> Project: Flink
>  Issue Type: Improvement
>  Components: Examples
>Reporter: shengjk1
>Assignee: shengjk1
>Priority: Major
>
> add example for streaming operator in code





[jira] [Updated] (FLINK-11672) Add example for streaming operator connect

2019-02-20 Thread shengjk1 (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shengjk1 updated FLINK-11672:
-
Description: add example for streaming operator connect in code  (was: add 
example for streaming operator in code)

> Add example for streaming operator connect  
> 
>
> Key: FLINK-11672
> URL: https://issues.apache.org/jira/browse/FLINK-11672
> Project: Flink
>  Issue Type: Improvement
>  Components: Examples
>Reporter: shengjk1
>Assignee: shengjk1
>Priority: Major
>
> add example for streaming operator connect in code
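
A minimal sketch of such a connect example (illustrative only; sources and values are arbitrary):

```java
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.co.CoMapFunction;

public class ConnectExample {
	public static void main(String[] args) throws Exception {
		StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
		DataStream<String> control = env.fromElements("pause", "resume");
		DataStream<Integer> data = env.fromElements(1, 2, 3);

		// connect() joins two possibly differently typed streams; the
		// CoMapFunction handles elements from each side separately.
		control.connect(data)
			.map(new CoMapFunction<String, Integer, String>() {
				@Override
				public String map1(String value) { return "control: " + value; }

				@Override
				public String map2(Integer value) { return "data: " + value; }
			})
			.print();

		env.execute("connect example");
	}
}
```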





[GitHub] asfgit closed pull request #7755: [FLINK-11663] Remove control flow break point from Execution#releaseAssignedResource

2019-02-20 Thread GitBox
asfgit closed pull request #7755: [FLINK-11663] Remove control flow break point 
from Execution#releaseAssignedResource
URL: https://github.com/apache/flink/pull/7755
 
 
   




[GitHub] StefanRRichter commented on issue #7756: [FLINK-11357] Make ExecutionGraph#suspend terminate ExecutionGraph atomically

2019-02-20 Thread GitBox
StefanRRichter commented on issue #7756: [FLINK-11357] Make 
ExecutionGraph#suspend terminate ExecutionGraph atomically
URL: https://github.com/apache/flink/pull/7756#issuecomment-465493214
 
 
   @flinkbot approve description




[GitHub] StefanRRichter commented on issue #7756: [FLINK-11357] Make ExecutionGraph#suspend terminate ExecutionGraph atomically

2019-02-20 Thread GitBox
StefanRRichter commented on issue #7756: [FLINK-11357] Make 
ExecutionGraph#suspend terminate ExecutionGraph atomically
URL: https://github.com/apache/flink/pull/7756#issuecomment-465492752
 
 
   @tillrohrmann I noticed that this PR and the commit seem to be tagged with a wrong issue ID. We should at least change that before merging.




[GitHub] StefanRRichter commented on issue #7756: [FLINK-11357] Make ExecutionGraph#suspend terminate ExecutionGraph atomically

2019-02-20 Thread GitBox
StefanRRichter commented on issue #7756: [FLINK-11357] Make 
ExecutionGraph#suspend terminate ExecutionGraph atomically
URL: https://github.com/apache/flink/pull/7756#issuecomment-465493315
 
 
   @flinkbot approve consensus




[GitHub] flinkbot edited a comment on issue #7756: [FLINK-11357] Make ExecutionGraph#suspend terminate ExecutionGraph atomically

2019-02-20 Thread GitBox
flinkbot edited a comment on issue #7756: [FLINK-11357] Make 
ExecutionGraph#suspend terminate ExecutionGraph atomically
URL: https://github.com/apache/flink/pull/7756#issuecomment-465204812
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ✅ 1. The [description] looks good.
   - Approved by @StefanRRichter [committer]
   * ✅ 2. There is [consensus] that the contribution should go into Flink.
   - Approved by @StefanRRichter [committer]
   * ❔ 3. Needs [attention] from.
   * ❌ 4. The change fits into the overall [architecture].
   * ❌ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) for a full explanation of 
the review process.
 Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve the 1st aspect (similarly, it 
also supports the `consensus`, `architecture` and `quality` keywords)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval
   




[jira] [Closed] (FLINK-11663) Remove control flow break point from Execution#releaseAssignedResource

2019-02-20 Thread Till Rohrmann (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann closed FLINK-11663.
-
Resolution: Fixed

Fixed via 76ecd7abd239d6690e5e6ea9afe8262c7a389f26

> Remove control flow break point from Execution#releaseAssignedResource
> --
>
> Key: FLINK-11663
> URL: https://issues.apache.org/jira/browse/FLINK-11663
> Project: Flink
>  Issue Type: Improvement
>  Components: Distributed Coordination
>Affects Versions: 1.8.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.8.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In {{Execution#releaseAssignedResource}} we release the assigned resource by 
> calling {{LogicalSlot#releaseSlot}} and use 
> {{FutureUtils.whenCompleteAsyncIfNotDone}} to merge the future back into the 
> main thread in order to complete the {{Execution#releaseFuture}}. This is no 
> longer necessary since the returned future is always completed from within 
> the main thread (with the changes from FLINK-10431).
> In fact this control flow break point makes it hard to properly suspend the 
> {{ExecutionGraph}} atomically as required for FLINK-11537.
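
To make the control-flow point concrete, a simplified sketch (illustrative; `slot`, `releaseFuture`, and `mainThreadExecutor` are stand-ins, not the actual Execution fields):

```java
CompletableFuture<?> slotReleased = slot.releaseSlot(cause);

// Asynchronous variant: even when slotReleased is already complete, the callback
// is re-scheduled on the executor, i.e. a "control flow break point" that prevents
// suspending the ExecutionGraph atomically.
slotReleased.whenCompleteAsync((ignored, failure) -> releaseFuture.complete(null), mainThreadExecutor);

// Synchronous variant: since the future is always completed from the main thread
// (after FLINK-10431), the callback can safely run inline.
slotReleased.whenComplete((ignored, failure) -> releaseFuture.complete(null));
```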





[GitHub] zentol commented on issue #7753: [FLINK-11041][tests] Fix ReinterpretDataStreamAsKeyedStreamITCase

2019-02-20 Thread GitBox
zentol commented on issue #7753: [FLINK-11041][tests] Fix 
ReinterpretDataStreamAsKeyedStreamITCase
URL: https://github.com/apache/flink/pull/7753#issuecomment-465494291
 
 
   @flinkbot disapprove description
   Please explain what the underlying problem was; I can't tell _easily_ since 
it's buried in a pile of style changes.




[GitHub] zentol merged pull request #7763: [hotfix][tests] Fix typo in PackagedProgramTest.java.

2019-02-20 Thread GitBox
zentol merged pull request #7763: [hotfix][tests] Fix typo in 
PackagedProgramTest.java.
URL: https://github.com/apache/flink/pull/7763
 
 
   




[GitHub] 1u0 commented on a change in pull request #7745: [FLINK-11632] Add new config option for TaskManager automatic address binding

2019-02-20 Thread GitBox
1u0 commented on a change in pull request #7745: [FLINK-11632] Add new config 
option for TaskManager automatic address binding
URL: https://github.com/apache/flink/pull/7745#discussion_r258397017
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/taskexecutor/TaskManagerRunner.java
 ##
 @@ -409,13 +409,30 @@ public static RpcService createRpcService(
 		if (taskManagerHostname != null) {
 			LOG.info("Using configured hostname/address for TaskManager: {}.", taskManagerHostname);
 		} else {
-			Time lookupTimeout = Time.milliseconds(AkkaUtils.getLookupTimeout(configuration).toMillis());
-
-			InetAddress taskManagerAddress = LeaderRetrievalUtils.findConnectingAddress(
-				haServices.getResourceManagerLeaderRetriever(),
-				lookupTimeout);
-
-			taskManagerHostname = taskManagerAddress.getHostName();
+			InetAddress taskManagerAddress;
 
 Review comment:
   @uce, the problem with your approach is that the heuristic is still used, so it won't be possible to validate whether it's needed or not.
   One of the goals of FLINK-11632 is to explicitly separate the heuristic mechanism (in the hope that it may be deprecated) from a simpler alternative. Extending the heuristic to IP binding would make it harder to get rid of the heuristic mechanism, as it adds additional edge cases (also, there is no request for such a feature, afaik).




[GitHub] tillrohrmann commented on issue #7760: [FLINK-11655][core] Remove Serializable from CallAsync

2019-02-20 Thread GitBox
tillrohrmann commented on issue #7760: [FLINK-11655][core] Remove Serializable 
from CallAsync
URL: https://github.com/apache/flink/pull/7760#issuecomment-465495911
 
 
   @flinkbot approve all




[GitHub] flinkbot edited a comment on issue #7760: [FLINK-11655][core] Remove Serializable from CallAsync

2019-02-20 Thread GitBox
flinkbot edited a comment on issue #7760: [FLINK-11655][core] Remove 
Serializable from CallAsync
URL: https://github.com/apache/flink/pull/7760#issuecomment-465451394
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ✅ 1. The [description] looks good.
   - Approved by @tillrohrmann [PMC]
   * ✅ 2. There is [consensus] that the contribution should go into Flink.
   - Approved by @tillrohrmann [PMC]
   * ❔ 3. Needs [attention] from.
   * ✅ 4. The change fits into the overall [architecture].
   - Approved by @tillrohrmann [PMC]
   * ✅ 5. Overall code [quality] is good.
   - Approved by @tillrohrmann [PMC]
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) for a full explanation of 
the review process.
 Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve the 1st aspect (similarly, it 
also supports the `consensus`, `architecture` and `quality` keywords)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval
   




[GitHub] zentol commented on issue #7742: [hotfix] Remove redundant keyword public

2019-02-20 Thread GitBox
zentol commented on issue #7742: [hotfix] Remove redundant keyword public
URL: https://github.com/apache/flink/pull/7742#issuecomment-465496466
 
 
   I agree with @dawidwys, closing.




[GitHub] zentol closed pull request #7742: [hotfix] Remove redundant keyword public

2019-02-20 Thread GitBox
zentol closed pull request #7742: [hotfix] Remove redundant keyword public
URL: https://github.com/apache/flink/pull/7742
 
 
   




[GitHub] zentol closed pull request #7749: [hotfix][sql-client] bump jline version

2019-02-20 Thread GitBox
zentol closed pull request #7749: [hotfix][sql-client] bump jline version
URL: https://github.com/apache/flink/pull/7749
 
 
   




[GitHub] zentol commented on issue #7749: [hotfix][sql-client] bump jline version

2019-02-20 Thread GitBox
zentol commented on issue #7749: [hotfix][sql-client] bump jline version
URL: https://github.com/apache/flink/pull/7749#issuecomment-465497006
 
 
   Dependency versions are significant enough to deserve an actual JIRA. 
Closing this PR for the time being.




[jira] [Commented] (FLINK-11552) Akka association issues in 1.7.x

2019-02-20 Thread Till Rohrmann (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16772814#comment-16772814
 ] 

Till Rohrmann commented on FLINK-11552:
---

Good to hear that you could resolve the problem. It says 
"akka.actor.ActorNotFound: Actor not found for ..." which means the same 
thing, but maybe we could make this more explicit.

> Akka association issues in 1.7.x
> 
>
> Key: FLINK-11552
> URL: https://issues.apache.org/jira/browse/FLINK-11552
> Project: Flink
>  Issue Type: Bug
>  Components: Cluster Management
>Affects Versions: 1.7.0, 1.7.1
>Reporter: William Cummings
>Priority: Critical
>
> When testing our application on 1.7.0 and 1.7.1, taskmanagers associate 
> correctly, but when a job is submitted it enters the RUNNING state and no 
> work is ever done. In the jobmanager logs (w/ akka logging turned up to DEBUG 
> & "akka.log.lifecycle.events: true") I can observe some akka errors. 
> Eventually a taskmanager is lost, and the task fails.
> Please let me know if there is any additional information I can collect to 
> help diagnose. If someone can point me in the right direction I'd be happy to 
> implement a fix.
> I've attached the relevant logs below:
> {noformat}
> 019-02-07 17:45:58,543 WARN  akka.remote.ReliableDeliverySupervisor   
>  - Association with remote system 
> [akka.tcp://flink-metrics@app-flink-taskmanager-065199ce40199b440:39271] has 
> failed, address is now gated for [50] ms. Reason: [Association failed with 
> [akka.tcp://flink-metrics@app-flink-taskmanager-065199ce40199b440:39271]] 
> Caused by: [app-flink-taskmanager-065199ce40199b440: unknown error]
> 2019-02-07 17:45:58,548 WARN  akka.remote.ReliableDeliverySupervisor  
>   - Association with remote system 
> [akka.tcp://flink-metrics@app-flink-taskmanager-0056cc1c18d1cff79:41833] has 
> failed, address is now gated for [50] ms. Reason: [Association failed with 
> [akka.tcp://flink-metrics@app-flink-taskmanager-0056cc1c18d1cff79:41833]] 
> Caused by: [app-flink-taskmanager-0056cc1c18d1cff79: unknown error]
> 2019-02-07 17:45:58,563 WARN  akka.remote.ReliableDeliverySupervisor  
>   - Association with remote system 
> [akka.tcp://flink-metrics@app-flink-taskmanager-034a6a653e17966ed:46051] has 
> failed, address is now gated for [50] ms. Reason: [Association failed with 
> [akka.tcp://flink-metrics@app-flink-taskmanager-034a6a653e17966ed:46051]] 
> Caused by: [app-flink-taskmanager-034a6a653e17966ed: unknown error]
> 2019-02-07 17:46:10,538 WARN  akka.remote.ReliableDeliverySupervisor  
>   - Association with remote system 
> [akka.tcp://flink-metrics@app-flink-taskmanager-034a6a653e17966ed:46051] has 
> failed, address is now gated for [50] ms. Reason: [Association failed with 
> [akka.tcp://flink-metrics@app-flink-taskmanager-034a6a653e17966ed:46051]] 
> Caused by: [app-flink-taskmanager-034a6a653e17966ed: unknown error]
> 2019-02-07 17:46:10,548 WARN  akka.remote.ReliableDeliverySupervisor  
>   - Association with remote system 
> [akka.tcp://flink-metrics@app-flink-taskmanager-0056cc1c18d1cff79:41833] has 
> failed, address is now gated for [50] ms. Reason: [Association failed with 
> [akka.tcp://flink-metrics@app-flink-taskmanager-0056cc1c18d1cff79:41833]] 
> Caused by: [app-flink-taskmanager-0056cc1c18d1cff79: unknown error]
> 2019-02-07 17:46:10,568 WARN  akka.remote.ReliableDeliverySupervisor  
>   - Association with remote system 
> [akka.tcp://flink-metrics@app-flink-taskmanager-065199ce40199b440:39271] has 
> failed, address is now gated for [50] ms. Reason: [Association failed with 
> [akka.tcp://flink-metrics@app-flink-taskmanager-065199ce40199b440:39271]] 
> Caused by: [app-flink-taskmanager-065199ce40199b440: unknown error]
> 2019-02-07 17:46:24,204 WARN  akka.remote.ReliableDeliverySupervisor  
>   - Association with remote system 
> [akka.tcp://flink-metrics@app-flink-taskmanager-0056cc1c18d1cff79:41833] has 
> failed, address is now gated for [50] ms. Reason: [Association failed with 
> [akka.tcp://flink-metrics@app-flink-taskmanager-0056cc1c18d1cff79:41833]] 
> Caused by: [app-flink-taskmanager-0056cc1c18d1cff79: unknown error]
> 2019-02-07 17:46:24,210 WARN  akka.remote.ReliableDeliverySupervisor  
>   - Association with remote system 
> [akka.tcp://flink-metrics@app-flink-taskmanager-065199ce40199b440:39271] has 
> failed, address is now gated for [50] ms. Reason: [Association failed with 
> [akka.tcp://flink-metrics@app-flink-taskmanager-065199ce40199b440:39271]] 
> Caused by: [app-flink-taskmanager-065199ce40199b440: unknown error]
> 2019-02-07 17:46:24,211 WARN  akka.remote.ReliableDeliverySupervisor  
>    

[GitHub] igalshilman commented on issue #7759: [FLINK-11485][FLINK-10897] POJO state schema evolution / migrate PojoSerializer to use new compatibility APIs

2019-02-20 Thread GitBox
igalshilman commented on issue #7759: [FLINK-11485][FLINK-10897] POJO state 
schema evolution / migrate PojoSerializer to use new compatibility APIs
URL: https://github.com/apache/flink/pull/7759#issuecomment-465497769
 
 
   @flinkbot approve description




[GitHub] tillrohrmann commented on issue #7756: [FLINK-11357] Make ExecutionGraph#suspend terminate ExecutionGraph atomically

2019-02-20 Thread GitBox
tillrohrmann commented on issue #7756: [FLINK-11357] Make 
ExecutionGraph#suspend terminate ExecutionGraph atomically
URL: https://github.com/apache/flink/pull/7756#issuecomment-465498000
 
 
   Good catch @StefanRRichter. I'll correct the transposed digits when merging.




[GitHub] flinkbot edited a comment on issue #7759: [FLINK-11485][FLINK-10897] POJO state schema evolution / migrate PojoSerializer to use new compatibility APIs

2019-02-20 Thread GitBox
flinkbot edited a comment on issue #7759: [FLINK-11485][FLINK-10897] POJO state 
schema evolution / migrate PojoSerializer to use new compatibility APIs
URL: https://github.com/apache/flink/pull/7759#issuecomment-465441144
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ✅ 1. The [description] looks good.
   - Approved by @igalshilman
   * ❌ 2. There is [consensus] that the contribution should go into Flink.
   * ❗ 3. Needs [attention] from.
   - Needs attention by @aljoscha [PMC], @igalshilman
   * ❌ 4. The change fits into the overall [architecture].
   * ❌ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) for a full explanation of 
the review process.
 Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve the 1st aspect (similarly, it 
also supports the `consensus`, `architecture` and `quality` keywords)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval
   




[GitHub] aljoscha opened a new pull request #7765: [FLINK-11334][core] Migrate enum serializers to use new serialization compatibility abstractions

2019-02-20 Thread GitBox
aljoscha opened a new pull request #7765: [FLINK-11334][core] Migrate enum 
serializers to use new serialization compatibility abstractions
URL: https://github.com/apache/flink/pull/7765
 
 
   This is an updated version of #7710.
   
   It has the following changes on top:
- Remove old deserialization logic from new EnumSerializer Snapshot
- Fix compatibility check in EnumSerializer Snapshot and do proper 
reconfiguration
   
   The individual commits have explanations for why the change is necessary.
   
   cc @klion26 
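   
   For readers following along, the heart of the fixed compatibility check is roughly this (a hedged sketch, not the PR's literal code; `previousEnums`, `currentEnums`, and the two helper methods are illustrative names, not existing Flink APIs):
   
```java
// If the enum constants are unchanged, the serializer is compatible as is; if they
// were merely reordered or extended, the serializer can be reconfigured to keep the
// previously snapshotted order; otherwise (e.g. a constant was removed) the
// serializers are incompatible.
if (Arrays.equals(previousEnums, currentEnums)) {
	return TypeSerializerSchemaCompatibility.compatibleAsIs();
} else if (isReorderingOrExtensionOf(previousEnums, currentEnums)) {
	return TypeSerializerSchemaCompatibility.compatibleWithReconfiguredSerializer(
		serializerReconfiguredWithOrder(previousEnums));
} else {
	return TypeSerializerSchemaCompatibility.incompatible();
}
```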




[GitHub] StefanRRichter commented on issue #7753: [FLINK-11041][tests] Fix ReinterpretDataStreamAsKeyedStreamITCase

2019-02-20 Thread GitBox
StefanRRichter commented on issue #7753: [FLINK-11041][tests] Fix 
ReinterpretDataStreamAsKeyedStreamITCase
URL: https://github.com/apache/flink/pull/7753#issuecomment-465498517
 
 
   @zentol updated




[GitHub] aljoscha opened a new pull request #7766: [FLINK-11334][core] Migrate EnumValueSerializer to use new serialization compatibility abstractions

2019-02-20 Thread GitBox
aljoscha opened a new pull request #7766: [FLINK-11334][core] Migrate 
EnumValueSerializer to use new serialization compatibility abstractions
URL: https://github.com/apache/flink/pull/7766
 
 
   This is an updated version of #7734 
   
   It has the following changes on top:
- Don't use IntSerializer in EnumValueSerializer
- Remove old deserialization logic from ScalaEnumSerializerSnapshot
- Create proper restore serializer in ScalaEnumSerializerSnapshot
- Fix migration check in ScalaEnumSerializerSnapshot
   
   The individual commits have explanations for why the change is necessary.
   
   cc @klion26 




[GitHub] aljoscha commented on issue #7734: [FLINK-11334][core] Migrate EnumValueSerializer to use new serialization compatibility abstractions

2019-02-20 Thread GitBox
aljoscha commented on issue #7734: [FLINK-11334][core] Migrate 
EnumValueSerializer to use new serialization compatibility abstractions
URL: https://github.com/apache/flink/pull/7734#issuecomment-465499351
 
 
   @klion26 Thanks for your contribution again! 😄 It turns out that the enum serializers are quite tricky; @igalshilman and I spent some time creating an updated PR that includes your commit: #7766




[GitHub] aljoscha closed pull request #7710: [FLINK-11334][core] Migrate enum serializers to use new serialization compatibility abstractions

2019-02-20 Thread GitBox
aljoscha closed pull request #7710: [FLINK-11334][core] Migrate enum 
serializers to use new serialization compatibility abstractions
URL: https://github.com/apache/flink/pull/7710
 
 
   




[GitHub] flinkbot commented on issue #7766: [FLINK-11334][core] Migrate EnumValueSerializer to use new serialization compatibility abstractions

2019-02-20 Thread GitBox
flinkbot commented on issue #7766: [FLINK-11334][core] Migrate 
EnumValueSerializer to use new serialization compatibility abstractions
URL: https://github.com/apache/flink/pull/7766#issuecomment-465499085
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❌ 1. The [description] looks good.
   * ❌ 2. There is [consensus] that the contribution should go into Flink.
   * ❔ 3. Needs [attention] from.
   * ❌ 4. The change fits into the overall [architecture].
   * ❌ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) for a full explanation of 
the review process.
 Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve the 1st aspect (similarly, it 
also supports the `consensus`, `architecture` and `quality` keywords)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval
   




[GitHub] flinkbot commented on issue #7765: [FLINK-11334][core] Migrate enum serializers to use new serialization compatibility abstractions

2019-02-20 Thread GitBox
flinkbot commented on issue #7765: [FLINK-11334][core] Migrate enum serializers 
to use new serialization compatibility abstractions
URL: https://github.com/apache/flink/pull/7765#issuecomment-465499092
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❌ 1. The [description] looks good.
   * ❌ 2. There is [consensus] that the contribution should go into Flink.
   * ❔ 3. Needs [attention] from.
   * ❌ 4. The change fits into the overall [architecture].
   * ❌ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) for a full explanation of 
the review process.
 Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve the 1st aspect (similarly, it 
also supports the `consensus`, `architecture` and `quality` keywords)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval
   




[GitHub] aljoscha commented on issue #7710: [FLINK-11334][core] Migrate enum serializers to use new serialization compatibility abstractions

2019-02-20 Thread GitBox
aljoscha commented on issue #7710: [FLINK-11334][core] Migrate enum serializers 
to use new serialization compatibility abstractions
URL: https://github.com/apache/flink/pull/7710#issuecomment-465499519
 
 
   @klion26 Thanks for your contribution again! 😄 It turns out that the enum serializers are quite tricky; @igalshilman and I spent some time creating an updated PR that includes your commit: #7765




[GitHub] aljoscha closed pull request #7734: [FLINK-11334][core] Migrate EnumValueSerializer to use new serialization compatibility abstractions

2019-02-20 Thread GitBox
aljoscha closed pull request #7734: [FLINK-11334][core] Migrate 
EnumValueSerializer to use new serialization compatibility abstractions
URL: https://github.com/apache/flink/pull/7734
 
 
   




[GitHub] zentol commented on a change in pull request #7719: [FLINK-10569] Remove Instance usage in ExecutionGraphDeploymentTest

2019-02-20 Thread GitBox
zentol commented on a change in pull request #7719: [FLINK-10569] Remove 
Instance usage in ExecutionGraphDeploymentTest
URL: https://github.com/apache/flink/pull/7719#discussion_r258400019
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/executiongraph/ExecutionVertexDeploymentTest.java
 ##
 @@ -273,15 +270,7 @@ public void testFailExternallyDuringDeploy() {
}
};
 
-   TestingLogicalSlot testingLogicalSlot =
-   new TestingLogicalSlot(
-   new LocalTaskManagerLocation(),
-   blockSubmitGateway,
-   0,
-   new AllocationID(),
-   new SlotRequestId(),
-   new SlotSharingGroupId(),
-   null);
+			TestingLogicalSlot testingLogicalSlot = new TestingLogicalSlot(blockSubmitGateway);
 
 Review comment:
   this change doesn't belong into this PR.




[GitHub] zentol commented on a change in pull request #7719: [FLINK-10569] Remove Instance usage in ExecutionGraphDeploymentTest

2019-02-20 Thread GitBox
zentol commented on a change in pull request #7719: [FLINK-10569] Remove 
Instance usage in ExecutionGraphDeploymentTest
URL: https://github.com/apache/flink/pull/7719#discussion_r258403040
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/executiongraph/ExecutionGraphDeploymentTest.java
 ##
 @@ -182,14 +179,20 @@ public void testBuildDeploymentDescriptor() {
ExecutionJobVertex ejv = eg.getAllVertices().get(jid2);
ExecutionVertex vertex = ejv.getTaskVertices()[3];
 
-		ExecutionGraphTestUtils.SimpleActorGatewayWithTDD instanceGateway =
-			new ExecutionGraphTestUtils.SimpleActorGatewayWithTDD(
-				TestingUtils.directExecutionContext(),
-				blobCache);
+		SimpleAckingTaskManagerGateway taskManagerGateway = new SimpleAckingTaskManagerGateway();
+		CompletableFuture<TaskDeploymentDescriptor> tdd = new CompletableFuture<>();
+		taskManagerGateway.setSubmitConsumer(taskDeploymentDescriptor -> {
+			try {
+				taskDeploymentDescriptor.loadBigData(blobCache);
+			} catch (Exception e) {
+				e.printStackTrace();
 
 Review comment:
   Alternatively we could use a `ThrowingConsumer` here and actually catch exceptions in the `SimpleAckingTaskManagerGateway`, which logs them and returns a failed future.
   
   Throwing an error here is essentially undefined behavior.
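   
   A rough sketch of that alternative (hypothetical; it assumes `SimpleAckingTaskManagerGateway` were changed to hold a `ThrowingConsumer` from `org.apache.flink.util.function`, and a `LOG` field):
   
```java
// Hypothetical field replacing the plain Consumer:
private ThrowingConsumer<TaskDeploymentDescriptor, Exception> submitConsumer;

@Override
public CompletableFuture<Acknowledge> submitTask(TaskDeploymentDescriptor tdd, Time timeout) {
	try {
		submitConsumer.accept(tdd);
		return CompletableFuture.completedFuture(Acknowledge.get());
	} catch (Exception e) {
		// Log and surface the failure as a failed future instead of
		// letting the exception escape (undefined behavior).
		LOG.warn("Task submission failed.", e);
		return FutureUtils.completedExceptionally(e);
	}
}
```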




[GitHub] zentol commented on issue #7722: [FLINK-10569] Remove Instance usage in FailoverRegionTest

2019-02-20 Thread GitBox
zentol commented on issue #7722: [FLINK-10569] Remove Instance usage in 
FailoverRegionTest
URL: https://github.com/apache/flink/pull/7722#issuecomment-465501441
 
 
   Why can't we refactor the test to `TestingLogicalSlotProvider` now? What is blocking this effort?




[GitHub] zentol edited a comment on issue #7722: [FLINK-10569] Remove Instance usage in FailoverRegionTest

2019-02-20 Thread GitBox
zentol edited a comment on issue #7722: [FLINK-10569] Remove Instance usage in 
FailoverRegionTest
URL: https://github.com/apache/flink/pull/7722#issuecomment-465501441
 
 
   Why can't we refactor the test to `TestingLogicalSlotProvider` now? What is 
blocking this effort?




[jira] [Commented] (FLINK-11421) Add compilation options to allow compiling generated code with JDK compiler

2019-02-20 Thread Kurt Young (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16772832#comment-16772832
 ] 

Kurt Young commented on FLINK-11421:


[~fan_li_ya] Sounds great, thanks! BTW, could you please close the pull request for 
now? We want to keep all open PRs relevant to their current status, and you can 
reopen it anytime once you think the timing is good.

> Add compilation options to allow compiling generated code with JDK compiler 
> 
>
> Key: FLINK-11421
> URL: https://issues.apache.org/jira/browse/FLINK-11421
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Liya Fan
>Assignee: Liya Fan
>Priority: Major
>  Labels: pull-request-available
>   Original Estimate: 240h
>  Time Spent: 10m
>  Remaining Estimate: 239h 50m
>
> Flink supports some operators (like Calc, Hash Agg, Hash Join, etc.) by code 
> generation. That is, Flink generates their source code dynamically, and then 
> compile it into Java Byte Code, which is load and executed at runtime.
>  
> By default, Flink compiles the generated source code by Janino. This is fast, 
> as the compilation often finishes in hundreds of milliseconds. The generated 
> Java Byte Code, however, is of poor quality. To illustrate, we use Java 
> Compiler API (JCA) to compile the generated code. Experiments on TPC-H (1 TB) 
> queries show that the E2E time can be more than 10% shorter, when operators 
> are compiled by JCA, despite that it takes more time (a few seconds) to 
> compile with JCA.
>  
> Therefore, we believe it is beneficial to compile generated code by JCA in 
> the following scenarios: 1) For batch jobs, the E2E time is relatively long, 
> so it is worth of spending more time compiling and generating high quality 
> Java Byte Code. 2) For repeated stream jobs, the generated code will be 
> compiled once and run many times. Therefore, it pays to spend more time 
> compiling for the first time, and enjoy the high byte code qualities for 
> later runs.
>  
> According to the above observations, we want to provide a compilation option 
> (Janino, JCA, or dynamic) for Flink, so that the user can choose the one 
> suitable for their specific scenario and obtain better performance whenever 
> possible.
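
For illustration, the two compilation paths discussed differ roughly as follows (a minimal sketch, not Flink code; the generated source below is a placeholder, and a real JCA setup would feed in-memory JavaFileObjects):

```java
import org.codehaus.janino.SimpleCompiler;

import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;

public class CompilerChoice {
	public static void main(String[] args) throws Exception {
		String source = "public class Generated { public static int f(int x) { return x * 2; } }";

		// Path 1: Janino, which compiles in-memory within milliseconds but
		// produces less optimized byte code.
		SimpleCompiler janino = new SimpleCompiler();
		janino.cook(source);
		Class<?> generated = janino.getClassLoader().loadClass("Generated");
		System.out.println(generated.getName());

		// Path 2: the JDK compiler (javax.tools), which is slower (seconds)
		// but yields higher-quality byte code.
		JavaCompiler jdkCompiler = ToolProvider.getSystemJavaCompiler();
		System.out.println("JDK compiler available: " + (jdkCompiler != null));
	}
}
```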





[GitHub] zentol commented on a change in pull request #7723: [FLINK-10569] Remove Instance usage in ExecutionVertexCancelTest

2019-02-20 Thread GitBox
zentol commented on a change in pull request #7723:  [FLINK-10569] Remove 
Instance usage in ExecutionVertexCancelTest
URL: https://github.com/apache/flink/pull/7723#discussion_r258405621
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/executiongraph/ExecutionVertexCancelTest.java
 ##
 @@ -399,33 +379,26 @@ public void testActionsWhileCancelling() {
}
}
 
-	public static class CancelSequenceActorGateway extends BaseTestingActorGateway {
+	public static class CancelSequenceSimpleAckingTaskManagerGateway extends SimpleAckingTaskManagerGateway {
 		private final int successfulOperations;
 		private int index = -1;
 
-		public CancelSequenceActorGateway(ExecutionContext executionContext, int successfulOperations) {
-			super(executionContext);
+		public CancelSequenceSimpleAckingTaskManagerGateway(int successfulOperations) {
+			super();
 			this.successfulOperations = successfulOperations;
 		}
 
 		@Override
-		public Object handleMessage(Object message) throws Exception {
-			Object result;
-			if(message instanceof SubmitTask) {
-				result = Acknowledge.get();
-			} else if(message instanceof CancelTask) {
-				index++;
-
-				if(index >= successfulOperations){
-					throw new IOException("RPC call failed.");
-				} else {
-					result = Acknowledge.get();
-				}
+		public CompletableFuture<Acknowledge> cancelTask(ExecutionAttemptID executionAttemptID, Time timeout) {
+			index++;
+
+			if (index >= successfulOperations) {
+				CompletableFuture<Acknowledge> result = new CompletableFuture<>();
+				result.completeExceptionally(new IOException("Rpc call fails"));
 
 Review comment:
   you can use `FutureUtils#completedExceptionally` for this
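   
   A sketch of the suggested simplification (assuming the test's existing imports plus `org.apache.flink.runtime.concurrent.FutureUtils`):
   
```java
// Instead of creating and completing the future manually:
CompletableFuture<Acknowledge> result = new CompletableFuture<>();
result.completeExceptionally(new IOException("Rpc call fails"));
return result;

// ... the helper collapses this into one expression:
return FutureUtils.completedExceptionally(new IOException("Rpc call fails"));
```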




[GitHub] tzulitai commented on a change in pull request #7759: [FLINK-11485][FLINK-10897] POJO state schema evolution / migrate PojoSerializer to use new compatibility APIs

2019-02-20 Thread GitBox
tzulitai commented on a change in pull request #7759: 
[FLINK-11485][FLINK-10897] POJO state schema evolution / migrate PojoSerializer 
to use new compatibility APIs
URL: https://github.com/apache/flink/pull/7759#discussion_r258406515
 
 

 ##
 File path: 
flink-core/src/main/java/org/apache/flink/api/java/typeutils/runtime/PojoSerializerSnapshot.java
 ##
 @@ -0,0 +1,424 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.api.java.typeutils.runtime;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.api.common.typeutils.CompositeTypeSerializerUtil;
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.api.common.typeutils.TypeSerializerSchemaCompatibility;
+import org.apache.flink.api.common.typeutils.TypeSerializerSnapshot;
+import org.apache.flink.api.java.tuple.Tuple2;
+import org.apache.flink.core.memory.DataInputView;
+import org.apache.flink.core.memory.DataOutputView;
+import org.apache.flink.util.LinkedOptionalMap;
+
+import java.io.IOException;
+import java.lang.reflect.Field;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.LinkedHashMap;
+import java.util.Map;
+import java.util.Set;
+
+import static org.apache.flink.util.Preconditions.checkArgument;
+import static org.apache.flink.util.Preconditions.checkState;
+
+/**
+ * Snapshot class for the {@link PojoSerializer}.
+ */
+@Internal
+public class PojoSerializerSnapshot<T> implements TypeSerializerSnapshot<T> {
+
+	/**
+	 * We start from version {@code 2}. {@code 1} is retained for {@link PojoSerializer.PojoSerializerConfigSnapshot}.
+	 */
+	private static final int VERSION = 2;
+
+	/**
+	 * Contains the actual content for the serializer snapshot.
+	 */
+	private PojoSerializerSnapshotData<T> snapshotData;
+
+	/**
+	 * Constructor for reading the snapshot.
+	 */
+	public PojoSerializerSnapshot() {}
+
+	/**
+	 * Constructor for writing the snapshot.
+	 *
+	 * @param pojoClass the Pojo type class.
+	 * @param fieldSerializers map of fields to their corresponding serializers.
+	 * @param registeredSubclassSerializers map of registered subclasses to their corresponding serializers.
+	 * @param nonRegisteredSubclassSerializers map of non-registered subclasses to their corresponding serializers.
+	 */
+	PojoSerializerSnapshot(
+			Class<T> pojoClass,
+			LinkedHashMap<Field, TypeSerializer<?>> fieldSerializers,
+			LinkedHashMap<Class<?>, TypeSerializer<?>> registeredSubclassSerializers,
+			HashMap<Class<?>, TypeSerializer<?>> nonRegisteredSubclassSerializers) {
+
+		this.snapshotData = PojoSerializerSnapshotData.createFrom(
+			pojoClass,
+			fieldSerializers,
+			registeredSubclassSerializers,
+			nonRegisteredSubclassSerializers);
+	}
+
+	/**
+	 * Constructor for backwards compatibility paths with the {@link PojoSerializer.PojoSerializerConfigSnapshot}.
+	 * This is used in {@link PojoSerializer.PojoSerializerConfigSnapshot#resolveSchemaCompatibility(TypeSerializer)}
+	 * to delegate the compatibility check to this snapshot class.
+	 *
+	 * @param pojoClass the Pojo type class.
+	 * @param existingFieldSerializerSnapshots the map of field serializer snapshots in the legacy snapshot.
+	 * @param existingRegisteredSubclassSerializerSnapshots the map of registered subclass serializer snapshots in the legacy snapshot.
+	 * @param existingNonRegisteredSubclassSerializerSnapshots the map of non-registered subclass serializer snapshots in the legacy snapshot.
+	 */
+	PojoSerializerSnapshot(
+			Class<T> pojoClass,
+			LinkedHashMap<Field, TypeSerializerSnapshot<?>> existingFieldSerializerSnapshots,
+			LinkedHashMap<Class<?>, TypeSerializerSnapshot<?>> existingRegisteredSubclassSerializerSnapshots,
+			LinkedHashMap<Class<?>, TypeSerializerSnapshot<?>> existingNonRegisteredSubclassSerializerSnapshots
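[Editor's note: to make the read/write constructor split above concrete, here is a minimal sketch of how such a snapshot round-trips through the versioned helpers on TypeSerializerSnapshot. It assumes the writeVersionedSnapshot/readVersionedSnapshot static helpers and the memory I/O classes as they exist on current master; treat those names as assumptions, since this PR may still move things around.]

import java.io.IOException;

import org.apache.flink.api.common.typeutils.TypeSerializerSnapshot;
import org.apache.flink.core.memory.DataInputDeserializer;
import org.apache.flink.core.memory.DataOutputSerializer;

public class SnapshotRoundtripSketch {

	public static <T> TypeSerializerSnapshot<T> roundtrip(
			TypeSerializerSnapshot<T> snapshot, ClassLoader classLoader) throws IOException {
		// Write the snapshot together with its version, as the framework does on checkpoints.
		DataOutputSerializer out = new DataOutputSerializer(128);
		TypeSerializerSnapshot.writeVersionedSnapshot(out, snapshot);

		// Read it back; this goes through the nullary "read" constructor, followed by readSnapshot().
		DataInputDeserializer in = new DataInputDeserializer(out.getCopyOfBuffer());
		return TypeSerializerSnapshot.readVersionedSnapshot(in, classLoader);
	}
}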

[GitHub] zentol commented on a change in pull request #7724: [FLINK-10569] Remove Instance usage in ExecutionVertexDeploymentTest

2019-02-20 Thread GitBox
zentol commented on a change in pull request #7724:  [FLINK-10569] Remove 
Instance usage in ExecutionVertexDeploymentTest
URL: https://github.com/apache/flink/pull/7724#discussion_r258407392
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/executiongraph/ExecutionVertexDeploymentTest.java
 ##
 @@ -328,8 +314,6 @@ public void testTddProducedPartitionsLazyScheduling() 
throws Exception {
}
}
 
-
-
 
 Review comment:
   revert


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] kisimple commented on issue #7757: [FLINK-11630] Triggers the termination of all running Tasks when shutting down TaskExecutor

2019-02-20 Thread GitBox
kisimple commented on issue #7757: [FLINK-11630] Triggers the termination of 
all running Tasks when shutting down TaskExecutor
URL: https://github.com/apache/flink/pull/7757#issuecomment-465505905
 
 
   Thanks for the quick response @tillrohrmann. My bad, will correct it soon :)


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-11640) Caused by: java.lang.IllegalStateException: No operators defined in streaming topology. Cannot execute.

2019-02-20 Thread Chesnay Schepler (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-11640:
-
Fix Version/s: (was: 1.7.1)

> Caused by: java.lang.IllegalStateException: No operators defined in streaming 
> topology. Cannot execute.
> ---
>
> Key: FLINK-11640
> URL: https://issues.apache.org/jira/browse/FLINK-11640
> Project: Flink
>  Issue Type: Bug
>  Components: DataStream API
>Affects Versions: 1.7.1
> Environment: CentOS 7, JDK 1.8
> Run command:
>       flink run flink-maven-stala-2-0.0.1.jar 
> com.opensourceteams.module.bigdata.flink.example.stream.worldcount.nc.Run
> The jar flink-maven-stala-2-0.0.1.jar contains the WordCount example from the 
> official website; it fails with the error below when run on a Flink cluster, 
> but runs normally from the IDE.
>Reporter: thinktothings
>Priority: Minor
> Attachments: 屏幕快照 2019-02-17 下午6.37.16.png
>
>
> Caused by: java.lang.IllegalStateException: No operators defined in streaming 
> topology. Cannot execute.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-11674) Add an initial Blink SQL code generator

2019-02-20 Thread Kurt Young (JIRA)
Kurt Young created FLINK-11674:
--

 Summary: Add an initial Blink SQL code generator
 Key: FLINK-11674
 URL: https://issues.apache.org/jira/browse/FLINK-11674
 Project: Flink
  Issue Type: Sub-task
  Components: Table API & SQL
Reporter: Kurt Young


A more detailed description can be found in 
[FLIP-32|https://cwiki.apache.org/confluence/display/FLINK/FLIP-32%3A+Restructure+flink-table+for+future+contributions]
 

This issue is an umbrella issue for tasks related to the code generator for 
Blink SQL planner. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Reopened] (FLINK-11640) Caused by: java.lang.IllegalStateException: No operators defined in streaming topology. Cannot execute.

2019-02-20 Thread Chesnay Schepler (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler reopened FLINK-11640:
--

> Caused by: java.lang.IllegalStateException: No operators defined in streaming 
> topology. Cannot execute.
> ---
>
> Key: FLINK-11640
> URL: https://issues.apache.org/jira/browse/FLINK-11640
> Project: Flink
>  Issue Type: Bug
>  Components: DataStream API
>Affects Versions: 1.7.1
> Environment: CentOS 7, JDK 1.8
> Run command:
>       flink run flink-maven-stala-2-0.0.1.jar 
> com.opensourceteams.module.bigdata.flink.example.stream.worldcount.nc.Run
> The jar flink-maven-stala-2-0.0.1.jar contains the WordCount example from the 
> official website; it fails with the error below when run on a Flink cluster, 
> but runs normally from the IDE.
>Reporter: thinktothings
>Priority: Minor
> Fix For: 1.7.1
>
> Attachments: 屏幕快照 2019-02-17 下午6.37.16.png
>
>
> Caused by: java.lang.IllegalStateException: No operators defined in streaming 
> topology. Cannot execute.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (FLINK-11640) Caused by: java.lang.IllegalStateException: No operators defined in streaming topology. Cannot execute.

2019-02-20 Thread Chesnay Schepler (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-11640.

Resolution: Not A Problem

> Caused by: java.lang.IllegalStateException: No operators defined in streaming 
> topology. Cannot execute.
> ---
>
> Key: FLINK-11640
> URL: https://issues.apache.org/jira/browse/FLINK-11640
> Project: Flink
>  Issue Type: Bug
>  Components: DataStream API
>Affects Versions: 1.7.1
> Environment: CentOS 7, JDK 1.8
> Run command:
>       flink run flink-maven-stala-2-0.0.1.jar 
> com.opensourceteams.module.bigdata.flink.example.stream.worldcount.nc.Run
> The jar flink-maven-stala-2-0.0.1.jar contains the WordCount example from the 
> official website; it fails with the error below when run on a Flink cluster, 
> but runs normally from the IDE.
>Reporter: thinktothings
>Priority: Minor
> Attachments: 屏幕快照 2019-02-17 下午6.37.16.png
>
>
> Caused by: java.lang.IllegalStateException: No operators defined in streaming 
> topology. Cannot execute.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-11490) Add an initial Blink SQL batch runtime

2019-02-20 Thread Jingsong Lee (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee reassigned FLINK-11490:


Assignee: Jingsong Lee  (was: Danny Chan)

> Add an initial Blink SQL batch runtime
> --
>
> Key: FLINK-11490
> URL: https://issues.apache.org/jira/browse/FLINK-11490
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Jingsong Lee
>Priority: Major
>
> A more detailed description can be found in 
> [FLIP-32|https://cwiki.apache.org/confluence/display/FLINK/FLIP-32%3A+Restructure+flink-table+for+future+contributions].
> This issue is an umbrella issue for tasks related to the merging of Blink 
> batch runtime features. The goal is to provide minimum viable product (MVP) 
> to batch users.
> An exact list of batch features, their properties, and dependencies needs to 
> be defined.
> The type system might not have been reworked at this stage. Operations might 
> not be executed with the full performance until changes in other Flink core 
> components have taken place.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] igalshilman commented on a change in pull request #7759: [FLINK-11485][FLINK-10897] POJO state schema evolution / migrate PojoSerializer to use new compatibility APIs

2019-02-20 Thread GitBox
igalshilman commented on a change in pull request #7759: 
[FLINK-11485][FLINK-10897] POJO state schema evolution / migrate PojoSerializer 
to use new compatibility APIs
URL: https://github.com/apache/flink/pull/7759#discussion_r258410039
 
 

 ##
 File path: 
flink-core/src/main/java/org/apache/flink/api/common/typeutils/CompositeTypeSerializerUtil.java
 ##
 @@ -62,4 +65,147 @@ public static void setNestedSerializersSnapshots(
		NestedSerializersSnapshotDelegate delegate = new NestedSerializersSnapshotDelegate(nestedSnapshots);

		compositeSnapshot.setNestedSerializersSnapshotDelegate(delegate);
	}
+
+	/**
+	 * Constructs an {@link IntermediateCompatibilityResult} with the given array of nested serializers
+	 * and their corresponding serializer snapshots.
+	 *
+	 * <p>This result is considered "intermediate", because the actual final result is not yet built
+	 * if it isn't defined. This is the case if the final result is supposed to be
+	 * {@link TypeSerializerSchemaCompatibility#compatibleWithReconfiguredSerializer(TypeSerializer)},
+	 * where construction of the reconfigured serializer instance should be done by the caller.
+	 *
+	 * <p>For other cases, i.e. {@link TypeSerializerSchemaCompatibility#compatibleAsIs()},
+	 * {@link TypeSerializerSchemaCompatibility#compatibleAfterMigration()}, and
+	 * {@link TypeSerializerSchemaCompatibility#incompatible()}, these results are considered final.
+	 *
+	 * @param newNestedSerializers the new nested serializers to check for compatibility.
+	 * @param nestedSerializerSnapshots the associated nested serializers' snapshots.
+	 *
+	 * @return the intermediate compatibility result of the new nested serializers.
+	 */
+	public static <T> IntermediateCompatibilityResult<T> constructIntermediateCompatibilityResult(
 
 Review comment:
   Do you think it makes sense to add some unit tests for this? A few basic cases, for example (a rough sketch of the first one follows below):
   - StringSerializer and LongSerializer with String and Long snapshots
   - StringSerializer and LongSerializer with String and String snapshots (hence incompatible)
   - DummySerializer with a DummySnapshot that we can set in the setup phase of a test to return whatever compatibility we want.
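
   [Editor's note: a minimal sketch of the first case in JUnit. The `IntermediateCompatibilityResult` nested type, its `isCompatibleAsIs()` accessor, and the array-based signature are read off this PR's diff, so treat them as assumptions until the PR is merged.]

import static org.junit.Assert.assertTrue;

import org.apache.flink.api.common.typeutils.CompositeTypeSerializerUtil;
import org.apache.flink.api.common.typeutils.CompositeTypeSerializerUtil.IntermediateCompatibilityResult;
import org.apache.flink.api.common.typeutils.TypeSerializer;
import org.apache.flink.api.common.typeutils.TypeSerializerSnapshot;
import org.apache.flink.api.common.typeutils.base.LongSerializer;
import org.apache.flink.api.common.typeutils.base.StringSerializer;
import org.junit.Test;

public class IntermediateCompatibilityResultSketchTest {

	@Test
	public void matchingNestedSerializersAreCompatibleAsIs() {
		// New serializers and the snapshots they are checked against line up by position.
		TypeSerializer<?>[] newSerializers =
			new TypeSerializer<?>[] { StringSerializer.INSTANCE, LongSerializer.INSTANCE };
		TypeSerializerSnapshot<?>[] snapshots =
			new TypeSerializerSnapshot<?>[] {
				StringSerializer.INSTANCE.snapshotConfiguration(),
				LongSerializer.INSTANCE.snapshotConfiguration()
			};

		IntermediateCompatibilityResult<?> result =
			CompositeTypeSerializerUtil.constructIntermediateCompatibilityResult(newSerializers, snapshots);

		assertTrue(result.isCompatibleAsIs());
	}
}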
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] igalshilman commented on a change in pull request #7759: [FLINK-11485][FLINK-10897] POJO state schema evolution / migrate PojoSerializer to use new compatibility APIs

2019-02-20 Thread GitBox
igalshilman commented on a change in pull request #7759: 
[FLINK-11485][FLINK-10897] POJO state schema evolution / migrate PojoSerializer 
to use new compatibility APIs
URL: https://github.com/apache/flink/pull/7759#discussion_r258410550
 
 

 ##
 File path: flink-core/src/main/java/org/apache/flink/util/OptionalMap.java
 ##
 @@ -36,7 +36,7 @@
  * An OptionalMap is an order preserving map (like {@link LinkedHashMap}) where keys
  * have a unique string name, but are optionally present, and the values are optional.
  */
-final class OptionalMap {
+public final class OptionalMap {
 
 Review comment:
   Should we mark it as `@Internal`? (a sketch of the change follows below)
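
   [Editor's note: a sketch of the suggested change; the class body and any type parameters are elided here, as in the quoted diff.]

import org.apache.flink.annotation.Internal;

// Keep the class public for cross-package use, but mark it as internal
// so it is excluded from the stable API surface.
@Internal
public final class OptionalMap {
	// ... existing body unchanged ...
}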


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] klion26 commented on a change in pull request #7766: [FLINK-11334][core] Migrate EnumValueSerializer to use new serialization compatibility abstractions

2019-02-20 Thread GitBox
klion26 commented on a change in pull request #7766: [FLINK-11334][core] 
Migrate EnumValueSerializer to use new serialization compatibility abstractions
URL: https://github.com/apache/flink/pull/7766#discussion_r258408891
 
 

 ##
 File path: 
flink-scala/src/main/scala/org/apache/flink/api/scala/typeutils/ScalaEnumSerializerSnapshot.scala
 ##
 @@ -73,7 +73,8 @@ class ScalaEnumSerializerSnapshot[E <: Enumeration]
   }
 
   override def restoreSerializer(): TypeSerializer[E#Value] = {
-    enumClass.newInstance().asInstanceOf[TypeSerializer[E#Value]]
+    val enumObject = enumClass.getField("MODULE$").get(null).asInstanceOf[E]
 
 Review comment:
   Should we add `Preconditions.checkState(enumClass != null)` in `restoreSerializer()` and other places, since the default constructor leaves `enumClass` unset? (a rough sketch follows below)
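
   [Editor's note: a minimal sketch of the suggested guard, in Java for brevity (the real class is Scala); the field name and message text are assumptions.]

import static org.apache.flink.util.Preconditions.checkState;

final class EnumSnapshotGuardSketch {
	// In the real snapshot class this is populated by readSnapshot();
	// it stays null when only the nullary read constructor has run.
	private Class<?> enumClass;

	void checkRestorePrecondition() {
		// Fail fast with a clear message instead of a NullPointerException.
		checkState(enumClass != null,
			"enum class is not set; was readSnapshot() called before restoreSerializer()?");
	}
}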


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (FLINK-11675) Add an initial support for running batch jobs with streaming runtime

2019-02-20 Thread Kurt Young (JIRA)
Kurt Young created FLINK-11675:
--

 Summary: Add an initial support for running batch jobs with 
streaming runtime
 Key: FLINK-11675
 URL: https://issues.apache.org/jira/browse/FLINK-11675
 Project: Flink
  Issue Type: Sub-task
Reporter: Kurt Young


cc [~pnowojski]

This is an umbrella issue to add an initial support for running batch jobs with 
streaming runtime. It includes tasks like:
 * Add some necessary extension to StreamTransformation to meet the requirement 
of batch job
 * Make StreamTransformation, StreamGraph and StreamTask to support running 
batch jobs
 * other related necessary changes



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] klion26 commented on issue #7765: [FLINK-11334][core] Migrate enum serializers to use new serialization compatibility abstractions

2019-02-20 Thread GitBox
klion26 commented on issue #7765: [FLINK-11334][core] Migrate enum serializers 
to use new serialization compatibility abstractions
URL: https://github.com/apache/flink/pull/7765#issuecomment-465509887
 
 
   Thank you @aljoscha, the code looks good to me.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] hadoop-yetus commented on issue #7609: [FLINK-11642] [flink-s3-fs-hadoop] Add mirrored config key for s3 endpoint

2019-02-20 Thread GitBox
hadoop-yetus commented on issue #7609: [FLINK-11642] [flink-s3-fs-hadoop] Add 
mirrored config key for s3 endpoint
URL: https://github.com/apache/flink/pull/7609#issuecomment-465513062
 
 
   It's broader than just China; it's any endpoint which is V4-signing only: AWS Frankfurt, Seoul, and some others. Also, moving from the classic "central" endpoint gives you better availability; it removes it as a SPOF for your app.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] steveloughran commented on issue #7609: [FLINK-11642] [flink-s3-fs-hadoop] Add mirrored config key for s3 endpoint

2019-02-20 Thread GitBox
steveloughran commented on issue #7609: [FLINK-11642] [flink-s3-fs-hadoop] Add 
mirrored config key for s3 endpoint
URL: https://github.com/apache/flink/pull/7609#issuecomment-465513380
 
 
   It's broader than just China; it's any endpoint which is V4-signing only: AWS Frankfurt, Seoul, and some others. Also, moving from the classic "central" endpoint gives you better availability; it removes it as a SPOF for your app.
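
   [Editor's note: a hedged sketch of what this looks like from the Flink side, assuming the mirrored key this PR proposes ends up named `s3.endpoint` and is forwarded to `fs.s3a.endpoint` (assumptions until the PR is merged); `eu-central-1` (Frankfurt) is one of the V4-only regions mentioned above.]

import org.apache.flink.configuration.Configuration;

public class S3EndpointConfigSketch {

	public static Configuration frankfurtConfig() {
		Configuration conf = new Configuration();
		// Hypothetical mirrored key from this PR; would be forwarded to fs.s3a.endpoint.
		conf.setString("s3.endpoint", "s3.eu-central-1.amazonaws.com");
		return conf;
	}
}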


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] hadoop-yetus removed a comment on issue #7609: [FLINK-11642] [flink-s3-fs-hadoop] Add mirrored config key for s3 endpoint

2019-02-20 Thread GitBox
hadoop-yetus removed a comment on issue #7609: [FLINK-11642] 
[flink-s3-fs-hadoop] Add mirrored config key for s3 endpoint
URL: https://github.com/apache/flink/pull/7609#issuecomment-465513062
 
 
   It's broader than just China; it's any endpoint which is V4-signing only: AWS Frankfurt, Seoul, and some others. Also, moving from the classic "central" endpoint gives you better availability; it removes it as a SPOF for your app.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] igalshilman commented on a change in pull request #7759: [FLINK-11485][FLINK-10897] POJO state schema evolution / migrate PojoSerializer to use new compatibility APIs

2019-02-20 Thread GitBox
igalshilman commented on a change in pull request #7759: 
[FLINK-11485][FLINK-10897] POJO state schema evolution / migrate PojoSerializer 
to use new compatibility APIs
URL: https://github.com/apache/flink/pull/7759#discussion_r258415584
 
 

 ##
 File path: 
flink-core/src/main/java/org/apache/flink/api/java/typeutils/runtime/PojoSerializerSnapshotData.java
 ##
 @@ -0,0 +1,288 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.api.java.typeutils.runtime;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.api.common.typeutils.TypeSerializerSnapshot;
+import org.apache.flink.core.memory.DataInputView;
+import org.apache.flink.core.memory.DataOutputView;
+import org.apache.flink.util.InstantiationUtil;
+import org.apache.flink.util.LinkedOptionalMap;
+import org.apache.flink.util.function.BiConsumerWithException;
+import org.apache.flink.util.function.BiFunctionWithException;
+import org.apache.flink.util.function.FunctionWithException;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.lang.reflect.Field;
+import java.util.HashMap;
+import java.util.LinkedHashMap;
+
+import static org.apache.flink.util.LinkedOptionalMap.optionalMapOf;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+/**
+ * This class holds the snapshot content for the {@link PojoSerializer}.
+ *
+ * Serialization Format
+ *
+ * The serialization format defined by this class is as follows:
+ *
+ * <pre>{@code
+ * +----------------------------------------+-----------------------------------------------------------+
+ * | POJO class name                                                                                    |
+ * +----------------------------------------+-----------------------------------------------------------+
+ * | number of fields                       | (field name, field serializer snapshot)                  |
+ * |                                        | pairs                                                     |
+ * +----------------------------------------+-----------------------------------------------------------+
+ * | number of                              | (registered subclass name, subclass serializer snapshot) |
+ * | registered subclasses                  | pairs                                                     |
+ * +----------------------------------------+-----------------------------------------------------------+
+ * | number of                              | (non-registered subclass name, subclass serializer       |
+ * | non-registered subclasses              | snapshot) pairs                                           |
+ * +----------------------------------------+-----------------------------------------------------------+
+ * }</pre>
+ */
+@Internal
+final class PojoSerializerSnapshotData<T> {
+
+	private static final Logger LOG = LoggerFactory.getLogger(PojoSerializerSnapshotData.class);
+
+	// ---------------------------------------------------------------------------------------------
+	//  Factory methods
+	// ---------------------------------------------------------------------------------------------
+
+	/**
+	 * Creates a {@link PojoSerializerSnapshotData} from configuration of a {@link PojoSerializer}.
+	 *
+	 * <p>This factory method is meant to be used in regular write paths, i.e. when taking a snapshot
+	 * of the {@link PojoSerializer}. All POJO fields, registered subclass classes, and non-registered
+	 * subclass classes are all present.
+	 */
+	static <T> PojoSerializerSnapshotData<T> createFrom(
+			Class<T> pojoClass,
+			LinkedHashMap<Field, TypeSerializer<?>> fieldSerializers,
+			LinkedHashMap<Class<?>, TypeSerializer<?>> registeredSubclassSerializers,
+			HashMap<Class<?>, TypeSerializer<?>> nonRegisteredSubclassSerializers) {
+
+		LinkedHashMap<Field, TypeSerializerSnapshot<?>> fieldSerializerSnapshots = new LinkedHashMap<>(fieldSerializers.size());
+		fieldSerializers.forEach((k, v) -> fieldSerializerSnapshots.put(k, v.snapshotConfiguration()));
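[Editor's note: an illustrative-only sketch of the write order documented in the class javadoc above. The counts and pair-writing steps are placeholders and the method name is invented; the real code delegates to LinkedOptionalMap and the snapshot write utilities rather than writing counts by hand.]

import java.io.IOException;

import org.apache.flink.core.memory.DataOutputView;

final class SnapshotLayoutSketch {

	// Writes the top-level layout from the javadoc table, section by section.
	static void writeLayout(
			DataOutputView out,
			Class<?> pojoClass,
			int numberOfFields,
			int numberOfRegisteredSubclasses,
			int numberOfNonRegisteredSubclasses) throws IOException {

		out.writeUTF(pojoClass.getName());              // POJO class name
		out.writeInt(numberOfFields);                   // number of fields
		// ... (field name, field serializer snapshot) pairs ...
		out.writeInt(numberOfRegisteredSubclasses);     // number of registered subclasses
		// ... (registered subclass name, subclass serializer snapshot) pairs ...
		out.writeInt(numberOfNonRegisteredSubclasses);  // number of non-registered subclasses
		// ... (non-registered subclass name, subclass serializer snapshot) pairs ...
	}
}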

[jira] [Closed] (FLINK-11674) Add an initial Blink SQL code generator

2019-02-20 Thread Kurt Young (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Young closed FLINK-11674.
--
Resolution: Duplicate

> Add an initial Blink SQL code generator
> ---
>
> Key: FLINK-11674
> URL: https://issues.apache.org/jira/browse/FLINK-11674
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: Kurt Young
>Priority: Major
>
> A more detailed description can be found in 
> [FLIP-32|https://cwiki.apache.org/confluence/display/FLINK/FLIP-32%3A+Restructure+flink-table+for+future+contributions]
>  
> This issue is an umbrella issue for tasks related to the code generator for 
> Blink SQL planner. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11674) Add an initial Blink SQL code generator

2019-02-20 Thread Kurt Young (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16772859#comment-16772859
 ] 

Kurt Young commented on FLINK-11674:


It will be covered by FLINK-11488

> Add an initial Blink SQL code generator
> ---
>
> Key: FLINK-11674
> URL: https://issues.apache.org/jira/browse/FLINK-11674
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: Kurt Young
>Priority: Major
>
> A more detailed description can be found in 
> [FLIP-32|https://cwiki.apache.org/confluence/display/FLINK/FLIP-32%3A+Restructure+flink-table+for+future+contributions]
>  
> This issue is an umbrella issue for tasks related to the code generator for 
> Blink SQL planner. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (FLINK-11675) Add an initial support for running batch jobs with streaming runtime

2019-02-20 Thread Kurt Young (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Young closed FLINK-11675.
--
Resolution: Duplicate

> Add an initial support for running batch jobs with streaming runtime
> 
>
> Key: FLINK-11675
> URL: https://issues.apache.org/jira/browse/FLINK-11675
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Kurt Young
>Priority: Major
>
> cc [~pnowojski]
> This is an umbrella issue to add an initial support for running batch jobs 
> with streaming runtime. It includes tasks like:
>  * Add some necessary extension to StreamTransformation to meet the 
> requirement of batch job
>  * Make StreamTransformation, StreamGraph and StreamTask to support running 
> batch jobs
>  * other related necessary changes



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-11675) Add an initial support for running batch jobs with streaming runtime

2019-02-20 Thread Kurt Young (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Young updated FLINK-11675:
---
Issue Type: Task  (was: Sub-task)
Parent: (was: FLINK-11439)

> Add an initial support for running batch jobs with streaming runtime
> 
>
> Key: FLINK-11675
> URL: https://issues.apache.org/jira/browse/FLINK-11675
> Project: Flink
>  Issue Type: Task
>Reporter: Kurt Young
>Priority: Major
>
> cc [~pnowojski]
> This is an umbrella issue to add an initial support for running batch jobs 
> with streaming runtime. It includes tasks like:
>  * Add some necessary extension to StreamTransformation to meet the 
> requirement of batch job
>  * Make StreamTransformation, StreamGraph and StreamTask to support running 
> batch jobs
>  * other related necessary changes



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11675) Add an initial support for running batch jobs with streaming runtime

2019-02-20 Thread Kurt Young (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16772860#comment-16772860
 ] 

Kurt Young commented on FLINK-11675:


It will be covered by FLINK-11490

> Add an initial support for running batch jobs with streaming runtime
> 
>
> Key: FLINK-11675
> URL: https://issues.apache.org/jira/browse/FLINK-11675
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Kurt Young
>Priority: Major
>
> cc [~pnowojski]
> This is an umbrella issue to add an initial support for running batch jobs 
> with streaming runtime. It includes tasks like:
>  * Add some necessary extension to StreamTransformation to meet the 
> requirement of batch job
>  * Make StreamTransformation, StreamGraph and StreamTask to support running 
> batch jobs
>  * other related necessary changes



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-11674) Add an initial Blink SQL code generator

2019-02-20 Thread Kurt Young (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Young updated FLINK-11674:
---
Issue Type: Task  (was: Sub-task)
Parent: (was: FLINK-11439)

> Add an initial Blink SQL code generator
> ---
>
> Key: FLINK-11674
> URL: https://issues.apache.org/jira/browse/FLINK-11674
> Project: Flink
>  Issue Type: Task
>  Components: Table API & SQL
>Reporter: Kurt Young
>Priority: Major
>
> A more detailed description can be found in 
> [FLIP-32|https://cwiki.apache.org/confluence/display/FLINK/FLIP-32%3A+Restructure+flink-table+for+future+contributions]
>  
> This issue is an umbrella issue for tasks related to the code generator for 
> Blink SQL planner. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] NicoK closed pull request #5923: [FLINK-9253][network] make the maximum floating buffers count channel-type independent

2019-02-20 Thread GitBox
NicoK closed pull request #5923: [FLINK-9253][network] make the maximum 
floating buffers count channel-type independent
URL: https://github.com/apache/flink/pull/5923
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] dawidwys merged pull request #7761: [hotfix][docs] Fix typo in sourceSinks.md.

2019-02-20 Thread GitBox
dawidwys merged pull request #7761: [hotfix][docs] Fix typo in sourceSinks.md.
URL: https://github.com/apache/flink/pull/7761
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

