[GitHub] [flink] flinkbot commented on issue #10130: [FLINK-14642] [types] Add support for copying null values to the TupleSeriali…

2019-11-07 Thread GitBox
flinkbot commented on issue #10130: [FLINK-14642] [types] Add support for 
copying null values to the TupleSeriali…
URL: https://github.com/apache/flink/pull/10130#issuecomment-551424774
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 982f9e337018ccc1bc2b36553fff60dd2918bf8c (Fri Nov 08 
07:56:54 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-14642).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.
Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] jiasheng55 opened a new pull request #10130: [FLINK-14642] [types] Add support for copying null values to the TupleSeriali…

2019-11-07 Thread GitBox
jiasheng55 opened a new pull request #10130: [FLINK-14642] [types] Add support 
for copying null values to the TupleSeriali…
URL: https://github.com/apache/flink/pull/10130
 
 
   …zer and CaseClassSerializer
   
   ## What is the purpose of the change
   
   This PR makes the `copy` method of the TupleSerializer and CaseClassSerializer handle NULL values properly, so elements of these types can be passed between chained operators.
   Without this PR, the following code may throw an NPE if the `map` function returns null, which may confuse the user.
   
   ```
   stream.map(xxx).filter(_ != null).xxx // the map function returns a Tuple and may return null
   ```
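A minimal sketch of the null guard the PR describes (illustrative only; the class and method here are hypothetical simplifications, not Flink's actual serializer code):

```java
import java.util.Arrays;

// Illustrative stand-in for TupleSerializer#copy: the fix is the early
// "return null", so a null element flows through a chained operator
// instead of triggering an NPE during the field-wise copy.
public class NullSafeCopy {
    static Object[] copy(Object[] tuple) {
        if (tuple == null) {
            return null; // pass null through untouched
        }
        return Arrays.copyOf(tuple, tuple.length); // field-wise copy
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(copy(new Object[]{"a", 1}))); // [a, 1]
        System.out.println(copy(null)); // prints "null" instead of throwing
    }
}
```

Without the guard, the `filter(_ != null)` in the snippet above never gets a chance to run: the copy between chained operators throws first.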
   
   ## Brief change log
   
 - Return **null** if the element itself is null.
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (**yes**)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   




[GitHub] [flink] zhijiangW commented on a change in pull request #10083: [FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.

2019-11-07 Thread GitBox
zhijiangW commented on a change in pull request #10083: 
[FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.
URL: https://github.com/apache/flink/pull/10083#discussion_r344022126
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/rest/handler/legacy/backpressure/BackPressureRequestCoordinatorTest.java
 ##
 @@ -0,0 +1,311 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rest.handler.legacy.backpressure;
+
+import org.apache.flink.api.common.time.Time;
+import org.apache.flink.runtime.execution.ExecutionState;
+import org.apache.flink.runtime.executiongraph.Execution;
+import org.apache.flink.runtime.executiongraph.ExecutionAttemptID;
+import org.apache.flink.runtime.executiongraph.ExecutionVertex;
+import org.apache.flink.runtime.jobgraph.JobVertexID;
+import org.apache.flink.runtime.messages.TaskBackPressureResponse;
+import org.apache.flink.util.TestLogger;
+
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.mockito.ArgumentMatchers;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.ScheduledThreadPoolExecutor;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.TimeoutException;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.when;
+
+/**
+ * Tests for the {@link BackPressureRequestCoordinator}.
+ */
+public class BackPressureRequestCoordinatorTest extends TestLogger {
+
+   private static final long requestTimeout = 1;
+   private static final double backPressureRatio = 0.5;
+
+   private static ScheduledExecutorService executorService;
+   private BackPressureRequestCoordinator coord;
+
+   @BeforeClass
+   public static void setUp() throws Exception {
+   executorService = new ScheduledThreadPoolExecutor(1);
+   }
+
+   @AfterClass
+   public static void tearDown() throws Exception {
+   if (executorService != null) {
+   executorService.shutdown();
+   }
+   }
+
+   @Before
+   public void init() throws Exception {
+   coord = new BackPressureRequestCoordinator(executorService, 
requestTimeout);
 
 Review comment:
   also add `@After` for `coord.shutdown()`
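The requested teardown, sketched without a JUnit dependency (the `@Before`/`@After` pairing is shown via comments; the coordinator stand-in and `shutdown` placement are illustrative, not Flink's actual code):

```java
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledThreadPoolExecutor;

// Per-test lifecycle the reviewer asks for: every init() (@Before)
// should be paired with a teardown (@After) that shuts the coordinator
// down, so individual test runs don't leak its resources.
public class CoordinatorLifecycle {
    static ScheduledExecutorService executorService; // shared: @BeforeClass/@AfterClass
    StringBuilder coordinator;                       // stand-in for the coordinator

    void init()  { coordinator = new StringBuilder("up"); }      // @Before
    void after() { coordinator = null; /* coord.shutdown() */ }  // @After

    public static void main(String[] args) {
        executorService = new ScheduledThreadPoolExecutor(1);    // @BeforeClass
        CoordinatorLifecycle test = new CoordinatorLifecycle();
        test.init();
        // ... test body would run here ...
        test.after();                                            // teardown per test
        executorService.shutdown();                              // @AfterClass
        System.out.println("coordinator torn down: " + (test.coordinator == null));
    }
}
```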




[GitHub] [flink] zhijiangW commented on a change in pull request #10083: [FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.

2019-11-07 Thread GitBox
zhijiangW commented on a change in pull request #10083: 
[FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.
URL: https://github.com/apache/flink/pull/10083#discussion_r344021494
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/rest/handler/legacy/backpressure/BackPressureRequestCoordinatorTest.java
 ##
 @@ -0,0 +1,311 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rest.handler.legacy.backpressure;
+
+import org.apache.flink.api.common.time.Time;
+import org.apache.flink.runtime.execution.ExecutionState;
+import org.apache.flink.runtime.executiongraph.Execution;
+import org.apache.flink.runtime.executiongraph.ExecutionAttemptID;
+import org.apache.flink.runtime.executiongraph.ExecutionVertex;
+import org.apache.flink.runtime.jobgraph.JobVertexID;
+import org.apache.flink.runtime.messages.TaskBackPressureResponse;
+import org.apache.flink.util.TestLogger;
+
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.mockito.ArgumentMatchers;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.ScheduledThreadPoolExecutor;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.TimeoutException;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.when;
+
+/**
+ * Tests for the {@link BackPressureRequestCoordinator}.
+ */
+public class BackPressureRequestCoordinatorTest extends TestLogger {
+
+   private static final long requestTimeout = 1;
+   private static final double backPressureRatio = 0.5;
+
+   private static ScheduledExecutorService executorService;
+   private BackPressureRequestCoordinator coord;
+
+   @BeforeClass
+   public static void setUp() throws Exception {
+   executorService = new ScheduledThreadPoolExecutor(1);
+   }
+
+   @AfterClass
+   public static void tearDown() throws Exception {
+   if (executorService != null) {
+   executorService.shutdown();
+   }
+   }
+
+   @Before
+   public void init() throws Exception {
+   coord = new BackPressureRequestCoordinator(executorService, 
requestTimeout);
+   }
+
+   /** Tests simple request of task back pressure stats. */
+   @Test(timeout = 1L)
+   public void testTriggerBackPressureRequest() throws Exception {
+   ExecutionVertex[] vertices = 
createExecutionVertices(ExecutionState.RUNNING, CompletionType.SUCCESSFULLY);
+
+   CompletableFuture<BackPressureStats> requestFuture = coord.triggerBackPressureRequest(vertices);
+   BackPressureStats backPressureStats = requestFuture.get();
+
+   // verify the request result
+   assertEquals(0, backPressureStats.getRequestId());
+   assertTrue(backPressureStats.getEndTime() >= 
backPressureStats.getStartTime());
+
+   Map<ExecutionAttemptID, Double> tracesByTask = backPressureStats.getBackPressureRatios();
 
 Review comment:
   tracesByTask -> backPressureRatios




[GitHub] [flink] zhijiangW commented on a change in pull request #10083: [FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.

2019-11-07 Thread GitBox
zhijiangW commented on a change in pull request #10083: 
[FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.
URL: https://github.com/apache/flink/pull/10083#discussion_r344021592
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/rest/handler/legacy/backpressure/BackPressureRequestCoordinatorTest.java
 ##
 @@ -0,0 +1,311 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rest.handler.legacy.backpressure;
+
+import org.apache.flink.api.common.time.Time;
+import org.apache.flink.runtime.execution.ExecutionState;
+import org.apache.flink.runtime.executiongraph.Execution;
+import org.apache.flink.runtime.executiongraph.ExecutionAttemptID;
+import org.apache.flink.runtime.executiongraph.ExecutionVertex;
+import org.apache.flink.runtime.jobgraph.JobVertexID;
+import org.apache.flink.runtime.messages.TaskBackPressureResponse;
+import org.apache.flink.util.TestLogger;
+
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.mockito.ArgumentMatchers;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.ScheduledThreadPoolExecutor;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.TimeoutException;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.when;
+
+/**
+ * Tests for the {@link BackPressureRequestCoordinator}.
+ */
+public class BackPressureRequestCoordinatorTest extends TestLogger {
+
+   private static final long requestTimeout = 1;
+   private static final double backPressureRatio = 0.5;
+
+   private static ScheduledExecutorService executorService;
+   private BackPressureRequestCoordinator coord;
+
+   @BeforeClass
+   public static void setUp() throws Exception {
+   executorService = new ScheduledThreadPoolExecutor(1);
+   }
+
+   @AfterClass
+   public static void tearDown() throws Exception {
+   if (executorService != null) {
+   executorService.shutdown();
+   }
+   }
+
+   @Before
+   public void init() throws Exception {
+   coord = new BackPressureRequestCoordinator(executorService, 
requestTimeout);
+   }
+
+   /** Tests simple request of task back pressure stats. */
+   @Test(timeout = 1L)
+   public void testTriggerBackPressureRequest() throws Exception {
+   ExecutionVertex[] vertices = 
createExecutionVertices(ExecutionState.RUNNING, CompletionType.SUCCESSFULLY);
+
+   CompletableFuture<BackPressureStats> requestFuture = coord.triggerBackPressureRequest(vertices);
+   BackPressureStats backPressureStats = requestFuture.get();
+
+   // verify the request result
+   assertEquals(0, backPressureStats.getRequestId());
+   assertTrue(backPressureStats.getEndTime() >= 
backPressureStats.getStartTime());
 
 Review comment:
   Actually this check is already covered while constructing `BackPressureStats`




[GitHub] [flink] flinkbot edited a comment on issue #9981: [FLINK-13195][sql-client] Add create table support for SqlClient

2019-11-07 Thread GitBox
flinkbot edited a comment on issue #9981: [FLINK-13195][sql-client] Add create 
table support for SqlClient
URL: https://github.com/apache/flink/pull/9981#issuecomment-545714180
 
 
   
   ## CI report:
   
   * 52a10e0283f21047ac18d6668b71cfa4f6fc21a9 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133304303)
   * 95a77097c1b5ba17a8738cf7ab6efda214fcff19 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133381205)
   * 538b3d96ac79fcc093b4a65ceaa1fb64787204f7 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/135595098)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] zhijiangW commented on a change in pull request #10083: [FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.

2019-11-07 Thread GitBox
zhijiangW commented on a change in pull request #10083: 
[FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.
URL: https://github.com/apache/flink/pull/10083#discussion_r344021232
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/rest/handler/legacy/backpressure/BackPressureStats.java
 ##
 @@ -0,0 +1,87 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rest.handler.legacy.backpressure;
+
+import org.apache.flink.runtime.executiongraph.ExecutionAttemptID;
+
+import java.util.Collections;
+import java.util.Map;
+
+import static org.apache.flink.util.Preconditions.checkArgument;
+
+/**
+ * Back pressure stats for one or more tasks.
+ *
+ * The stats are collected by request triggered in {@link 
BackPressureRequestCoordinator}.
+ */
+public class BackPressureStats {
+
+   /** ID of the request (unique per job). */
+   private final int requestId;
+
+   /** Time stamp, when the request was triggered. */
+   private final long startTime;
+
+   /** Time stamp, when all back pressure stats were collected at the 
JobManager. */
+   private final long endTime;
+
+   /** Map of back pressure ratio by execution ID. */
+   private final Map<ExecutionAttemptID, Double> backPressureRatios;
+
+   public BackPressureStats(
+   int requestId,
+   long startTime,
+   long endTime,
+   Map<ExecutionAttemptID, Double> backPressureRatios) {
+
+   checkArgument(requestId >= 0, "Negative request ID.");
+   checkArgument(startTime >= 0, "Negative start time.");
+   checkArgument(endTime >= startTime, "End time before start 
time.");
 
 Review comment:
   I wonder whether the case `endTime == startTime` can occur?
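The reviewer's question can be answered concretely: with `>=`, `endTime == startTime` passes the check (trigger and collection can land in the same millisecond), while `>` would reject it. A self-contained sketch, where the local `checkArgument` mirrors the Preconditions helper rather than being the Flink class itself:

```java
// Mirrors BackPressureStats' precondition: ">=" deliberately accepts
// endTime == startTime, since the request can be triggered and the
// stats collected within the same millisecond.
public class EndTimeCheck {
    static void checkArgument(boolean condition, String message) {
        if (!condition) {
            throw new IllegalArgumentException(message);
        }
    }

    public static void main(String[] args) {
        long startTime = 1_000L;
        checkArgument(1_000L >= startTime, "End time before start time."); // equal: passes
        checkArgument(1_001L >= startTime, "End time before start time."); // later: passes
        // checkArgument(999L >= startTime, ...) would throw
        System.out.println("endTime == startTime is accepted");
    }
}
```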




[GitHub] [flink] zjffdu commented on issue #9433: [FLINK-13708] [table-planner-blink] transformations should be cleared after execution in blink planner

2019-11-07 Thread GitBox
zjffdu commented on issue #9433: [FLINK-13708] [table-planner-blink] 
transformations should be cleared after execution in blink planner
URL: https://github.com/apache/flink/pull/9433#issuecomment-551420856
 
 
   No problem, thanks for the fix. @godfreyhe 




[GitHub] [flink] flinkbot edited a comment on issue #10128: [FLINK-14641][docs] Fix description of metric `fullRestarts`

2019-11-07 Thread GitBox
flinkbot edited a comment on issue #10128: [FLINK-14641][docs] Fix description 
of metric `fullRestarts`
URL: https://github.com/apache/flink/pull/10128#issuecomment-551413601
 
 
   
   ## CI report:
   
   * 409604f05255e81a4dd943042c5b18c18ac16a49 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/135595029)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #10126: [FLINK-14590][python] Unify the working directory of Java process and Python process when submitting python jobs via "flink run -py"

2019-11-07 Thread GitBox
flinkbot edited a comment on issue #10126: [FLINK-14590][python] Unify the 
working directory of Java process and Python process when submitting python 
jobs via "flink run -py"
URL: https://github.com/apache/flink/pull/10126#issuecomment-551413525
 
 
   
   ## CI report:
   
   * ea5abdfce3ab26a0a196fd7d73a78de109d71bc3 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/135594994)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] zhijiangW commented on a change in pull request #10083: [FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.

2019-11-07 Thread GitBox
zhijiangW commented on a change in pull request #10083: 
[FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.
URL: https://github.com/apache/flink/pull/10083#discussion_r344019865
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/rest/handler/legacy/backpressure/BackPressureRequestCoordinatorTest.java
 ##
 @@ -0,0 +1,311 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rest.handler.legacy.backpressure;
+
+import org.apache.flink.api.common.time.Time;
+import org.apache.flink.runtime.execution.ExecutionState;
+import org.apache.flink.runtime.executiongraph.Execution;
+import org.apache.flink.runtime.executiongraph.ExecutionAttemptID;
+import org.apache.flink.runtime.executiongraph.ExecutionVertex;
+import org.apache.flink.runtime.jobgraph.JobVertexID;
+import org.apache.flink.runtime.messages.TaskBackPressureResponse;
+import org.apache.flink.util.TestLogger;
+
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.mockito.ArgumentMatchers;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.ScheduledThreadPoolExecutor;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.TimeoutException;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.when;
+
+/**
+ * Tests for the {@link BackPressureRequestCoordinator}.
+ */
+public class BackPressureRequestCoordinatorTest extends TestLogger {
+
+   private static final long requestTimeout = 1;
+   private static final double backPressureRatio = 0.5;
+
+   private static ScheduledExecutorService executorService;
+   private BackPressureRequestCoordinator coord;
 
 Review comment:
   coord -> coordinator




[GitHub] [flink] flinkbot edited a comment on issue #10123: [FLINK-14665][table-planner-blink] Support computed column for create…

2019-11-07 Thread GitBox
flinkbot edited a comment on issue #10123: [FLINK-14665][table-planner-blink] 
Support computed column for create…
URL: https://github.com/apache/flink/pull/10123#issuecomment-551369912
 
 
   
   ## CI report:
   
   * 102bce9b12cb5853f0fe2b53ae4120434813d7c2 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/135567444)
   * f9cb7dcc064d289b3960508cff61f696bfcb4803 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135569491)
   * 377622cd49569d6e73d8d8d9f85a17aa3437d343 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/135594950)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #10127: [FLINK-14654] Fix the arguments number mismatching with placeholders in log statements

2019-11-07 Thread GitBox
flinkbot edited a comment on issue #10127: [FLINK-14654] Fix the arguments 
number mismatching with placeholders in log statements
URL: https://github.com/apache/flink/pull/10127#issuecomment-551413577
 
 
   
   ## CI report:
   
   * 02df4747bebc7f7601bce39dea46ea285c46 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/135595001)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot edited a comment on issue #10129: [FLINK-14642][runtime] Deprecate metric `fullRestarts`

2019-11-07 Thread GitBox
flinkbot edited a comment on issue #10129: [FLINK-14642][runtime] Deprecate 
metric `fullRestarts`
URL: https://github.com/apache/flink/pull/10129#issuecomment-551413624
 
 
   
   ## CI report:
   
   * 72f509341e6a5743c88f7fe960da86ba170990eb : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/135595056)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] godfreyhe commented on issue #9433: [FLINK-13708] [table-planner-blink] transformations should be cleared after execution in blink planner

2019-11-07 Thread GitBox
godfreyhe commented on issue #9433: [FLINK-13708] [table-planner-blink] 
transformations should be cleared after execution in blink planner
URL: https://github.com/apache/flink/pull/9433#issuecomment-551420055
 
 
   @zjffdu sorry for the late update




[GitHub] [flink] flinkbot edited a comment on issue #10098: [FLINK-14326][FLINK-14324][table] Support to apply watermark assigner in planner

2019-11-07 Thread GitBox
flinkbot edited a comment on issue #10098: [FLINK-14326][FLINK-14324][table] 
Support to apply watermark assigner in planner
URL: https://github.com/apache/flink/pull/10098#issuecomment-550174677
 
 
   
   ## CI report:
   
   * a3db95a7340322421d1fa826dcac850116011687 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135182714)
   * 6db8b32408ffd954c96d1829c36d04542825a068 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135450723)
   * 31635a192b64660cebe93afee586249bf1df9b13 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135461602)
   * 229ede5e47a1d7346edc79449a39cf3c56858604 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135569483)
   * a31fdb8792a506e6a79ce108f35b7123b389e84d : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135588353)
   * ed717ffe4bdbdcd42c88d51cf13b05bcd12f7995 : UNKNOWN
   
   
   




[GitHub] [flink] zhijiangW commented on a change in pull request #10083: [FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.

2019-11-07 Thread GitBox
zhijiangW commented on a change in pull request #10083: 
[FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.
URL: https://github.com/apache/flink/pull/10083#discussion_r344018872
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/taskexecutor/BackPressureSampleableTask.java
 ##
 @@ -22,13 +22,13 @@
 import org.apache.flink.runtime.executiongraph.ExecutionAttemptID;
 
 /**
- * Task interface used by {@link StackTraceSampleService} for back pressure 
tracking.
+ * Task interface used by {@link BackPressureSampleService} for back pressure 
tracking.
  */
-interface StackTraceSampleableTask {
+public interface BackPressureSampleableTask {
 
boolean isRunning();
 
-   StackTraceElement[] getStackTrace();
+   boolean isBackPressured();
 
ExecutionAttemptID getExecutionId();
 
 Review comment:
   This interface method is also not really necessary; the only usage at the moment is 
for throwing an exception. I guess we can remove it as well.




[GitHub] [flink] flinkbot edited a comment on issue #10074: [FLINK-14499][Runtime / Metrics]MetricRegistry#getMetricQueryServiceGatewayRpcAddress is Nonnull

2019-11-07 Thread GitBox
flinkbot edited a comment on issue #10074: [FLINK-14499][Runtime / 
Metrics]MetricRegistry#getMetricQueryServiceGatewayRpcAddress is Nonnull
URL: https://github.com/apache/flink/pull/10074#issuecomment-549322411
 
 
   
   ## CI report:
   
   * 4807c59ae20273d5b4611395ca5467217d7d0ca3 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/134842985)
   * fedd3f2abd43a67d6461b895145964fcdf98af0d : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/134961727)
   * 5c077aaf9166fbbd068f80eb61294d9cfea2eddc : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135207803)
   * 8d5a6b460941e97a1a4fd2f432680721fd2f2289 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135239336)
   * 40936c4b23e94ecdf8d925ed6eae917ec3181584 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135567432)
   * 070ca5c4166edea55b93764c6d65b46e7ae9c938 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135592818)
   
   
   




[GitHub] [flink] flinkbot edited a comment on issue #10022: [FLINK-14135][orc] Introduce OrcColumnarRowInputFormat

2019-11-07 Thread GitBox
flinkbot edited a comment on issue #10022: [FLINK-14135][orc] Introduce 
OrcColumnarRowInputFormat
URL: https://github.com/apache/flink/pull/10022#issuecomment-547278354
 
 
   
   ## CI report:
   
   * 73e9fbb7c4edb8628b93e9a5b17271e57a8b8f14 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133947525)
   * b86c5db1ab9a081f0fec52f15fdad3b4f4d61939 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/134797047)
   * 77aa3cb348f97c8e2999bafcee9e7169b96e983e : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135586640)
   * 3e65b8dfcdbcb91902d7c9b122b3bdb36333b227 : UNKNOWN
   
   
   




[jira] [Commented] (FLINK-14653) Job-related errors in snapshotState do not result in job failure

2019-11-07 Thread Victor Wong (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16969910#comment-16969910
 ] 

Victor Wong commented on FLINK-14653:
-

[~mxm], if `CheckpointConfig#setTolerableCheckpointFailureNumber` is set to 0 
(which is the default value), a checkpoint failure will result in job 
failure.
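
For reference, that default can be made explicit via the public `CheckpointConfig` API. This is a minimal, non-runnable configuration sketch (it assumes a standard `StreamExecutionEnvironment` setup and is not code from this issue):

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

// With the tolerable failure count at 0 (the default), the first
// checkpoint failure fails the whole job.
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.getCheckpointConfig().setTolerableCheckpointFailureNumber(0); // 0 is the default
```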

 

> Job-related errors in snapshotState do not result in job failure
> 
>
> Key: FLINK-14653
> URL: https://issues.apache.org/jira/browse/FLINK-14653
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Reporter: Maximilian Michels
>Priority: Minor
>
> When users override {{snapshoteState}}, they might include logic there which 
> is crucial for the correctness of their application, e.g. finalizing a 
> transaction and buffering the results of that transaction, or flushing events 
> to an external store. Exceptions occurring should lead to failing the job.
> Currently, users must make sure to throw a {{Throwable}} because any 
> {{Exception}} will be caught by the task and reported as checkpointing error, 
> when it could be an application error.
> It would be helpful to update the documentation and introduce a special 
> exception that can be thrown for job-related failures, e.g. 
> {{ApplicationError}} or similar.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] zhijiangW commented on a change in pull request #10083: [FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.

2019-11-07 Thread GitBox
zhijiangW commented on a change in pull request #10083: 
[FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.
URL: https://github.com/apache/flink/pull/10083#discussion_r344015484
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/taskmanager/Task.java
 ##
 @@ -495,6 +496,11 @@ public boolean isCanceledOrFailed() {
executionState == ExecutionState.FAILED;
}
 
+   @Override
 
 Review comment:
   This can be removed as previously commented; this logic is covered by the 
`isBackPressured` method above.




[GitHub] [flink] zhijiangW commented on a change in pull request #10083: [FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.

2019-11-07 Thread GitBox
zhijiangW commented on a change in pull request #10083: 
[FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.
URL: https://github.com/apache/flink/pull/10083#discussion_r344015308
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/taskmanager/Task.java
 ##
 @@ -461,16 +462,16 @@ AbstractInvokable getInvokable() {
return invokable;
}
 
-   public StackTraceElement[] getStackTraceOfExecutingThread() {
-   final AbstractInvokable invokable = this.invokable;
-
-   if (invokable == null) {
-   return new StackTraceElement[0];
+   @Override
+   public boolean isBackPressured() {
+   if (invokable == null || 
consumableNotifyingPartitionWriters.length == 0) {
+   return true;
}
-
-   return invokable.getExecutingThread()
-   .orElse(executingThread)
-   .getStackTrace();
+   final CompletableFuture<?>[] outputFutures = new 
CompletableFuture[consumableNotifyingPartitionWriters.length];
+   for (int i = 0; i < outputFutures.length; ++i) {
+   outputFutures[i] = 
consumableNotifyingPartitionWriters[i].isAvailable();
+   }
+   return CompletableFuture.allOf(outputFutures).isDone();
 
 Review comment:
   The return value is inverted. If all the outputs are available, this 
should return false; a return value of `true` means the task is back pressured.
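
A minimal, self-contained sketch of the suggested fix (hypothetical names; the real `Task` class differs): negate `allOf(...).isDone()` so that `true` means back pressured:

```java
import java.util.concurrent.CompletableFuture;

public class BackPressureCheckSketch {

    // A task is back pressured when NOT all of its output futures are done.
    static boolean isBackPressured(CompletableFuture<?>... outputFutures) {
        return !CompletableFuture.allOf(outputFutures).isDone();
    }

    public static void main(String[] args) {
        CompletableFuture<Void> done = CompletableFuture.completedFuture(null);
        CompletableFuture<Void> pending = new CompletableFuture<>();

        System.out.println(isBackPressured(done, done));    // all outputs available -> false
        System.out.println(isBackPressured(done, pending)); // one output blocked   -> true
    }
}
```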




[GitHub] [flink] flinkbot edited a comment on issue #9981: [FLINK-13195][sql-client] Add create table support for SqlClient

2019-11-07 Thread GitBox
flinkbot edited a comment on issue #9981: [FLINK-13195][sql-client] Add create 
table support for SqlClient
URL: https://github.com/apache/flink/pull/9981#issuecomment-545714180
 
 
   
   ## CI report:
   
   * 52a10e0283f21047ac18d6668b71cfa4f6fc21a9 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/133304303)
   * 95a77097c1b5ba17a8738cf7ab6efda214fcff19 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133381205)
   * 538b3d96ac79fcc093b4a65ceaa1fb64787204f7 : UNKNOWN
   
   
   




[GitHub] [flink] zhijiangW commented on a change in pull request #10083: [FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.

2019-11-07 Thread GitBox
zhijiangW commented on a change in pull request #10083: 
[FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.
URL: https://github.com/apache/flink/pull/10083#discussion_r344014732
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/taskmanager/Task.java
 ##
 @@ -461,16 +462,16 @@ AbstractInvokable getInvokable() {
return invokable;
}
 
-   public StackTraceElement[] getStackTraceOfExecutingThread() {
-   final AbstractInvokable invokable = this.invokable;
-
-   if (invokable == null) {
-   return new StackTraceElement[0];
+   @Override
+   public boolean isBackPressured() {
+   if (invokable == null || 
consumableNotifyingPartitionWriters.length == 0) {
+   return true;
 
 Review comment:
   Should this return false?




[GitHub] [flink] flinkbot commented on issue #10129: [FLINK-14642][runtime] Deprecate metric `fullRestarts`

2019-11-07 Thread GitBox
flinkbot commented on issue #10129: [FLINK-14642][runtime] Deprecate metric 
`fullRestarts`
URL: https://github.com/apache/flink/pull/10129#issuecomment-551413624
 
 
   
   ## CI report:
   
   * 72f509341e6a5743c88f7fe960da86ba170990eb : UNKNOWN
   
   
   




[GitHub] [flink] zhijiangW commented on a change in pull request #10083: [FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.

2019-11-07 Thread GitBox
zhijiangW commented on a change in pull request #10083: 
[FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.
URL: https://github.com/apache/flink/pull/10083#discussion_r344013614
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/taskexecutor/TaskExecutor.java
 ##
 @@ -273,7 +276,11 @@ public TaskExecutor(
this.resourceManagerConnection = null;
this.currentRegistrationTimeoutId = null;
 
-   this.stackTraceSampleService = new 
StackTraceSampleService(rpcService.getScheduledExecutor());
+   final Configuration config = 
taskManagerConfiguration.getConfiguration();
+   this.taskBackPressureSampleService = new 
BackPressureSampleService(
 
 Review comment:
   I suggest constructing the `BackPressureSampleService` outside of the 
constructor to keep it clean; that would also avoid calling 
`taskManagerConfiguration.getConfiguration()` here. It is cleaner 
to prepare everything outside and then pass it directly into the constructor. 
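
As an illustration of that style, here is a sketch with hypothetical minimal stand-ins (not the real Flink classes): the collaborator is fully built by the caller and handed in, so the receiving constructor never touches configuration itself.

```java
import java.time.Duration;

// Hypothetical stand-in for BackPressureSampleService: all of its inputs are
// plain values, so it can be built anywhere.
class SampleServiceSketch {
    final int numSamples;
    final Duration delayBetweenSamples;

    SampleServiceSketch(int numSamples, Duration delayBetweenSamples) {
        this.numSamples = numSamples;
        this.delayBetweenSamples = delayBetweenSamples;
    }
}

// Hypothetical stand-in for TaskExecutor: it receives the finished service
// instead of a Configuration it would have to unpack itself.
class ExecutorSketch {
    final SampleServiceSketch sampleService;

    ExecutorSketch(SampleServiceSketch sampleService) {
        this.sampleService = sampleService;
    }
}

public class ConstructionSketch {
    public static void main(String[] args) {
        // Prepare everything outside, then pass it directly into the constructor.
        SampleServiceSketch service = new SampleServiceSketch(10, Duration.ofMillis(50));
        ExecutorSketch executor = new ExecutorSketch(service);
        System.out.println(executor.sampleService.numSamples); // prints 10
    }
}
```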




[GitHub] [flink] flinkbot commented on issue #10128: [FLINK-14641][docs] Fix description of metric `fullRestarts`

2019-11-07 Thread GitBox
flinkbot commented on issue #10128: [FLINK-14641][docs] Fix description of 
metric `fullRestarts`
URL: https://github.com/apache/flink/pull/10128#issuecomment-551413601
 
 
   
   ## CI report:
   
   * 409604f05255e81a4dd943042c5b18c18ac16a49 : UNKNOWN
   
   
   




[GitHub] [flink] flinkbot commented on issue #10126: [FLINK-14590][python] Unify the working directory of Java process and Python process when submitting python jobs via "flink run -py"

2019-11-07 Thread GitBox
flinkbot commented on issue #10126: [FLINK-14590][python] Unify the working 
directory of Java process and Python process when submitting python jobs via 
"flink run -py"
URL: https://github.com/apache/flink/pull/10126#issuecomment-551413525
 
 
   
   ## CI report:
   
   * ea5abdfce3ab26a0a196fd7d73a78de109d71bc3 : UNKNOWN
   
   
   




[GitHub] [flink] flinkbot edited a comment on issue #10123: [FLINK-14665][table-planner-blink] Support computed column for create…

2019-11-07 Thread GitBox
flinkbot edited a comment on issue #10123: [FLINK-14665][table-planner-blink] 
Support computed column for create…
URL: https://github.com/apache/flink/pull/10123#issuecomment-551369912
 
 
   
   ## CI report:
   
   * 102bce9b12cb5853f0fe2b53ae4120434813d7c2 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/135567444)
   * f9cb7dcc064d289b3960508cff61f696bfcb4803 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135569491)
   * 377622cd49569d6e73d8d8d9f85a17aa3437d343 : UNKNOWN
   
   
   




[GitHub] [flink] flinkbot commented on issue #10127: [FLINK-14654] Fix the arguments number mismatching with placeholders in log statements

2019-11-07 Thread GitBox
flinkbot commented on issue #10127: [FLINK-14654] Fix the arguments number 
mismatching with placeholders in log statements
URL: https://github.com/apache/flink/pull/10127#issuecomment-551413577
 
 
   
   ## CI report:
   
   * 02df4747bebc7f7601bce39dea46ea285c46 : UNKNOWN
   
   
   




[jira] [Updated] (FLINK-14669) All hadoop-2.4.1 related nightly end-to-end tests failed on travis

2019-11-07 Thread Yu Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li updated FLINK-14669:
--
Description: 
As titled, all hadoop 2.4.1 tests failed in build 
[https://travis-ci.org/apache/flink/builds/608709634]
 !image-2019-11-08-15-02-31-268.png|width=609,height=205,vspace=2!

From the log it seems to have timed out when downloading dependencies
{noformat}
/home/travis/flink_cache/40913/flink/docs/concepts/runtime.zh.md
/home/travis/flink_cache/40913/flink/docs/_config_dev_en.yml
/home/travis/flink_cache/40913/\n...
changes detected, packing new archive
uploading 
master/cache--linux-xenial-98cdbf5c3ae4db3a919a6366e5a96e33ecd8f19b8853a50e8f545bd43fc8164e--jdk-openjdk8.tgz
cache uploaded
travis_time:end:1b949da8:start=1573163710718943370,finish=1573163798721501362,duration=88002557992,event=cache
travis_fold:end:cache.2


Done. Your build exited with 1.
{noformat}
[https://api.travis-ci.org/v3/job/608709640/log.txt]

  was:
As titled, all hadoop 2.4.1 tests failed in build 
https://travis-ci.org/apache/flink/builds/608709634
 !image-2019-11-08-15-02-31-268.png! 

From the log it seems to have timed out when downloading dependencies
{noformat}
/home/travis/flink_cache/40913/flink/docs/concepts/runtime.zh.md
/home/travis/flink_cache/40913/flink/docs/_config_dev_en.yml
/home/travis/flink_cache/40913/\n...
changes detected, packing new archive
uploading 
master/cache--linux-xenial-98cdbf5c3ae4db3a919a6366e5a96e33ecd8f19b8853a50e8f545bd43fc8164e--jdk-openjdk8.tgz
cache uploaded
travis_time:end:1b949da8:start=1573163710718943370,finish=1573163798721501362,duration=88002557992,event=cache
travis_fold:end:cache.2


Done. Your build exited with 1.
{noformat}
https://api.travis-ci.org/v3/job/608709640/log.txt


> All hadoop-2.4.1 related nightly end-to-end tests failed on travis
> --
>
> Key: FLINK-14669
> URL: https://issues.apache.org/jira/browse/FLINK-14669
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.10.0
>Reporter: Yu Li
>Priority: Blocker
>  Labels: test-stability
> Attachments: image-2019-11-08-15-02-31-268.png
>
>
> As titled, all hadoop 2.4.1 tests failed in build 
> [https://travis-ci.org/apache/flink/builds/608709634]
>  !image-2019-11-08-15-02-31-268.png|width=609,height=205,vspace=2!
> From the log it seems to be timed out when downloading dependencies
> {noformat}
> /home/travis/flink_cache/40913/flink/docs/concepts/runtime.zh.md
> /home/travis/flink_cache/40913/flink/docs/_config_dev_en.yml
> /home/travis/flink_cache/40913/\n...
> changes detected, packing new archive
> uploading 
> master/cache--linux-xenial-98cdbf5c3ae4db3a919a6366e5a96e33ecd8f19b8853a50e8f545bd43fc8164e--jdk-openjdk8.tgz
> cache uploaded
> travis_time:end:1b949da8:start=1573163710718943370,finish=1573163798721501362,duration=88002557992,event=cache
> travis_fold:end:cache.2
> 
> Done. Your build exited with 1.
> {noformat}
> [https://api.travis-ci.org/v3/job/608709640/log.txt]





[GitHub] [flink] guoweiM commented on a change in pull request #10076: [FLINK-14465][runtime] Let `StandaloneJobClusterEntrypoint` use user code class loader

2019-11-07 Thread GitBox
guoweiM commented on a change in pull request #10076: [FLINK-14465][runtime] 
Let `StandaloneJobClusterEntrypoint` use user code class loader
URL: https://github.com/apache/flink/pull/10076#discussion_r344013007
 
 

 ##
 File path: 
flink-container/src/main/java/org/apache/flink/container/entrypoint/StandaloneJobClusterEntryPoint.java
 ##
 @@ -74,10 +79,20 @@ private StandaloneJobClusterEntryPoint(
}
 
@Override
-   protected DispatcherResourceManagerComponentFactory 
createDispatcherResourceManagerComponentFactory(Configuration configuration) {
+   protected DispatcherResourceManagerComponentFactory 
createDispatcherResourceManagerComponentFactory(Configuration configuration) 
throws IOException {
+   final String flinkHomeDir = System.getenv(ENV_FLINK_HOME_DIR);
 
 Review comment:
   OK, I could open a Jira to follow up on this.




[GitHub] [flink] guoweiM commented on a change in pull request #10076: [FLINK-14465][runtime] Let `StandaloneJobClusterEntrypoint` use user code class loader

2019-11-07 Thread GitBox
guoweiM commented on a change in pull request #10076: [FLINK-14465][runtime] 
Let `StandaloneJobClusterEntrypoint` use user code class loader
URL: https://github.com/apache/flink/pull/10076#discussion_r344012889
 
 

 ##
 File path: 
flink-container/src/main/java/org/apache/flink/container/entrypoint/ClassPathJobGraphRetriever.java
 ##
 @@ -133,9 +138,18 @@ private String getJobClassNameOrScanClassPath() throws 
FlinkException {
}
 
private String scanClassPathForJobJar() throws IOException {
-   LOG.info("Scanning class path for job JAR");
-   JarFileWithEntryClass jobJar = 
JarManifestParser.findOnlyEntryClass(jarsOnClassPath.get());
-
+   final JarFileWithEntryClass jobJar;
+   if (getUserClassPaths().isEmpty()) {
+   LOG.info("Scanning system class path for job JAR");
+   jobJar = 
JarManifestParser.findOnlyEntryClass(jarsOnClassPath.get());
+   } else {
+   LOG.info("Scanning user class path for job JAR");
+   final List<File> userJars = getUserClassPaths()
+   .stream()
+   .map(url -> new File(url.getFile()))
+   .collect(Collectors.toList());
+   jobJar = JarManifestParser.findOnlyEntryClass(userJars);
+   }
 
 Review comment:
   thanks!




[GitHub] [flink] flinkbot edited a comment on issue #10074: [FLINK-14499][Runtime / Metrics]MetricRegistry#getMetricQueryServiceGatewayRpcAddress is Nonnull

2019-11-07 Thread GitBox
flinkbot edited a comment on issue #10074: [FLINK-14499][Runtime / 
Metrics]MetricRegistry#getMetricQueryServiceGatewayRpcAddress is Nonnull
URL: https://github.com/apache/flink/pull/10074#issuecomment-549322411
 
 
   
   ## CI report:
   
   * 4807c59ae20273d5b4611395ca5467217d7d0ca3 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/134842985)
   * fedd3f2abd43a67d6461b895145964fcdf98af0d : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/134961727)
   * 5c077aaf9166fbbd068f80eb61294d9cfea2eddc : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135207803)
   * 8d5a6b460941e97a1a4fd2f432680721fd2f2289 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135239336)
   * 40936c4b23e94ecdf8d925ed6eae917ec3181584 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135567432)
   * 070ca5c4166edea55b93764c6d65b46e7ae9c938 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/135592818)
   
   
   




[GitHub] [flink] zhijiangW commented on a change in pull request #10083: [FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.

2019-11-07 Thread GitBox
zhijiangW commented on a change in pull request #10083: 
[FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.
URL: https://github.com/apache/flink/pull/10083#discussion_r344011851
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/taskexecutor/BackPressureSampleableTask.java
 ##
 @@ -22,13 +22,13 @@
 import org.apache.flink.runtime.executiongraph.ExecutionAttemptID;
 
 /**
- * Task interface used by {@link StackTraceSampleService} for back pressure 
tracking.
+ * Task interface used by {@link BackPressureSampleService} for back pressure 
tracking.
  */
-interface StackTraceSampleableTask {
+public interface BackPressureSampleableTask {
 
boolean isRunning();
 
 Review comment:
   It seems a bit strange to have this method in the interface. This state 
should be covered by the specific implementation of `boolean 
isBackPressured()`: if the task is not running, it is not back pressured, so 
we do not need to check the running state explicitly from outside.
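
A sketch of what that would look like, with hypothetical names (a simplified stand-in for the real task implementation, not code from this PR):

```java
// Hypothetical sketch: the running state is folded into isBackPressured(),
// so the sampling service needs no separate isRunning() check.
interface SampleableTaskSketch {
    boolean isBackPressured();
}

class TaskSketch implements SampleableTaskSketch {
    private final boolean running;
    private final boolean outputsBlocked;

    TaskSketch(boolean running, boolean outputsBlocked) {
        this.running = running;
        this.outputsBlocked = outputsBlocked;
    }

    @Override
    public boolean isBackPressured() {
        // A task that is not running cannot be back pressured.
        return running && outputsBlocked;
    }
}

public class BackPressureStateSketch {
    public static void main(String[] args) {
        System.out.println(new TaskSketch(false, true).isBackPressured()); // not running -> false
        System.out.println(new TaskSketch(true, true).isBackPressured());  // running and blocked -> true
    }
}
```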




[jira] [Created] (FLINK-14669) All hadoop-2.4.1 related nightly end-to-end tests failed on travis

2019-11-07 Thread Yu Li (Jira)
Yu Li created FLINK-14669:
-

 Summary: All hadoop-2.4.1 related nightly end-to-end tests failed 
on travis
 Key: FLINK-14669
 URL: https://issues.apache.org/jira/browse/FLINK-14669
 Project: Flink
  Issue Type: Bug
  Components: Tests
Affects Versions: 1.10.0
Reporter: Yu Li
 Attachments: image-2019-11-08-15-02-31-268.png

As titled, all hadoop 2.4.1 tests failed in build 
https://travis-ci.org/apache/flink/builds/608709634
 !image-2019-11-08-15-02-31-268.png! 

From the log it seems to have timed out when downloading dependencies
{noformat}
/home/travis/flink_cache/40913/flink/docs/concepts/runtime.zh.md
/home/travis/flink_cache/40913/flink/docs/_config_dev_en.yml
/home/travis/flink_cache/40913/\n...
changes detected, packing new archive
uploading 
master/cache--linux-xenial-98cdbf5c3ae4db3a919a6366e5a96e33ecd8f19b8853a50e8f545bd43fc8164e--jdk-openjdk8.tgz
cache uploaded
travis_time:end:1b949da8:start=1573163710718943370,finish=1573163798721501362,duration=88002557992,event=cache
travis_fold:end:cache.2


Done. Your build exited with 1.
{noformat}
https://api.travis-ci.org/v3/job/608709640/log.txt





[GitHub] [flink] zhijiangW commented on a change in pull request #10083: [FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.

2019-11-07 Thread GitBox
zhijiangW commented on a change in pull request #10083: 
[FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.
URL: https://github.com/apache/flink/pull/10083#discussion_r344011455
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/taskexecutor/BackPressureSampleService.java
 ##
 @@ -0,0 +1,116 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.flink.runtime.taskexecutor;
+
+import org.apache.flink.api.common.time.Time;
+import org.apache.flink.runtime.concurrent.ScheduledExecutor;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.CompletableFuture;
+
+import static org.apache.flink.util.Preconditions.checkArgument;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+/**
+ * Samples whether a task is back pressured multiple times. The number of back
+ * pressure samples divided by the total number of samples gives the back pressure ratio.
+ */
+public class BackPressureSampleService {
+
+   /** Number of samples to take when determining the back pressure of a 
task. */
+   private final int numSamples;
+
+   /** Time to wait between samples when determining the back pressure of 
a task. */
+   private final Time delayBetweenSamples;
+
+   /** Executor to run sample tasks. */
+   private final ScheduledExecutor scheduledExecutor;
+
+   BackPressureSampleService(
+   int numSamples,
+   Time delayBetweenSamples,
+   ScheduledExecutor scheduledExecutor) {
+
+   checkArgument(numSamples >= 1, "Illegal number of samples: " + 
numSamples);
+
+   this.numSamples = numSamples;
+   this.delayBetweenSamples = checkNotNull(delayBetweenSamples);
+   this.scheduledExecutor = checkNotNull(scheduledExecutor, "The 
scheduledExecutor must not be null.");
+   }
+
+   /**
+* Returns a future that completes with the back pressure ratio of a 
task.
+*
+* @param task The task to be sampled.
+* @return A future of the task back pressure ratio.
+*/
+   public CompletableFuture<Double> 
sampleTaskBackPressure(BackPressureSampleableTask task) {
+   return sampleTaskBackPressure(
+   checkNotNull(task),
+   numSamples,
+   delayBetweenSamples,
+   new ArrayList<>(numSamples),
+   new CompletableFuture<>());
+   }
+
+   private CompletableFuture<Double> sampleTaskBackPressure(
+   final BackPressureSampleableTask task,
+   final int remainingNumSamples,
+   final Time delayBetweenSamples,
+   final List<Boolean> taskBackPressureSamples,
+   final CompletableFuture<Double> resultFuture) {
+
+   if (task.isRunning()) {
 
 Review comment:
   If the task state is not running, `task.isBackPressured()` should return false 
in the specific implementation. Then we could remove this condition and the 
interface method `BackPressureSampleableTask#isRunning`.




[GitHub] [flink] flinkbot edited a comment on issue #10022: [FLINK-14135][orc] Introduce OrcColumnarRowInputFormat

2019-11-07 Thread GitBox
flinkbot edited a comment on issue #10022: [FLINK-14135][orc] Introduce 
OrcColumnarRowInputFormat
URL: https://github.com/apache/flink/pull/10022#issuecomment-547278354
 
 
   
   ## CI report:
   
   * 73e9fbb7c4edb8628b93e9a5b17271e57a8b8f14 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133947525)
   * b86c5db1ab9a081f0fec52f15fdad3b4f4d61939 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/134797047)
   * 77aa3cb348f97c8e2999bafcee9e7169b96e983e : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135586640)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[jira] [Commented] (FLINK-14374) Enable RegionFailoverITCase to pass with scheduler NG

2019-11-07 Thread Yun Tang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16969885#comment-16969885
 ] 

Yun Tang commented on FLINK-14374:
--

[~gjy], adding {{FailingRestartStrategy}} was to verify FLINK-13452. If we are 
sure that case would not happen with scheduler NG, we could just remove 
{{FailingRestartStrategy}} directly, leaving {{RegionFailoverITCase}} to verify 
region failover in integration cases as it previously did.

> Enable RegionFailoverITCase to pass with scheduler NG
> -
>
> Key: FLINK-14374
> URL: https://issues.apache.org/jira/browse/FLINK-14374
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination, Tests
>Affects Versions: 1.10.0
>Reporter: Zhu Zhu
>Assignee: Gary Yao
>Priority: Major
> Fix For: 1.10.0
>
>
> RegionFailoverITCase currently fails with scheduler NG.
> The failure cause is that it's using {{FailingRestartStrategy}} which is not 
> supported in scheduler NG.
> However, the usage of {{FailingRestartStrategy}} seems not to be necessary. 
> It's for verifying a special case (see FLINK-13452) of the legacy scheduler which 
> is less likely to happen in the future.
> I'd propose to drop the usage of {{FailingRestartStrategy}} in 
> {{RegionFailoverITCase}} to make {{RegionFailoverITCase}} a simple test for 
> streaming job region failover.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] zhijiangW commented on a change in pull request #10083: [FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.

2019-11-07 Thread GitBox
zhijiangW commented on a change in pull request #10083: 
[FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.
URL: https://github.com/apache/flink/pull/10083#discussion_r344010610
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/taskexecutor/BackPressureSampleService.java
 ##
 @@ -0,0 +1,116 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.flink.runtime.taskexecutor;
+
+import org.apache.flink.api.common.time.Time;
+import org.apache.flink.runtime.concurrent.ScheduledExecutor;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.CompletableFuture;
+
+import static org.apache.flink.util.Preconditions.checkArgument;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+/**
+ * Samples whether a task is back pressured multiple times. The number of 
+ * back-pressured samples divided by the total number of samples yields the 
+ * back pressure ratio.
+ */
+public class BackPressureSampleService {
+
+   /** Number of samples to take when determining the back pressure of a 
task. */
+   private final int numSamples;
+
+   /** Time to wait between samples when determining the back pressure of 
a task. */
+   private final Time delayBetweenSamples;
+
+   /** Executor to run sample tasks. */
+   private final ScheduledExecutor scheduledExecutor;
+
+   BackPressureSampleService(
+   int numSamples,
+   Time delayBetweenSamples,
+   ScheduledExecutor scheduledExecutor) {
+
+   checkArgument(numSamples >= 1, "Illegal number of samples: " + 
numSamples);
+
+   this.numSamples = numSamples;
+   this.delayBetweenSamples = checkNotNull(delayBetweenSamples);
+   this.scheduledExecutor = checkNotNull(scheduledExecutor, "The 
scheduledExecutor must not be null.");
+   }
+
+   /**
+* Returns a future that completes with the back pressure ratio of a 
task.
+*
+* @param task The task to be sampled.
+* @return A future of the task back pressure ratio.
+*/
+   public CompletableFuture<Double> sampleTaskBackPressure(BackPressureSampleableTask task) {
+   return sampleTaskBackPressure(
+   checkNotNull(task),
+   numSamples,
+   delayBetweenSamples,
+   new ArrayList<>(numSamples),
+   new CompletableFuture<>());
+   }
+
+   private CompletableFuture<Double> sampleTaskBackPressure(
+   final BackPressureSampleableTask task,
+   final int remainingNumSamples,
+   final Time delayBetweenSamples,
+   final List<Boolean> taskBackPressureSamples,
+   final CompletableFuture<Double> resultFuture) {
+
+   if (task.isRunning()) {
+   taskBackPressureSamples.add(task.isBackPressured());
+   } else if (!taskBackPressureSamples.isEmpty()) {
+   
resultFuture.complete(calculateTaskBackPressureRatio(taskBackPressureSamples));
+   return resultFuture;
+   } else {
+   throw new IllegalStateException(String.format("Cannot 
sample task %s. " +
+   "Because the task is not running.", 
task.getExecutionId()));
+   }
+
+   if (remainingNumSamples > 1) {
+   scheduledExecutor.schedule(
+   () -> sampleTaskBackPressure(
+   task,
+   remainingNumSamples - 1,
+   delayBetweenSamples,
+   taskBackPressureSamples,
+   resultFuture),
+   delayBetweenSamples.getSize(),
+   delayBetweenSamples.getUnit());
+   } else {
+   
resultFuture.complete(calculateTaskBackPressureRatio(taskBackPressureSamples));
+

[GitHub] [flink] zhijiangW commented on a change in pull request #10083: [FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.

2019-11-07 Thread GitBox
zhijiangW commented on a change in pull request #10083: 
[FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.
URL: https://github.com/apache/flink/pull/10083#discussion_r344010528
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/taskexecutor/BackPressureSampleService.java
 ##
 @@ -0,0 +1,116 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.flink.runtime.taskexecutor;
+
+import org.apache.flink.api.common.time.Time;
+import org.apache.flink.runtime.concurrent.ScheduledExecutor;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.CompletableFuture;
+
+import static org.apache.flink.util.Preconditions.checkArgument;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+/**
+ * Samples whether a task is back pressured multiple times. The number of 
+ * back-pressured samples divided by the total number of samples yields the 
+ * back pressure ratio.
+ */
+public class BackPressureSampleService {
+
+   /** Number of samples to take when determining the back pressure of a 
task. */
+   private final int numSamples;
+
+   /** Time to wait between samples when determining the back pressure of 
a task. */
+   private final Time delayBetweenSamples;
+
+   /** Executor to run sample tasks. */
+   private final ScheduledExecutor scheduledExecutor;
+
+   BackPressureSampleService(
+   int numSamples,
+   Time delayBetweenSamples,
+   ScheduledExecutor scheduledExecutor) {
+
+   checkArgument(numSamples >= 1, "Illegal number of samples: " + 
numSamples);
+
+   this.numSamples = numSamples;
+   this.delayBetweenSamples = checkNotNull(delayBetweenSamples);
+   this.scheduledExecutor = checkNotNull(scheduledExecutor, "The 
scheduledExecutor must not be null.");
+   }
+
+   /**
+* Returns a future that completes with the back pressure ratio of a 
task.
+*
+* @param task The task to be sampled.
+* @return A future of the task back pressure ratio.
+*/
+   public CompletableFuture<Double> sampleTaskBackPressure(BackPressureSampleableTask task) {
+   return sampleTaskBackPressure(
+   checkNotNull(task),
+   numSamples,
+   delayBetweenSamples,
+   new ArrayList<>(numSamples),
+   new CompletableFuture<>());
+   }
+
+   private CompletableFuture<Double> sampleTaskBackPressure(
+   final BackPressureSampleableTask task,
+   final int remainingNumSamples,
+   final Time delayBetweenSamples,
+   final List<Boolean> taskBackPressureSamples,
+   final CompletableFuture<Double> resultFuture) {
+
+   if (task.isRunning()) {
+   taskBackPressureSamples.add(task.isBackPressured());
+   } else if (!taskBackPressureSamples.isEmpty()) {
+   
resultFuture.complete(calculateTaskBackPressureRatio(taskBackPressureSamples));
+   return resultFuture;
+   } else {
+   throw new IllegalStateException(String.format("Cannot 
sample task %s. " +
+   "Because the task is not running.", 
task.getExecutionId()));
+   }
+
+   if (remainingNumSamples > 1) {
+   scheduledExecutor.schedule(
+   () -> sampleTaskBackPressure(
+   task,
+   remainingNumSamples - 1,
+   delayBetweenSamples,
+   taskBackPressureSamples,
+   resultFuture),
+   delayBetweenSamples.getSize(),
+   delayBetweenSamples.getUnit());
+   } else {
+   
resultFuture.complete(calculateTaskBackPressureRatio(taskBackPressureSamples));
+

[GitHub] [flink] zhijiangW commented on a change in pull request #10083: [FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.

2019-11-07 Thread GitBox
zhijiangW commented on a change in pull request #10083: 
[FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.
URL: https://github.com/apache/flink/pull/10083#discussion_r344009866
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/taskexecutor/BackPressureSampleService.java
 ##
 @@ -0,0 +1,116 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.flink.runtime.taskexecutor;
+
+import org.apache.flink.api.common.time.Time;
+import org.apache.flink.runtime.concurrent.ScheduledExecutor;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.CompletableFuture;
+
+import static org.apache.flink.util.Preconditions.checkArgument;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+/**
+ * Samples whether a task is back pressured multiple times. The number of 
+ * back-pressured samples divided by the total number of samples yields the 
+ * back pressure ratio.
+ */
+public class BackPressureSampleService {
+
+   /** Number of samples to take when determining the back pressure of a 
task. */
+   private final int numSamples;
+
+   /** Time to wait between samples when determining the back pressure of 
a task. */
+   private final Time delayBetweenSamples;
+
+   /** Executor to run sample tasks. */
+   private final ScheduledExecutor scheduledExecutor;
+
+   BackPressureSampleService(
+   int numSamples,
+   Time delayBetweenSamples,
+   ScheduledExecutor scheduledExecutor) {
+
+   checkArgument(numSamples >= 1, "Illegal number of samples: " + 
numSamples);
+
+   this.numSamples = numSamples;
+   this.delayBetweenSamples = checkNotNull(delayBetweenSamples);
+   this.scheduledExecutor = checkNotNull(scheduledExecutor, "The 
scheduledExecutor must not be null.");
+   }
+
+   /**
+* Returns a future that completes with the back pressure ratio of a 
task.
+*
+* @param task The task to be sampled.
+* @return A future of the task back pressure ratio.
+*/
+   public CompletableFuture<Double> sampleTaskBackPressure(BackPressureSampleableTask task) {
+   return sampleTaskBackPressure(
+   checkNotNull(task),
+   numSamples,
+   delayBetweenSamples,
+   new ArrayList<>(numSamples),
+   new CompletableFuture<>());
+   }
+
+   private CompletableFuture<Double> sampleTaskBackPressure(
+   final BackPressureSampleableTask task,
+   final int remainingNumSamples,
+   final Time delayBetweenSamples,
+   final List<Boolean> taskBackPressureSamples,
+   final CompletableFuture<Double> resultFuture) {
+
+   if (task.isRunning()) {
+   taskBackPressureSamples.add(task.isBackPressured());
+   } else if (!taskBackPressureSamples.isEmpty()) {
+   
resultFuture.complete(calculateTaskBackPressureRatio(taskBackPressureSamples));
 
 Review comment:
  There seems to be a logic bug here. If `remainingNumSamples` has not 
reached 0, then we should not complete the future yet; otherwise the results 
would not be accurate, right?
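For reference, the ratio the future completes with is presumably the fraction of positive samples; below is a self-contained sketch of such a `calculateTaskBackPressureRatio` (the method body is not shown in the quoted diff, so this is an assumption about its behaviour).

```java
import java.util.Arrays;
import java.util.List;

public class BackPressureRatio {

    // Assumed behaviour of calculateTaskBackPressureRatio: the number of
    // back-pressured observations divided by the total number of samples.
    static double calculateTaskBackPressureRatio(List<Boolean> samples) {
        long backPressured = samples.stream().filter(b -> b).count();
        return (double) backPressured / samples.size();
    }

    public static void main(String[] args) {
        // 2 back-pressured observations out of 4 samples -> ratio 0.5
        List<Boolean> samples = Arrays.asList(true, false, true, false);
        System.out.println(calculateTaskBackPressureRatio(samples)); // prints 0.5
    }
}
```

Completing the future early, with fewer than `numSamples` entries in the list, skews this ratio toward whatever the early observations happened to show, which is the inaccuracy the comment points at.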




[GitHub] [flink] zhijiangW commented on a change in pull request #10083: [FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.

2019-11-07 Thread GitBox
zhijiangW commented on a change in pull request #10083: 
[FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.
URL: https://github.com/apache/flink/pull/10083#discussion_r344009531
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/taskexecutor/BackPressureSampleService.java
 ##
 @@ -0,0 +1,116 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.flink.runtime.taskexecutor;
+
+import org.apache.flink.api.common.time.Time;
+import org.apache.flink.runtime.concurrent.ScheduledExecutor;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.CompletableFuture;
+
+import static org.apache.flink.util.Preconditions.checkArgument;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+/**
+ * Samples whether a task is back pressured multiple times. The number of 
+ * back-pressured samples divided by the total number of samples yields the 
+ * back pressure ratio.
+ */
+public class BackPressureSampleService {
+
+   /** Number of samples to take when determining the back pressure of a 
task. */
+   private final int numSamples;
+
+   /** Time to wait between samples when determining the back pressure of 
a task. */
+   private final Time delayBetweenSamples;
+
+   /** Executor to run sample tasks. */
+   private final ScheduledExecutor scheduledExecutor;
+
+   BackPressureSampleService(
+   int numSamples,
+   Time delayBetweenSamples,
+   ScheduledExecutor scheduledExecutor) {
+
+   checkArgument(numSamples >= 1, "Illegal number of samples: " + 
numSamples);
+
+   this.numSamples = numSamples;
+   this.delayBetweenSamples = checkNotNull(delayBetweenSamples);
+   this.scheduledExecutor = checkNotNull(scheduledExecutor, "The 
scheduledExecutor must not be null.");
+   }
+
+   /**
+* Returns a future that completes with the back pressure ratio of a 
task.
+*
+* @param task The task to be sampled.
+* @return A future of the task back pressure ratio.
+*/
+   public CompletableFuture<Double> sampleTaskBackPressure(BackPressureSampleableTask task) {
+   return sampleTaskBackPressure(
+   checkNotNull(task),
+   numSamples,
+   delayBetweenSamples,
+   new ArrayList<>(numSamples),
+   new CompletableFuture<>());
+   }
+
+   private CompletableFuture<Double> sampleTaskBackPressure(
+   final BackPressureSampleableTask task,
+   final int remainingNumSamples,
+   final Time delayBetweenSamples,
+   final List<Boolean> taskBackPressureSamples,
+   final CompletableFuture<Double> resultFuture) {
+
+   if (task.isRunning()) {
+   taskBackPressureSamples.add(task.isBackPressured());
+   } else if (!taskBackPressureSamples.isEmpty()) {
+   
resultFuture.complete(calculateTaskBackPressureRatio(taskBackPressureSamples));
+   return resultFuture;
+   } else {
+   throw new IllegalStateException(String.format("Cannot 
sample task %s. " +
+   "Because the task is not running.", 
task.getExecutionId()));
+   }
 
 Review comment:
   These three conditions have no obvious relationship to each other, so it would 
be better to separate them. 
   Also, it should not throw `IllegalStateException` if the task is not running; 
maybe we could regard a non-running task as not back pressured?




[GitHub] [flink] flinkbot commented on issue #10129: [FLINK-14642][runtime] Deprecate metric `fullRestarts`

2019-11-07 Thread GitBox
flinkbot commented on issue #10129: [FLINK-14642][runtime] Deprecate metric 
`fullRestarts`
URL: https://github.com/apache/flink/pull/10129#issuecomment-551408678
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 72f509341e6a5743c88f7fe960da86ba170990eb (Fri Nov 08 
06:53:19 UTC 2019)
   
   **Warnings:**
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-14642).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[GitHub] [flink] zhuzhurk opened a new pull request #10129: [FLINK-14642][runtime] Deprecate metric `fullRestarts`

2019-11-07 Thread GitBox
zhuzhurk opened a new pull request #10129: [FLINK-14642][runtime] Deprecate 
metric `fullRestarts`
URL: https://github.com/apache/flink/pull/10129
 
 
   
   ## What is the purpose of the change
   
   FLINK-14164 introduces a metric 'numberOfRestarts' that counts all kinds of 
restarts.
   The metric 'fullRestarts' is superseded and this PR is to deprecate it for 
future removal.
   
   ## Brief change log
   
 - *Marked NumberOfFullRestartsGauge as deprecated*
 - *Adjust description doc of 'numberOfRestarts'*
   
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   Manually verified locally.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   




[jira] [Updated] (FLINK-14642) Flink TupleSerializer and CaseClassSerializer should support copying NULL values

2019-11-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-14642:
---
Labels: pull-request-available  (was: )

> Flink TupleSerializer and CaseClassSerializer should support copying NULL values
> 
>
> Key: FLINK-14642
> URL: https://issues.apache.org/jira/browse/FLINK-14642
> Project: Flink
>  Issue Type: Bug
>  Components: API / Type Serialization System
>Affects Versions: 1.9.1
>Reporter: Victor Wong
>Priority: Major
>  Labels: pull-request-available
>
> Currently, TupleSerializer and CaseClassSerializer do not support serializing 
> NULL values, which I think is acceptable. But not supporting copying NULL values 
> will cause the following code to throw an exception, which I think does not 
> match users' expectations and is error-prone.
> *codes:*
> {code:java}
> stream.map(xxx).filter(_ != null).xxx //the return type of the map function 
> is Tuple and it may return null{code}
>  
> *exception info:*
>  
> {code:java}
> Caused by: java.lang.NullPointerException 
>   at 
> org.apache.flink.api.scala.typeutils.CaseClassSerializer.copy(CaseClassSerializer.scala:92)
>  
>   at 
> org.apache.flink.api.scala.typeutils.CaseClassSerializer.copy(CaseClassSerializer.scala:32)
>  
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:635)
> {code}
>  
> *suggestion:*
> Can we make the `copy` method of TupleSerializer/CaseClassSerializer 
> handle NULL values? e.g.
> {code:java}
> // org.apache.flink.api.scala.typeutils.CaseClassSerializer#copy
> def copy(from: T): T = {
>   // handle NULL values.
>   if(from == null) {
> return from
>   }
>   initArray()
>   var i = 0
>   while (i < arity) {
> fields(i) = 
> fieldSerializers(i).copy(from.productElement(i).asInstanceOf[AnyRef])
> i += 1
>   }
>   createInstance(fields)
> }
> {code}
>  
>  
>  
>  
>  
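The same null-guard pattern can be sketched in plain Java for clarity (an illustrative deep-copy helper, not Flink's actual TupleSerializer): the copy method returns a null input unchanged instead of dereferencing it.

```java
import java.util.Arrays;

public class NullSafeCopy {

    // Illustrative copy with the suggested null guard: return null inputs
    // as-is instead of letting the field-by-field copy throw an NPE.
    static String[] copy(String[] from) {
        if (from == null) {
            return null; // handle NULL values, as the Jira suggests
        }
        return Arrays.copyOf(from, from.length);
    }

    public static void main(String[] args) {
        System.out.println(copy(null)); // prints null
        System.out.println(Arrays.toString(copy(new String[] {"a", "b"}))); // prints [a, b]
    }
}
```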





[GitHub] [flink] flinkbot commented on issue #10128: [FLINK-14641][docs] Fix description of metric `fullRestarts`

2019-11-07 Thread GitBox
flinkbot commented on issue #10128: [FLINK-14641][docs] Fix description of 
metric `fullRestarts`
URL: https://github.com/apache/flink/pull/10128#issuecomment-551407824
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 409604f05255e81a4dd943042c5b18c18ac16a49 (Fri Nov 08 
06:49:43 UTC 2019)
   
✅no warnings
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[jira] [Updated] (FLINK-14641) Fix description of metric `fullRestarts`

2019-11-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-14641:
---
Labels: pull-request-available  (was: )

> Fix description of metric `fullRestarts`
> 
>
> Key: FLINK-14641
> URL: https://issues.apache.org/jira/browse/FLINK-14641
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.10.0, 1.9.2
>Reporter: Zhu Zhu
>Assignee: Zhu Zhu
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.10.0, 1.9.2
>
>
> The metric `fullRestarts` counts both full restarts and fine grained restarts 
> since 1.9.2.
> We should update the metric description doc accordingly.
> We need to point out that the metric counted only full restarts in 1.9.1 and 
> earlier versions, and has counted all kinds of restarts since 1.9.2.





[GitHub] [flink] zhuzhurk opened a new pull request #10128: [FLINK-14641][docs] Fix description of metric `fullRestarts`

2019-11-07 Thread GitBox
zhuzhurk opened a new pull request #10128: [FLINK-14641][docs] Fix description 
of metric `fullRestarts`
URL: https://github.com/apache/flink/pull/10128
 
 
   
   ## What is the purpose of the change
   
   The metric `fullRestarts` counts both full restarts and fine grained 
restarts since 1.9.2.
   This PR is to update the metric description doc accordingly.
   
   ## Brief change log
   
 - *Fixed descriptions doc of metric `fullRestarts`*
   
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   




[jira] [Commented] (FLINK-14666) support multiple catalog in flink table sql

2019-11-07 Thread Kurt Young (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16969879#comment-16969879
 ] 

Kurt Young commented on FLINK-14666:


Can you check on the latest master?

Previously, all temporary objects were registered in the default catalog and 
default database, no matter which one was current. But after FLIP-64, things 
might have changed.

cc [~dwysakowicz]

> support multiple catalog in flink table sql
> ---
>
> Key: FLINK-14666
> URL: https://issues.apache.org/jira/browse/FLINK-14666
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.8.0, 1.8.2, 1.9.0, 1.9.1
>Reporter: yuemeng
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently, Calcite will only use the current catalog as the schema path to 
> validate the SQL node; this may not be reasonable.
> {code}
> tableEnvironment.useCatalog("user_catalog");
> tableEnvironment.useDatabase("user_db");
>  Table table = tableEnvironment.sqlQuery("SELECT action, os,count(*) as cnt 
> from music_queue_3 group by action, os,tumble(proctime, INTERVAL '10' 
> SECOND)"); tableEnvironment.registerTable("v1", table);
> Table t2 = tableEnvironment.sqlQuery("select action, os, 1 as cnt from v1");
> tableEnvironment.registerTable("v2", t2);
> tableEnvironment.sqlUpdate("INSERT INTO database2.kafka_table_test1 SELECT 
> action, os,cast (cnt as BIGINT) as cnt from v2");
> {code}
> Suppose the source table music_queue_3 and the sink table kafka_table_test1 
> are both in user_catalog, but temporary tables or views such as v1, v2 and v3 
> are registered in the default catalog.
> When we select the temporary table v2 and insert it into our own catalog 
> table database2.kafka_table_test1, SQL node validation always fails: the 
> schema path in the catalog reader contains only the current catalog, not the 
> default catalog, so the temporary table or view can never be identified.
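The lookup failure described above can be illustrated with a minimal, self-contained sketch (plain Java, no Flink dependencies; all class and method names here are hypothetical, not Flink's real CatalogManager API): when resolution consults only the current catalog and database, a view registered under the default catalog is invisible once the session switches catalogs.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class CatalogLookupSketch {
    // Table registry keyed by "catalog.database.object" (hypothetical model).
    static final Map<String, String> tables = new HashMap<>();

    // Resolution that consults only the current catalog and database,
    // mirroring the lookup behavior reported in this issue.
    static Optional<String> resolve(String currentCatalog, String currentDb, String name) {
        return Optional.ofNullable(tables.get(currentCatalog + "." + currentDb + "." + name));
    }

    public static void main(String[] args) {
        // The temporary view v2 ends up in the default catalog ...
        tables.put("default_catalog.default_database.v2", "SELECT action, os, 1 AS cnt FROM v1");
        // ... but the session's current catalog is user_catalog, so the
        // validator cannot see it.
        System.out.println(resolve("user_catalog", "user_db", "v2").isPresent()); // false
    }
}
```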



--
This message was sent by Atlassian Jira (v8.3.4#803005)


[GitHub] [flink] zhijiangW commented on a change in pull request #10083: [FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.

2019-11-07 Thread GitBox
zhijiangW commented on a change in pull request #10083: 
[FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.
URL: https://github.com/apache/flink/pull/10083#discussion_r344006383
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/taskexecutor/BackPressureSampleService.java
 ##
 @@ -0,0 +1,116 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.flink.runtime.taskexecutor;
+
+import org.apache.flink.api.common.time.Time;
+import org.apache.flink.runtime.concurrent.ScheduledExecutor;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.CompletableFuture;
+
+import static org.apache.flink.util.Preconditions.checkArgument;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+/**
+ * Samples whether a task is back pressured multi times. The total number of 
samples
+ * divided by the number of back pressure samples reaches the back pressure 
ratio.
+ */
+public class BackPressureSampleService {
+
+   /** Number of samples to take when determining the back pressure of a 
task. */
+   private final int numSamples;
+
+   /** Time to wait between samples when determining the back pressure of 
a task. */
+   private final Time delayBetweenSamples;
+
+   /** Executor to run sample tasks. */
+   private final ScheduledExecutor scheduledExecutor;
+
+   BackPressureSampleService(
+   int numSamples,
+   Time delayBetweenSamples,
+   ScheduledExecutor scheduledExecutor) {
+
+   checkArgument(numSamples >= 1, "Illegal number of samples: " + 
numSamples);
+
+   this.numSamples = numSamples;
+   this.delayBetweenSamples = checkNotNull(delayBetweenSamples);
+   this.scheduledExecutor = checkNotNull(scheduledExecutor, "The 
scheduledExecutor must not be null.");
+   }
+
+   /**
+* Returns a future that completes with the back pressure ratio of a 
task.
+*
+* @param task The task to be sampled.
+* @return A future of the task back pressure ratio.
+*/
+   public CompletableFuture<Double> sampleTaskBackPressure(BackPressureSampleableTask task) {
+   return sampleTaskBackPressure(
+   checkNotNull(task),
+   numSamples,
+   delayBetweenSamples,
 
 Review comment:
   No need to pass `delayBetweenSamples`; it can be accessed within the class.
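For context, the aggregation step this service performs can be sketched independently of Flink (hypothetical names; the real service schedules the individual samples asynchronously via the `ScheduledExecutor` with a delay between them): take a fixed number of back-pressure observations and divide the back-pressured count by the total.

```java
import java.util.List;

public class BackPressureRatioSketch {
    // Ratio = back-pressured samples / total samples. This is only the
    // aggregation step; scheduling and delays are omitted here.
    static double backPressureRatio(List<Boolean> samples) {
        long pressured = samples.stream().filter(s -> s).count();
        return pressured / (double) samples.size();
    }

    public static void main(String[] args) {
        // Four samples, three of which observed back pressure.
        List<Boolean> samples = List.of(true, false, true, true);
        System.out.println(backPressureRatio(samples)); // 0.75
    }
}
```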




[GitHub] [flink] zhijiangW commented on a change in pull request #10083: [FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.

2019-11-07 Thread GitBox
zhijiangW commented on a change in pull request #10083: 
[FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.
URL: https://github.com/apache/flink/pull/10083#discussion_r344006278
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/taskexecutor/BackPressureSampleService.java
 ##
 @@ -0,0 +1,116 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.flink.runtime.taskexecutor;
+
+import org.apache.flink.api.common.time.Time;
+import org.apache.flink.runtime.concurrent.ScheduledExecutor;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.CompletableFuture;
+
+import static org.apache.flink.util.Preconditions.checkArgument;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+/**
+ * Samples whether a task is back pressured multi times. The total number of 
samples
+ * divided by the number of back pressure samples reaches the back pressure 
ratio.
+ */
+public class BackPressureSampleService {
+
+   /** Number of samples to take when determining the back pressure of a 
task. */
+   private final int numSamples;
+
+   /** Time to wait between samples when determining the back pressure of 
a task. */
+   private final Time delayBetweenSamples;
+
+   /** Executor to run sample tasks. */
+   private final ScheduledExecutor scheduledExecutor;
+
+   BackPressureSampleService(
+   int numSamples,
+   Time delayBetweenSamples,
+   ScheduledExecutor scheduledExecutor) {
+
+   checkArgument(numSamples >= 1, "Illegal number of samples: " + 
numSamples);
+
+   this.numSamples = numSamples;
+   this.delayBetweenSamples = checkNotNull(delayBetweenSamples);
+   this.scheduledExecutor = checkNotNull(scheduledExecutor, "The 
scheduledExecutor must not be null.");
+   }
+
+   /**
+* Returns a future that completes with the back pressure ratio of a 
task.
+*
+* @param task The task to be sampled.
+* @return A future of the task back pressure ratio.
+*/
+   public CompletableFuture<Double> sampleTaskBackPressure(BackPressureSampleableTask task) {
+   return sampleTaskBackPressure(
+   checkNotNull(task),
+   numSamples,
+   delayBetweenSamples,
+   new ArrayList<>(numSamples),
+   new CompletableFuture<>());
+   }
+
+   private CompletableFuture<Double> sampleTaskBackPressure(
+   final BackPressureSampleableTask task,
 
 Review comment:
   Remove `final` from all the arguments, and also in the method below.




[GitHub] [flink] flinkbot edited a comment on issue #10098: [FLINK-14326][FLINK-14324][table] Support to apply watermark assigner in planner

2019-11-07 Thread GitBox
flinkbot edited a comment on issue #10098: [FLINK-14326][FLINK-14324][table] 
Support to apply watermark assigner in planner
URL: https://github.com/apache/flink/pull/10098#issuecomment-550174677
 
 
   
   ## CI report:
   
   * a3db95a7340322421d1fa826dcac850116011687 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135182714)
   * 6db8b32408ffd954c96d1829c36d04542825a068 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135450723)
   * 31635a192b64660cebe93afee586249bf1df9b13 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135461602)
   * 229ede5e47a1d7346edc79449a39cf3c56858604 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135569483)
   * a31fdb8792a506e6a79ce108f35b7123b389e84d : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/135588353)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] flinkbot commented on issue #10127: [FLINK-14654] Fix the arguments number mismatching with placeholders in log statements

2019-11-07 Thread GitBox
flinkbot commented on issue #10127: [FLINK-14654] Fix the arguments number 
mismatching with placeholders in log statements
URL: https://github.com/apache/flink/pull/10127#issuecomment-551405829
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 02df4747bebc7f7601bce39dea46ea285c46 (Fri Nov 08 
06:40:54 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   ## Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[jira] [Commented] (FLINK-14641) Fix description of metric `fullRestarts`

2019-11-07 Thread Zhu Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16969877#comment-16969877
 ] 

Zhu Zhu commented on FLINK-14641:
-

[~chesnay] I'm not quite sure whether we can apply this fix to 1.9.2 only, or 
whether we should also do it in master and base FLINK-14643 on it.

> Fix description of metric `fullRestarts`
> 
>
> Key: FLINK-14641
> URL: https://issues.apache.org/jira/browse/FLINK-14641
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.10.0, 1.9.2
>Reporter: Zhu Zhu
>Assignee: Zhu Zhu
>Priority: Critical
> Fix For: 1.10.0, 1.9.2
>
>
> The metric `fullRestarts` counts both full restarts and fine grained restarts 
> since 1.9.2.
> We should update the metric description doc accordingly.
> We need to point out that the metric counted only full restarts in 1.9.1 and 
> earlier versions, and has counted all kinds of restarts since 1.9.2.





[jira] [Updated] (FLINK-14654) Fix the arguments number mismatching with placeholders in log statements

2019-11-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-14654:
---
Labels: pull-request-available  (was: )

> Fix the arguments number mismatching with placeholders in log statements
> 
>
> Key: FLINK-14654
> URL: https://issues.apache.org/jira/browse/FLINK-14654
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.9.1
>Reporter: Yun Tang
>Assignee: Yun Tang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>
> As official Flink [java code 
> style|https://flink.apache.org/contributing/code-style-and-quality-java.html#preconditions-and-log-statements]
>  suggested, we should use the correct log statement format. However, there 
> are 13 files in the current master branch where the number of arguments 
> does not match the placeholders in log statements.
> The error looks like:
> {code:java}
> LOG.warn("Failed to read native metric %s from RocksDB", property, e);
> {code}
> and the correct format should be
> {code:java}
> LOG.warn("Failed to read native metric {} from RocksDB.", property, e);
> {code}
> The other errors look like
> {code:java}
> LOG.warn("Could not find method implementations in the shaded jar. Exception: 
> {}", e);
> {code}
> and the correct format should be
> {code:java}
> LOG.warn("Could not find method implementations in the shaded jar.", e);
> {code}
> Below is the full list of files have problems in log statements.
> {code:java}
> flink-contrib/flink-connector-wikiedits/src/main/java/org/apache/flink/streaming/connectors/wikiedits/WikipediaEditEventIrcStream.java
> flink-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/FlinkKinesisProducer.java
> flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/PipelineErrorHandler.java
> flink-runtime/src/main/java/org/apache/flink/runtime/security/modules/HadoopModule.java
> flink-metrics/flink-metrics-datadog/src/main/java/org/apache/flink/metrics/datadog/DatadogHttpReporter.java
> flink-formats/flink-parquet/src/main/java/org/apache/flink/formats/parquet/ParquetPojoInputFormat.java
> flink-formats/flink-parquet/src/main/java/org/apache/flink/formats/parquet/ParquetTableSource.java
> flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/runtime/functions/SqlFunctionUtils.java
> flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/runtime/operators/values/ValuesInputFormat.java
> flink-end-to-end-tests/flink-connector-gcp-pubsub-emulator-tests/src/test/java/org/apache/flink/streaming/connectors/gcp/pubsub/emulator/GCloudEmulatorManager.java
> flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/streaming/connectors/kafka/KafkaTestEnvironmentImpl.java
> flink-runtime/src/main/java/org/apache/flink/runtime/metrics/ReporterSetup.java
> flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/RocksDBNativeMetricMonitor.java{code}
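The rule the inspection enforces boils down to comparing the number of `{}` placeholders with the number of non-throwable arguments (SLF4J treats one extra trailing `Throwable` specially, as in the corrected examples above). A minimal sketch of that check (hypothetical helper, not the IntelliJ implementation):

```java
public class PlaceholderCheckSketch {
    // Count SLF4J-style "{}" placeholders in a log format string.
    static int countPlaceholders(String format) {
        int count = 0;
        for (int i = 0; i + 1 < format.length(); i++) {
            if (format.charAt(i) == '{' && format.charAt(i + 1) == '}') {
                count++;
                i++; // skip the '}'
            }
        }
        return count;
    }

    // A call is well-formed if placeholders match the argument count,
    // allowing one extra trailing Throwable.
    static boolean wellFormed(String format, int argCount, boolean lastArgIsThrowable) {
        int effectiveArgs = lastArgIsThrowable ? argCount - 1 : argCount;
        return countPlaceholders(format) == effectiveArgs;
    }

    public static void main(String[] args) {
        // The buggy call from the issue: "%s" is not an SLF4J placeholder.
        System.out.println(wellFormed("Failed to read native metric %s from RocksDB", 2, true)); // false
        // The corrected call.
        System.out.println(wellFormed("Failed to read native metric {} from RocksDB.", 2, true)); // true
    }
}
```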





[GitHub] [flink] flinkbot edited a comment on issue #10074: [FLINK-14499][Runtime / Metrics]MetricRegistry#getMetricQueryServiceGatewayRpcAddress is Nonnull

2019-11-07 Thread GitBox
flinkbot edited a comment on issue #10074: [FLINK-14499][Runtime / 
Metrics]MetricRegistry#getMetricQueryServiceGatewayRpcAddress is Nonnull
URL: https://github.com/apache/flink/pull/10074#issuecomment-549322411
 
 
   
   ## CI report:
   
   * 4807c59ae20273d5b4611395ca5467217d7d0ca3 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/134842985)
   * fedd3f2abd43a67d6461b895145964fcdf98af0d : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/134961727)
   * 5c077aaf9166fbbd068f80eb61294d9cfea2eddc : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135207803)
   * 8d5a6b460941e97a1a4fd2f432680721fd2f2289 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135239336)
   * 40936c4b23e94ecdf8d925ed6eae917ec3181584 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135567432)
   * 070ca5c4166edea55b93764c6d65b46e7ae9c938 : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] Myasuka opened a new pull request #10127: [FLINK-14654] Fix the arguments number mismatching with placeholders in log statements

2019-11-07 Thread GitBox
Myasuka opened a new pull request #10127: [FLINK-14654] Fix the arguments 
number mismatching with placeholders in log statements
URL: https://github.com/apache/flink/pull/10127
 
 
   ## What is the purpose of the change
   As official Flink [java code 
style](https://flink.apache.org/contributing/code-style-and-quality-java.html#preconditions-and-log-statements)
 suggested, we should use the correct log statement format. However, there are 
13 files in the current master branch where the number of arguments does not 
match the placeholders in log statements.
   
   ## Brief change log
   
 - Correct the log format in all 13 files.
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   You can verify whether any more files have log formatting problems [within 
IntelliJ](https://www.jetbrains.com/help/idea/running-inspections.html) using the 
"Number of placeholders does not match arguments in log calling" inspection.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? not applicable
   




[GitHub] [flink] zhijiangW commented on a change in pull request #10083: [FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.

2019-11-07 Thread GitBox
zhijiangW commented on a change in pull request #10083: 
[FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.
URL: https://github.com/apache/flink/pull/10083#discussion_r344005130
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/taskexecutor/BackPressureSampleService.java
 ##
 @@ -0,0 +1,116 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.flink.runtime.taskexecutor;
+
+import org.apache.flink.api.common.time.Time;
+import org.apache.flink.runtime.concurrent.ScheduledExecutor;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.CompletableFuture;
+
+import static org.apache.flink.util.Preconditions.checkArgument;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+/**
+ * Samples whether a task is back pressured multi times. The total number of 
samples
+ * divided by the number of back pressure samples reaches the back pressure 
ratio.
+ */
+public class BackPressureSampleService {
+
+   /** Number of samples to take when determining the back pressure of a 
task. */
+   private final int numSamples;
+
+   /** Time to wait between samples when determining the back pressure of 
a task. */
+   private final Time delayBetweenSamples;
+
+   /** Executor to run sample tasks. */
+   private final ScheduledExecutor scheduledExecutor;
+
+   BackPressureSampleService(
+   int numSamples,
+   Time delayBetweenSamples,
+   ScheduledExecutor scheduledExecutor) {
+
+   checkArgument(numSamples >= 1, "Illegal number of samples: " + 
numSamples);
+
+   this.numSamples = numSamples;
+   this.delayBetweenSamples = checkNotNull(delayBetweenSamples);
+   this.scheduledExecutor = checkNotNull(scheduledExecutor, "The 
scheduledExecutor must not be null.");
+   }
+
+   /**
+* Returns a future that completes with the back pressure ratio of a 
task.
 
 Review comment:
   Schedules sampling of the task back pressure and returns a future




[jira] [Closed] (FLINK-14668) LocalExecutor#getOrCreateExecutionContext not working as expected

2019-11-07 Thread zhangwei (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhangwei closed FLINK-14668.

Resolution: Fixed

> LocalExecutor#getOrCreateExecutionContext not working as expected
> -
>
> Key: FLINK-14668
> URL: https://issues.apache.org/jira/browse/FLINK-14668
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.9.0, 1.9.1
>Reporter: zhangwei
>Priority: Critical
> Fix For: 1.11.0
>
>
> ExecutionContext 
> [copies|https://github.com/apache/flink/blob/b0a9afdd24fb70131b1e80d46d0ca101235a4a36/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/gateway/local/ExecutionContext.java#L134]
>  a new SessionContext in the constructor, but SessionContext members do not 
> all override equals(), so 
> {code:java}
> executionContext.getSessionContext().equals(session)
> {code}
>   is always false.
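The failure mode described here can be reproduced with a tiny self-contained sketch (hypothetical class, no Flink dependencies): a member that does not override `equals` falls back to `Object`'s reference equality, so a freshly copied context never equals the original even when all field values match.

```java
public class EqualsSketch {
    // Mimics a context member that does not override equals().
    static class Member {
        final String value;
        Member(String value) { this.value = value; }
        // No equals()/hashCode() override: Object.equals compares identity.
    }

    public static void main(String[] args) {
        Member original = new Member("session-1");
        Member copy = new Member("session-1"); // same field values
        System.out.println(original.equals(copy)); // false: reference equality
    }
}
```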





[jira] [Commented] (FLINK-14668) LocalExecutor#getOrCreateExecutionContext not working as expected

2019-11-07 Thread zhangwei (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16969876#comment-16969876
 ] 

zhangwei commented on FLINK-14668:
--

I read it wrong. I'm sorry.

> LocalExecutor#getOrCreateExecutionContext not working as expected
> -
>
> Key: FLINK-14668
> URL: https://issues.apache.org/jira/browse/FLINK-14668
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.9.0, 1.9.1
>Reporter: zhangwei
>Priority: Critical
> Fix For: 1.11.0
>
>
> ExecutionContext 
> [copies|https://github.com/apache/flink/blob/b0a9afdd24fb70131b1e80d46d0ca101235a4a36/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/gateway/local/ExecutionContext.java#L134]
>  a new SessionContext in the constructor, but SessionContext members do not 
> all override equals(), so 
> {code:java}
> executionContext.getSessionContext().equals(session)
> {code}
>   is always false.





[GitHub] [flink] zhijiangW commented on a change in pull request #10083: [FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.

2019-11-07 Thread GitBox
zhijiangW commented on a change in pull request #10083: 
[FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.
URL: https://github.com/apache/flink/pull/10083#discussion_r344004540
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/taskexecutor/BackPressureSampleService.java
 ##
 @@ -0,0 +1,116 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.flink.runtime.taskexecutor;
+
+import org.apache.flink.api.common.time.Time;
+import org.apache.flink.runtime.concurrent.ScheduledExecutor;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.CompletableFuture;
+
+import static org.apache.flink.util.Preconditions.checkArgument;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+/**
+ * Samples whether a task is back pressured multi times. The total number of 
samples
+ * divided by the number of back pressure samples reaches the back pressure 
ratio.
+ */
+public class BackPressureSampleService {
+
+   /** Number of samples to take when determining the back pressure of a 
task. */
+   private final int numSamples;
+
+   /** Time to wait between samples when determining the back pressure of 
a task. */
+   private final Time delayBetweenSamples;
+
+   /** Executor to run sample tasks. */
+   private final ScheduledExecutor scheduledExecutor;
+
+   BackPressureSampleService(
+   int numSamples,
+   Time delayBetweenSamples,
+   ScheduledExecutor scheduledExecutor) {
+
+   checkArgument(numSamples >= 1, "Illegal number of samples: " + 
numSamples);
+
+   this.numSamples = numSamples;
+   this.delayBetweenSamples = checkNotNull(delayBetweenSamples);
+   this.scheduledExecutor = checkNotNull(scheduledExecutor, "The 
scheduledExecutor must not be null.");
+   }
+
+   /**
+* Returns a future that completes with the back pressure ratio of a 
task.
+*
+* @param task The task to be sampled.
+* @return A future of the task back pressure ratio.
 
 Review comment:
   A future containing the task back pressure ratio




[GitHub] [flink] zhijiangW commented on a change in pull request #10083: [FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.

2019-11-07 Thread GitBox
zhijiangW commented on a change in pull request #10083: 
[FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.
URL: https://github.com/apache/flink/pull/10083#discussion_r344004177
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/taskexecutor/BackPressureSampleService.java
 ##
 @@ -0,0 +1,116 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.flink.runtime.taskexecutor;
+
+import org.apache.flink.api.common.time.Time;
+import org.apache.flink.runtime.concurrent.ScheduledExecutor;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.CompletableFuture;
+
+import static org.apache.flink.util.Preconditions.checkArgument;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+/**
+ * Samples whether a task is back pressured multi times. The total number of 
samples
+ * divided by the number of back pressure samples reaches the back pressure 
ratio.
+ */
+public class BackPressureSampleService {
+
+   /** Number of samples to take when determining the back pressure of a 
task. */
+   private final int numSamples;
+
+   /** Time to wait between samples when determining the back pressure of 
a task. */
+   private final Time delayBetweenSamples;
+
+   /** Executor to run sample tasks. */
+   private final ScheduledExecutor scheduledExecutor;
+
+   BackPressureSampleService(
+   int numSamples,
+   Time delayBetweenSamples,
+   ScheduledExecutor scheduledExecutor) {
+
+   checkArgument(numSamples >= 1, "Illegal number of samples: " + 
numSamples);
+
+   this.numSamples = numSamples;
+   this.delayBetweenSamples = checkNotNull(delayBetweenSamples);
+   this.scheduledExecutor = checkNotNull(scheduledExecutor, "The 
scheduledExecutor must not be null.");
 
 Review comment:
   Use `checkNotNull(scheduledExecutor)` as above; the error message seems unnecessary.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #10126: [FLINK-14590][python] Unify the working directory of Java process and Python process when submitting python jobs via "flink run -py"

2019-11-07 Thread GitBox
flinkbot commented on issue #10126: [FLINK-14590][python] Unify the working 
directory of Java process and Python process when submitting python jobs via 
"flink run -py"
URL: https://github.com/apache/flink/pull/10126#issuecomment-551403753
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit ea5abdfce3ab26a0a196fd7d73a78de109d71bc3 (Fri Nov 08 
06:32:01 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-14590).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   ## Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[jira] [Updated] (FLINK-14667) flink1.9,1.8.2 run flinkSql fat jar ,can't load the right tableFactory (TableSourceFactory) for the kafka

2019-11-07 Thread chun111111 (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chun11 updated FLINK-14667:
---
Description: 
[root@mj flink-1.9.1]# ./bin/flink run -m yarn-cluster 
/root/cp19/streaming-view-0.0.1-SNAPSHOT-jar-with-dependencies.jar

2019-11-07 16:48:57,616 INFO  org.apache.hadoop.yarn.client.RMProxy 
    - Connecting to ResourceManager at /0.0.0.0:8032

2019-11-07 16:48:57,789 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli 
    - No path for the flink jar passed. Using the location of class 
org.apache.flink.yarn.YarnClusterDescriptor to locate the jar

2019-11-07 16:48:57,789 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli 
    - No path for the flink jar passed. Using the location of class 
org.apache.flink.yarn.YarnClusterDescriptor to locate the jar

2019-11-07 16:48:57,986 INFO  
org.apache.flink.yarn.AbstractYarnClusterDescriptor   - Cluster 
specification: ClusterSpecification\{masterMemoryMB=1024, 
taskManagerMemoryMB=1024, numberTaskManagers=1, slotsPerTaskManager=1}

2019-11-07 16:48:58,657 WARN  
org.apache.flink.yarn.AbstractYarnClusterDescriptor   - The 
configuration directory ('/opt/app/flink-1.9.1/conf') contains both LOG4J and 
Logback configuration files. Please delete or rename one of them.

2019-11-07 16:49:00,954 INFO  
org.apache.flink.yarn.AbstractYarnClusterDescriptor   - Submitting 
application master application_1573090964983_0039

2019-11-07 16:49:00,986 INFO  
org.apache.hadoop.yarn.client.api.impl.YarnClientImpl - Submitted 
application application_1573090964983_0039

2019-11-07 16:49:00,986 INFO  
org.apache.flink.yarn.AbstractYarnClusterDescriptor   - Waiting for the 
cluster to be allocated

2019-11-07 16:49:00,988 INFO  
org.apache.flink.yarn.AbstractYarnClusterDescriptor   - Deploying 
cluster, current state ACCEPTED

2019-11-07 16:49:06,534 INFO  
org.apache.flink.yarn.AbstractYarnClusterDescriptor   - YARN 
application has been deployed successfully.

Starting execution of program

 



 The program finished with the following exception:

 

org.apache.flink.client.program.ProgramInvocationException: The main method 
caused an error: findAndCreateTableSource failed.

   at 
org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:593)

   at 
org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:438)

   at 
org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:274)

   at 
org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:746)

   at 
org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:273)

   at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:205)

   at 
org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1010)

   at 
org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1083)

   at java.security.AccessController.doPrivileged(Native Method)

   at javax.security.auth.Subject.doAs(Subject.java:422)

   at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)

   at 
org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)

   at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1083)

Caused by: org.apache.flink.table.api.TableException: findAndCreateTableSource 
failed.

   at 
org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSource(TableFactoryUtil.java:67)

   at 
org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSource(TableFactoryUtil.java:54)

   at 
org.apache.flink.table.descriptors.ConnectTableDescriptor.registerTableSource(ConnectTableDescriptor.java:69)

   at 
com.streaming.activity.task.PaymentViewIndex.registerKafkaTable(PaymentViewIndex.java:205)

   at 
com.streaming.activity.task.PaymentViewIndex.registerSourceTable(PaymentViewIndex.java:106)

   at 
com.streaming.activity.task.PaymentViewIndex.main(PaymentViewIndex.java:68)

   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

   at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)

   at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

   at java.lang.reflect.Method.invoke(Method.java:498)

   at 
org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:576)

   ... 12 more

{color:#de350b}*Caused by: 
org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find a 
suitable table factory for 
'org.apache.flink.table.factories.TableSourceFactory' in*{color}

{color:#de350b}*the classpath.*{color}

 

Reason: No cont

[GitHub] [flink] zhijiangW commented on a change in pull request #10083: [FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.

2019-11-07 Thread GitBox
zhijiangW commented on a change in pull request #10083: 
[FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.
URL: https://github.com/apache/flink/pull/10083#discussion_r344004012
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/taskexecutor/BackPressureSampleService.java
 ##
 @@ -0,0 +1,116 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.flink.runtime.taskexecutor;
+
+import org.apache.flink.api.common.time.Time;
+import org.apache.flink.runtime.concurrent.ScheduledExecutor;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.CompletableFuture;
+
+import static org.apache.flink.util.Preconditions.checkArgument;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+/**
+ * Samples whether a task is back pressured multiple times. The number of
+ * back pressure samples divided by the total number of samples yields the
+ * back pressure ratio.
+ */
+public class BackPressureSampleService {
+
+   /** Number of samples to take when determining the back pressure of a 
task. */
+   private final int numSamples;
+
+   /** Time to wait between samples when determining the back pressure of 
a task. */
+   private final Time delayBetweenSamples;
+
+   /** Executor to run sample tasks. */
 
 Review comment:
   to run sample back pressures



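The javadoc under review above describes the ratio computed by `BackPressureSampleService`: the fraction of samples that observed back pressure. A minimal sketch of that computation follows; the names `sampleRatio` and `isBackPressured` are illustrative, not the actual Flink API.

```java
import java.util.function.BooleanSupplier;

// Hypothetical sketch of the back pressure ratio from the diff above: take
// numSamples boolean samples of a task and report the fraction that observed
// back pressure. Names are illustrative, not Flink's actual API.
public class BackPressureRatioSketch {

    static double sampleRatio(BooleanSupplier isBackPressured, int numSamples) {
        if (numSamples < 1) {
            // mirrors the checkArgument in the constructor shown in the diff
            throw new IllegalArgumentException("Illegal number of samples: " + numSamples);
        }
        int pressured = 0;
        for (int i = 0; i < numSamples; i++) {
            if (isBackPressured.getAsBoolean()) {
                pressured++;
            }
        }
        // back pressure ratio = back-pressured samples / total samples
        return pressured / (double) numSamples;
    }

    public static void main(String[] args) {
        int[] calls = {0};
        // the first 3 of 4 samples observe back pressure -> ratio 0.75
        System.out.println(sampleRatio(() -> calls[0]++ < 3, 4));
    }
}
```

Judging from the fields and imports in the diff, the real service schedules the individual samples on the `ScheduledExecutor` with `delayBetweenSamples` between them and completes a `CompletableFuture` with the ratio, rather than looping synchronously as above.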

[jira] [Updated] (FLINK-14668) LocalExecutor#getOrCreateExecutionContext not working as expected

2019-11-07 Thread zhangwei (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhangwei updated FLINK-14668:
-
Description: 
ExecutionContext 
[copies|https://github.com/apache/flink/blob/b0a9afdd24fb70131b1e80d46d0ca101235a4a36/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/gateway/local/ExecutionContext.java#L134]
 a new SessionContext in the constructor, but SessionContext members do not 
always override equals, so 
{code:java}
executionContext.getSessionContext().equals(session)
{code}
is always false.

  was:```ExecutionContext``` 
[copy](https://github.com/apache/flink/blob/b0a9afdd24fb70131b1e80d46d0ca101235a4a36/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/gateway/local/ExecutionContext.java#L134)
 a new ```SessionContext``` in the constructor,  but ```SessionContext``` 
members do not always redo ```equals```,so 
```executionContext.getSessionContext().equals(session)``` is always false.


> LocalExecutor#getOrCreateExecutionContext not working as expected
> -
>
> Key: FLINK-14668
> URL: https://issues.apache.org/jira/browse/FLINK-14668
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.9.0, 1.9.1
>Reporter: zhangwei
>Priority: Critical
> Fix For: 1.11.0
>
>
> ExecutionContext 
> [copies|https://github.com/apache/flink/blob/b0a9afdd24fb70131b1e80d46d0ca101235a4a36/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/gateway/local/ExecutionContext.java#L134]
>  a new SessionContext in the constructor, but SessionContext members do not 
> always override equals, so 
> {code:java}
> executionContext.getSessionContext().equals(session)
> {code}
> is always false.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

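The report above can be illustrated with a minimal Java example: a "constructor copy" of a value object compares equal to the original only if the class overrides `equals`. The class and field names below are illustrative, not Flink's actual `SessionContext`.

```java
import java.util.Objects;

// Minimal illustration of the FLINK-14668 report: a copied context object is
// equal to the original only if equals() is overridden; otherwise Object's
// reference-identity equals makes copy.equals(original) always false.
// Class and field names are illustrative, not Flink's actual SessionContext.
public class EqualsSketch {

    static final class PlainSession {
        final String name;
        PlainSession(String name) { this.name = name; }
        // no equals/hashCode override: comparison falls back to reference identity
    }

    static final class ComparableSession {
        final String name;
        ComparableSession(String name) { this.name = name; }

        @Override
        public boolean equals(Object o) {
            return o instanceof ComparableSession
                && Objects.equals(name, ((ComparableSession) o).name);
        }

        @Override
        public int hashCode() {
            return Objects.hash(name);
        }
    }

    public static void main(String[] args) {
        // a "constructor copy" of each session, as described in the report
        System.out.println(new PlainSession("s").equals(new PlainSession("s")));           // false
        System.out.println(new ComparableSession("s").equals(new ComparableSession("s"))); // true
    }
}
```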

[jira] [Updated] (FLINK-14590) Unify the working directory of Java process and Python process when submitting python jobs via "flink run -py"

2019-11-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-14590:
---
Labels: pull-request-available  (was: )

> Unify the working directory of Java process and Python process when 
> submitting python jobs via "flink run -py"
> --
>
> Key: FLINK-14590
> URL: https://issues.apache.org/jira/browse/FLINK-14590
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Reporter: Wei Zhong
>Priority: Minor
>  Labels: pull-request-available
>
> Assume we enter this flink directory with following structure:
> {code:java}
> flink/
>   bin/
>   flink
>   pyflink-shell.sh
>   python-gateway-server.sh
>   ...
>   bad_case/
>word_count.py
>data.txt
>   lib/...
>   opt/...{code}
>  And the word_count.py has such a piece of code:
> {code:java}
> t_config = TableConfig()
> env = StreamExecutionEnvironment.get_execution_environment()
> t_env = StreamTableEnvironment.create(env, t_config)
> env._j_stream_execution_environment.registerCachedFile("data", 
> "bad_case/data.txt")
> with open("bad_case/data.txt", "r") as f:
> content = f.read()
> elements = [(word, 1) for word in content.split(" ")]
> t_env.from_elements(elements, ["word", "count"]){code}
> Then we enter the "flink" directory and run:
> {code:java}
> bin/flink run -py bad_case/word_count.py
> {code}
> The program will fail at the line of "with open("bad_case/data.txt", "r") as 
> f:".
> It is because the working directory of Java process is current directory but 
> the working directory of Python process is a temporary directory.
> So there is no problem when relative path is used in the api call to java 
> process. But if relative path is used in other place such as native file 
> access, it will fail, because the working directory of python process has 
> been change to a temporary directory that is not known to users.
> I think it will cause some confusion for users, especially after we support 
> dependency management. It will be great if we unify the working directory of 
> Java process and Python process.



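The failure mode described above — the same relative path resolving differently in the two processes — can be sketched as follows. The directory names are illustrative; only the mechanism (resolution against the process's working directory) is taken from the report.

```java
import java.nio.file.Path;
import java.nio.file.Paths;

// Sketch of the FLINK-14590 failure mode: a relative path such as
// "bad_case/data.txt" resolves against the process's working directory, so the
// Java process (started where `flink run` was invoked) and the Python process
// (started in a temporary directory) see different absolute paths.
// The directory names below are illustrative.
public class WorkingDirSketch {

    static Path resolveIn(String workingDir, String relative) {
        return Paths.get(workingDir).resolve(relative).normalize();
    }

    public static void main(String[] args) {
        String relative = "bad_case/data.txt";

        // as resolved by the Java process, working dir = submission directory
        System.out.println(resolveIn("/opt/flink", relative));

        // as resolved by the Python process, working dir = a temporary directory,
        // where bad_case/data.txt does not exist
        System.out.println(resolveIn("/tmp/pyflink-job", relative));
    }
}
```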


[GitHub] [flink] WeiZhong94 opened a new pull request #10126: [FLINK-14590][python] Unify the working directory of Java process and Python process when submitting python jobs via "flink run -py"

2019-11-07 Thread GitBox
WeiZhong94 opened a new pull request #10126: [FLINK-14590][python] Unify the 
working directory of Java process and Python process when submitting python 
jobs via "flink run -py"
URL: https://github.com/apache/flink/pull/10126
 
 
   ## What is the purpose of the change
   
   *This pull request unifies the working directory of Java process and Python 
process when submitting python jobs via "flink run -py".*
   
   
   ## Brief change log
   
 - *Remove working directory setting when creating python process.*
 - *Adapt PYTHONPATH creation to support run python process in java working 
directory.*
   
   
   ## Verifying this change
   
   This change is already covered by existing tests, such as 
*PythonEnvUtilsTest*.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   




[jira] [Created] (FLINK-14668) LocalExecutor#getOrCreateExecutionContext not working as expected

2019-11-07 Thread zhangwei (Jira)
zhangwei created FLINK-14668:


 Summary: LocalExecutor#getOrCreateExecutionContext not working as 
expected
 Key: FLINK-14668
 URL: https://issues.apache.org/jira/browse/FLINK-14668
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Client
Affects Versions: 1.9.1, 1.9.0
Reporter: zhangwei
 Fix For: 1.11.0


```ExecutionContext``` 
[copies](https://github.com/apache/flink/blob/b0a9afdd24fb70131b1e80d46d0ca101235a4a36/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/gateway/local/ExecutionContext.java#L134)
 a new ```SessionContext``` in the constructor, but ```SessionContext``` 
members do not always override ```equals```, so 
```executionContext.getSessionContext().equals(session)``` is always false.





[GitHub] [flink] zhijiangW commented on a change in pull request #10083: [FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.

2019-11-07 Thread GitBox
zhijiangW commented on a change in pull request #10083: 
[FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.
URL: https://github.com/apache/flink/pull/10083#discussion_r344003590
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/rest/handler/legacy/backpressure/OperatorBackPressureStats.java
 ##
 @@ -25,40 +25,36 @@
 import static org.apache.flink.util.Preconditions.checkNotNull;
 
 /**
- * Back pressure statistics of multiple tasks.
- *
- * Statistics are gathered by sampling stack traces of running tasks. The
- * back pressure ratio denotes the ratio of traces indicating back pressure
- * to the total number of sampled traces.
+ * Back pressure statistics of multiple tasks generated by {@link 
BackPressureStatsTrackerImpl}.
  */
 public class OperatorBackPressureStats implements Serializable {
 
private static final long serialVersionUID = 1L;
 
-   /** ID of the corresponding sample. */
-   private final int sampleId;
+   /** ID of the corresponding request. */
+   private final int requestId;
 
-   /** End time stamp of the corresponding sample. */
+   /** End time stamp when all subtask back pressure ratios were collected 
at the JobManager. */
 
 Review comment:
   when all the responses of back pressure request were collected in 
BackPressureRequestCoordinator




[jira] [Comment Edited] (FLINK-14667) flink1.9,1.8.2 run flinkSql fat jar ,can't load the right tableFactory (TableSourceFactory) for the kafka

2019-11-07 Thread chun111111 (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16969870#comment-16969870
 ] 

chun11 edited comment on FLINK-14667 at 11/8/19 6:24 AM:
-

1. flink version :flink182,flink191 

2. My program is a fat jar with all flink lib.

3. the META-INF/services/org.apache.flink.table.TableFactory contains only 
"org.apache.flink.formats.json.JsonRowFormatFactory " .

The Kafka011TableSourceSinkFactory is packaged in the fat jar, but it doesn't 
work. :)

 


was (Author: chun11):
1. flink version :flink182,flink191 

2. My program is a fat jar with all flink lib.

The Kafka011TableSourceSinkFactory is packaged in the fat jar ,but it dones't 
work. :)

 

> flink1.9,1.8.2 run flinkSql fat jar ,can't load the right tableFactory 
> (TableSourceFactory) for the kafka 
> --
>
> Key: FLINK-14667
> URL: https://issues.apache.org/jira/browse/FLINK-14667
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.8.2, 1.9.1
>Reporter: chun11
>Priority: Major
>
> [root@mj flink-1.9.1]# ./bin/flink run -m yarn-cluster 
> /root/cp19/streaming-view-0.0.1-SNAPSHOT-jar-with-dependencies.jar[root@mj 
> flink-1.9.1]# ./bin/flink run -m yarn-cluster 
> /root/cp19/streaming-view-0.0.1-SNAPSHOT-jar-with-dependencies.jar2019-11-07 
> 16:48:57,616 INFO  org.apache.hadoop.yarn.client.RMProxy                      
>    - Connecting to ResourceManager at /0.0.0.0:80322019-11-07 16:48:57,789 
> INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli                 - No path 
> for the flink jar passed. Using the location of class 
> org.apache.flink.yarn.YarnClusterDescriptor to locate the jar2019-11-07 
> 16:48:57,789 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli              
>    - No path for the flink jar passed. Using the location of class 
> org.apache.flink.yarn.YarnClusterDescriptor to locate the jar2019-11-07 
> 16:48:57,986 INFO  org.apache.flink.yarn.AbstractYarnClusterDescriptor        
>    - Cluster specification: ClusterSpecification\{masterMemoryMB=1024, 
> taskManagerMemoryMB=1024, numberTaskManagers=1, 
> slotsPerTaskManager=1}2019-11-07 16:48:58,657 WARN  
> org.apache.flink.yarn.AbstractYarnClusterDescriptor           - The 
> configuration directory ('/opt/app/flink-1.9.1/conf') contains both LOG4J and 
> Logback configuration files. Please delete or rename one of them.2019-11-07 
> 16:49:00,954 INFO  org.apache.flink.yarn.AbstractYarnClusterDescriptor        
>    - Submitting application master application_1573090964983_00392019-11-07 
> 16:49:00,986 INFO  org.apache.hadoop.yarn.client.api.impl.YarnClientImpl      
>    - Submitted application application_1573090964983_00392019-11-07 
> 16:49:00,986 INFO  org.apache.flink.yarn.AbstractYarnClusterDescriptor        
>    - Waiting for the cluster to be allocated2019-11-07 16:49:00,988 INFO  
> org.apache.flink.yarn.AbstractYarnClusterDescriptor           - Deploying 
> cluster, current state ACCEPTED2019-11-07 16:49:06,534 INFO  
> org.apache.flink.yarn.AbstractYarnClusterDescriptor           - YARN 
> application has been deployed successfully.Starting execution of program
>  The program 
> finished with the following exception:
> org.apache.flink.client.program.ProgramInvocationException: The main method 
> caused an error: findAndCreateTableSource failed. at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:593)
>  at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:438)
>  at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:274) 
> at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:746) 
> at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:273) 
> at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:205) at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1010)
>  at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1083) 
> at java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:422) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
>  at 
> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>  at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1083)Caused 
> by: org.apache.flink.table.api.TableException: findAndCreateTableSource 
> failed. at 
> org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSource(TableFactoryUtil.java:6
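
The symptom reported above — the factory class is inside the fat jar, yet `NoMatchingTableFactoryException` is thrown and the service file lists only the JSON format factory — matches factory discovery going through Java's service loader mechanism, which reads every `META-INF/services/org.apache.flink.table.factories.TableFactory` resource on the classpath. If a shading step keeps only one module's copy of that file, the other factories become invisible even though their classes are present. The sketch below is a generic classpath scan under that assumption; only the resource name is taken from the report.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.Enumeration;
import java.util.List;

// Lists the provider classes declared in every copy of a service file on the
// classpath. A fat jar whose shading kept only one of several such files will
// show fewer providers here than the individual dependency jars would.
public class ServiceFileSketch {

    static List<String> listProviders(String resource) {
        List<String> providers = new ArrayList<>();
        try {
            Enumeration<URL> urls =
                ServiceFileSketch.class.getClassLoader().getResources(resource);
            while (urls.hasMoreElements()) {
                try (BufferedReader r = new BufferedReader(new InputStreamReader(
                        urls.nextElement().openStream(), StandardCharsets.UTF_8))) {
                    r.lines()
                        .map(String::trim)
                        .filter(l -> !l.isEmpty() && !l.startsWith("#"))
                        .forEach(providers::add);
                }
            }
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
        return providers;
    }

    public static void main(String[] args) {
        // on a plain classpath without Flink this list is empty; with a
        // correctly merged fat jar it would include the Kafka factory
        System.out.println(listProviders(
            "META-INF/services/org.apache.flink.table.factories.TableFactory"));
    }
}
```

A commonly documented remedy for this class of problem is to merge, not overwrite, the service files when building the fat jar (e.g. the Maven Shade plugin's `ServicesResourceTransformer`); whether that applies here depends on how this particular jar was built.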

[GitHub] [flink] klion26 commented on issue #10111: [FLINK-13969] Resuming Externalized Checkpoint (rocks, incremental, scale down) end-to-end test fails on Travis

2019-11-07 Thread GitBox
klion26 commented on issue #10111: [FLINK-13969] Resuming Externalized 
Checkpoint (rocks, incremental, scale down) end-to-end test fails on Travis
URL: https://github.com/apache/flink/pull/10111#issuecomment-551402058
 
 
   @tillrohrmann thanks for the review and merging.




[GitHub] [flink] JingsongLi edited a comment on issue #10022: [FLINK-14135][orc] Introduce OrcColumnarRowInputFormat

2019-11-07 Thread GitBox
JingsongLi edited a comment on issue #10022: [FLINK-14135][orc] Introduce 
OrcColumnarRowInputFormat
URL: https://github.com/apache/flink/pull/10022#issuecomment-551388324
 
 
   @KurtYoung @lirui-apache Hope you can take a look.




[GitHub] [flink] zhijiangW commented on a change in pull request #10083: [FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.

2019-11-07 Thread GitBox
zhijiangW commented on a change in pull request #10083: 
[FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.
URL: https://github.com/apache/flink/pull/10083#discussion_r344001601
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/rest/handler/legacy/backpressure/BackPressureStatsTrackerImpl.java
 ##
 @@ -250,25 +179,18 @@ public void shutDown() {
}
 
/**
-* Invalidates the cache (irrespective of clean up interval).
+* Callback on completed task back pressure request.
 */
-   void invalidateOperatorStatsCache() {
-   operatorStatsCache.invalidateAll();
-   }
-
-   /**
-* Callback on completed stack trace sample.
-*/
-   class StackTraceSampleCompletionCallback implements 
BiFunction {
+   class BackPressureRequestCompletionCallback implements 
BiFunction {
 
 Review comment:
   private class




[GitHub] [flink] zhijiangW commented on a change in pull request #10083: [FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.

2019-11-07 Thread GitBox
zhijiangW commented on a change in pull request #10083: 
[FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.
URL: https://github.com/apache/flink/pull/10083#discussion_r344001624
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/rest/handler/legacy/backpressure/BackPressureStatsTrackerImpl.java
 ##
 @@ -250,25 +179,18 @@ public void shutDown() {
}
 
/**
-* Invalidates the cache (irrespective of clean up interval).
+* Callback on completed task back pressure request.
 */
-   void invalidateOperatorStatsCache() {
-   operatorStatsCache.invalidateAll();
-   }
-
-   /**
-* Callback on completed stack trace sample.
-*/
-   class StackTraceSampleCompletionCallback implements 
BiFunction {
+   class BackPressureRequestCompletionCallback implements 
BiFunction {
 
private final ExecutionJobVertex vertex;
 
-   public StackTraceSampleCompletionCallback(ExecutionJobVertex 
vertex) {
+   public BackPressureRequestCompletionCallback(ExecutionJobVertex 
vertex) {
 
 Review comment:
   remove `public`




[jira] [Comment Edited] (FLINK-14667) flink1.9,1.8.2 run flinkSql fat jar ,can't load the right tableFactory (TableSourceFactory) for the kafka

2019-11-07 Thread chun111111 (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16969870#comment-16969870
 ] 

chun11 edited comment on FLINK-14667 at 11/8/19 6:15 AM:
-

1. flink version :flink182,flink191 

2. My program is a fat jar with all flink lib.

The Kafka011TableSourceSinkFactory is packaged in the fat jar, but it doesn't 
work. :)

 


was (Author: chun11):
The Kafka011TableSourceSinkFactory is packaged in the fat jar . 

> flink1.9,1.8.2 run flinkSql fat jar ,can't load the right tableFactory 
> (TableSourceFactory) for the kafka 
> --
>
> Key: FLINK-14667
> URL: https://issues.apache.org/jira/browse/FLINK-14667
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.8.2, 1.9.1
>Reporter: chun11
>Priority: Major
>
> [root@mj flink-1.9.1]# ./bin/flink run -m yarn-cluster 
> /root/cp19/streaming-view-0.0.1-SNAPSHOT-jar-with-dependencies.jar[root@mj 
> flink-1.9.1]# ./bin/flink run -m yarn-cluster 
> /root/cp19/streaming-view-0.0.1-SNAPSHOT-jar-with-dependencies.jar2019-11-07 
> 16:48:57,616 INFO  org.apache.hadoop.yarn.client.RMProxy                      
>    - Connecting to ResourceManager at /0.0.0.0:80322019-11-07 16:48:57,789 
> INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli                 - No path 
> for the flink jar passed. Using the location of class org.apache.flink.yarn.YarnClusterDescriptor to locate the jar
> 2019-11-07 16:48:57,789 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli - No path for the flink jar passed. Using the location of class org.apache.flink.yarn.YarnClusterDescriptor to locate the jar
> 2019-11-07 16:48:57,986 INFO  org.apache.flink.yarn.AbstractYarnClusterDescriptor - Cluster specification: ClusterSpecification{masterMemoryMB=1024, taskManagerMemoryMB=1024, numberTaskManagers=1, slotsPerTaskManager=1}
> 2019-11-07 16:48:58,657 WARN  org.apache.flink.yarn.AbstractYarnClusterDescriptor - The configuration directory ('/opt/app/flink-1.9.1/conf') contains both LOG4J and Logback configuration files. Please delete or rename one of them.
> 2019-11-07 16:49:00,954 INFO  org.apache.flink.yarn.AbstractYarnClusterDescriptor - Submitting application master application_1573090964983_0039
> 2019-11-07 16:49:00,986 INFO  org.apache.hadoop.yarn.client.api.impl.YarnClientImpl - Submitted application application_1573090964983_0039
> 2019-11-07 16:49:00,986 INFO  org.apache.flink.yarn.AbstractYarnClusterDescriptor - Waiting for the cluster to be allocated
> 2019-11-07 16:49:00,988 INFO  org.apache.flink.yarn.AbstractYarnClusterDescriptor - Deploying cluster, current state ACCEPTED
> 2019-11-07 16:49:06,534 INFO  org.apache.flink.yarn.AbstractYarnClusterDescriptor - YARN application has been deployed successfully.
> Starting execution of program
> The program finished with the following exception:
> org.apache.flink.client.program.ProgramInvocationException: The main method caused an error: findAndCreateTableSource failed.
>   at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:593)
>   at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:438)
>   at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:274)
>   at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:746)
>   at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:273)
>   at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:205)
>   at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1010)
>   at org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1083)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
>   at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>   at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1083)
> Caused by: org.apache.flink.table.api.TableException: findAndCreateTableSource failed.
>   at org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSource(TableFactoryUtil.java:67)
>   at org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSource(TableFactoryUtil.java:54)
>   at org.apache.flink.table.descriptors.ConnectTableDescriptor.registerTableSource(ConnectTableDescriptor.java:69)
>   at com.

[GitHub] [flink] zhijiangW commented on a change in pull request #10083: [FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.

2019-11-07 Thread GitBox
zhijiangW commented on a change in pull request #10083: 
[FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.
URL: https://github.com/apache/flink/pull/10083#discussion_r344000979
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/rest/handler/legacy/backpressure/BackPressureStatsTrackerImpl.java
 ##
 @@ -157,70 +112,44 @@ public long getCleanUpInterval() {
synchronized (lock) {
final OperatorBackPressureStats stats = 
operatorStatsCache.getIfPresent(vertex);
if (stats == null || backPressureStatsRefreshInterval 
<= System.currentTimeMillis() - stats.getEndTimestamp()) {
-   triggerStackTraceSampleInternal(vertex);
+   triggerBackPressureRequestInternal(vertex);
}
return Optional.ofNullable(stats);
}
}
 
/**
-* Triggers a stack trace sample for a operator to gather the back 
pressure
-* statistics. If there is a sample in progress for the operator, the 
call
+* Triggers a back pressure request for a operator to gather the back 
pressure
 
 Review comment:
   operator -> vertex


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10098: [FLINK-14326][FLINK-14324][table] Support to apply watermark assigner in planner

2019-11-07 Thread GitBox
flinkbot edited a comment on issue #10098: [FLINK-14326][FLINK-14324][table] 
Support to apply watermark assigner in planner
URL: https://github.com/apache/flink/pull/10098#issuecomment-550174677
 
 
   
   ## CI report:
   
   * a3db95a7340322421d1fa826dcac850116011687 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135182714)
   * 6db8b32408ffd954c96d1829c36d04542825a068 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135450723)
   * 31635a192b64660cebe93afee586249bf1df9b13 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135461602)
   * 229ede5e47a1d7346edc79449a39cf3c56858604 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135569483)
   * a31fdb8792a506e6a79ce108f35b7123b389e84d : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[jira] [Commented] (FLINK-14667) flink1.9,1.8.2 run flinkSql fat jar ,can't load the right tableFactory (TableSourceFactory) for the kafka

2019-11-07 Thread chun111111 (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16969870#comment-16969870
 ] 

chun111111 commented on FLINK-14667:


The Kafka011TableSourceSinkFactory is packaged in the fat jar.

> flink1.9,1.8.2 run flinkSql fat jar ,can't load the right tableFactory 
> (TableSourceFactory) for the kafka 
> --
>
> Key: FLINK-14667
> URL: https://issues.apache.org/jira/browse/FLINK-14667
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.8.2, 1.9.1
>Reporter: chun111111
>Priority: Major
>
> [root@mj flink-1.9.1]# ./bin/flink run -m yarn-cluster /root/cp19/streaming-view-0.0.1-SNAPSHOT-jar-with-dependencies.jar
> [root@mj flink-1.9.1]# ./bin/flink run -m yarn-cluster /root/cp19/streaming-view-0.0.1-SNAPSHOT-jar-with-dependencies.jar
> 2019-11-07 16:48:57,616 INFO  org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at /0.0.0.0:8032
> 2019-11-07 16:48:57,789 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli - No path for the flink jar passed. Using the location of class org.apache.flink.yarn.YarnClusterDescriptor to locate the jar
> 2019-11-07 16:48:57,789 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli - No path for the flink jar passed. Using the location of class org.apache.flink.yarn.YarnClusterDescriptor to locate the jar
> 2019-11-07 16:48:57,986 INFO  org.apache.flink.yarn.AbstractYarnClusterDescriptor - Cluster specification: ClusterSpecification{masterMemoryMB=1024, taskManagerMemoryMB=1024, numberTaskManagers=1, slotsPerTaskManager=1}
> 2019-11-07 16:48:58,657 WARN  org.apache.flink.yarn.AbstractYarnClusterDescriptor - The configuration directory ('/opt/app/flink-1.9.1/conf') contains both LOG4J and Logback configuration files. Please delete or rename one of them.
> 2019-11-07 16:49:00,954 INFO  org.apache.flink.yarn.AbstractYarnClusterDescriptor - Submitting application master application_1573090964983_0039
> 2019-11-07 16:49:00,986 INFO  org.apache.hadoop.yarn.client.api.impl.YarnClientImpl - Submitted application application_1573090964983_0039
> 2019-11-07 16:49:00,986 INFO  org.apache.flink.yarn.AbstractYarnClusterDescriptor - Waiting for the cluster to be allocated
> 2019-11-07 16:49:00,988 INFO  org.apache.flink.yarn.AbstractYarnClusterDescriptor - Deploying cluster, current state ACCEPTED
> 2019-11-07 16:49:06,534 INFO  org.apache.flink.yarn.AbstractYarnClusterDescriptor - YARN application has been deployed successfully.
> Starting execution of program
> The program finished with the following exception:
> org.apache.flink.client.program.ProgramInvocationException: The main method caused an error: findAndCreateTableSource failed.
>   at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:593)
>   at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:438)
>   at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:274)
>   at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:746)
>   at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:273)
>   at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:205)
>   at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1010)
>   at org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1083)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
>   at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>   at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1083)
> Caused by: org.apache.flink.table.api.TableException: findAndCreateTableSource failed.
>   at org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSource(TableFactoryUtil.java:67)
>   at org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSource(TableFactoryUtil.java:54)
>   at org.apache.flink.table.descriptors.ConnectTableDescriptor.registerTableSource(ConnectTableDescriptor.java:69)
>   at com.streaming.activity.task.PaymentViewIndex.registerKafkaTable(PaymentViewIndex.java:205)
>   at com.streaming.activity.task.PaymentViewIndex.registerSourceTable(PaymentViewIndex.java:106)
>   at com.streaming.activity.task.PaymentViewIndex.main(Payment

[GitHub] [flink] zhijiangW commented on a change in pull request #10083: [FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.

2019-11-07 Thread GitBox
zhijiangW commented on a change in pull request #10083: 
[FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.
URL: https://github.com/apache/flink/pull/10083#discussion_r344000581
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/rest/handler/legacy/backpressure/BackPressureStatsTrackerImpl.java
 ##
 @@ -47,108 +47,63 @@
 import static org.apache.flink.util.Preconditions.checkNotNull;
 
 /**
- * Back pressure statistics tracker.
- *
- * Back pressure is determined by sampling running tasks. If a task is
- * slowed down by back pressure it will be stuck in memory requests to a
- * {@link org.apache.flink.runtime.io.network.buffer.LocalBufferPool}.
- *
- * The back pressured stack traces look like this:
- *
- * 
- * java.lang.Object.wait(Native Method)
- * o.a.f.[...].LocalBufferPool.requestBuffer(LocalBufferPool.java:163)
- * o.a.f.[...].LocalBufferPool.requestBufferBlocking(LocalBufferPool.java:133) 
<--- BLOCKING
- * request
- * [...]
- * 
+ * Back pressure statistics tracker. See {@link 
org.apache.flink.runtime.taskexecutor.BackPressureSampleService}
+ * for more details about how back pressure ratio of a task is calculated.
  */
 public class BackPressureStatsTrackerImpl implements BackPressureStatsTracker {
 
private static final Logger LOG = 
LoggerFactory.getLogger(BackPressureStatsTrackerImpl.class);
 
-   /** Maximum stack trace depth for samples. */
-   static final int MAX_STACK_TRACE_DEPTH = 8;
-
-   /** Expected class name for back pressure indicating stack trace 
element. */
-   static final String EXPECTED_CLASS_NAME = 
"org.apache.flink.runtime.io.network.buffer.LocalBufferPool";
-
-   /** Expected method name for back pressure indicating stack trace 
element. */
-   static final String EXPECTED_METHOD_NAME = 
"requestBufferBuilderBlocking";
-
/** Lock guarding trigger operations. */
private final Object lock = new Object();
 
-   /* Stack trace sample coordinator. */
-   private final StackTraceSampleCoordinator coordinator;
+   private final BackPressureRequestCoordinator coordinator;
 
/**
 * Completed stats. Important: Job vertex IDs need to be scoped by job 
ID,
-* because they are potentially constant across runs messing up the 
cached
-* data.
+* because they are potentially constant across runs which may mess up 
the
+* cached data.
 */
private final Cache 
operatorStatsCache;
 
-   /** Pending in progress stats. Important: Job vertex IDs need to be 
scoped
-* by job ID, because they are potentially constant across runs messing 
up
-* the cached data.*/
+   /**
+* Pending in progress stats. Important: Job vertex IDs need to be 
scoped
+* by job ID, because they are potentially constant across runs which 
may
+* mess up the cached data.
+*/
private final Set pendingStats = new HashSet<>();
 
-   /** Cleanup interval for completed stats cache. */
-   private final int cleanUpInterval;
-
-   private final int numSamples;
-
private final int backPressureStatsRefreshInterval;
 
-   private final Time delayBetweenSamples;
-
/** Flag indicating whether the stats tracker has been shut down. */
+   @GuardedBy("lock")
private boolean shutDown;
 
/**
 * Creates a back pressure statistics tracker.
 *
 * @param cleanUpInterval Clean up interval for completed stats.
-* @param numSamples  Number of stack trace samples when 
determining back pressure.
-* @param delayBetweenSamples Delay between samples when determining 
back pressure.
+* @param backPressureStatsRefreshInterval
 */
public BackPressureStatsTrackerImpl(
-   StackTraceSampleCoordinator coordinator,
+   BackPressureRequestCoordinator coordinator,
int cleanUpInterval,
-   int numSamples,
-   int backPressureStatsRefreshInterval,
-   Time delayBetweenSamples) {
-
-   this.coordinator = checkNotNull(coordinator, "Stack trace 
sample coordinator");
-
-   checkArgument(cleanUpInterval >= 0, "Clean up interval");
-   this.cleanUpInterval = cleanUpInterval;
+   int backPressureStatsRefreshInterval) {
 
-   checkArgument(numSamples >= 1, "Number of samples");
-   this.numSamples = numSamples;
+   checkArgument(backPressureStatsRefreshInterval >= 0,
+   "The back pressure stats refresh interval must be 
greater than or equal to 0.");
 
-   checkArgument(
-   backPressureStatsRefreshInterval >= 0,
-   "backPressureStatsRefreshInterval must be greater than 
or equal to 0");
+
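The new Javadoc in this diff defers to `BackPressureSampleService` for how a task's back pressure ratio is computed. As a rough, self-contained model of that idea (illustrative only, not the actual Flink implementation), the ratio is the fraction of samples in which the task was observed blocked waiting for an output buffer:

```java
public class BackPressureRatioDemo {

    // Illustrative model only: each sample records whether the task was
    // blocked on output-buffer availability at that instant.
    static double backPressureRatio(boolean[] blockedSamples) {
        if (blockedSamples.length == 0) {
            return 0.0;
        }
        int blocked = 0;
        for (boolean wasBlocked : blockedSamples) {
            if (wasBlocked) {
                blocked++;
            }
        }
        return (double) blocked / blockedSamples.length;
    }

    public static void main(String[] args) {
        // Three of four samples saw the task blocked, so the ratio is 0.75.
        System.out.println(backPressureRatio(new boolean[] {true, true, true, false}));
    }
}
```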

[GitHub] [flink] zhijiangW commented on a change in pull request #10083: [FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.

2019-11-07 Thread GitBox
zhijiangW commented on a change in pull request #10083: 
[FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.
URL: https://github.com/apache/flink/pull/10083#discussion_r343999496
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/rest/handler/legacy/backpressure/BackPressureStatsTrackerImpl.java
 ##
 @@ -47,108 +47,63 @@
 import static org.apache.flink.util.Preconditions.checkNotNull;
 
 /**
- * Back pressure statistics tracker.
- *
- * Back pressure is determined by sampling running tasks. If a task is
- * slowed down by back pressure it will be stuck in memory requests to a
- * {@link org.apache.flink.runtime.io.network.buffer.LocalBufferPool}.
- *
- * The back pressured stack traces look like this:
- *
- * 
- * java.lang.Object.wait(Native Method)
- * o.a.f.[...].LocalBufferPool.requestBuffer(LocalBufferPool.java:163)
- * o.a.f.[...].LocalBufferPool.requestBufferBlocking(LocalBufferPool.java:133) 
<--- BLOCKING
- * request
- * [...]
- * 
+ * Back pressure statistics tracker. See {@link 
org.apache.flink.runtime.taskexecutor.BackPressureSampleService}
+ * for more details about how back pressure ratio of a task is calculated.
  */
 public class BackPressureStatsTrackerImpl implements BackPressureStatsTracker {
 
private static final Logger LOG = 
LoggerFactory.getLogger(BackPressureStatsTrackerImpl.class);
 
-   /** Maximum stack trace depth for samples. */
-   static final int MAX_STACK_TRACE_DEPTH = 8;
-
-   /** Expected class name for back pressure indicating stack trace 
element. */
-   static final String EXPECTED_CLASS_NAME = 
"org.apache.flink.runtime.io.network.buffer.LocalBufferPool";
-
-   /** Expected method name for back pressure indicating stack trace 
element. */
-   static final String EXPECTED_METHOD_NAME = 
"requestBufferBuilderBlocking";
-
/** Lock guarding trigger operations. */
private final Object lock = new Object();
 
-   /* Stack trace sample coordinator. */
-   private final StackTraceSampleCoordinator coordinator;
+   private final BackPressureRequestCoordinator coordinator;
 
/**
 * Completed stats. Important: Job vertex IDs need to be scoped by job 
ID,
-* because they are potentially constant across runs messing up the 
cached
-* data.
+* because they are potentially constant across runs which may mess up 
the
+* cached data.
 */
private final Cache 
operatorStatsCache;
 
-   /** Pending in progress stats. Important: Job vertex IDs need to be 
scoped
-* by job ID, because they are potentially constant across runs messing 
up
-* the cached data.*/
+   /**
+* Pending in progress stats. Important: Job vertex IDs need to be 
scoped
+* by job ID, because they are potentially constant across runs which 
may
+* mess up the cached data.
+*/
private final Set pendingStats = new HashSet<>();
 
-   /** Cleanup interval for completed stats cache. */
-   private final int cleanUpInterval;
-
-   private final int numSamples;
-
private final int backPressureStatsRefreshInterval;
 
-   private final Time delayBetweenSamples;
-
/** Flag indicating whether the stats tracker has been shut down. */
+   @GuardedBy("lock")
private boolean shutDown;
 
/**
 * Creates a back pressure statistics tracker.
 *
 * @param cleanUpInterval Clean up interval for completed stats.
-* @param numSamples  Number of stack trace samples when 
determining back pressure.
-* @param delayBetweenSamples Delay between samples when determining 
back pressure.
+* @param backPressureStatsRefreshInterval
 
 Review comment:
   Add one more `@param` tag for `coordinator`, and add a description for `backPressureStatsRefreshInterval`.
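One possible shape of the constructor Javadoc after applying both suggestions (the parameter descriptions are this editor's guesses, not the PR author's wording):

```java
/**
 * Creates a back pressure statistics tracker.
 *
 * @param coordinator                      Coordinator for triggering back pressure requests.
 * @param cleanUpInterval                  Clean up interval for completed stats.
 * @param backPressureStatsRefreshInterval Minimum interval between two refreshes
 *                                         of the cached stats for a vertex.
 */
```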




[jira] [Updated] (FLINK-14667) flink1.9,1.8.2 run flinkSql fat jar ,can't load the right tableFactory (TableSourceFactory) for the kafka

2019-11-07 Thread chun111111 (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chun111111 updated FLINK-14667:
---
Description: 
[root@mj flink-1.9.1]# ./bin/flink run -m yarn-cluster /root/cp19/streaming-view-0.0.1-SNAPSHOT-jar-with-dependencies.jar
[root@mj flink-1.9.1]# ./bin/flink run -m yarn-cluster /root/cp19/streaming-view-0.0.1-SNAPSHOT-jar-with-dependencies.jar
2019-11-07 16:48:57,616 INFO  org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at /0.0.0.0:8032
2019-11-07 16:48:57,789 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli - No path for the flink jar passed. Using the location of class org.apache.flink.yarn.YarnClusterDescriptor to locate the jar
2019-11-07 16:48:57,789 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli - No path for the flink jar passed. Using the location of class org.apache.flink.yarn.YarnClusterDescriptor to locate the jar
2019-11-07 16:48:57,986 INFO  org.apache.flink.yarn.AbstractYarnClusterDescriptor - Cluster specification: ClusterSpecification{masterMemoryMB=1024, taskManagerMemoryMB=1024, numberTaskManagers=1, slotsPerTaskManager=1}
2019-11-07 16:48:58,657 WARN  org.apache.flink.yarn.AbstractYarnClusterDescriptor - The configuration directory ('/opt/app/flink-1.9.1/conf') contains both LOG4J and Logback configuration files. Please delete or rename one of them.
2019-11-07 16:49:00,954 INFO  org.apache.flink.yarn.AbstractYarnClusterDescriptor - Submitting application master application_1573090964983_0039
2019-11-07 16:49:00,986 INFO  org.apache.hadoop.yarn.client.api.impl.YarnClientImpl - Submitted application application_1573090964983_0039
2019-11-07 16:49:00,986 INFO  org.apache.flink.yarn.AbstractYarnClusterDescriptor - Waiting for the cluster to be allocated
2019-11-07 16:49:00,988 INFO  org.apache.flink.yarn.AbstractYarnClusterDescriptor - Deploying cluster, current state ACCEPTED
2019-11-07 16:49:06,534 INFO  org.apache.flink.yarn.AbstractYarnClusterDescriptor - YARN application has been deployed successfully.
Starting execution of program
The program finished with the following exception:
org.apache.flink.client.program.ProgramInvocationException: The main method caused an error: findAndCreateTableSource failed.
  at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:593)
  at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:438)
  at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:274)
  at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:746)
  at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:273)
  at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:205)
  at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1010)
  at org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1083)
  at java.security.AccessController.doPrivileged(Native Method)
  at javax.security.auth.Subject.doAs(Subject.java:422)
  at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
  at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
  at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1083)
Caused by: org.apache.flink.table.api.TableException: findAndCreateTableSource failed.
  at org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSource(TableFactoryUtil.java:67)
  at org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSource(TableFactoryUtil.java:54)
  at org.apache.flink.table.descriptors.ConnectTableDescriptor.registerTableSource(ConnectTableDescriptor.java:69)
  at com.streaming.activity.task.PaymentViewIndex.registerKafkaTable(PaymentViewIndex.java:205)
  at com.streaming.activity.task.PaymentViewIndex.registerSourceTable(PaymentViewIndex.java:106)
  at com.streaming.activity.task.PaymentViewIndex.main(PaymentViewIndex.java:68)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:576)
  ... 12 more
Caused by: org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find a suitable table factory for 'org.apache.flink.table.factories.TableSourceFactory' in the classpath.
Reason: No context matches.
The following properties are requested:
connector.properties.0.key=key.deserializer
connector.properties.0.value=org.apache.kafka.comm
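A frequent cause of `NoMatchingTableFactoryException` in fat jars is that the factory class is present (as the reporter confirms for `Kafka011TableSourceSinkFactory`) but its `META-INF/services/org.apache.flink.table.factories.TableFactory` entry was lost, because the shade plugin let one dependency's service file overwrite the others. A sketch of the usual fix, merging service files with the `ServicesResourceTransformer` (plugin version here is illustrative):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>3.2.1</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <transformers>
          <!-- Merge META-INF/services files instead of keeping only one,
               so all TableFactory implementations stay discoverable. -->
          <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
        </transformers>
      </configuration>
    </execution>
  </executions>
</plugin>
```

Whether the merge worked can be checked with `unzip -p the-fat.jar META-INF/services/org.apache.flink.table.factories.TableFactory`, which should list the Kafka factory among the entries.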

[GitHub] [flink] flinkbot edited a comment on issue #10022: [FLINK-14135][orc] Introduce OrcColumnarRowInputFormat

2019-11-07 Thread GitBox
flinkbot edited a comment on issue #10022: [FLINK-14135][orc] Introduce 
OrcColumnarRowInputFormat
URL: https://github.com/apache/flink/pull/10022#issuecomment-547278354
 
 
   
   ## CI report:
   
   * 73e9fbb7c4edb8628b93e9a5b17271e57a8b8f14 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133947525)
   * b86c5db1ab9a081f0fec52f15fdad3b4f4d61939 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/134797047)
   * 77aa3cb348f97c8e2999bafcee9e7169b96e983e : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/135586640)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] zhijiangW commented on a change in pull request #10083: [FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.

2019-11-07 Thread GitBox
zhijiangW commented on a change in pull request #10083: 
[FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.
URL: https://github.com/apache/flink/pull/10083#discussion_r343999496
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/rest/handler/legacy/backpressure/BackPressureStatsTrackerImpl.java
 ##
 @@ -47,108 +47,63 @@
 import static org.apache.flink.util.Preconditions.checkNotNull;
 
 /**
- * Back pressure statistics tracker.
- *
- * Back pressure is determined by sampling running tasks. If a task is
- * slowed down by back pressure it will be stuck in memory requests to a
- * {@link org.apache.flink.runtime.io.network.buffer.LocalBufferPool}.
- *
- * The back pressured stack traces look like this:
- *
- * 
- * java.lang.Object.wait(Native Method)
- * o.a.f.[...].LocalBufferPool.requestBuffer(LocalBufferPool.java:163)
- * o.a.f.[...].LocalBufferPool.requestBufferBlocking(LocalBufferPool.java:133) 
<--- BLOCKING
- * request
- * [...]
- * 
+ * Back pressure statistics tracker. See {@link 
org.apache.flink.runtime.taskexecutor.BackPressureSampleService}
+ * for more details about how back pressure ratio of a task is calculated.
  */
 public class BackPressureStatsTrackerImpl implements BackPressureStatsTracker {
 
private static final Logger LOG = 
LoggerFactory.getLogger(BackPressureStatsTrackerImpl.class);
 
-   /** Maximum stack trace depth for samples. */
-   static final int MAX_STACK_TRACE_DEPTH = 8;
-
-   /** Expected class name for back pressure indicating stack trace 
element. */
-   static final String EXPECTED_CLASS_NAME = 
"org.apache.flink.runtime.io.network.buffer.LocalBufferPool";
-
-   /** Expected method name for back pressure indicating stack trace 
element. */
-   static final String EXPECTED_METHOD_NAME = 
"requestBufferBuilderBlocking";
-
/** Lock guarding trigger operations. */
private final Object lock = new Object();
 
-   /* Stack trace sample coordinator. */
-   private final StackTraceSampleCoordinator coordinator;
+   private final BackPressureRequestCoordinator coordinator;
 
/**
 * Completed stats. Important: Job vertex IDs need to be scoped by job 
ID,
-* because they are potentially constant across runs messing up the 
cached
-* data.
+* because they are potentially constant across runs which may mess up 
the
+* cached data.
 */
private final Cache 
operatorStatsCache;
 
-   /** Pending in progress stats. Important: Job vertex IDs need to be 
scoped
-* by job ID, because they are potentially constant across runs messing 
up
-* the cached data.*/
+   /**
+* Pending in progress stats. Important: Job vertex IDs need to be 
scoped
+* by job ID, because they are potentially constant across runs which 
may
+* mess up the cached data.
+*/
private final Set pendingStats = new HashSet<>();
 
-   /** Cleanup interval for completed stats cache. */
-   private final int cleanUpInterval;
-
-   private final int numSamples;
-
private final int backPressureStatsRefreshInterval;
 
-   private final Time delayBetweenSamples;
-
/** Flag indicating whether the stats tracker has been shut down. */
+   @GuardedBy("lock")
private boolean shutDown;
 
/**
 * Creates a back pressure statistics tracker.
 *
 * @param cleanUpInterval Clean up interval for completed stats.
-* @param numSamples  Number of stack trace samples when 
determining back pressure.
-* @param delayBetweenSamples Delay between samples when determining 
back pressure.
+* @param backPressureStatsRefreshInterval
 
 Review comment:
   Wrong param: it should be `coordinator`.




[jira] [Created] (FLINK-14667) flink1.9,1.8.2 run flinkSql fat jar ,can't load the right tableFactory (TableSourceFactory) for the kafka

2019-11-07 Thread chun111111 (Jira)
chun111111 created FLINK-14667:
--

 Summary: flink1.9,1.8.2 run flinkSql fat jar ,can't load the right 
tableFactory (TableSourceFactory) for the kafka 
 Key: FLINK-14667
 URL: https://issues.apache.org/jira/browse/FLINK-14667
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Client
Affects Versions: 1.9.1, 1.8.2
Reporter: chun111111






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #10022: [FLINK-14135][orc] Introduce OrcColumnarRowInputFormat

2019-11-07 Thread GitBox
flinkbot edited a comment on issue #10022: [FLINK-14135][orc] Introduce 
OrcColumnarRowInputFormat
URL: https://github.com/apache/flink/pull/10022#issuecomment-547278354
 
 
   
   ## CI report:
   
   * 73e9fbb7c4edb8628b93e9a5b17271e57a8b8f14 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/133947525)
   * b86c5db1ab9a081f0fec52f15fdad3b4f4d61939 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/134797047)
   * 77aa3cb348f97c8e2999bafcee9e7169b96e983e : UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] wuchong commented on a change in pull request #10098: [FLINK-14326][FLINK-14324][table] Support to apply watermark assigner in planner

2019-11-07 Thread GitBox
wuchong commented on a change in pull request #10098: 
[FLINK-14326][FLINK-14324][table] Support to apply watermark assigner in planner
URL: https://github.com/apache/flink/pull/10098#discussion_r343993759
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/test/resources/org/apache/flink/table/planner/plan/stream/sql/TableScanTest.xml
 ##
 @@ -16,36 +16,63 @@ See the License for the specific language governing 
permissions and
 limitations under the License.
 -->
 
+  
+
+  
+
+
+
+  
+
+
+
+  
+
+
+  
   
 
   
+
 
 
   
+
 
 
   
+
 
 Review comment:
   It is related: we added a plan test in `TableScanTest`, so the corresponding XML file gains one more entry. As I said before, this file should be re-generated by the framework, not updated manually; the blank lines are produced by the framework and therefore belong in the same commit.




[jira] [Comment Edited] (FLINK-14666) support multiple catalog in flink table sql

2019-11-07 Thread yuemeng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16969842#comment-16969842
 ] 

yuemeng edited comment on FLINK-14666 at 11/8/19 5:27 AM:
--

[~ykt836][~jark]
Could you check this issue? If you agree that it is an issue, could you assign it to me? I have already fixed it against the latest version.


was (Author: yuemeng):
[~ykt836][~jark]
can you check this issue, thanks

> support multiple catalog in flink table sql
> ---
>
> Key: FLINK-14666
> URL: https://issues.apache.org/jira/browse/FLINK-14666
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.8.0, 1.8.2, 1.9.0, 1.9.1
>Reporter: yuemeng
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> currently, calcite will only use the current catalog as schema path to 
> validate sql node,
> maybe this is not reasonable
> {code}
> tableEnvironment.useCatalog("user_catalog");
> tableEnvironment.useDatabase("user_db");
>  Table table = tableEnvironment.sqlQuery("SELECT action, os,count(*) as cnt 
> from music_queue_3 group by action, os,tumble(proctime, INTERVAL '10' 
> SECOND)"); tableEnvironment.registerTable("v1", table);
> Table t2 = tableEnvironment.sqlQuery("select action, os, 1 as cnt from v1");
> tableEnvironment.registerTable("v2", t2);
> tableEnvironment.sqlUpdate("INSERT INTO database2.kafka_table_test1 SELECT 
> action, os,cast (cnt as BIGINT) as cnt from v2");
> {code}
> Suppose the source table music_queue_3 and the sink table kafka_table_test1 
> both live in user_catalog, but temporary tables or views such as v1 and v2 
> are registered in the default catalog.
> When we select from the temp view v2 and insert into our own catalog table 
> database2.kafka_table_test1, SQL node validation always fails: the schema 
> path in the catalog reader contains only the current catalog, not the 
> default catalog, so the temp table or view can never be identified.
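
The failure mode described above can be sketched with a self-contained model of schema-path lookup. All names here (`CatalogPathDemo`, `resolve`, the table and catalog strings) are illustrative only; this is not Flink or Calcite API.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * Simplified model of the schema-path lookup described in FLINK-14666.
 * Illustrative stand-in, not Flink or Calcite API.
 */
public class CatalogPathDemo {

    /** table name -> "catalog.database" path that owns it. */
    static final Map<String, String> REGISTERED = new HashMap<>();

    static {
        // permanent tables live in the user catalog
        REGISTERED.put("music_queue_3", "user_catalog.user_db");
        REGISTERED.put("kafka_table_test1", "user_catalog.database2");
        // temporary views are registered in the default catalog
        REGISTERED.put("v1", "default_catalog.default_database");
        REGISTERED.put("v2", "default_catalog.default_database");
    }

    /** Resolve a table against a list of schema paths; first match wins. */
    static String resolve(String table, List<String> schemaPaths) {
        for (String path : schemaPaths) {
            if (path.equals(REGISTERED.get(table))) {
                return path + "." + table;
            }
        }
        return null; // not visible on the search path -> validation fails
    }

    public static void main(String[] args) {
        // Reported behaviour: only the current catalog is on the search path,
        // so the temp view v2 cannot be resolved.
        List<String> currentOnly =
                List.of("user_catalog.user_db", "user_catalog.database2");
        System.out.println(resolve("v2", currentOnly)); // prints: null

        // Proposed behaviour: also search the default catalog.
        List<String> withDefault = new ArrayList<>(currentOnly);
        withDefault.add("default_catalog.default_database");
        System.out.println(resolve("v2", withDefault));
        // prints: default_catalog.default_database.v2
    }
}
```

The sketch shows why adding the default catalog to the search path makes the temp view visible again.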



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wuchong commented on a change in pull request #10098: [FLINK-14326][FLINK-14324][table] Support to apply watermark assigner in planner

2019-11-07 Thread GitBox
wuchong commented on a change in pull request #10098: 
[FLINK-14326][FLINK-14324][table] Support to apply watermark assigner in planner
URL: https://github.com/apache/flink/pull/10098#discussion_r343993020
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/calcite/SqlExprToRexConverterImpl.java
 ##
 @@ -0,0 +1,160 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.calcite;
+
+import org.apache.flink.table.planner.plan.FlinkCalciteCatalogReader;
+
+import org.apache.calcite.config.CalciteConnectionConfig;
+import org.apache.calcite.config.CalciteConnectionConfigImpl;
+import org.apache.calcite.config.CalciteConnectionProperty;
+import org.apache.calcite.jdbc.CalciteSchema;
+import org.apache.calcite.jdbc.CalciteSchemaBuilder;
+import org.apache.calcite.plan.RelOptCluster;
+import org.apache.calcite.prepare.CalciteCatalogReader;
+import org.apache.calcite.rel.RelNode;
+import org.apache.calcite.rel.logical.LogicalProject;
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.rel.type.RelDataTypeFactory;
+import org.apache.calcite.rex.RexNode;
+import org.apache.calcite.schema.Table;
+import org.apache.calcite.schema.impl.AbstractSchema;
+import org.apache.calcite.schema.impl.AbstractTable;
+import org.apache.calcite.sql.SqlNode;
+import org.apache.calcite.tools.FrameworkConfig;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Map;
+import java.util.Objects;
+import java.util.Properties;
+
+/**
+ * Standard implementation of {@link SqlExprToRexConverter}.
+ */
+public class SqlExprToRexConverterImpl implements SqlExprToRexConverter {
+
+   private static final String TEMPORARY_TABLE_NAME = "__temp_table__";
+   private static final String QUERY_FORMAT = "SELECT %s FROM " + TEMPORARY_TABLE_NAME;
+
+   private final FlinkPlannerImpl planner;
+
+   public SqlExprToRexConverterImpl(
+   FrameworkConfig config,
+   FlinkTypeFactory typeFactory,
+   RelOptCluster cluster,
+   RelDataType tableRowType) {
+   this.planner = new FlinkPlannerImpl(
+   config,
+   (isLenient) -> createSingleTableCatalogReader(isLenient, config, typeFactory, tableRowType),
+   typeFactory,
+   cluster
+   );
+   }
+
+   @Override
+   public RexNode convertToRexNode(String expr) {
+   return convertToRexNodes(new String[]{expr})[0];
+   }
+
+   @Override
+   public RexNode[] convertToRexNodes(String[] exprs) {
+   String query = String.format(QUERY_FORMAT, String.join(",", exprs));
+   SqlNode parsed = planner.parser().parse(query);
+   SqlNode validated = planner.validate(parsed);
+   RelNode rel = planner.rel(validated).rel;
+   if (rel instanceof LogicalProject) {
 
 Review comment:
   OK, I will add the check.
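
For context, the check being requested here could look roughly like the following. This is a hedged, self-contained sketch: `NodeCheckDemo`, `expectProject`, and the stub node classes are illustrative stand-ins, not Calcite's real `RelNode`/`LogicalProject` types.

```java
/**
 * Sketch of a defensive type check before casting the planner's root node.
 * Stub classes only; not Calcite API.
 */
public class NodeCheckDemo {

    interface RelNode {}
    static class LogicalProject implements RelNode {}
    static class LogicalFilter implements RelNode {}

    /** Fail fast with a descriptive message instead of a bare ClassCastException. */
    static LogicalProject expectProject(RelNode rel) {
        if (!(rel instanceof LogicalProject)) {
            throw new IllegalStateException(
                    "Expected the root of the converted plan to be a LogicalProject,"
                            + " but was: " + rel.getClass().getSimpleName());
        }
        return (LogicalProject) rel;
    }

    public static void main(String[] args) {
        expectProject(new LogicalProject()); // passes silently
        try {
            expectProject(new LogicalFilter()); // triggers the check
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

Failing with an explicit message makes an unexpected plan shape much easier to diagnose than a raw cast failure.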




[GitHub] [flink] flinkbot edited a comment on issue #10123: [FLINK-14665][table-planner-blink] Support computed column for create…

2019-11-07 Thread GitBox
flinkbot edited a comment on issue #10123: [FLINK-14665][table-planner-blink] 
Support computed column for create…
URL: https://github.com/apache/flink/pull/10123#issuecomment-551369912
 
 
   
   ## CI report:
   
   * 102bce9b12cb5853f0fe2b53ae4120434813d7c2 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/135567444)
   * f9cb7dcc064d289b3960508cff61f696bfcb4803 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/135569491)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] xintongsong commented on issue #10124: [FLINK-14637] Introduce framework off heap memory config

2019-11-07 Thread GitBox
xintongsong commented on issue #10124: [FLINK-14637] Introduce framework off 
heap memory config
URL: https://github.com/apache/flink/pull/10124#issuecomment-551389358
 
 
   @azagrebin, please help review this PR.
   Travis passed: https://travis-ci.org/xintongsong/flink/builds/609050197




[GitHub] [flink] flinkbot edited a comment on issue #10125: [FLINK-14666] support multiple catalog in flink table sql

2019-11-07 Thread GitBox
flinkbot edited a comment on issue #10125: [FLINK-14666] support multiple 
catalog in flink table sql
URL: https://github.com/apache/flink/pull/10125#issuecomment-551384456
 
 
   
   ## CI report:
   
   * 0837b423c8da72c27b32e72030274b16122749a7 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135573489)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[jira] [Commented] (FLINK-14666) support multiple catalog in flink table sql

2019-11-07 Thread yuemeng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16969842#comment-16969842
 ] 

yuemeng commented on FLINK-14666:
-

[~ykt836][~jark]
Could you check this issue? Thanks.

> support multiple catalog in flink table sql
> ---
>
> Key: FLINK-14666
> URL: https://issues.apache.org/jira/browse/FLINK-14666
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.8.0, 1.8.2, 1.9.0, 1.9.1
>Reporter: yuemeng
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently, Calcite uses only the current catalog as the schema path when 
> validating a SQL node, which may not be reasonable.
> {code}
> tableEnvironment.useCatalog("user_catalog");
> tableEnvironment.useDatabase("user_db");
>  Table table = tableEnvironment.sqlQuery("SELECT action, os,count(*) as cnt 
> from music_queue_3 group by action, os,tumble(proctime, INTERVAL '10' 
> SECOND)"); tableEnvironment.registerTable("v1", table);
> Table t2 = tableEnvironment.sqlQuery("select action, os, 1 as cnt from v1");
> tableEnvironment.registerTable("v2", t2);
> tableEnvironment.sqlUpdate("INSERT INTO database2.kafka_table_test1 SELECT 
> action, os,cast (cnt as BIGINT) as cnt from v2");
> {code}
> Suppose the source table music_queue_3 and the sink table kafka_table_test1 
> both live in user_catalog, but temporary tables or views such as v1 and v2 
> are registered in the default catalog.
> When we select from the temp view v2 and insert into our own catalog table 
> database2.kafka_table_test1, SQL node validation always fails: the schema 
> path in the catalog reader contains only the current catalog, not the 
> default catalog, so the temp table or view can never be identified.





[GitHub] [flink] flinkbot edited a comment on issue #10098: [FLINK-14326][FLINK-14324][table] Support to apply watermark assigner in planner

2019-11-07 Thread GitBox
flinkbot edited a comment on issue #10098: [FLINK-14326][FLINK-14324][table] 
Support to apply watermark assigner in planner
URL: https://github.com/apache/flink/pull/10098#issuecomment-550174677
 
 
   
   ## CI report:
   
   * a3db95a7340322421d1fa826dcac850116011687 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135182714)
   * 6db8b32408ffd954c96d1829c36d04542825a068 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135450723)
   * 31635a192b64660cebe93afee586249bf1df9b13 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135461602)
   * 229ede5e47a1d7346edc79449a39cf3c56858604 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/135569483)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
   




[GitHub] [flink] hzyuemeng1 closed pull request #10125: [FLINK-14666] support multiple catalog in flink table sql

2019-11-07 Thread GitBox
hzyuemeng1 closed pull request #10125: [FLINK-14666] support multiple catalog 
in flink table sql
URL: https://github.com/apache/flink/pull/10125
 
 
   




[GitHub] [flink] JingsongLi commented on issue #10022: [FLINK-14135][orc] Introduce OrcColumnarRowInputFormat

2019-11-07 Thread GitBox
JingsongLi commented on issue #10022: [FLINK-14135][orc] Introduce 
OrcColumnarRowInputFormat
URL: https://github.com/apache/flink/pull/10022#issuecomment-551388324
 
 
   @KurtYoung Hope you can take a look.




[GitHub] [flink] zhijiangW commented on a change in pull request #10083: [FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.

2019-11-07 Thread GitBox
zhijiangW commented on a change in pull request #10083: 
[FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.
URL: https://github.com/apache/flink/pull/10083#discussion_r343990958
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/rest/handler/legacy/backpressure/BackPressureStats.java
 ##
 @@ -0,0 +1,87 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rest.handler.legacy.backpressure;
+
+import org.apache.flink.runtime.executiongraph.ExecutionAttemptID;
+
+import java.util.Collections;
+import java.util.Map;
+
+import static org.apache.flink.util.Preconditions.checkArgument;
+
+/**
+ * Back pressure stats for one or more tasks.
+ *
+ * The stats are collected by request triggered in {@link BackPressureRequestCoordinator}.
+ */
+public class BackPressureStats {
+
+   /** ID of the request (unique per job). */
+   private final int requestId;
+
+   /** Time stamp, when the request was triggered. */
+   private final long startTime;
+
+   /** Time stamp, when all back pressure stats were collected at the JobManager. */
+   private final long endTime;
+
+   /** Map of back pressure ratio by execution ID. */
+   private final Map<ExecutionAttemptID, Double> backPressureRatios;
+
+   public BackPressureStats(
+   int requestId,
+   long startTime,
+   long endTime,
+   Map<ExecutionAttemptID, Double> backPressureRatios) {
+
+   checkArgument(requestId >= 0, "Negative request ID.");
+   checkArgument(startTime >= 0, "Negative start time.");
+   checkArgument(endTime >= startTime, "End time before start time.");
+
+   this.requestId = requestId;
+   this.startTime = startTime;
+   this.endTime = endTime;
+   this.backPressureRatios = Collections.unmodifiableMap(backPressureRatios);
 
 Review comment:
   Collections.unmodifiableMap(checkNotNull(backPressureRatios))
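
The suggested pattern, null-checking the argument before wrapping it, can be sketched in isolation. `DefensiveMapDemo` is a hypothetical stand-in for `BackPressureStats`, and JDK `Objects.requireNonNull` stands in for Flink's `Preconditions.checkNotNull`.

```java
import java.util.Collections;
import java.util.Map;
import java.util.Objects;

/**
 * Sketch of the suggested defensive-constructor pattern.
 * Hypothetical stand-in class, not Flink code.
 */
public class DefensiveMapDemo {

    private final Map<String, Double> ratios;

    public DefensiveMapDemo(Map<String, Double> ratios) {
        // unmodifiableMap also rejects null, but the explicit check documents
        // the contract and produces a descriptive error message.
        this.ratios = Collections.unmodifiableMap(
                Objects.requireNonNull(ratios, "backPressureRatios must not be null"));
    }

    public Map<String, Double> getRatios() {
        return ratios; // callers cannot mutate the wrapped map
    }
}
```

Wrapping at construction time means the class never hands out a mutable view of its internal state.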




[GitHub] [flink] zhijiangW commented on a change in pull request #10083: [FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.

2019-11-07 Thread GitBox
zhijiangW commented on a change in pull request #10083: 
[FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.
URL: https://github.com/apache/flink/pull/10083#discussion_r343990718
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/rest/handler/legacy/backpressure/BackPressureStats.java
 ##
 @@ -0,0 +1,87 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rest.handler.legacy.backpressure;
+
+import org.apache.flink.runtime.executiongraph.ExecutionAttemptID;
+
+import java.util.Collections;
+import java.util.Map;
+
+import static org.apache.flink.util.Preconditions.checkArgument;
+
+/**
+ * Back pressure stats for one or more tasks.
+ *
+ * The stats are collected by request triggered in {@link BackPressureRequestCoordinator}.
+ */
+public class BackPressureStats {
+
+   /** ID of the request (unique per job). */
+   private final int requestId;
+
+   /** Time stamp, when the request was triggered. */
+   private final long startTime;
+
+   /** Time stamp, when all back pressure stats were collected at the JobManager. */
+   private final long endTime;
+
+   /** Map of back pressure ratio by execution ID. */
 
 Review comment:
   ratio -> ratios




[GitHub] [flink] zhijiangW commented on a change in pull request #10083: [FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.

2019-11-07 Thread GitBox
zhijiangW commented on a change in pull request #10083: 
[FLINK-14472][runtime]Implement back-pressure monitor with non-blocking outputs.
URL: https://github.com/apache/flink/pull/10083#discussion_r343990575
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/rest/handler/legacy/backpressure/BackPressureStats.java
 ##
 @@ -0,0 +1,87 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rest.handler.legacy.backpressure;
+
+import org.apache.flink.runtime.executiongraph.ExecutionAttemptID;
+
+import java.util.Collections;
+import java.util.Map;
+
+import static org.apache.flink.util.Preconditions.checkArgument;
+
+/**
+ * Back pressure stats for one or more tasks.
+ *
+ * The stats are collected by request triggered in {@link BackPressureRequestCoordinator}.
+ */
+public class BackPressureStats {
+
+   /** ID of the request (unique per job). */
+   private final int requestId;
+
+   /** Time stamp, when the request was triggered. */
+   private final long startTime;
+
+   /** Time stamp, when all back pressure stats were collected at the JobManager. */
 
 Review comment:
   JobManager -> BackPressureRequestCoordinator




  1   2   3   4   5   6   7   >