[jira] [Comment Edited] (FLINK-18288) WEB UI failure in Flink1.12

2020-06-15 Thread Yadong Xie (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17136307#comment-17136307
 ] 

Yadong Xie edited comment on FLINK-18288 at 6/16/20, 5:28 AM:
--

[~appleyuchi]

Please note that:
 # `node-sass/v4.11.0/linux-x64-72_binding.node` is not a static file; its 
location and name are determined dynamically by your npm and Node versions. 
This means the build will always fail if you have installed a *mismatched* 
Node or npm version yourself. That is why *it works fine* in every other 
environment, including CI, except yours. (Imagine having installed many 
different versions of Maven and expecting them to work together on the same 
command line; that cannot work.)
 # The Node and npm versions are managed by the frontend-maven-plugin in 
flink-runtime-web/pom.xml, using isolated folders 
(flink-runtime-web/web-dashboard/node and 
flink-runtime-web/web-dashboard/node_modules). If you want to manage them 
yourself (for example by running npm ci or npm install manually), *DON'T DO 
THIS* unless you have full knowledge of npm and Node packaging, because it 
would break this isolated Node/npm environment.
 # Please remove the `flink-runtime-web/web-dashboard/node` and 
`flink-runtime-web/web-dashboard/node_modules` caches and try the build again, 
for example with the commands below.
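
A minimal sketch (assuming the Flink source root as the working directory; the 
-pl flag merely limits the rebuild to the flink-runtime-web module):

    rm -rf flink-runtime-web/web-dashboard/node flink-runtime-web/web-dashboard/node_modules
    mvn clean install -pl flink-runtime-web -DskipTests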

 


> WEB UI failure in Flink1.12
> ---
>
> Key: FLINK-18288
> URL: https://issues.apache.org/jira/browse/FLINK-18288
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.12.0
>Reporter: appleyuchi
>Priority: Major
>
>  
>  
> ①build command:
> *mvn clean install -T 2C  -DskipTests -Dskip.npm -Dmaven.compile.fork=true*
>  
> ②use flink-conf.yaml from 1.10.1 in 1.12
> masters:
> Desktop:8082
>  
> slaves:
> Desktop
> Laptop
> ③$FLINK_HOME/bin/start-cluster.sh
>  
>  
> ④open browser in:
> Desktop:8082
> {"errors":["Unable to load requested file /index.html."]}
>  
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] zhijiangW commented on a change in pull request #12664: [FLINK-18238][checkpoint] Emit CancelCheckpointMarker downstream on checkpointState in sync phase of checkpoint on task side

2020-06-15 Thread GitBox


zhijiangW commented on a change in pull request #12664:
URL: https://github.com/apache/flink/pull/12664#discussion_r440597301



##
File path: flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/SubtaskCheckpointCoordinatorTest.java
##
@@ -218,6 +230,67 @@ public void testNotifyCheckpointAbortedBeforeAsyncPhase() throws Exception {
 		assertEquals(0, subtaskCheckpointCoordinator.getAsyncCheckpointRunnableSize());
 	}
 
+	@Test
+	public void testDownstreamReceiveCancelCheckpointMarkerOnUpstreamAbortedInSyncPhase() throws Exception {
+		final OneInputStreamTaskTestHarness<String, String> testHarness =
+			new OneInputStreamTaskTestHarness<>(
+				OneInputStreamTask::new,
+				1, 1,
+				BasicTypeInfo.STRING_TYPE_INFO,
+				BasicTypeInfo.STRING_TYPE_INFO);
+
+		testHarness.setupOutputForSingletonOperatorChain();
+		StreamConfig streamConfig = testHarness.getStreamConfig();
+		streamConfig.setStreamOperator(new MapOperator());
+
+		testHarness.invoke();
+		testHarness.waitForTaskRunning();
+
+		TestTaskStateManager stateManager = new TestTaskStateManager();
+		MockEnvironment mockEnvironment = MockEnvironment.builder().setTaskStateManager(stateManager).build();
+		SubtaskCheckpointCoordinatorImpl subtaskCheckpointCoordinator = (SubtaskCheckpointCoordinatorImpl) new MockSubtaskCheckpointCoordinatorBuilder()
+			.setEnvironment(mockEnvironment)
+			.setUnalignedCheckpointEnabled(true)
+			.build();
+
+		final TestPooledBufferProvider bufferProvider = new TestPooledBufferProvider(Integer.MAX_VALUE, 4096);
+		ArrayList<Object> recordOrEvents = new ArrayList<>();
+		StreamElementSerializer<String> stringStreamElementSerializer = new StreamElementSerializer<>(StringSerializer.INSTANCE);
+		RecordOrEventCollectingResultPartitionWriter<StreamElement> resultPartitionWriter = new RecordOrEventCollectingResultPartitionWriter<>(recordOrEvents, bufferProvider, stringStreamElementSerializer);
+		mockEnvironment.addOutputs(Collections.singletonList(resultPartitionWriter));
+
+		OneInputStreamTask<String, String> task = testHarness.getTask();
+		final OperatorChain<String, OneInputStreamOperator<String, String>> operatorChain = new OperatorChain<>(task, StreamTask.createRecordWriterDelegate(streamConfig, mockEnvironment));
+		long checkpointId = 42L;
+		// notify checkpoint aborted before execution.
+		subtaskCheckpointCoordinator.notifyCheckpointAborted(checkpointId, operatorChain, () -> true);
+		subtaskCheckpointCoordinator.getChannelStateWriter().start(checkpointId, CheckpointOptions.forCheckpointWithDefaultLocation());
+		subtaskCheckpointCoordinator.checkpointState(
+			new CheckpointMetaData(checkpointId, System.currentTimeMillis()),
+			CheckpointOptions.forCheckpointWithDefaultLocation(),
+			new CheckpointMetrics(),
+			operatorChain,
+			() -> true);
+
+		assertEquals(1, recordOrEvents.size());
+		Object recordOrEvent = recordOrEvents.get(0);
+		// ensure CancelCheckpointMarker is broadcast downstream.
+		assertTrue(recordOrEvent instanceof CancelCheckpointMarker);
+		assertEquals(checkpointId, ((CancelCheckpointMarker) recordOrEvent).getCheckpointId());
+	}

Review comment:
   We should make sure the internal task thread inside `StreamTaskTestHarness` 
exits in the end, to avoid leaving a lingering thread behind after the test 
finishes.
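   
   For example, a minimal teardown sketch (assuming the harness' existing 
endInput()/waitForTaskCompletion() API):
   
       testHarness.endInput();
       testHarness.waitForTaskCompletion();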





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12655: [FLINK-18300][sql-client] SQL Client doesn't support ALTER VIEW

2020-06-15 Thread GitBox


flinkbot edited a comment on pull request #12655:
URL: https://github.com/apache/flink/pull/12655#issuecomment-644085772


   
   ## CI report:
   
   * e9f9d2e5bd744d8443b78e9e3712bd718efa6b0d Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3511)
 
   * 35f7ea66eefa4f2931461df26624511847d39a8a UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zhijiangW commented on pull request #12510: [FLINK-18039]

2020-06-15 Thread GitBox


zhijiangW commented on pull request #12510:
URL: https://github.com/apache/flink/pull/12510#issuecomment-644543716


   @becketqin FYI: this PR now has conflicts that need to be resolved. 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-18088) Umbrella for testing features in release-1.11.0

2020-06-15 Thread Zhijiang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijiang updated FLINK-18088:
-
Priority: Critical  (was: Blocker)

> Umbrella for testing features in release-1.11.0 
> 
>
> Key: FLINK-18088
> URL: https://issues.apache.org/jira/browse/FLINK-18088
> Project: Flink
>  Issue Type: Test
>Affects Versions: 1.11.0
>Reporter: Zhijiang
>Assignee: Zhijiang
>Priority: Critical
>  Labels: release-testing
> Fix For: 1.11.0
>
>
> This is the umbrella issue for tracking the testing progress of all the 
> related features in release-1.11.0, whether via e2e tests or manual testing on 
> a cluster, to confirm that the features work well in practice.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-16795) End to end tests timeout on Azure

2020-06-15 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17136330#comment-17136330
 ] 

Robert Metzger commented on FLINK-16795:


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3542&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5

> End to end tests timeout on Azure
> -
>
> Key: FLINK-16795
> URL: https://issues.apache.org/jira/browse/FLINK-16795
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines, Tests
>Affects Versions: 1.11.0
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Major
>  Labels: pull-request-available
> Attachments: image.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Example: 
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=6650&view=logs&j=08866332-78f7-59e4-4f7e-49a56faa3179
>  or 
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=6637&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5
> {code}##[error]The job running on agent Azure Pipelines 6 ran longer than the 
> maximum time of 200 minutes. For more information, see 
> https://go.microsoft.com/fwlink/?linkid=2077134
> {code}
> and {code}##[error]The operation was canceled.{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] pnowojski commented on a change in pull request #12575: [FLINK-18094][checkpointing] Unifies the creation of BarrierHandlers and CheckpointedInputGate.

2020-06-15 Thread GitBox


pnowojski commented on a change in pull request #12575:
URL: https://github.com/apache/flink/pull/12575#discussion_r440595683



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/UnionInputGate.java
##
@@ -105,11 +105,11 @@ public UnionInputGate(IndexedInputGate... inputGates) {
 		inputChannelToInputGateIndex = new int[totalNumberOfInputChannels];
 
 		int currentNumberOfInputChannels = 0;
-		for (final IndexedInputGate inputGate : inputGates) {
-			inputGateChannelIndexOffsets[inputGate.getGateIndex()] = currentNumberOfInputChannels;
+		for (int index = 0; index < inputGates.length; index++) {

Review comment:
   This seems to be going in the opposite direction - we are replacing 
indexing based on the real IDs with those based on the order?

##
File path: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/io/InputProcessorUtil.java
##
@@ -71,108 +69,55 @@ public static CheckpointedInputGate createCheckpointedInputGate(
 		String taskName,
 		List<IndexedInputGate>... inputGates) {
 
-		IntStream numberOfInputChannelsPerGate =
-			Arrays
-				.stream(inputGates)
-				.flatMap(collection -> collection.stream())
-				.sorted(Comparator.comparingInt(IndexedInputGate::getGateIndex))
-				.mapToInt(InputGate::getNumberOfInputChannels);
-
-		Map<InputGate, Integer> inputGateToChannelIndexOffset = generateInputGateToChannelIndexOffsetMap(unionedInputGates);
-		// Note that numberOfInputChannelsPerGate and inputGateToChannelIndexOffset have a bit different
-		// indexing and purposes.
-		//
-		// The numberOfInputChannelsPerGate is indexed based on flattened input gates, and sorted based on GateIndex,
-		// so that it can be used in combination with InputChannelInfo class.
-		//
-		// The inputGateToChannelIndexOffset is based upon unioned input gates and it's used for translating channel
-		// indexes from the perspective of UnionInputGate to the perspective of SingleInputGate.
-
+		IndexedInputGate[] sortedInputGates = Arrays.stream(inputGates)
+			.flatMap(Collection::stream)
+			.sorted(Comparator.comparing(IndexedInputGate::getGateIndex))
+			.toArray(IndexedInputGate[]::new);
 		CheckpointBarrierHandler barrierHandler = createCheckpointBarrierHandler(
 			config,
-			numberOfInputChannelsPerGate,
+			sortedInputGates,
 			checkpointCoordinator,
 			taskName,
-			generateChannelIndexToInputGateMap(unionedInputGates),
-			inputGateToChannelIndexOffset,
 			toNotifyOnCheckpoint);
 		registerCheckpointMetrics(taskIOMetricGroup, barrierHandler);
 
+		InputGate[] unionedInputGates = Arrays.stream(inputGates)

Review comment:
   It's a bit confusing that above we have `sortedInputGates` while the input 
gates here are not sorted. We end up in a confusing state where 
`CheckpointBarrierHandler#inputGates` can be accessed via `inputGateIndex` 
while `UnionInputGate#inputGates` cannot.
   
   I understand why it is so: the first is a flattened structure of all input 
gates, while the other holds only a subset of gates. Maybe we can keep it as it 
is for now, as this commit is already simplifying things, but maybe we should 
replace the `UnionInputGate#inputGates` array with a map, as in the sketch 
below?
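   
   A rough sketch of the map-based alternative (hypothetical code, not part of 
this PR):
   
       Map<Integer, IndexedInputGate> inputGatesByGateIndex = new HashMap<>();
       for (IndexedInputGate gate : inputGates) {
           inputGatesByGateIndex.put(gate.getGateIndex(), gate);
       }
       // lookups are then keyed by the real gate index instead of array order:
       IndexedInputGate requested = inputGatesByGateIndex.get(gateIndex);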





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-18290) Tests are crashing with exit code 239

2020-06-15 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17136335#comment-17136335
 ] 

Robert Metzger commented on FLINK-18290:


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3542&view=logs&j=e25d5e7e-2a9c-5589-4940-0b638d75a414&t=f83cd372-208c-5ec4-12a8-337462457129

> Tests are crashing with exit code 239
> -
>
> Key: FLINK-18290
> URL: https://issues.apache.org/jira/browse/FLINK-18290
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines, Runtime / Checkpointing
>Affects Versions: 1.11.0
>Reporter: Robert Metzger
>Assignee: Roman Khachatryan
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.11.0
>
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3467&view=logs&j=d44f43ce-542c-597d-bf94-b0718c71e5e8&t=34f486e1-e1e4-5dd2-9c06-bfdd9b9c74a8]
> Kafka011ProducerExactlyOnceITCase
>  
> {code:java}
> 2020-06-15T03:24:28.4677649Z [WARNING] The requested profile 
> "skip-webui-build" could not be activated because it does not exist.
> 2020-06-15T03:24:28.4692049Z [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-surefire-plugin:2.22.1:test 
> (integration-tests) on project flink-connector-kafka-0.11_2.11: There are 
> test failures.
> 2020-06-15T03:24:28.4692585Z [ERROR] 
> 2020-06-15T03:24:28.4693170Z [ERROR] Please refer to 
> /__w/2/s/flink-connectors/flink-connector-kafka-0.11/target/surefire-reports 
> for the individual test results.
> 2020-06-15T03:24:28.4693928Z [ERROR] Please refer to dump files (if any 
> exist) [date].dump, [date]-jvmRun[N].dump and [date].dumpstream.
> 2020-06-15T03:24:28.4694423Z [ERROR] ExecutionException The forked VM 
> terminated without properly saying goodbye. VM crash or System.exit called?
> 2020-06-15T03:24:28.4696762Z [ERROR] Command was /bin/sh -c cd 
> /__w/2/s/flink-connectors/flink-connector-kafka-0.11/target && 
> /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx2048m 
> -Dlog4j.configurationFile=log4j2-test.properties -Dmvn.forkNumber=2 
> -XX:-UseGCOverheadLimit -jar 
> /__w/2/s/flink-connectors/flink-connector-kafka-0.11/target/surefire/surefirebooter617700788970993266.jar
>  /__w/2/s/flink-connectors/flink-connector-kafka-0.11/target/surefire 
> 2020-06-15T03-07-01_381-jvmRun2 surefire2676050245109796726tmp 
> surefire_602825791089523551074tmp
> 2020-06-15T03:24:28.4698486Z [ERROR] Error occurred in starting fork, check 
> output in log
> 2020-06-15T03:24:28.4699066Z [ERROR] Process Exit Code: 239
> 2020-06-15T03:24:28.4699458Z [ERROR] Crashed tests:
> 2020-06-15T03:24:28.4699960Z [ERROR] 
> org.apache.flink.streaming.connectors.kafka.Kafka011ProducerExactlyOnceITCase
> 2020-06-15T03:24:28.4700849Z [ERROR] 
> org.apache.maven.surefire.booter.SurefireBooterForkException: 
> ExecutionException The forked VM terminated without properly saying goodbye. 
> VM crash or System.exit called?
> 2020-06-15T03:24:28.4703760Z [ERROR] Command was /bin/sh -c cd 
> /__w/2/s/flink-connectors/flink-connector-kafka-0.11/target && 
> /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx2048m 
> -Dlog4j.configurationFile=log4j2-test.properties -Dmvn.forkNumber=2 
> -XX:-UseGCOverheadLimit -jar 
> /__w/2/s/flink-connectors/flink-connector-kafka-0.11/target/surefire/surefirebooter617700788970993266.jar
>  /__w/2/s/flink-connectors/flink-connector-kafka-0.11/target/surefire 
> 2020-06-15T03-07-01_381-jvmRun2 surefire2676050245109796726tmp 
> surefire_602825791089523551074tmp
> 2020-06-15T03:24:28.4705501Z [ERROR] Error occurred in starting fork, check 
> output in log
> 2020-06-15T03:24:28.4706297Z [ERROR] Process Exit Code: 239
> 2020-06-15T03:24:28.4706592Z [ERROR] Crashed tests:
> 2020-06-15T03:24:28.4706895Z [ERROR] 
> org.apache.flink.streaming.connectors.kafka.Kafka011ProducerExactlyOnceITCase
> 2020-06-15T03:24:28.4707386Z [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.awaitResultsDone(ForkStarter.java:510)
> 2020-06-15T03:24:28.4708053Z [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.runSuitesForkPerTestSet(ForkStarter.java:457)
> 2020-06-15T03:24:28.4708908Z [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:298)
> 2020-06-15T03:24:28.4709720Z [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:246)
> 2020-06-15T03:24:28.4710497Z [ERROR] at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeProvider(AbstractSurefireMojo.java:1183)
> 2020-06-15T03:24:28.4711448Z [ERROR] at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeAfterPreconditionsChecked(AbstractSurefireMojo.java:1011)
> 2020-06-15T03:24:28.4712395Z [ERROR] at 

[GitHub] [flink] senegalo commented on pull request #12056: [FLINK-17502] [flink-connector-rabbitmq] RMQSource refactor

2020-06-15 Thread GitBox


senegalo commented on pull request #12056:
URL: https://github.com/apache/flink/pull/12056#issuecomment-644549028


   Awesome, I will look into the changes and hopefully be done with them by 
next weekend.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12655: [FLINK-18300][sql-client] SQL Client doesn't support ALTER VIEW

2020-06-15 Thread GitBox


flinkbot edited a comment on pull request #12655:
URL: https://github.com/apache/flink/pull/12655#issuecomment-644085772


   
   ## CI report:
   
   * e9f9d2e5bd744d8443b78e9e3712bd718efa6b0d Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3511)
 
   * 35f7ea66eefa4f2931461df26624511847d39a8a Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3560)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-18211) Dynamic properties setting 'pipeline.jars' will be overwritten

2020-06-15 Thread Yang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17136336#comment-17136336
 ] 

Yang Wang commented on FLINK-18211:
---

[~kkl0u] Is {{pipeline.jars}} designed to cover this use case? When users want 
to submit a Flink job to an existing standalone/YARN/K8s session, it could be 
used to ship dependencies via the Flink distributed cache (aka blob storage). 
Currently, we can only do this via {{env.registerCachedFile}} in the main code.
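
For reference, a minimal sketch of that main-code workaround (the jar path and 
cache name here are hypothetical):

{code:java}
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
// ships the file to the cluster through the distributed cache (blob storage)
env.registerCachedFile("hdfs:///deps/my-udf-lib.jar", "my-udf-lib");
{code}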

> Dynamic properties setting 'pipeline.jars' will be overwritten
> --
>
> Key: FLINK-18211
> URL: https://issues.apache.org/jira/browse/FLINK-18211
> Project: Flink
>  Issue Type: Bug
>  Components: Client / Job Submission
>Affects Versions: 1.10.0, 1.11.0
>Reporter: Echo Lee
>Assignee: Echo Lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> When we submit an application through the "flink run 
> -Dpipeline.jars='/user1.jar, user2.jar'..." command, the configuration will 
> include 'pipeline.jars', but ExecutionConfigAccessor#fromProgramOptions resets 
> this property, so the value set by the user is lost.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-16795) End to end tests timeout on Azure

2020-06-15 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17136337#comment-17136337
 ] 

Robert Metzger commented on FLINK-16795:


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3542&view=logs&j=08866332-78f7-59e4-4f7e-49a56faa3179&t=931b3127-d6ee-5f94-e204-48d51cd1c334
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3542&view=logs&j=91bf6583-3fb2-592f-e4d4-d79d79c3230a&t=94459a52-42b6-5bfc-5d74-690b5d3c6de8
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3542&view=logs&j=68a897ab-3047-5660-245a-cce8f83859f6&t=375367d9-d72e-5c21-3be0-b45149130f6b

> End to end tests timeout on Azure
> -
>
> Key: FLINK-16795
> URL: https://issues.apache.org/jira/browse/FLINK-16795
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines, Tests
>Affects Versions: 1.11.0
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Major
>  Labels: pull-request-available
> Attachments: image.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Example: 
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=6650&view=logs&j=08866332-78f7-59e4-4f7e-49a56faa3179
>  or 
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=6637&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5
> {code}##[error]The job running on agent Azure Pipelines 6 ran longer than the 
> maximum time of 200 minutes. For more information, see 
> https://go.microsoft.com/fwlink/?linkid=2077134
> {code}
> and {code}##[error]The operation was canceled.{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-16795) End to end tests timeout on Azure

2020-06-15 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17136337#comment-17136337
 ] 

Robert Metzger edited comment on FLINK-16795 at 6/16/20, 6:02 AM:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3542&view=logs&j=08866332-78f7-59e4-4f7e-49a56faa3179&t=931b3127-d6ee-5f94-e204-48d51cd1c334
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3542&view=logs&j=91bf6583-3fb2-592f-e4d4-d79d79c3230a&t=94459a52-42b6-5bfc-5d74-690b5d3c6de8
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3542&view=logs&j=68a897ab-3047-5660-245a-cce8f83859f6&t=375367d9-d72e-5c21-3be0-b45149130f6b
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3541&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5



> End to end tests timeout on Azure
> -
>
> Key: FLINK-16795
> URL: https://issues.apache.org/jira/browse/FLINK-16795
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines, Tests
>Affects Versions: 1.11.0
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Major
>  Labels: pull-request-available
> Attachments: image.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Example: 
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=6650&view=logs&j=08866332-78f7-59e4-4f7e-49a56faa3179
>  or 
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=6637&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5
> {code}##[error]The job running on agent Azure Pipelines 6 ran longer than the 
> maximum time of 200 minutes. For more information, see 
> https://go.microsoft.com/fwlink/?linkid=2077134
> {code}
> and {code}##[error]The operation was canceled.{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-18290) Tests are crashing with exit code 239

2020-06-15 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17136340#comment-17136340
 ] 

Robert Metzger commented on FLINK-18290:


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3541&view=logs&j=e25d5e7e-2a9c-5589-4940-0b638d75a414&t=a6e0f756-5bb9-5ea8-a468-5f60db442a29

> Tests are crashing with exit code 239
> -
>
> Key: FLINK-18290
> URL: https://issues.apache.org/jira/browse/FLINK-18290
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines, Runtime / Checkpointing
>Affects Versions: 1.11.0
>Reporter: Robert Metzger
>Assignee: Roman Khachatryan
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.11.0
>
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3467&view=logs&j=d44f43ce-542c-597d-bf94-b0718c71e5e8&t=34f486e1-e1e4-5dd2-9c06-bfdd9b9c74a8]
> Kafka011ProducerExactlyOnceITCase
>  
> {code:java}
> 2020-06-15T03:24:28.4677649Z [WARNING] The requested profile 
> "skip-webui-build" could not be activated because it does not exist.
> 2020-06-15T03:24:28.4692049Z [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-surefire-plugin:2.22.1:test 
> (integration-tests) on project flink-connector-kafka-0.11_2.11: There are 
> test failures.
> 2020-06-15T03:24:28.4692585Z [ERROR] 
> 2020-06-15T03:24:28.4693170Z [ERROR] Please refer to 
> /__w/2/s/flink-connectors/flink-connector-kafka-0.11/target/surefire-reports 
> for the individual test results.
> 2020-06-15T03:24:28.4693928Z [ERROR] Please refer to dump files (if any 
> exist) [date].dump, [date]-jvmRun[N].dump and [date].dumpstream.
> 2020-06-15T03:24:28.4694423Z [ERROR] ExecutionException The forked VM 
> terminated without properly saying goodbye. VM crash or System.exit called?
> 2020-06-15T03:24:28.4696762Z [ERROR] Command was /bin/sh -c cd 
> /__w/2/s/flink-connectors/flink-connector-kafka-0.11/target && 
> /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx2048m 
> -Dlog4j.configurationFile=log4j2-test.properties -Dmvn.forkNumber=2 
> -XX:-UseGCOverheadLimit -jar 
> /__w/2/s/flink-connectors/flink-connector-kafka-0.11/target/surefire/surefirebooter617700788970993266.jar
>  /__w/2/s/flink-connectors/flink-connector-kafka-0.11/target/surefire 
> 2020-06-15T03-07-01_381-jvmRun2 surefire2676050245109796726tmp 
> surefire_602825791089523551074tmp
> 2020-06-15T03:24:28.4698486Z [ERROR] Error occurred in starting fork, check 
> output in log
> 2020-06-15T03:24:28.4699066Z [ERROR] Process Exit Code: 239
> 2020-06-15T03:24:28.4699458Z [ERROR] Crashed tests:
> 2020-06-15T03:24:28.4699960Z [ERROR] 
> org.apache.flink.streaming.connectors.kafka.Kafka011ProducerExactlyOnceITCase
> 2020-06-15T03:24:28.4700849Z [ERROR] 
> org.apache.maven.surefire.booter.SurefireBooterForkException: 
> ExecutionException The forked VM terminated without properly saying goodbye. 
> VM crash or System.exit called?
> 2020-06-15T03:24:28.4703760Z [ERROR] Command was /bin/sh -c cd 
> /__w/2/s/flink-connectors/flink-connector-kafka-0.11/target && 
> /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx2048m 
> -Dlog4j.configurationFile=log4j2-test.properties -Dmvn.forkNumber=2 
> -XX:-UseGCOverheadLimit -jar 
> /__w/2/s/flink-connectors/flink-connector-kafka-0.11/target/surefire/surefirebooter617700788970993266.jar
>  /__w/2/s/flink-connectors/flink-connector-kafka-0.11/target/surefire 
> 2020-06-15T03-07-01_381-jvmRun2 surefire2676050245109796726tmp 
> surefire_602825791089523551074tmp
> 2020-06-15T03:24:28.4705501Z [ERROR] Error occurred in starting fork, check 
> output in log
> 2020-06-15T03:24:28.4706297Z [ERROR] Process Exit Code: 239
> 2020-06-15T03:24:28.4706592Z [ERROR] Crashed tests:
> 2020-06-15T03:24:28.4706895Z [ERROR] 
> org.apache.flink.streaming.connectors.kafka.Kafka011ProducerExactlyOnceITCase
> 2020-06-15T03:24:28.4707386Z [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.awaitResultsDone(ForkStarter.java:510)
> 2020-06-15T03:24:28.4708053Z [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.runSuitesForkPerTestSet(ForkStarter.java:457)
> 2020-06-15T03:24:28.4708908Z [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:298)
> 2020-06-15T03:24:28.4709720Z [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:246)
> 2020-06-15T03:24:28.4710497Z [ERROR] at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeProvider(AbstractSurefireMojo.java:1183)
> 2020-06-15T03:24:28.4711448Z [ERROR] at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeAfterPreconditionsChecked(AbstractSurefireMojo.java:1011)
> 2020-06-15T03:24:28.4712395Z [ERROR] at 

[jira] [Issue Comment Deleted] (FLINK-18236) flink elasticsearch IT test ElasticsearchSinkTestBase.runElasticsearchSink* verify it not right

2020-06-15 Thread jackylau (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jackylau updated FLINK-18236:
-
Comment: was deleted

(was: hi [~dwysakowicz] , could you please also review this issue and PR?)

> flink elasticsearch IT test ElasticsearchSinkTestBase.runElasticsearchSink* 
> verify it not right
> ---
>
> Key: FLINK-18236
> URL: https://issues.apache.org/jira/browse/FLINK-18236
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.10.0
>Reporter: jackylau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0, 1.12.0
>
>
> we can see there are different tests:
> runElasticsearchSinkTest
> runElasticsearchSinkCborTest
> runElasticsearchSinkSmileTest
> runElasticSearchSinkTest
> etc.
> All of them use SourceSinkDataTestKit.verifyProducedSinkData(client, index) to 
> check the correctness of the results, but they all write to the same index.
> That is to say, if the second test's sink fails to send anything, 
> verifyProducedSinkData still passes, because the data from the earlier test is 
> already in the index.
>  
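
A minimal sketch of one possible fix (hypothetical helper signature): give each 
test its own index so stale data from an earlier test cannot satisfy the 
verification.

{code:java}
// hypothetical sketch, not the actual test base class API
String index = "flink-es-sink-it-" + UUID.randomUUID();
runElasticsearchSinkTest(index);
SourceSinkDataTestKit.verifyProducedSinkData(client, index);
{code}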



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-18236) flink elasticsearch IT test ElasticsearchSinkTestBase.runElasticsearchSink* verify it not right

2020-06-15 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17136346#comment-17136346
 ] 

jackylau commented on FLINK-18236:
--

Hi [~jark], could you please spend some time reviewing this issue and PR?

> flink elasticsearch IT test ElasticsearchSinkTestBase.runElasticsearchSink* 
> verify it not right
> ---
>
> Key: FLINK-18236
> URL: https://issues.apache.org/jira/browse/FLINK-18236
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.10.0
>Reporter: jackylau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0, 1.12.0
>
>
> we can see there are different tests:
> runElasticsearchSinkTest
> runElasticsearchSinkCborTest
> runElasticsearchSinkSmileTest
> runElasticSearchSinkTest
> etc.
> All of them use SourceSinkDataTestKit.verifyProducedSinkData(client, index) to 
> check the correctness of the results, but they all write to the same index.
> That is to say, if the second test's sink fails to send anything, 
> verifyProducedSinkData still passes, because the data from the earlier test is 
> already in the index.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] becketqin commented on pull request #12510: [FLINK-18039]

2020-06-15 Thread GitBox


becketqin commented on pull request #12510:
URL: https://github.com/apache/flink/pull/12510#issuecomment-644554936


   Merged to master. 
   24201cc1b6c46c689abbff8635a01cec7b088983
   e01cab2f713802f6fb92f7472a258c07f2c18af7
   
   Cherry-picked to release-1.11
   05db48562c215417f4faa038689cdc18a4582479
   0326daf6e97b02dcb18e479ce88a50bc4b644295



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] becketqin closed pull request #12510: [FLINK-18039]

2020-06-15 Thread GitBox


becketqin closed pull request #12510:
URL: https://github.com/apache/flink/pull/12510


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] becketqin closed pull request #12507: [FLINK-18162][connector/common] Serialize the splits in the AddSplitsEvent

2020-06-15 Thread GitBox


becketqin closed pull request #12507:
URL: https://github.com/apache/flink/pull/12507


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] becketqin commented on pull request #12507: [FLINK-18162][connector/common] Serialize the splits in the AddSplitsEvent

2020-06-15 Thread GitBox


becketqin commented on pull request #12507:
URL: https://github.com/apache/flink/pull/12507#issuecomment-644555205


   Merged to master:
   f883f1190132f9dd6b37f1e5c8ae0e0d25f78333
   
   Cherry-picked to release-1.11:
   6cf60fda000d25baddfd7a2c3725eeecba77f886



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] sdlcwangsong commented on a change in pull request #12642: [FLINK-18282][docs-zh] retranslate the documentation home page

2020-06-15 Thread GitBox


sdlcwangsong commented on a change in pull request #12642:
URL: https://github.com/apache/flink/pull/12642#discussion_r440610076



##
File path: docs/index.zh.md
##
@@ -23,53 +23,71 @@ specific language governing permissions and limitations
 under the License.
 -->
 
+
+Apache Flink 是一个在无界和有界数据流上进行状态计算的框架和分布式处理引擎。 Flink 已经可以在所有常见的集群环境中运行,并以 
in-memory 的速度和任意的规模进行计算。
+
 
-本文档适用于 Apache Flink {{ site.version_title}} 版本。本页面最近更新于 {% build_time %}.
+
+
 
-Apache Flink 是一个分布式流批一体化的开源平台。Flink 的核心是一个提供数据分发、通信以及自动容错的流计算引擎。Flink 
在流计算之上构建批处理,并且原生的支持迭代计算,内存管理以及程序优化。
+### 试用 Flink
 
-## 初步印象
+如果您有兴趣使用 Flink, 可以试试我们的教程:
 
-* **代码练习**: 跟随分步指南通过 Flink API 实现简单应用或查询。
-  * [实现 DataStream 应用]({% link try-flink/datastream_api.zh.md %})
-  * [书写 Table API 查询]({% link try-flink/table_api.zh.md %})
+* [DataStream API 进行欺诈检测]({% link try-flink/datastream_api.md %})
+* [Table API 构建实时报表]({% link try-flink/table_api.md %})
+* [Python API 教程]({% link try-flink/python_table_api.md %})
+* [Flink 游乐场]({% link try-flink/flink-operations-playground.md %})
 
-* **Docker 游乐场**: 你只需花几分钟搭建 Flink 沙盒环境,就可以探索和使用 Flink 了。
-  * [运行与管理 Flink 流处理应用]({% link try-flink/flink-operations-playground.zh.md %})
+### 学习 Flink
 
-* **概念**: 学习 Flink 的基本概念能更好地理解文档。
-  * [有状态流处理](concepts/stateful-stream-processing.html)
-  * [实时流处理](concepts/timely-stream-processing.html)
-  * [Flink 架构](concepts/flink-architecture.html)
-  * [术语表](concepts/glossary.html)
+* [操作培训]({% link learn-flink/index.md %}) 包含了一系列的课程和练习,逐步介绍了,帮助你深入学习 Flink。
 
-## API 参考
+* [概念透析]({% link concepts/index.md %}) 介绍了在浏览参考文档之前你需要了解的 Flink 知识。
 
-API 参考列举并解释了 Flink API 的所有功能。
+### 获取 Flink 帮助
 
-* [DataStream API](dev/datastream_api.html)
-* [DataSet API](dev/batch/index.html)
-* [Table API & SQL](dev/table/index.html)
+如果你被困住了, 可以在 [社区](https://flink.apache.org/community.html)寻求帮助。 值得一提的是,Apache 
Flink 的用户邮件列表一直是最活跃的 Apache 项目之一,也是一个快速获得帮助的好途径。

Review comment:
   Yes, I made some modifications. @libenchao 





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] rmetzger commented on a change in pull request #12653: [FLINK-18294][e2e] Log java processes and disk usage

2020-06-15 Thread GitBox


rmetzger commented on a change in pull request #12653:
URL: https://github.com/apache/flink/pull/12653#discussion_r440610507



##
File path: flink-end-to-end-tests/test-scripts/test-runner-common.sh
##
@@ -105,6 +107,13 @@ function post_test_validation {
     fi
 }
 
+function log_environment_info {
+    echo "Jps"
+    jps

Review comment:
   Yes, pstree would be a lot of output. You could consider logging it to a 
file, but that's up to you.
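   
   For instance (a sketch; the log path is hypothetical):
   
       pstree -p > "$TEST_DATA_DIR/pstree.log" 2>&1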





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] lirui-apache opened a new pull request #12666: [FLINK-18313][doc] Hive dialect doc should mention that views created…

2020-06-15 Thread GitBox


lirui-apache opened a new pull request #12666:
URL: https://github.com/apache/flink/pull/12666


   … in Flink cannot be used in Hive
   
   
   
   ## What is the purpose of the change
   
   Update the Hive dialect doc to mention that views created in Flink cannot be 
queried in Hive.
   
   
   ## Brief change log
   
 - Update doc
   
   
   ## Verifying this change
   
   NA
   
   ## Does this pull request potentially affect one of the following parts:
   
   NA
   
   ## Documentation
   
   NA
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-18313) Hive dialect doc should mention that views created in Flink cannot be used in Hive

2020-06-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-18313:
---
Labels: pull-request-available  (was: )

> Hive dialect doc should mention that views created in Flink cannot be used in 
> Hive
> --
>
> Key: FLINK-18313
> URL: https://issues.apache.org/jira/browse/FLINK-18313
> Project: Flink
>  Issue Type: Task
>  Components: Connectors / Hive, Documentation
>Reporter: Rui Li
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-18119) Fix unlimitedly growing state for time range bounded over aggregate

2020-06-15 Thread Benchao Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17136348#comment-17136348
 ] 

Benchao Li commented on FLINK-18119:


[~hyeonseop] Which planner are you using? I've checked 
{{RowTimeRangeUnboundedPrecedingFunction}} in the blink planner in 1.11; it is 
implemented using processing timers, hence it does not have the problem you 
described.

> Fix unlimitedly growing state for time range bounded over aggregate
> ---
>
> Key: FLINK-18119
> URL: https://issues.apache.org/jira/browse/FLINK-18119
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Runtime
>Affects Versions: 1.10.1
>Reporter: Hyeonseop Lee
>Priority: Major
>
> For time range bounded over aggregation in streaming query, like below,
> {code:java}
> table
>   .window(Over.partitionBy 'a orderBy 'rowtime preceding 1.hour as 'w)
>   .groupBy('w)
>   .select('a, aggregateFunction('b))
> {code}
> the operator must hold incoming records over the preceding time range in the 
> state, but older records are no longer required and can be cleaned up.
> The current implementation cleans old records up only when newer records come 
> in, because that is how the operator knows enough time has passed. However, 
> the cleanup never happens unless a new record with the same key arrives, which 
> can leave state that is never cleaned up and leads to unbounded state growth, 
> especially when the keyspace mutates over time.
> Since an aggregate over a bounded preceding time interval doesn't need old 
> records by its nature, we can improve this by adding a timer that notifies the 
> operator to clean up old records, resulting in no change in query results and 
> no severe performance degradation.
> This is a distinct feature from state retention: state retention is to forget 
> some states that are expected to be less important to reduce state memory, so 
> it possibly changes query results. Enabling and disabling state retention 
> both make sense with this change.
> This issue applies to both row time range bound and proc time range bound. 
> That is, we are going to have changes in both 
> RowTimeRangeBoundedPrecedingFunction and 
> ProcTimeRangeBoundedPrecedingFunction in flink-table-runtime-blink. I already 
> have a running-in-production version with this change and would be glad to 
> contribute.
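
A minimal sketch of the proposed timer-based cleanup (hypothetical class and 
names; only the timer mechanics are shown):

{code:java}
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.types.Row;
import org.apache.flink.util.Collector;

public class RangeBoundedOverCleanupSketch extends KeyedProcessFunction<String, Row, Row> {
	private static final long PRECEDING_MILLIS = 60 * 60 * 1000L; // 1 hour bound

	@Override
	public void processElement(Row row, Context ctx, Collector<Row> out) throws Exception {
		// ... aggregate and buffer the row in keyed state, indexed by its timestamp ...
		// register a timer for the moment this row falls out of the preceding range,
		// so cleanup fires even if no further record arrives for this key
		ctx.timerService().registerEventTimeTimer(ctx.timestamp() + PRECEDING_MILLIS + 1);
	}

	@Override
	public void onTimer(long timestamp, OnTimerContext ctx, Collector<Row> out) throws Exception {
		// ... drop buffered rows with timestamps at or before (timestamp - PRECEDING_MILLIS - 1) ...
	}
}
{code}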



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on pull request #12666: [FLINK-18313][doc] Hive dialect doc should mention that views created…

2020-06-15 Thread GitBox


flinkbot commented on pull request #12666:
URL: https://github.com/apache/flink/pull/12666#issuecomment-644558781


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 30d17d142c22ddd8e0e52951969486ee4132d40b (Tue Jun 16 
06:27:05 UTC 2020)
   
   **Warnings:**
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-18313).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] leonardBang commented on pull request #12369: [FLINK-17678][Connectors/HBase]Support fink-sql-connector-hbase

2020-06-15 Thread GitBox


leonardBang commented on pull request #12369:
URL: https://github.com/apache/flink/pull/12369#issuecomment-644561250


   > @wuchong Sorry, due to personal health reasons, I don't have enough time 
to finish this work recently. You can assign others to finish it. I'll get 
involved in community work again once my back pain recovers.
   
   Sorry to hear that, please take care of yourself first, @DashShen. 
   I'd like to take over the remaining work, and thanks for the great effort on 
the work so far.
   Hope you recover soon.
   Best regards.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12656: [FLINK-17666][table-planner-blink] Insert into partitioned table can …

2020-06-15 Thread GitBox


flinkbot edited a comment on pull request #12656:
URL: https://github.com/apache/flink/pull/12656#issuecomment-644099955


   
   ## CI report:
   
   * 35454e38388a139ba65943e1c876cbcfb9d9e87c Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3548)
 
   * 8cec6fea4e3f8fe1edab5aab8484b8d315feae11 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3557)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12642: [FLINK-18282][docs-zh] retranslate the documentation home page

2020-06-15 Thread GitBox


flinkbot edited a comment on pull request #12642:
URL: https://github.com/apache/flink/pull/12642#issuecomment-643624564


   
   ## CI report:
   
   * d13e1778d93d1aa4f365057bda967ed708555962 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3486)
 
   * 2885aeb9578583a92bc995535af537f927662fbc Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3556)
 
   * 1e96d186e59d0c0c5ae2b34f1f2e330d8407fc8d UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12661: [FLINK-18299][Formats(Json)]Add option in json format to parse timestamp in different standard

2020-06-15 Thread GitBox


flinkbot edited a comment on pull request #12661:
URL: https://github.com/apache/flink/pull/12661#issuecomment-644211738


   
   ## CI report:
   
   * c1314ccf8e399484f78f3cea57d0229cb6a5d79b Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3551)
 
   * 9994520727018f9f9583ce9b91d221c69346f0d2 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #12666: [FLINK-18313][doc] Hive dialect doc should mention that views created…

2020-06-15 Thread GitBox


flinkbot commented on pull request #12666:
URL: https://github.com/apache/flink/pull/12666#issuecomment-644563760


   
   ## CI report:
   
   * 30d17d142c22ddd8e0e52951969486ee4132d40b UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-18288) WEB UI failure in Flink1.12

2020-06-15 Thread appleyuchi (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17136356#comment-17136356
 ] 

appleyuchi commented on FLINK-18288:


Thanks for your replies.

Some supplementary notes:

*v4.11.0 is written in package-lock.json by the official developers,*

*so it's not affected by my npm/nodejs.*

That said, I'm not sure about *the sub-version linux-x64-72_binding* of node-sass.

#---

Regarding points 2 and 3 in your replies:

*how can you try this on Ubuntu 19.10?*

*it will cause a "core dump" if you don't change the npm and node.js versions set by 
the official developers*

> WEB UI failure in Flink1.12
> ---
>
> Key: FLINK-18288
> URL: https://issues.apache.org/jira/browse/FLINK-18288
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.12.0
>Reporter: appleyuchi
>Priority: Major
>
>  
>  
> ①build command:
> *mvn clean install -T 2C  -DskipTests -Dskip.npm -Dmaven.compile.fork=true*
>  
> ②use flink-conf.yaml from 1.10.1 in 1.12
> masters:
> Desktop:8082
>  
> slaves:
> Desktop
> Laptop
> ③$FLINK_HOME/bin/start-cluster.sh
>  
>  
> ④open browser in:
> Desktop:8082
> {"errors":["Unable to load requested file /index.html."]}
>  
>  
>  
>  
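
For reference on the file name in dispute above: a minimal sketch, assuming 
node-sass's commonly documented binding naming scheme 
"{platform}-{arch}-{ABI}_binding.node". In Node, the three parts come from 
process.platform, process.arch, and process.versions.modules (the ABI number, 
72 for Node 12.x), so the v4.11.0 prefix is pinned by package-lock.json while 
the ABI suffix follows the locally installed Node runtime. The concrete values 
below are illustrative assumptions.

{code:java}
// Java rendering of the naming logic (the real logic lives in
// node-sass's JavaScript); values are illustrative assumptions.
public class NodeSassBindingName {
    public static void main(String[] args) {
        String platform = "linux"; // process.platform
        String arch = "x64";       // process.arch
        int abi = 72;              // process.versions.modules for Node 12.x
        System.out.println(platform + "-" + arch + "-" + abi + "_binding.node");
        // prints: linux-x64-72_binding.node
    }
}
{code}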



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-18311) StreamingKafkaITCase stalls indefinitely

2020-06-15 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-18311:
---
Priority: Blocker  (was: Critical)

> StreamingKafkaITCase stalls indefinitely
> 
>
> Key: FLINK-18311
> URL: https://issues.apache.org/jira/browse/FLINK-18311
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Tests
>Affects Versions: 1.11.0, 1.12.0
>Reporter: Dian Fu
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.11.0
>
>
> CI: 
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3537&view=logs&s=ae4f8708-9994-57d3-c2d7-b892156e7812&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee]
> {code}
> 020-06-15T21:01:59.0792207Z [INFO] 
> org.apache.flink:flink-sql-connector-kafka-0.10_2.11:1.11-SNAPSHOT:jar 
> already exists in 
> /home/vsts/work/1/s/flink-end-to-end-tests/flink-end-to-end-tests-common-kafka/target/dependencies
> 2020-06-15T21:01:59.0793580Z [INFO] 
> org.apache.flink:flink-sql-connector-kafka-0.11_2.11:1.11-SNAPSHOT:jar 
> already exists in 
> /home/vsts/work/1/s/flink-end-to-end-tests/flink-end-to-end-tests-common-kafka/target/dependencies
> 2020-06-15T21:01:59.0794931Z [INFO] 
> org.apache.flink:flink-sql-connector-kafka_2.11:1.11-SNAPSHOT:jar already 
> exists in 
> /home/vsts/work/1/s/flink-end-to-end-tests/flink-end-to-end-tests-common-kafka/target/dependencies
> 2020-06-15T21:01:59.0795686Z [INFO] 
> 2020-06-15T21:01:59.0796403Z [INFO] --- maven-surefire-plugin:2.22.1:test 
> (end-to-end-tests) @ flink-end-to-end-tests-common-kafka ---
> 2020-06-15T21:01:59.0869911Z [INFO] 
> 2020-06-15T21:01:59.0871981Z [INFO] 
> ---
> 2020-06-15T21:01:59.0874203Z [INFO]  T E S T S
> 2020-06-15T21:01:59.0875086Z [INFO] 
> ---
> 2020-06-15T21:02:00.0134000Z [INFO] Running 
> org.apache.flink.tests.util.kafka.StreamingKafkaITCase
> 2020-06-15T21:45:33.4889677Z ##[error]The operation was canceled.
> 2020-06-15T21:45:33.4902658Z ##[section]Finishing: Run e2e tests
> 2020-06-15T21:45:33.5058601Z ##[section]Starting: Cache Maven local repo
> 2020-06-15T21:45:33.5164621Z 
> ==
> 2020-06-15T21:45:33.5164972Z Task : Cache
> 2020-06-15T21:45:33.5165250Z Description  : Cache files between runs
> 2020-06-15T21:45:33.5165497Z Version  : 2.0.1
> 2020-06-15T21:45:33.5165769Z Author   : Microsoft Corporation
> 2020-06-15T21:45:33.5166079Z Help : 
> https://aka.ms/pipeline-caching-docs
> 2020-06-15T21:45:33.5166442Z 
> ==
> 2020-06-15T21:45:34.0475096Z ##[section]Finishing: Cache Maven local repo
> 2020-06-15T21:45:34.0502436Z ##[section]Starting: Checkout 
> flink-ci/flink-mirror@release-1.11 to s
> 2020-06-15T21:45:34.0506976Z 
> ==
> 2020-06-15T21:45:34.0507297Z Task : Get sources
> 2020-06-15T21:45:34.0507642Z Description  : Get sources from a repository. 
> Supports Git, TfsVC, and SVN repositories.
> 2020-06-15T21:45:34.0507965Z Version  : 1.0.0
> 2020-06-15T21:45:34.0508198Z Author   : Microsoft
> 2020-06-15T21:45:34.0508559Z Help : [More 
> Information](https://go.microsoft.com/fwlink/?LinkId=798199)
> 2020-06-15T21:45:34.0508934Z 
> ==
> 2020-06-15T21:45:34.3924966Z Cleaning any cached credential from repository: 
> flink-ci/flink-mirror (GitHub)
> 2020-06-15T21:45:34.3990430Z ##[section]Finishing: Checkout 
> flink-ci/flink-mirror@release-1.11 to s
> 2020-06-15T21:45:34.4049857Z ##[section]Starting: Finalize Job
> 2020-06-15T21:45:34.4086754Z Cleaning up task key
> 2020-06-15T21:45:34.4087951Z Start cleaning up orphan processes.
> 2020-06-15T21:45:34.4481307Z Terminate orphan process: pid (11772) (java)
> 2020-06-15T21:45:34.4548480Z Terminate orphan process: pid (12132) (java)
> 2020-06-15T21:45:34.4632331Z Terminate orphan process: pid (30726) (bash)
> 2020-06-15T21:45:34.4660351Z Terminate orphan process: pid (30728) (bash)
> 2020-06-15T21:45:34.4710124Z Terminate orphan process: pid (68958) (java)
> 2020-06-15T21:45:34.4751577Z Terminate orphan process: pid (119102) (java)
> 2020-06-15T21:45:34.4800161Z Terminate orphan process: pid (129546) (sh)
> 2020-06-15T21:45:34.4830588Z Terminate orphan process: pid (129548) (java)
> 2020-06-15T21:45:34.4833955Z ##[section]Finishing: Finalize Job
> 2020-06-15T21:45:34.4877321Z ##[section]Finishing: e2e_ci
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] rmetzger commented on pull request #12589: [FLINK-17327] Fix Kafka Producer Resource Leaks

2020-06-15 Thread GitBox


rmetzger commented on pull request #12589:
URL: https://github.com/apache/flink/pull/12589#issuecomment-644564287


   Note: The CI build for this PR was not passing.
   In the Azure UI, you can "Toggle Timestamps" next to the "View Raw Log" 
button. You'll then see that the test stalls for over 20 minutes until the VM 
gets killed:
   ```
   2020-06-15T12:56:54.0056633Z [INFO] Running 
org.apache.flink.tests.util.kafka.StreamingKafkaITCase
   2020-06-15T13:23:53.9144859Z ##[error]The operation was canceled.
   ```



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Comment Edited] (FLINK-18288) WEB UI failure in Flink1.12

2020-06-15 Thread appleyuchi (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17136356#comment-17136356
 ] 

appleyuchi edited comment on FLINK-18288 at 6/16/20, 6:40 AM:
--

Thanks for your replies.

Some supplementary notes:

*v4.11.0 is written in package-lock.json by the official developers,*

*so it's not affected by my npm/nodejs.*

That said, I'm not sure what determines *the sub-version number 
"linux-x64-72_binding"* of node-sass.

#---

Regarding points 2 and 3 in your replies:

*how can you try this on Ubuntu 19.10?*

*it will cause a "core dump" if you don't change the npm and node.js versions set by 
the official developers*


was (Author: appleyuchi):
Thanks for your replies.

Some supplementary notes:

*v4.11.0 is written in package-lock.json by the official developers,*

*so it's not affected by my npm/nodejs.*

That said, I'm not sure about *the sub-version linux-x64-72_binding* of node-sass.

#---

Regarding points 2 and 3 in your replies:

*how can you try this on Ubuntu 19.10?*

*it will cause a "core dump" if you don't change the npm and node.js versions set by 
the official developers*

> WEB UI failure in Flink1.12
> ---
>
> Key: FLINK-18288
> URL: https://issues.apache.org/jira/browse/FLINK-18288
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.12.0
>Reporter: appleyuchi
>Priority: Major
>
>  
>  
> ①build command:
> *mvn clean install -T 2C  -DskipTests -Dskip.npm -Dmaven.compile.fork=true*
>  
> ②use flink-conf.yaml from 1.10.1 in 1.12
> masters:
> Desktop:8082
>  
> slaves:
> Desktop
> Laptop
> ③$FLINK_HOME/bin/start-cluster.sh
>  
>  
> ④open browser in:
> Desktop:8082
> {"errors":["Unable to load requested file /index.html."]}
>  
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-18311) StreamingKafkaITCase stalls indefinitely

2020-06-15 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17136360#comment-17136360
 ] 

Robert Metzger commented on FLINK-18311:


All the builds are failing with this. Upgrading to Blocker.

> StreamingKafkaITCase stalls indefinitely
> 
>
> Key: FLINK-18311
> URL: https://issues.apache.org/jira/browse/FLINK-18311
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Tests
>Affects Versions: 1.11.0, 1.12.0
>Reporter: Dian Fu
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.11.0
>
>
> CI: 
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3537&view=logs&s=ae4f8708-9994-57d3-c2d7-b892156e7812&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee]
> {code}
> 020-06-15T21:01:59.0792207Z [INFO] 
> org.apache.flink:flink-sql-connector-kafka-0.10_2.11:1.11-SNAPSHOT:jar 
> already exists in 
> /home/vsts/work/1/s/flink-end-to-end-tests/flink-end-to-end-tests-common-kafka/target/dependencies
> 2020-06-15T21:01:59.0793580Z [INFO] 
> org.apache.flink:flink-sql-connector-kafka-0.11_2.11:1.11-SNAPSHOT:jar 
> already exists in 
> /home/vsts/work/1/s/flink-end-to-end-tests/flink-end-to-end-tests-common-kafka/target/dependencies
> 2020-06-15T21:01:59.0794931Z [INFO] 
> org.apache.flink:flink-sql-connector-kafka_2.11:1.11-SNAPSHOT:jar already 
> exists in 
> /home/vsts/work/1/s/flink-end-to-end-tests/flink-end-to-end-tests-common-kafka/target/dependencies
> 2020-06-15T21:01:59.0795686Z [INFO] 
> 2020-06-15T21:01:59.0796403Z [INFO] --- maven-surefire-plugin:2.22.1:test 
> (end-to-end-tests) @ flink-end-to-end-tests-common-kafka ---
> 2020-06-15T21:01:59.0869911Z [INFO] 
> 2020-06-15T21:01:59.0871981Z [INFO] 
> ---
> 2020-06-15T21:01:59.0874203Z [INFO]  T E S T S
> 2020-06-15T21:01:59.0875086Z [INFO] 
> ---
> 2020-06-15T21:02:00.0134000Z [INFO] Running 
> org.apache.flink.tests.util.kafka.StreamingKafkaITCase
> 2020-06-15T21:45:33.4889677Z ##[error]The operation was canceled.
> 2020-06-15T21:45:33.4902658Z ##[section]Finishing: Run e2e tests
> 2020-06-15T21:45:33.5058601Z ##[section]Starting: Cache Maven local repo
> 2020-06-15T21:45:33.5164621Z 
> ==
> 2020-06-15T21:45:33.5164972Z Task : Cache
> 2020-06-15T21:45:33.5165250Z Description  : Cache files between runs
> 2020-06-15T21:45:33.5165497Z Version  : 2.0.1
> 2020-06-15T21:45:33.5165769Z Author   : Microsoft Corporation
> 2020-06-15T21:45:33.5166079Z Help : 
> https://aka.ms/pipeline-caching-docs
> 2020-06-15T21:45:33.5166442Z 
> ==
> 2020-06-15T21:45:34.0475096Z ##[section]Finishing: Cache Maven local repo
> 2020-06-15T21:45:34.0502436Z ##[section]Starting: Checkout 
> flink-ci/flink-mirror@release-1.11 to s
> 2020-06-15T21:45:34.0506976Z 
> ==
> 2020-06-15T21:45:34.0507297Z Task : Get sources
> 2020-06-15T21:45:34.0507642Z Description  : Get sources from a repository. 
> Supports Git, TfsVC, and SVN repositories.
> 2020-06-15T21:45:34.0507965Z Version  : 1.0.0
> 2020-06-15T21:45:34.0508198Z Author   : Microsoft
> 2020-06-15T21:45:34.0508559Z Help : [More 
> Information](https://go.microsoft.com/fwlink/?LinkId=798199)
> 2020-06-15T21:45:34.0508934Z 
> ==
> 2020-06-15T21:45:34.3924966Z Cleaning any cached credential from repository: 
> flink-ci/flink-mirror (GitHub)
> 2020-06-15T21:45:34.3990430Z ##[section]Finishing: Checkout 
> flink-ci/flink-mirror@release-1.11 to s
> 2020-06-15T21:45:34.4049857Z ##[section]Starting: Finalize Job
> 2020-06-15T21:45:34.4086754Z Cleaning up task key
> 2020-06-15T21:45:34.4087951Z Start cleaning up orphan processes.
> 2020-06-15T21:45:34.4481307Z Terminate orphan process: pid (11772) (java)
> 2020-06-15T21:45:34.4548480Z Terminate orphan process: pid (12132) (java)
> 2020-06-15T21:45:34.4632331Z Terminate orphan process: pid (30726) (bash)
> 2020-06-15T21:45:34.4660351Z Terminate orphan process: pid (30728) (bash)
> 2020-06-15T21:45:34.4710124Z Terminate orphan process: pid (68958) (java)
> 2020-06-15T21:45:34.4751577Z Terminate orphan process: pid (119102) (java)
> 2020-06-15T21:45:34.4800161Z Terminate orphan process: pid (129546) (sh)
> 2020-06-15T21:45:34.4830588Z Terminate orphan process: pid (129548) (java)
> 2020-06-15T21:45:34.4833955Z ##[section]Finishing: Finalize Job
> 2020-06-15T21:45:34.4877321Z ##[section]Finishing: e2e_ci
> {code}



--
This message was sent by Atlas

[jira] [Comment Edited] (FLINK-18288) WEB UI failure in Flink1.12

2020-06-15 Thread appleyuchi (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17136356#comment-17136356
 ] 

appleyuchi edited comment on FLINK-18288 at 6/16/20, 6:43 AM:
--

Thanks for your replies.

Some supplementary notes:

*v4.11.0 is written in package-lock.json by the official developer,*

and *the sub-version number "linux-x64-72_binding"* of node-sass

is the newest node-sass binding in its history,

*so they're not affected by my npm/nodejs.*

#---

Regarding points 2 and 3 in your replies:

*how can you try this on Ubuntu 19.10?*

*it will cause a "core dump" if you don't change the npm and node.js versions set by 
the official developers*


was (Author: appleyuchi):
Thanks for your replies.

Some supplementary notes:

*v4.11.0 is written in package-lock.json by the official developers,*

*so it's not affected by my npm/nodejs.*

That said, I'm not sure what determines *the sub-version number 
"linux-x64-72_binding"* of node-sass.

#---

Regarding points 2 and 3 in your replies:

*how can you try this on Ubuntu 19.10?*

*it will cause a "core dump" if you don't change the npm and node.js versions set by 
the official developers*

> WEB UI failure in Flink1.12
> ---
>
> Key: FLINK-18288
> URL: https://issues.apache.org/jira/browse/FLINK-18288
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.12.0
>Reporter: appleyuchi
>Priority: Major
>
>  
>  
> ①build command:
> *mvn clean install -T 2C  -DskipTests -Dskip.npm -Dmaven.compile.fork=true*
>  
> ②use flink-conf.yaml from 1.10.1 in 1.12
> masters:
> Desktop:8082
>  
> slaves:
> Desktop
> Laptop
> ③$FLINK_HOME/bin/start-cluster.sh
>  
>  
> ④open browser in:
> Desktop:8082
> {"errors":["Unable to load requested file /index.html."]}
>  
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] dawidwys commented on pull request #12649: [FLINK-18286] Implement type inference for GET/FLATTEN

2020-06-15 Thread GitBox


dawidwys commented on pull request #12649:
URL: https://github.com/apache/flink/pull/12649#issuecomment-644565539


   Let me explain what the changes are about. They are not trying to fix the 
nullability of the record type in all operators, but in my opinion it actually 
improves the behaviour described in the thread you posted (thank you for it, it 
is actually a good read).
   
   First of all we do very much want to respect the original nullability, 
that's the main purpose of those changes and also why we have a special 
handling in `FlinkTypeFactory:348`. We have that because Calcite actually 
modifies nested columns of a `ROW` type, if the type is nullable.
   
   In Calcite, making a `ROW` type nullable also makes all of its nested 
columns nullable, e.g. a `ROW<INT NOT NULL>` effectively becomes a `ROW<INT>`; 
see 
`org.apache.calcite.rel.type.RelDataTypeFactoryImpl#createTypeWithNullability:326-338`.
 This behaviour is especially problematic for STRUCTURED types, which 
correspond to Java's POJOs. Let's take a look at an example:
   ```
   class Address {
 int id;
 String city;
   }
   ```
   Such a POJO we would like to map to an `Address(id INT NOT NULL, city 
STRING)`, where the `Address` type itself (the outer row) is nullable, but the 
nested `id` field is `NOT NULL`. This is not possible in Calcite by default. 
That's why we have the changes in 
`org.apache.calcite.rel.type.RelDataTypeFactoryImpl#createTypeWithNullability:326-338`.
   
   On the other hand, if we access a nested field of a nullable composite type, 
we do want to adjust the nullability, because accessing a field of a null row 
should obviously produce null values. If we do
   ```
   CREATE TABLE test (
 address Address
   );
   
   -- the type should be a nullable INT; if the outer row is null we cannot
   -- produce a NOT NULL id
   SELECT t.address.id FROM test t;
   ```
   
   BTW this behaviour is already present in `SqlDotOperator#inferReturnType`, 
but as far as I can tell that method is not used.
   
   To summarize it again: I do very much want to respect the nullability of the 
record attributes. That's the main goal of the changes. Calcite as of now does 
not respect the nullability of nested fields, which we had to fix in 
FLINK-16344. The changes here affect the return type of field accessors, not 
the record itself.
   
   I hope this gives a better overview of the changes. If you have any 
questions, let me know and I will try to elaborate more.
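   
   To make the intended mapping concrete: a minimal sketch using Flink's public 
`DataTypes` API (the `Address` class and its fields come from the example 
above; printing the type is only for illustration):
   ```
   import org.apache.flink.table.api.DataTypes;
   import org.apache.flink.table.types.DataType;
   
   public class AddressTypeSketch {
   
       public static class Address {
           public int id;
           public String city;
       }
   
       public static void main(String[] args) {
           // The outer structured type is nullable while the nested "id"
           // field stays NOT NULL; exactly the combination that Calcite's
           // default nullability handling would destroy.
           DataType address = DataTypes.STRUCTURED(
                   Address.class,
                   DataTypes.FIELD("id", DataTypes.INT().notNull()),
                   DataTypes.FIELD("city", DataTypes.STRING()));
           System.out.println(address);
       }
   }
   ```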



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] dawidwys edited a comment on pull request #12649: [FLINK-18286] Implement type inference for GET/FLATTEN

2020-06-15 Thread GitBox


dawidwys edited a comment on pull request #12649:
URL: https://github.com/apache/flink/pull/12649#issuecomment-644565539


   Let me explain what the changes are about. They are not trying to fix the 
nullability of the record type in all operators, but in my opinion it actually 
improves the behaviour described in the thread you posted (thank you for it, it 
is actually a good read).
   
   First of all we do very much want to respect the original nullability, 
that's the main purpose of those changes and also why we have a special 
handling in `FlinkTypeFactory:348`. We have that because Calcite actually 
modifies nested columns of a `ROW` type, if the type is nullable.
   
   In Calcite, making a `ROW` type nullable also makes all of its nested 
columns nullable, e.g. a `ROW<INT NOT NULL>` effectively becomes a `ROW<INT>`; 
see 
`org.apache.calcite.rel.type.RelDataTypeFactoryImpl#createTypeWithNullability:326-338`.
 This behaviour is especially problematic for STRUCTURED types, which 
correspond to Java's POJOs. Let's take a look at an example:
   ```
   class Address {
 int id;
 String city;
   }
   ```
   Such a POJO we would like to map to an `Address(id INT NOT NULL, city 
STRING)`, where the `Address` type itself (the outer row) is nullable, but the 
nested `id` field is `NOT NULL`. This is not possible in Calcite by default. 
That's why we have the changes in 
`org.apache.calcite.rel.type.RelDataTypeFactoryImpl#createTypeWithNullability:326-338`.
   
   On the other hand, if we access a nested field of a nullable composite type, 
we do want to adjust the nullability, because accessing a field of a null row 
should obviously produce null values. If we do
   ```
   CREATE TABLE test (
 address Address
   );
   
   -- the type should be a nullable INT; if the outer row is null we cannot
   -- produce a NOT NULL id
   SELECT t.address.id FROM test t;
   ```
   
   BTW this behaviour is already present in `SqlDotOperator#inferReturnType`, 
but as far as I can tell that method is not used.
   
   To summarize it again: I do very much want to respect the nullability of the 
record attributes. That's the main goal of the changes. Calcite as of now does 
not respect the nullability of nested fields, which we had to fix in 
FLINK-16344. The changes here affect the return type of field accessors, not 
the record itself.
   
   I hope this gives a better overview of the changes. If you have any 
questions, let me know and I will try to elaborate more.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Comment Edited] (FLINK-18288) WEB UI failure in Flink1.12

2020-06-15 Thread appleyuchi (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17136356#comment-17136356
 ] 

appleyuchi edited comment on FLINK-18288 at 6/16/20, 6:45 AM:
--

Thanks for your replies.

Some supplementary notes:

*v4.11.0 is written in package-lock.json by the official developer,*

and *the sub-version number "linux-x64-72_binding"* of node-sass

is the newest node-sass binding in its history,

*so they're not affected by my npm/nodejs.*

What technique do you use to pin *x64-72* to an older version?

#---

Regarding points 2 and 3 in your replies:

*how can you try this on Ubuntu 19.10?*

*it will cause a "core dump" if you don't change the npm and node.js versions set by 
the official developers*


was (Author: appleyuchi):
Thanks for your replies.

Some supplementary notes:

*v4.11.0 is written in package-lock.json by the official developer,*

and *the sub-version number "linux-x64-72_binding"* of node-sass

is the newest node-sass binding in its history,

*so they're not affected by my npm/nodejs.*

#---

Regarding points 2 and 3 in your replies:

*how can you try this on Ubuntu 19.10?*

*it will cause a "core dump" if you don't change the npm and node.js versions set by 
the official developers*

> WEB UI failure in Flink1.12
> ---
>
> Key: FLINK-18288
> URL: https://issues.apache.org/jira/browse/FLINK-18288
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.12.0
>Reporter: appleyuchi
>Priority: Major
>
>  
>  
> ①build command:
> *mvn clean install -T 2C  -DskipTests -Dskip.npm -Dmaven.compile.fork=true*
>  
> ②use flink-conf.yaml from 1.10.1 in 1.12
> masters:
> Desktop:8082
>  
> slaves:
> Desktop
> Laptop
> ③$FLINK_HOME/bin/start-cluster.sh
>  
>  
> ④open browser in:
> Desktop:8082
> {"errors":["Unable to load requested file /index.html."]}
>  
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-18119) Fix unlimitedly growing state for time range bounded over aggregate

2020-06-15 Thread Hyeonseop Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17136362#comment-17136362
 ] 

Hyeonseop Lee edited comment on FLINK-18119 at 6/16/20, 6:48 AM:
-

[~libenchao] {{RowTimeRange*Unbounded*PrecedingFunction}} is not the case here. 
{{RowTimeRange*Bounded*PrecedingFunction}} in the blink runtime 1.11 still has 
the issue.

It performs cleanup using a processing-time timer when proper {{minRetentionTime}} 
and {{maxRetentionTime}} are configured, but what I want to improve is to retract 
records that are no longer required even when state retention is not set 
(indefinite).

In my case, I first tried to set a non-zero {{minRetentionTime}} to enable 
cleanup by retention, but that was applied to the whole query and ended up 
producing a retract stream instead of an append stream. I understand setting 
state retention can be a workaround to prevent OOM, but I think functions must 
keep state as efficiently as possible.


was (Author: hyeonseop):
[~libenchao] {{RowTimeRange*Unbounded*PrecedingFunction}} is not the case here. 
{{RowTimeRange*Bounded*PrecedingFunction}} in the blink runtime 1.11 still has 
the issue.

It performs cleanup using a processing-time timer when proper {{minRetentionTime}} 
and {{maxRetentionTime}} are configured, but what I want to improve is to retract 
records that are no longer required even if state retention is not set 
(indefinite).

In my case, I first tried to set a non-zero {{minRetentionTime}} to enable 
cleanup by retention, but that was applied to the whole query and ended up 
producing a retract stream instead of an append stream. I understand setting 
state retention can be a workaround to prevent OOM, but I think functions must 
keep state as efficiently as possible.

> Fix unlimitedly growing state for time range bounded over aggregate
> ---
>
> Key: FLINK-18119
> URL: https://issues.apache.org/jira/browse/FLINK-18119
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Runtime
>Affects Versions: 1.10.1
>Reporter: Hyeonseop Lee
>Priority: Major
>
> For time range bounded over aggregation in streaming query, like below,
> {code:java}
> table
>   .window(Over.partitionBy 'a orderBy 'rowtime preceding 1.hour as 'w)
>   .groupBy('w)
>   .select('a, aggregateFunction('b))
> {code}
> the operator must hold incoming records over the preceding time range in the 
> state, but older records are no longer required and can be cleaned up.
> The current implementation cleans up old records only when newer records come 
> in, so the operator knows that enough time has passed. However, the cleanup 
> never happens unless a new record with the same key comes in; this leaves 
> state that may never be cleaned up, which leads to an unlimitedly growing 
> state, especially when the keyspace mutates over time.
> Since an aggregate over a bounded preceding time interval doesn't require old 
> records by its nature, we can improve this by adding a timer that notifies 
> the operator to clean up old records, with no change in query results and no 
> severe performance degradation.
> This is a distinct feature from state retention: state retention is to forget 
> some states that are expected to be less important to reduce state memory, so 
> it possibly changes query results. Enabling and disabling state retention 
> both make sense with this change.
> This issue applies to both row time range bound and proc time range bound. 
> That is, we are going to have changes in both 
> RowTimeRangeBoundedPrecedingFunction and 
> ProcTimeRangeBoundedPrecedingFunction in flink-table-runtime-blink. I already 
> have a running-in-production version with this change and would be glad to 
> contribute.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-18119) Fix unlimitedly growing state for time range bounded over aggregate

2020-06-15 Thread Hyeonseop Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17136362#comment-17136362
 ] 

Hyeonseop Lee commented on FLINK-18119:
---

[~libenchao] {{RowTimeRange*Unbounded*PrecedingFunction}} is not the case here. 
{{RowTimeRange*Bounded*PrecedingFunction}} in the blink runtime 1.11 still has 
the issue.

It performs cleanup using a processing-time timer when proper {{minRetentionTime}} 
and {{maxRetentionTime}} are configured, but what I want to improve is to retract 
records that are no longer required even if state retention is not set 
(indefinite).

In my case, I first tried to set a non-zero {{minRetentionTime}} to enable 
cleanup by retention, but that was applied to the whole query and ended up 
producing a retract stream instead of an append stream. I understand setting 
state retention can be a workaround to prevent OOM, but I think functions must 
keep state as efficiently as possible.
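
To make the proposed timer-based retraction concrete: a minimal, self-contained 
sketch of the idea as a keyed process function. This is *not* the actual 
RowTimeRangeBoundedPrecedingFunction change; the class name, state layout, and 
1-hour bound below are illustrative assumptions.

{code:java}
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

import org.apache.flink.api.common.state.MapState;
import org.apache.flink.api.common.state.MapStateDescriptor;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

// Buffers records by event timestamp and registers an event-time timer per
// record, so expired records are dropped even if no further record ever
// arrives for the key.
public class BoundedRangeCleanupSketch
        extends KeyedProcessFunction<String, String, String> {

    private static final long BOUND_MS = 60 * 60 * 1000L; // 1 hour bound

    private transient MapState<Long, List<String>> recordsByTime;

    @Override
    public void open(Configuration parameters) {
        recordsByTime = getRuntimeContext().getMapState(
                new MapStateDescriptor<>(
                        "recordsByTime", Types.LONG, Types.LIST(Types.STRING)));
    }

    @Override
    public void processElement(String record, Context ctx, Collector<String> out)
            throws Exception {
        long ts = ctx.timestamp();
        List<String> records = recordsByTime.get(ts);
        if (records == null) {
            records = new ArrayList<>();
        }
        records.add(record);
        recordsByTime.put(ts, records);
        // The extra cleanup timer proposed in this ticket: fire once this
        // record falls out of the preceding range.
        ctx.timerService().registerEventTimeTimer(ts + BOUND_MS + 1);
    }

    @Override
    public void onTimer(long timestamp, OnTimerContext ctx, Collector<String> out)
            throws Exception {
        // Drop every buffered record whose timestamp is now out of range.
        long expiredBefore = timestamp - BOUND_MS - 1;
        Iterator<Map.Entry<Long, List<String>>> it = recordsByTime.iterator();
        while (it.hasNext()) {
            if (it.next().getKey() <= expiredBefore) {
                it.remove();
            }
        }
    }
}
{code}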

> Fix unlimitedly growing state for time range bounded over aggregate
> ---
>
> Key: FLINK-18119
> URL: https://issues.apache.org/jira/browse/FLINK-18119
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Runtime
>Affects Versions: 1.10.1
>Reporter: Hyeonseop Lee
>Priority: Major
>
> For time range bounded over aggregation in streaming query, like below,
> {code:java}
> table
>   .window(Over.partitionBy 'a orderBy 'rowtime preceding 1.hour as 'w)
>   .groupBy('w)
>   .select('a, aggregateFunction('b))
> {code}
> the operator must hold incoming records over the preceding time range in the 
> state, but older records are no longer required and can be cleaned up.
> The current implementation cleans up old records only when newer records come 
> in, so the operator knows that enough time has passed. However, the cleanup 
> never happens unless a new record with the same key comes in; this leaves 
> state that may never be cleaned up, which leads to an unlimitedly growing 
> state, especially when the keyspace mutates over time.
> Since an aggregate over a bounded preceding time interval doesn't require old 
> records by its nature, we can improve this by adding a timer that notifies 
> the operator to clean up old records, with no change in query results and no 
> severe performance degradation.
> This is a distinct feature from state retention: state retention is to forget 
> some states that are expected to be less important to reduce state memory, so 
> it possibly changes query results. Enabling and disabling state retention 
> both make sense with this change.
> This issue applies to both row time range bound and proc time range bound. 
> That is, we are going to have changes in both 
> RowTimeRangeBoundedPrecedingFunction and 
> ProcTimeRangeBoundedPrecedingFunction in flink-table-runtime-blink. I already 
> have a running-in-production version with this change and would be glad to 
> contribute.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-18288) WEB UI failure in Flink1.12

2020-06-15 Thread appleyuchi (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17136356#comment-17136356
 ] 

appleyuchi edited comment on FLINK-18288 at 6/16/20, 6:49 AM:
--

Thanks for your replies.

Some supplementary notes:

*v4.11.0 is written in package-lock.json by the official developer,*

and *the sub-version number "linux-x64-72_binding"* of node-sass

is the newest node-sass binding in its history;

it's common sense for dependency-management software to download the newest 
dependency.

*So it's not dynamic, and they're not affected by my npm/nodejs.*

#---

Regarding points 2 and 3 in your replies:

*how can you try this on Ubuntu 19.10?*

*it will cause a "core dump" if you don't change the npm and node.js versions set by 
the official developers*


was (Author: appleyuchi):
Thanks for your replies.

Some supplementary notes:

*v4.11.0 is written in package-lock.json by the official developer,*

and *the sub-version number "linux-x64-72_binding"* of node-sass

is the newest node-sass binding in its history,

*so they're not affected by my npm/nodejs.*

What technique do you use to pin *x64-72* to an older version?

#---

Regarding points 2 and 3 in your replies:

*how can you try this on Ubuntu 19.10?*

*it will cause a "core dump" if you don't change the npm and node.js versions set by 
the official developers*

> WEB UI failure in Flink1.12
> ---
>
> Key: FLINK-18288
> URL: https://issues.apache.org/jira/browse/FLINK-18288
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.12.0
>Reporter: appleyuchi
>Priority: Major
>
>  
>  
> ①build command:
> *mvn clean install -T 2C  -DskipTests -Dskip.npm -Dmaven.compile.fork=true*
>  
> ②use flink-conf.yaml from 1.10.1 in 1.12
> masters:
> Desktop:8082
>  
> slaves:
> Desktop
> Laptop
> ③$FLINK_HOME/bin/start-cluster.sh
>  
>  
> ④open browser in:
> Desktop:8082
> {"errors":["Unable to load requested file /index.html."]}
>  
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-16795) End to end tests timeout on Azure

2020-06-15 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17136364#comment-17136364
 ] 

Robert Metzger commented on FLINK-16795:


I guess all these new failures are caused by FLINK-18291. 


> End to end tests timeout on Azure
> -
>
> Key: FLINK-16795
> URL: https://issues.apache.org/jira/browse/FLINK-16795
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines, Tests
>Affects Versions: 1.11.0
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Major
>  Labels: pull-request-available
> Attachments: image.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Example: 
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=6650&view=logs&j=08866332-78f7-59e4-4f7e-49a56faa3179
>  or 
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=6637&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5
> {code}##[error]The job running on agent Azure Pipelines 6 ran longer than the 
> maximum time of 200 minutes. For more information, see 
> https://go.microsoft.com/fwlink/?linkid=2077134
> {code}
> and {code}##[error]The operation was canceled.{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-18119) Fix unlimitedly growing state for time range bounded over aggregate

2020-06-15 Thread Hyeonseop Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hyeonseop Lee updated FLINK-18119:
--
Description: 
For time range bounded over aggregation in streaming query, like below,
{code:java}
table
  .window(Over.partitionBy 'a orderBy 'rowtime preceding 1.hour as 'w)
  .groupBy('w)
  .select('a, aggregateFunction('b))
{code}
the operator must hold incoming records over the preceding time range in the 
state, but older records are no longer required and can be cleaned up.

The current implementation retracts old records only when newer records come in, 
so the operator knows that enough time has passed. However, the retraction 
never happens unless a new record with the same key comes in; this leaves state 
that may never be released, which leads to an unlimitedly growing state, 
especially when the keyspace mutates over time.

Since an aggregate over a bounded preceding time interval doesn't require old 
records by its nature, we can improve this by adding a timer that notifies the 
operator to retract old records, with no change in query results and no severe 
performance degradation.

This is a distinct feature from state retention: state retention is to forget 
some states that are expected to be less important to reduce state memory, so 
it possibly changes query results. Enabling and disabling state retention both 
make sense with this change.

This issue applies to both row time range bound and proc time range bound. That 
is, we are going to have changes in both RowTimeRangeBoundedPrecedingFunction 
and ProcTimeRangeBoundedPrecedingFunction in flink-table-runtime-blink. I 
already have a running-in-production version with this change and would be glad 
to contribute.

  was:
For time range bounded over aggregation in streaming query, like below,
{code:java}
table
  .window(Over.partitionBy 'a orderBy 'rowtime preceding 1.hour as 'w)
  .groupBy('w)
  .select('a, aggregateFunction('b))
{code}
the operator must hold incoming records over the preceding time range in the 
state, but older records are no longer required and can be cleaned up.

The current implementation cleans up old records only when newer records come 
in, so the operator knows that enough time has passed. However, the cleanup 
never happens unless a new record with the same key comes in; this leaves state 
that may never be cleaned up, which leads to an unlimitedly growing state, 
especially when the keyspace mutates over time.

Since an aggregate over a bounded preceding time interval doesn't require old 
records by its nature, we can improve this by adding a timer that notifies the 
operator to clean up old records, with no change in query results and no severe 
performance degradation.

This is a distinct feature from state retention: state retention is to forget 
some states that are expected to be less important to reduce state memory, so 
it possibly changes query results. Enabling and disabling state retention both 
make sense with this change.

This issue applies to both row time range bound and proc time range bound. That 
is, we are going to have changes in both RowTimeRangeBoundedPrecedingFunction 
and ProcTimeRangeBoundedPrecedingFunction in flink-table-runtime-blink. I 
already have a running-in-production version with this change and would be glad 
to contribute.


> Fix unlimitedly growing state for time range bounded over aggregate
> ---
>
> Key: FLINK-18119
> URL: https://issues.apache.org/jira/browse/FLINK-18119
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Runtime
>Affects Versions: 1.10.1
>Reporter: Hyeonseop Lee
>Priority: Major
>
> For time range bounded over aggregation in streaming query, like below,
> {code:java}
> table
>   .window(Over.partitionBy 'a orderBy 'rowtime preceding 1.hour as 'w)
>   .groupBy('w)
>   .select('a, aggregateFunction('b))
> {code}
> the operator must hold incoming records over the preceding time range in the 
> state, but older records are no longer required and can be cleaned up.
> The current implementation retracts old records only when newer records come 
> in, so the operator knows that enough time has passed. However, the 
> retraction never happens unless a new record with the same key comes in; 
> this leaves state that may never be released, which leads to an unlimitedly 
> growing state, especially when the keyspace mutates over time.
> Since an aggregate over a bounded preceding time interval doesn't require old 
> records by its nature, we can improve this by adding a timer that notifies 
> the operator to retract old records, with no change in query results and no 
> severe performance degradation.
> This is a distinct feature from state retention: state re

[jira] [Comment Edited] (FLINK-18288) WEB UI failure in Flink1.12

2020-06-15 Thread appleyuchi (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17136356#comment-17136356
 ] 

appleyuchi edited comment on FLINK-18288 at 6/16/20, 6:50 AM:
--

Thanks for your replies.

Some supplementary notes:

*v4.11.0 is written in package-lock.json by the official developer,*

and *the sub-version number "linux-x64-72_binding"* of node-sass

is the newest node-sass binding in its history;

it's common sense for dependency-management software to download the newest 
dependency.

*So it's not dynamic, and they're not affected by my npm/nodejs.*

#---

Regarding points 2 and 3 in your replies:

*how can you try this on Ubuntu 19.10?*

*it will cause a "core dump" if you don't change the npm and node.js versions set by 
the official developers*


was (Author: appleyuchi):
Thanks for your replies.

Some supplementary notes:

*v4.11.0 is written in package-lock.json by the official developer,*

and *the sub-version number "linux-x64-72_binding"* of node-sass

is the newest node-sass binding in its history;

it's common sense for dependency-management software to download the newest 
dependency.

*So it's not dynamic, and they're not affected by my npm/nodejs.*

#---

Regarding points 2 and 3 in your replies:

*how can you try this on Ubuntu 19.10?*

*it will cause a "core dump" if you don't change the npm and node.js versions set by 
the official developers*

> WEB UI failure in Flink1.12
> ---
>
> Key: FLINK-18288
> URL: https://issues.apache.org/jira/browse/FLINK-18288
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.12.0
>Reporter: appleyuchi
>Priority: Major
>
>  
>  
> ①build command:
> *mvn clean install -T 2C  -DskipTests -Dskip.npm -Dmaven.compile.fork=true*
>  
> ②use flink-conf.yaml from 1.10.1 in 1.12
> masters:
> Desktop:8082
>  
> slaves:
> Desktop
> Laptop
> ③$FLINK_HOME/bin/start-cluster.sh
>  
>  
> ④open browser in:
> Desktop:8082
> {"errors":["Unable to load requested file /index.html."]}
>  
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-18316) Add a dynamic state registration primitive for Stateful Functions

2020-06-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-18316:
---
Labels: pull-request-available  (was: )

> Add a dynamic state registration primitive for Stateful Functions
> -
>
> Key: FLINK-18316
> URL: https://issues.apache.org/jira/browse/FLINK-18316
> Project: Flink
>  Issue Type: New Feature
>  Components: Stateful Functions
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: Tzu-Li (Gordon) Tai
>Priority: Major
>  Labels: pull-request-available
>
> Currently, using the {{PersistedValue}} / {{PersistedTable}} / 
> {{PersistedAppendingBuffer}} primitives, the user can only eagerly define 
> states prior to function instance activation using the {{Persisted}} field 
> annotation.
> We propose to add a primitive that allows them to register states dynamically 
> after activation (i.e. during runtime), along the lines of:
> {code}
> public class MyStateFn implements StatefulFunction {
> @Persisted
> private final PersistedStateProvider provider = new 
> PersistedStateProvider();
> public MyStateFn() {
> PersistedValue valueState = provider.getValue(...);
> }
> void invoke(Object input) {
> PersistedValue anotherValueState = provider.getValue(...);
> }
> }
> {code}
> Note how you can register state during instantiation (in the constructor) and 
> in the invoke method. Both registrations should be picked up by the runtime 
> and bound to Flink state.
> This will be useful for a few scenarios:
> - Could enable us to get rid of eager state spec definitions in the YAML 
> modules for remote functions in the future.
> - Will allow new state to be registered in remote functions, without shutting 
> down the StateFun cluster.
> - Moreover, this approach allows us to differentiate which functions have 
> dynamic state and which ones have only eager state, which might be handy in 
> the future in case there is a need to differentiate.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink-statefun] tzulitai opened a new pull request #125: [FLINK-18316] Add support for dynamic (lazy) state registration

2020-06-15 Thread GitBox


tzulitai opened a new pull request #125:
URL: https://github.com/apache/flink-statefun/pull/125


   ## User usage
   
   This PR introduces a new state primitive in Stateful Functions, called 
`PersistedStateRegistry`.
   Users can use a state registry to dynamically register state like so:
   ```
   public class MyStateFn implements StatefulFunction {
   
   @Persisted
   private final PersistedStateRegistry registry = new 
PersistedStateRegistry();
   
   public MyStateFn() {
   PersistedValue valueState = registry.getValue(...);
   }
   
   void invoke(Object input) {
   PersistedValue anotherValueState = registry.getValue(...);
   }
   }
   ```
   
   Notice how the registry may be used to register state during the 
instantiation of the function instance (in the constructor), as well as after 
instantiation.
   State registered either way will be bound to the system (i.e. Flink state 
backends).
   
   ## Binding the registry
   
   A `PersistedStateRegistry` contains a `StateBinder` (now an SDK interface) 
which binds state to the system. In local execution (e.g. in tests), this 
`StateBinder` is essentially a no-op binder; the registered states are used 
as-is with their non-fault-tolerant accessors.
   
   In actual execution, when a function is loaded and the runtime discovers a 
`PersistedStateRegistry` field, the registry object is bound with an actual 
fault-tolerant `StateBinder` that binds state to Flink state backends.
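   
   As a rough sketch of the two binding modes described above (only the names 
`PersistedStateRegistry` and `StateBinder` come from this PR; the `bind` method 
shape below is an assumed signature for illustration):
   ```
   public class StateBinderSketch {
   
       // Assumed shape of the SDK interface; the real signature may differ.
       interface StateBinder {
           void bind(Object persistedState);
       }
   
       // Local execution (e.g. tests): nothing to bind, the registered
       // state's non-fault-tolerant accessors are used as-is.
       static final class NoOpStateBinder implements StateBinder {
           @Override
           public void bind(Object persistedState) {}
       }
   
       // Actual execution: the runtime discovers the registry field and
       // swaps in a binder that wires each registered state object to
       // Flink state backends.
       static final class FlinkStateBinder implements StateBinder {
           @Override
           public void bind(Object persistedState) {
               // create Flink state handles and attach them (omitted)
           }
       }
   }
   ```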
   
   ## Verifying the change
   
   New unit tests have been added for:
   - Demonstrating the example usage of the new SDK classes.
   - Verifying that `PersistedStateRegistry` fields are picked up, and state 
registered using the registry is correctly bound.
   
   ## Brief changelog
   
   - 2693f95 Removes an unused class which conflicts with the support of 
dynamic state registration
   - 23c8df1 Cleans up the responsibilities of the original Flink state 
`StateBinder`.
   - 2eb4aab Introduces the new SDK class `PersistedStateRegistry`. This also 
introduces a new `StateBinder` interface in the SDK, which is used internally 
by the system.
   - d54a66e Refactors the original Flink state `StateBinder` to extend the SDK 
`StateBinder`.
   - c72e87c Makes `PersistedStateRegistry` discoverable when loading a 
function instance.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Comment Edited] (FLINK-18288) WEB UI failure in Flink1.12

2020-06-15 Thread appleyuchi (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17136356#comment-17136356
 ] 

appleyuchi edited comment on FLINK-18288 at 6/16/20, 6:53 AM:
--

Thanks for your replies.

Some supplementary notes:

*v4.11.0 is written in package-lock.json by the official developer,*

and *the sub-version number "linux-x64-72_binding"* of node-sass

is the newest node-sass binding in its history;

it's common sense for dependency-management software to download the newest 
dependency.

*So it's not dynamic, and they're not affected by my npm/nodejs.*

*Your description is off the point: it is about compatibility, not 
about availability.*

#---

Regarding points 2 and 3 in your replies:

*how can you try this on Ubuntu 19.10?*

*it will cause a "core dump" if you don't change the npm and node.js versions set by 
the official developers*


was (Author: appleyuchi):
Thanks for your replies.

Some supplementary notes:

*v4.11.0 is written in package-lock.json by the official developer,*

and *the sub-version number "linux-x64-72_binding"* of node-sass

is the newest node-sass binding in its history;

it's common sense for dependency-management software to download the newest 
dependency.

*So it's not dynamic, and they're not affected by my npm/nodejs.*

#---

Regarding points 2 and 3 in your replies:

*how can you try this on Ubuntu 19.10?*

*it will cause a "core dump" if you don't change the npm and node.js versions set by 
the official developers*

> WEB UI failure in Flink1.12
> ---
>
> Key: FLINK-18288
> URL: https://issues.apache.org/jira/browse/FLINK-18288
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.12.0
>Reporter: appleyuchi
>Priority: Major
>
>  
>  
> ①build command:
> *mvn clean install -T 2C  -DskipTests -Dskip.npm -Dmaven.compile.fork=true*
>  
> ②use flink-conf.yaml from 1.10.1 in 1.12
> masters:
> Desktop:8082
>  
> slaves:
> Desktop
> Laptop
> ③$FLINK_HOME/bin/start-cluster.sh
>  
>  
> ④open browser in:
> Desktop:8082
> {"errors":["Unable to load requested file /index.html."]}
>  
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-16795) End to end tests timeout on Azure

2020-06-15 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17136367#comment-17136367
 ] 

Robert Metzger commented on FLINK-16795:


[~becket_qin] thanks a lot for posting the updated link here. From my analysis 
so far, I assume this has been caused by slow download speeds from the Azure VM.
Our current CI setup has the download logs disabled. If we see such issues more 
frequently, I'll enable the logs again to get a glimpse of what's going on.

> End to end tests timeout on Azure
> -
>
> Key: FLINK-16795
> URL: https://issues.apache.org/jira/browse/FLINK-16795
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines, Tests
>Affects Versions: 1.11.0
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Major
>  Labels: pull-request-available
> Attachments: image.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Example: 
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=6650&view=logs&j=08866332-78f7-59e4-4f7e-49a56faa3179
>  or 
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=6637&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5
> {code}##[error]The job running on agent Azure Pipelines 6 ran longer than the 
> maximum time of 200 minutes. For more information, see 
> https://go.microsoft.com/fwlink/?linkid=2077134
> {code}
> and {code}##[error]The operation was canceled.{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-18288) WEB UI failure in Flink1.12

2020-06-15 Thread appleyuchi (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17136356#comment-17136356
 ] 

appleyuchi edited comment on FLINK-18288 at 6/16/20, 6:54 AM:
--

Thanks for your replies.

Some supplementary notes:

*v4.11.0 is written in package-lock.json by the official developer,*

and *the sub-version number "linux-x64-72_binding"* of node-sass

is the newest node-sass binding in its history;

it's common sense for dependency-management software to download the newest 
dependency.

*So it's not dynamic, and they're not affected by my npm/nodejs.*

*Your description is off the point:*

*it is about package compatibility,*

*not about package availability (the version is written by the official developer).*

#---

Regarding points 2 and 3 in your replies:

*how can you try this on Ubuntu 19.10?*

*it will cause a "core dump" if you don't change the npm and node.js versions set by 
the official developers*


was (Author: appleyuchi):
Thanks for your replies.

Some supplementary notes:

*v4.11.0 is written in package-lock.json by the official developer,*

and *the sub-version number "linux-x64-72_binding"* of node-sass

is the newest node-sass binding in its history;

it's common sense for dependency-management software to download the newest 
dependency.

*So it's not dynamic, and they're not affected by my npm/nodejs.*

*Your description is off the point: it is about compatibility, not 
about availability.*

#---

Regarding points 2 and 3 in your replies:

*how can you try this on Ubuntu 19.10?*

*it will cause a "core dump" if you don't change the npm and node.js versions set by 
the official developers*

> WEB UI failure in Flink1.12
> ---
>
> Key: FLINK-18288
> URL: https://issues.apache.org/jira/browse/FLINK-18288
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.12.0
>Reporter: appleyuchi
>Priority: Major
>
>  
>  
> ①build command:
> *mvn clean install -T 2C  -DskipTests -Dskip.npm -Dmaven.compile.fork=true*
>  
> ②use flink-conf.yaml from 1.10.1 in 1.12
> masters:
> Desktop:8082
>  
> slaves:
> Desktop
> Laptop
> ③$FLINK_HOME/bin/start-cluster.sh
>  
>  
> ④open browser in:
> Desktop:8082
> {"errors":["Unable to load requested file /index.html."]}
>  
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-17977) Check log sanity

2020-06-15 Thread Till Rohrmann (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17136369#comment-17136369
 ] 

Till Rohrmann commented on FLINK-17977:
---

Let's create a follow-up ticket for the table module and then close this ticket.

> Check log sanity
> 
>
> Key: FLINK-17977
> URL: https://issues.apache.org/jira/browse/FLINK-17977
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.11.0
>Reporter: Till Rohrmann
>Assignee: Chesnay Schepler
>Priority: Critical
>  Labels: pull-request-available, release-testing
> Fix For: 1.11.0
>
>
> Run a normal Flink workload (e.g. job with fixed number of failures on 
> session cluster) and check that the produced Flink logs make sense and don't 
> contain confusing statements.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-15661) JobManagerHAProcessFailureRecoveryITCase.testDispatcherProcessFailure failed because of Could not find Flink job

2020-06-15 Thread Till Rohrmann (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17136370#comment-17136370
 ] 

Till Rohrmann commented on FLINK-15661:
---

Given these observations, does it make sense to even use the Alibaba hardware 
to run our tests?

> JobManagerHAProcessFailureRecoveryITCase.testDispatcherProcessFailure failed 
> because of Could not find Flink job 
> -
>
> Key: FLINK-15661
> URL: https://issues.apache.org/jira/browse/FLINK-15661
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination, Tests
>Affects Versions: 1.11.0
>Reporter: Congxian Qiu(klion26)
>Priority: Critical
>  Labels: test-stability
>
> 2020-01-19T06:25:02.3856954Z [ERROR] 
> JobManagerHAProcessFailureRecoveryITCase.testDispatcherProcessFailure:347 The 
> program encountered a ExecutionException : 
> org.apache.flink.runtime.rest.util.RestClientException: 
> [org.apache.flink.runtime.rest.handler.RestHandlerException: 
> org.apache.flink.runtime.messages.FlinkJobNotFoundException: Could not find 
> Flink job (47fe3e8df0e59994938485f683d1410e)
>  2020-01-19T06:25:02.3857171Z at 
> org.apache.flink.runtime.rest.handler.job.JobExecutionResultHandler.propagateException(JobExecutionResultHandler.java:91)
>  2020-01-19T06:25:02.3857571Z at 
> org.apache.flink.runtime.rest.handler.job.JobExecutionResultHandler.lambda$handleRequest$1(JobExecutionResultHandler.java:82)
>  2020-01-19T06:25:02.3857866Z at 
> java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:870)
>  2020-01-19T06:25:02.3857982Z at 
> java.util.concurrent.CompletableFuture$UniExceptionally.tryFire(CompletableFuture.java:852)
>  2020-01-19T06:25:02.3859852Z at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
>  2020-01-19T06:25:02.3860440Z at 
> java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
>  2020-01-19T06:25:02.3860732Z at 
> org.apache.flink.runtime.concurrent.FutureUtils$1.onComplete(FutureUtils.java:872)
>  2020-01-19T06:25:02.3860960Z at 
> akka.dispatch.OnComplete.internal(Future.scala:263)
>  2020-01-19T06:25:02.3861099Z at 
> akka.dispatch.OnComplete.internal(Future.scala:261)
>  2020-01-19T06:25:02.3861232Z at 
> akka.dispatch.japi$CallbackBridge.apply(Future.scala:191)
>  2020-01-19T06:25:02.3861391Z at 
> akka.dispatch.japi$CallbackBridge.apply(Future.scala:188)
>  2020-01-19T06:25:02.3861546Z at 
> scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
>  2020-01-19T06:25:02.3861712Z at 
> org.apache.flink.runtime.concurrent.Executors$DirectExecutionContext.execute(Executors.java:74)
>  2020-01-19T06:25:02.3861809Z at 
> scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
>  2020-01-19T06:25:02.3861916Z at 
> scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
>  2020-01-19T06:25:02.3862221Z at 
> akka.pattern.PromiseActorRef.$bang(AskSupport.scala:572)
>  2020-01-19T06:25:02.3862475Z at 
> akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:23)
>  2020-01-19T06:25:02.3862626Z at 
> akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:21)
>  2020-01-19T06:25:02.3862736Z at 
> scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:436)
>  2020-01-19T06:25:02.3862820Z at 
> scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:435)
>  2020-01-19T06:25:02.3867146Z at 
> scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
>  2020-01-19T06:25:02.3867318Z at 
> akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
>  2020-01-19T06:25:02.3867441Z at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:91)
>  2020-01-19T06:25:02.3867552Z at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
>  2020-01-19T06:25:02.3867664Z at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
>  2020-01-19T06:25:02.3867763Z at 
> scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
>  2020-01-19T06:25:02.3867843Z at 
> akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:90)
>  2020-01-19T06:25:02.3867936Z at 
> akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
>  2020-01-19T06:25:02.3868036Z at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)
>  2020-01-19T06:25:02.3868145Z at 
> akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>  2020-01-19T06:25:02.3868223Z at 
> akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.

[GitHub] [flink-statefun] tzulitai closed pull request #117: [FLINK-17690] Python function wrapper omits docstr

2020-06-15 Thread GitBox


tzulitai closed pull request #117:
URL: https://github.com/apache/flink-statefun/pull/117


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-statefun] tzulitai commented on pull request #117: [FLINK-17690] Python function wrapper omits docstr

2020-06-15 Thread GitBox


tzulitai commented on pull request #117:
URL: https://github.com/apache/flink-statefun/pull/117#issuecomment-644571299


   I'm closing this due to inactivity and because the proposed PR does not 
solve the actual problem. @abc863377, please reopen it if you still plan to 
work on this. Thank you!



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



