[jira] [Updated] (FLINK-20616) Support row-based operation to accept user-defined function directly

2020-12-15 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-20616:

Summary: Support row-based operation to accept user-defined function 
directly  (was: Support Row-based Operation Accept Function Name)

> Support row-based operation to accept user-defined function directly
> 
>
> Key: FLINK-20616
> URL: https://issues.apache.org/jira/browse/FLINK-20616
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Python
>Reporter: Huang Xingbo
>Assignee: Huang Xingbo
>Priority: Major
> Fix For: 1.13.0
>
>
> Usage
> {code:python}
> @udf(result_type=DataTypes.ROW([DataTypes.FIELD("a", DataTypes.INT()),
>                                 DataTypes.FIELD("b", DataTypes.INT())]))
> def map_func(args):
>     # args is a Row(a: Int, b: Int)
>     return args
> 
> t = ...  # type: Table, table schema: [a: String, b: Int]
> t.map(map_func){code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-20616) Support Row-based Operation Accept Function Name

2020-12-15 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu reassigned FLINK-20616:
---

Assignee: Huang Xingbo

> Support Row-based Operation Accept Function Name
> 
>
> Key: FLINK-20616
> URL: https://issues.apache.org/jira/browse/FLINK-20616
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Python
>Reporter: Huang Xingbo
>Assignee: Huang Xingbo
>Priority: Major
> Fix For: 1.13.0
>
>
> Usage
> {code:python}
> @udf(result_type=DataTypes.ROW([DataTypes.FIELD("a", DataTypes.INT()),
>                                 DataTypes.FIELD("b", DataTypes.INT())]))
> def map_func(args):
>     # args is a Row(a: Int, b: Int)
>     return args
> 
> t = ...  # type: Table, table schema: [a: String, b: Int]
> t.map(map_func){code}





[GitHub] [flink-web] dianfu commented on a change in pull request #399: Add Apache Flink release 1.11.3

2020-12-15 Thread GitBox


dianfu commented on a change in pull request #399:
URL: https://github.com/apache/flink-web/pull/399#discussion_r544080934



##
File path: _config.yml
##
@@ -160,7 +160,12 @@ component_releases:
 
 release_archive:
 flink:
-  - version_short: "1.12"
+  -
+version_short: "1.11"

Review comment:
   All the 1.11 series should be grouped together.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #14399: [FLINK-17827] [scala-shell] scala-shell.sh should fail early if no mo…

2020-12-15 Thread GitBox


flinkbot commented on pull request #14399:
URL: https://github.com/apache/flink/pull/14399#issuecomment-745854885


   
   ## CI report:
   
   * b77668657f364d87999556ee746e5d995b79443f UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #14397: [docs] fix typo in savepoint doc

2020-12-15 Thread GitBox


flinkbot edited a comment on pull request #14397:
URL: https://github.com/apache/flink/pull/14397#issuecomment-745755749


   
   ## CI report:
   
   * a0536aec90b0c0681ecdc5934dd1dfc56e23a214 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10916)
 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10920)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #14395: [FLINK-16491][formats] Add compression support for ParquetAvroWriters

2020-12-15 Thread GitBox


flinkbot edited a comment on pull request #14395:
URL: https://github.com/apache/flink/pull/14395#issuecomment-745744218


   
   ## CI report:
   
   * c49e4e668d1ff384073fd39e10af994b19c27018 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10911)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #14389: [FLINK-20528][python] Add Python building blocks to make sure the basic functionality of Stream Group Table Aggregation could work

2020-12-15 Thread GitBox


flinkbot edited a comment on pull request #14389:
URL: https://github.com/apache/flink/pull/14389#issuecomment-745192120


   
   ## CI report:
   
   * 3027c25e19dbb51b9f96738959e043411fabff57 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10909)
 
   * c9a42ef2b9b8c865242fed422050db94466840dd UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Created] (FLINK-20619) Remove InputDependencyConstraint and InputDependencyConstraintChecker

2020-12-15 Thread Zhu Zhu (Jira)
Zhu Zhu created FLINK-20619:
---

 Summary: Remove InputDependencyConstraint and 
InputDependencyConstraintChecker
 Key: FLINK-20619
 URL: https://issues.apache.org/jira/browse/FLINK-20619
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Coordination
Affects Versions: 1.13.0
Reporter: Zhu Zhu
 Fix For: 1.13.0


InputDependencyConstraint was used by the legacy scheduler and the 
lazy-from-sources scheduling strategy. It is no longer needed since both have 
been removed, so we can remove it along with InputDependencyConstraintChecker.





[GitHub] [flink] legendtkl commented on pull request #14397: [docs] fix typo in savepoint doc

2020-12-15 Thread GitBox


legendtkl commented on pull request #14397:
URL: https://github.com/apache/flink/pull/14397#issuecomment-745846861


   @flinkbot run azure







[GitHub] [flink] HuangXingBo commented on pull request #14389: [FLINK-20528][python] Add Python building blocks to make sure the basic functionality of Stream Group Table Aggregation could work

2020-12-15 Thread GitBox


HuangXingBo commented on pull request #14389:
URL: https://github.com/apache/flink/pull/14389#issuecomment-745842532


   @dianfu Thanks a lot for the update. I have addressed the comments in the 
latest commit.







[GitHub] [flink] flinkbot commented on pull request #14399: [FLINK-17827] [scala-shell] scala-shell.sh should fail early if no mo…

2020-12-15 Thread GitBox


flinkbot commented on pull request #14399:
URL: https://github.com/apache/flink/pull/14399#issuecomment-745832362


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit b77668657f364d87999556ee746e5d995b79443f (Wed Dec 16 
07:32:26 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-17827).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[GitHub] [flink] rkhachatryan commented on pull request #14057: [FLINK-19681][checkpointing] Timeout aligned checkpoints

2020-12-15 Thread GitBox


rkhachatryan commented on pull request #14057:
URL: https://github.com/apache/flink/pull/14057#issuecomment-745830277


   I've updated the PR and addressed your feedback @pnowojski, please take a 
look.







[jira] [Updated] (FLINK-17827) scala-shell.sh should fail early if no mode is specified, or have default logging settings

2020-12-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-17827:
---
Labels: pull-request-available starter  (was: starter)

> scala-shell.sh should fail early if no mode is specified, or have default 
> logging settings
> --
>
> Key: FLINK-17827
> URL: https://issues.apache.org/jira/browse/FLINK-17827
> Project: Flink
>  Issue Type: Improvement
>  Components: Scala Shell
>Affects Versions: 1.11.0
>Reporter: Chesnay Schepler
>Priority: Critical
>  Labels: pull-request-available, starter
>
> The scala-shell has multiple modes it can run in: local, remote and yarn.
> It is mandatory to specify such a mode, but this is only enforced on the 
> scala side, not in the bash script.
> The problem is that the scala-shell script derives the log4j properties from 
> the mode, and if no mode is set, then the log4j properties are empty.
> This leads to a warning from slf4j that no logger was defined.
> Either scala-shell.sh should fail early if no mode is specified, or it should 
> have some default logging settings (e.g., the ones for local/remote).
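The default-logging option could be implemented along these lines (a minimal sketch; the property file names and the `case` structure are hypothetical, not the actual contents of start-scala-shell.sh):

```shell
# Sketch of a mode -> log4j config mapping with a default fallback, so an
# unknown or missing mode no longer leaves the log4j properties empty.
select_log4j_config() {
  case "$1" in
    local|remote) echo "log4j-shell.properties" ;;  # hypothetical file name
    yarn)         echo "log4j-yarn.properties"  ;;  # hypothetical file name
    *)            echo "log4j-shell.properties" ;;  # default instead of empty
  esac
}
```

Alternatively, the `*)` branch could print a usage message and `exit 1`, which would implement the fail-early option instead.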





[GitHub] [flink] paul8263 opened a new pull request #14399: [FLINK-17827] [scala-shell] scala-shell.sh should fail early if no mo…

2020-12-15 Thread GitBox


paul8263 opened a new pull request #14399:
URL: https://github.com/apache/flink/pull/14399


   …de is specified, or have default logging settings
   
   
   
   ## What is the purpose of the change
   
   Avoid log4j.properties and logback.xml being set incorrectly when an illegal 
mode or no mode is specified while running the Flink Scala shell via 
start-scala-shell.sh.
   
   ## Brief change log
   
   - Added an additional if-else branch that sets the default log4j.properties 
and logback.xml when an illegal mode or no mode is provided.
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? not applicable
   







[GitHub] [flink] HuangXingBo commented on a change in pull request #14389: [FLINK-20528][python] Add Python building blocks to make sure the basic functionality of Stream Group Table Aggregation could

2020-12-15 Thread GitBox


HuangXingBo commented on a change in pull request #14389:
URL: https://github.com/apache/flink/pull/14389#discussion_r544067798



##
File path: 
flink-python/src/main/java/org/apache/flink/table/runtime/operators/python/aggregate/PythonStreamGroupTableAggregateOperator.java
##
@@ -39,30 +37,19 @@
@VisibleForTesting
protected static final String STREAM_GROUP_TABLE_AGGREGATE_URN = 
"flink:transform:stream_group_table_aggregate:v1";
 
-   private final PythonAggregateFunctionInfo aggregateFunction;
-
-   private final DataViewUtils.DataViewSpec[] dataViewSpecs;
-
public PythonStreamGroupTableAggregateOperator(
Configuration config,
RowType inputType,
RowType outputType,
-   PythonAggregateFunctionInfo aggregateFunction,

Review comment:
   When there are multiple flat_maps, an additional 
PythonAggregateFunctionInfo for count(*) will be added to handle retraction.









[jira] [Comment Edited] (FLINK-20618) Some of the source operator subtasks will stuck when flink job in critical backpressure

2020-12-15 Thread zlzhang0122 (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17250145#comment-17250145
 ] 

zlzhang0122 edited comment on FLINK-20618 at 12/16/20, 7:20 AM:


We found this may be related to the sequence number not matching the expected 
sequence number. In the normal channel, the sequence number and the expected 
sequence number are shown below.

 

!2020-12-16 11-53-42 的屏幕截图.png!

 

!2020-12-16 11-49-01 的屏幕截图.png!

 

but in the abnormal channel,it shows like this:

 

!2020-12-16 11-47-37 的屏幕截图.png!

 

!2020-12-16 11-48-30 的屏幕截图.png!


was (Author: zlzhang0122):
We found this may be related to the sequence number not matching the expected 
sequence number. In the normal channel, the sequence number and the expected 
sequence number are shown below.

 

!2020-12-16 11-53-42 的屏幕截图.png!

 

!2020-12-16 11-49-01 的屏幕截图.png!

but in the abnormal channel,it shows like this:

!2020-12-16 11-47-37 的屏幕截图.png!

 

!2020-12-16 11-48-30 的屏幕截图.png!

> Some of the source operator subtasks will stuck when flink job in critical 
> backpressure
> ---
>
> Key: FLINK-20618
> URL: https://issues.apache.org/jira/browse/FLINK-20618
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core
>Affects Versions: 1.11.0, 1.10.2
>Reporter: zlzhang0122
>Priority: Critical
> Attachments: 2020-12-16 11-47-37 的屏幕截图.png, 2020-12-16 11-48-30 
> 的屏幕截图.png, 2020-12-16 11-49-01 的屏幕截图.png, 2020-12-16 11-53-42 的屏幕截图.png
>
>
> In some critical backpressure situations, some of the source subtasks will 
> block while requesting a buffer because the LocalBufferPool is full, so the 
> whole task gets stuck while the other tasks run well.
> Below is the jstack trace:
>  
> Legacy Source Thread - Source: TalosTableSource(hash, timestamp, step, 
> isCustomize, hostname, endpoint, metric, dsType, orgId, idc, labels, pdl) 
> - SourceConversion(table=[default_catalog.default_database.transfer_c5, 
> source: [TalosTableSource(hash, timestamp, step, isCustomize, hostname, 
> endpoint, metric, dsType, orgId, idc, labels, pdl)]], fields=[hash, 
> timestamp, step, isCustomize, hostname, endpoint, metric, dsType, orgId, idc, 
> labels, pdl]) - Calc(select=[hash, timestamp, step, isCustomize, 
> hostname, endpoint, metric, dsType, orgId, idc, labels, pdl, () AS $12, 
> (timestamp - ((timestamp + 18000) MOD 86400)) AS $13]) (62/128) #113 prio=5 
> os_prio=0 tid=0x7f43d07e1800 nid=0x1b1c waiting on condition 
> [0x7f43b8488000]
> java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for 0xdb234488 (a 
> java.util.concurrent.CompletableFuture$Signaller)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1693)
> at java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323)
> at 
> java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1729)
> at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
> at 
> org.apache.flink.runtime.io.network.buffer.LocalBufferPool.requestMemorySegmentBlocking(LocalBufferPool.java:231)
> at 
> org.apache.flink.runtime.io.network.buffer.LocalBufferPool.requestBufferBuilderBlocking(LocalBufferPool.java:209)
> at 
> org.apache.flink.runtime.io.network.partition.ResultPartition.getBufferBuilder(ResultPartition.java:189)
> at 
> org.apache.flink.runtime.io.network.api.writer.ChannelSelectorRecordWriter.requestNewBufferBuilder(ChannelSelectorRecordWriter.java:103)
> at 
> org.apache.flink.runtime.io.network.api.writer.RecordWriter.copyFromSerializerToTargetChannel(RecordWriter.java:145)
> at 
> org.apache.flink.runtime.io.network.api.writer.RecordWriter.emit(RecordWriter.java:116)
> at 
> org.apache.flink.runtime.io.network.api.writer.ChannelSelectorRecordWriter.emit(ChannelSelectorRecordWriter.java:60)
> at 
> org.apache.flink.streaming.runtime.io.RecordWriterOutput.pushToRecordWriter(RecordWriterOutput.java:107)
> at 
> org.apache.flink.streaming.runtime.io.RecordWriterOutput.collect(RecordWriterOutput.java:89)
> at 
> org.apache.flink.streaming.runtime.io.RecordWriterOutput.collect(RecordWriterOutput.java:45)
> at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:732)
> at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:710)
> at StreamExecCalc$33.processElement(Unknown Source)
> at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:641)
> at 

[jira] [Comment Edited] (FLINK-20618) Some of the source operator subtasks will stuck when flink job in critical backpressure

2020-12-15 Thread zlzhang0122 (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17250145#comment-17250145
 ] 

zlzhang0122 edited comment on FLINK-20618 at 12/16/20, 7:19 AM:


We found this may be related to the sequence number not matching the expected 
sequence number. In the normal channel, the sequence number and the expected 
sequence number are shown below.

 

!2020-12-16 11-53-42 的屏幕截图.png!

 

!2020-12-16 11-49-01 的屏幕截图.png!

but in the abnormal channel,it shows like this:

!2020-12-16 11-47-37 的屏幕截图.png!

 

!2020-12-16 11-48-30 的屏幕截图.png!


was (Author: zlzhang0122):
We found this may be related to the sequence number not matching the expected 
sequence number. In the normal channel, the sequence number and the expected 
sequence number are shown below. !2020-12-16 11-53-42 的屏幕截图.png!

!2020-12-16 11-49-01 的屏幕截图.png!

but in the abnormal channel,it shows like this:

[^2020-12-16 11-48-30 的屏幕截图.png]

!2020-12-16 11-48-30 的屏幕截图.png!

> Some of the source operator subtasks will stuck when flink job in critical 
> backpressure
> ---
>
> Key: FLINK-20618
> URL: https://issues.apache.org/jira/browse/FLINK-20618
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core
>Affects Versions: 1.11.0, 1.10.2
>Reporter: zlzhang0122
>Priority: Critical
> Attachments: 2020-12-16 11-47-37 的屏幕截图.png, 2020-12-16 11-48-30 
> 的屏幕截图.png, 2020-12-16 11-49-01 的屏幕截图.png, 2020-12-16 11-53-42 的屏幕截图.png
>
>
> In some critical backpressure situations, some of the source subtasks will 
> block while requesting a buffer because the LocalBufferPool is full, so the 
> whole task gets stuck while the other tasks run well.
> Below is the jstack trace:
>  
> Legacy Source Thread - Source: TalosTableSource(hash, timestamp, step, 
> isCustomize, hostname, endpoint, metric, dsType, orgId, idc, labels, pdl) 
> - SourceConversion(table=[default_catalog.default_database.transfer_c5, 
> source: [TalosTableSource(hash, timestamp, step, isCustomize, hostname, 
> endpoint, metric, dsType, orgId, idc, labels, pdl)]], fields=[hash, 
> timestamp, step, isCustomize, hostname, endpoint, metric, dsType, orgId, idc, 
> labels, pdl]) - Calc(select=[hash, timestamp, step, isCustomize, 
> hostname, endpoint, metric, dsType, orgId, idc, labels, pdl, () AS $12, 
> (timestamp - ((timestamp + 18000) MOD 86400)) AS $13]) (62/128) #113 prio=5 
> os_prio=0 tid=0x7f43d07e1800 nid=0x1b1c waiting on condition 
> [0x7f43b8488000]
> java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for 0xdb234488 (a 
> java.util.concurrent.CompletableFuture$Signaller)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1693)
> at java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323)
> at 
> java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1729)
> at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
> at 
> org.apache.flink.runtime.io.network.buffer.LocalBufferPool.requestMemorySegmentBlocking(LocalBufferPool.java:231)
> at 
> org.apache.flink.runtime.io.network.buffer.LocalBufferPool.requestBufferBuilderBlocking(LocalBufferPool.java:209)
> at 
> org.apache.flink.runtime.io.network.partition.ResultPartition.getBufferBuilder(ResultPartition.java:189)
> at 
> org.apache.flink.runtime.io.network.api.writer.ChannelSelectorRecordWriter.requestNewBufferBuilder(ChannelSelectorRecordWriter.java:103)
> at 
> org.apache.flink.runtime.io.network.api.writer.RecordWriter.copyFromSerializerToTargetChannel(RecordWriter.java:145)
> at 
> org.apache.flink.runtime.io.network.api.writer.RecordWriter.emit(RecordWriter.java:116)
> at 
> org.apache.flink.runtime.io.network.api.writer.ChannelSelectorRecordWriter.emit(ChannelSelectorRecordWriter.java:60)
> at 
> org.apache.flink.streaming.runtime.io.RecordWriterOutput.pushToRecordWriter(RecordWriterOutput.java:107)
> at 
> org.apache.flink.streaming.runtime.io.RecordWriterOutput.collect(RecordWriterOutput.java:89)
> at 
> org.apache.flink.streaming.runtime.io.RecordWriterOutput.collect(RecordWriterOutput.java:45)
> at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:732)
> at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:710)
> at StreamExecCalc$33.processElement(Unknown Source)
> at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:641)
> at 
> 

[GitHub] [flink] HuangXingBo commented on a change in pull request #14389: [FLINK-20528][python] Add Python building blocks to make sure the basic functionality of Stream Group Table Aggregation could

2020-12-15 Thread GitBox


HuangXingBo commented on a change in pull request #14389:
URL: https://github.com/apache/flink/pull/14389#discussion_r544065563



##
File path: flink-python/pyflink/table/udf.py
##
@@ -647,3 +692,72 @@ def udaf(f: Union[Callable, UserDefinedFunction, Type] = 
None,
 else:
 return _create_udaf(f, input_types, result_type, accumulator_type, 
func_type,
 deterministic, name)
+
+
+def udtaf(f: Union[Callable, UserDefinedFunction, Type] = None,
+  input_types: Union[List[DataType], DataType] = None, result_type: 
DataType = None,
+  accumulator_type: DataType = None, deterministic: bool = None, name: 
str = None,
+  func_type: str = 'general') -> 
Union[UserDefinedAggregateFunctionWrapper, Callable]:
+"""
+Helper method for creating a user-defined table aggregate function.
+
+Example:
+::
+
+>>> # The input_types is optional.
+>>> class Top2(TableAggregateFunction):
+... def emit_value(self, accumulator):
+... yield Row(accumulator[0])
+... yield Row(accumulator[1])
+...
+... def create_accumulator(self):
+... return [None, None]
+...
+... def accumulate(self, accumulator, *args):
+... if args[0] is not None:
+... if accumulator[0] is None or args[0] > accumulator[0]:
+... accumulator[1] = accumulator[0]
+... accumulator[0] = args[0]
+... elif accumulator[1] is None or args[0] > 
accumulator[1]:
+... accumulator[1] = args[0]
+...
+... def retract(self, accumulator, *args):
+... accumulator[0] = accumulator[0] - 1
+...
+... def merge(self, accumulator, accumulators):
+... for other_acc in accumulators:
+... self.accumulate(accumulator, other_acc[0])
+... self.accumulate(accumulator, other_acc[1])
+...
+... def get_accumulator_type(self):
+... return DataTypes.ARRAY(DataTypes.BIGINT())
+...
+... def get_result_type(self):
+... return DataTypes.ROW(
+... [DataTypes.FIELD("a", DataTypes.BIGINT())])
+>>> top2 = udtaf(Top2())
+
+:param f: user-defined table aggregate function.
+:param input_types: optional, the input data types.
+:param result_type: the result data type.
+:param accumulator_type: optional, the accumulator data type.
+:param deterministic: the determinism of the function's results. True if 
and only if a call to
+  this function is guaranteed to always return the 
same result given the
+  same parameters. (default True)
+:param name: the function name.
+:param func_type: the type of the python function, available value: general
+ (default: general)
+:return: UserDefinedAggregateFunctionWrapper or function.
+
+.. versionadded:: 1.13.0
+"""
+if func_type != 'general':
+raise ValueError("The func_type must be one of 'general', got %s."

Review comment:
   Yes. Unbounded streams will not support Pandas TableAggregateFunction, but 
group-window streams can.









[jira] [Comment Edited] (FLINK-20618) Some of the source operator subtasks will stuck when flink job in critical backpressure

2020-12-15 Thread zlzhang0122 (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17250145#comment-17250145
 ] 

zlzhang0122 edited comment on FLINK-20618 at 12/16/20, 7:18 AM:


We found this may be related to the sequence number not matching the expected 
sequence number. In the normal channel, the sequence number and the expected 
sequence number are shown below. !2020-12-16 11-53-42 的屏幕截图.png!

!2020-12-16 11-49-01 的屏幕截图.png!

but in the abnormal channel,it shows like this:

[^2020-12-16 11-48-30 的屏幕截图.png]

!2020-12-16 11-48-30 的屏幕截图.png!


was (Author: zlzhang0122):
We found this may be related to the sequence number not matching the expected 
sequence number. In the normal channel, the sequence number and the expected 
sequence number are shown below.

[^2020-12-16 11-53-42 的屏幕截图.png]

[^2020-12-16 11-48-30 的屏幕截图.png] [^2020-12-16 11-49-01 的屏幕截图.png]

but in the abnormal channel,it shows like this:

[^2020-12-16 11-47-37 的屏幕截图.png]

[^2020-12-16 11-48-30 的屏幕截图.png]

> Some of the source operator subtasks will stuck when flink job in critical 
> backpressure
> ---
>
> Key: FLINK-20618
> URL: https://issues.apache.org/jira/browse/FLINK-20618
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core
>Affects Versions: 1.11.0, 1.10.2
>Reporter: zlzhang0122
>Priority: Critical
> Attachments: 2020-12-16 11-47-37 的屏幕截图.png, 2020-12-16 11-48-30 
> 的屏幕截图.png, 2020-12-16 11-49-01 的屏幕截图.png, 2020-12-16 11-53-42 的屏幕截图.png
>
>
> In some critical backpressure situations, some of the source subtasks will 
> block while requesting a buffer because the LocalBufferPool is full, so the 
> whole task gets stuck while the other tasks run well.
> Below is the jstack trace:
>  
> Legacy Source Thread - Source: TalosTableSource(hash, timestamp, step, 
> isCustomize, hostname, endpoint, metric, dsType, orgId, idc, labels, pdl) 
> - SourceConversion(table=[default_catalog.default_database.transfer_c5, 
> source: [TalosTableSource(hash, timestamp, step, isCustomize, hostname, 
> endpoint, metric, dsType, orgId, idc, labels, pdl)]], fields=[hash, 
> timestamp, step, isCustomize, hostname, endpoint, metric, dsType, orgId, idc, 
> labels, pdl]) - Calc(select=[hash, timestamp, step, isCustomize, 
> hostname, endpoint, metric, dsType, orgId, idc, labels, pdl, () AS $12, 
> (timestamp - ((timestamp + 18000) MOD 86400)) AS $13]) (62/128) #113 prio=5 
> os_prio=0 tid=0x7f43d07e1800 nid=0x1b1c waiting on condition 
> [0x7f43b8488000]
> java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for 0xdb234488 (a 
> java.util.concurrent.CompletableFuture$Signaller)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1693)
> at java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323)
> at 
> java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1729)
> at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
> at 
> org.apache.flink.runtime.io.network.buffer.LocalBufferPool.requestMemorySegmentBlocking(LocalBufferPool.java:231)
> at 
> org.apache.flink.runtime.io.network.buffer.LocalBufferPool.requestBufferBuilderBlocking(LocalBufferPool.java:209)
> at 
> org.apache.flink.runtime.io.network.partition.ResultPartition.getBufferBuilder(ResultPartition.java:189)
> at 
> org.apache.flink.runtime.io.network.api.writer.ChannelSelectorRecordWriter.requestNewBufferBuilder(ChannelSelectorRecordWriter.java:103)
> at 
> org.apache.flink.runtime.io.network.api.writer.RecordWriter.copyFromSerializerToTargetChannel(RecordWriter.java:145)
> at 
> org.apache.flink.runtime.io.network.api.writer.RecordWriter.emit(RecordWriter.java:116)
> at 
> org.apache.flink.runtime.io.network.api.writer.ChannelSelectorRecordWriter.emit(ChannelSelectorRecordWriter.java:60)
> at 
> org.apache.flink.streaming.runtime.io.RecordWriterOutput.pushToRecordWriter(RecordWriterOutput.java:107)
> at 
> org.apache.flink.streaming.runtime.io.RecordWriterOutput.collect(RecordWriterOutput.java:89)
> at 
> org.apache.flink.streaming.runtime.io.RecordWriterOutput.collect(RecordWriterOutput.java:45)
> at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:732)
> at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:710)
> at StreamExecCalc$33.processElement(Unknown Source)
> at 
> 
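The blocking behavior in the trace above (a producer parked inside `requestMemorySegmentBlocking` until a buffer is recycled) can be illustrated with a minimal sketch. This is not Flink code; `queue.Queue` merely stands in for the bounded LocalBufferPool, and the segment names are invented:

```python
import queue
import threading
import time

# A bounded pool stands in for Flink's LocalBufferPool: once every segment
# is handed out, the next request blocks until one is recycled downstream.
pool = queue.Queue(maxsize=2)
pool.put("segment-0")
pool.put("segment-1")

taken = [pool.get(), pool.get()]  # pool is now empty

def recycle_later():
    # Mimic a backpressured receiver that returns a buffer after a delay.
    time.sleep(0.2)
    pool.put("segment-recycled")

start = time.monotonic()
threading.Thread(target=recycle_later).start()
taken.append(pool.get())  # blocks here, like requestMemorySegmentBlocking
elapsed = time.monotonic() - start
print(len(taken), elapsed >= 0.2)
```

If nothing downstream ever recycles a buffer, the `pool.get()` call never returns, which matches the permanently parked source subtask reported in this issue.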

[jira] [Updated] (FLINK-20618) Some of the source operator subtasks will stuck when flink job in critical backpressure

2020-12-15 Thread zlzhang0122 (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zlzhang0122 updated FLINK-20618:

Attachment: 2020-12-16 11-53-42 的屏幕截图.png
2020-12-16 11-49-01 的屏幕截图.png


[jira] [Commented] (FLINK-20618) Some of the source operator subtasks will stuck when flink job in critical backpressure

2020-12-15 Thread zlzhang0122 (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17250145#comment-17250145
 ] 

zlzhang0122 commented on FLINK-20618:
-

We found this may be related to the sequence number not matching the expected 
sequence number. In the normal channel, the sequence number and the expected 
sequence number are shown below.

[^2020-12-16 11-53-42 的屏幕截图.png]

[^2020-12-16 11-48-30 的屏幕截图.png] [^2020-12-16 11-49-01 的屏幕截图.png]

but in the abnormal channel, it looks like this:

[^2020-12-16 11-47-37 的屏幕截图.png]

[^2020-12-16 11-48-30 的屏幕截图.png]
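The mismatch described in this comment can be sketched as a per-channel expected-sequence-number check. This is a hypothetical illustration, not the Flink source; the class and field names are invented:

```python
# Hypothetical sketch of a receiver-side sequence-number check: the channel
# tracks the next expected number and flags a buffer that does not match,
# which is the normal-vs-abnormal symptom shown in the screenshots above.
class Channel:
    def __init__(self):
        self.expected_sequence_number = 0
        self.errored = False

    def on_buffer(self, sequence_number):
        if sequence_number != self.expected_sequence_number:
            # Out-of-order or lost buffer: the channel stops advancing.
            self.errored = True
            return False
        self.expected_sequence_number += 1
        return True

ch = Channel()
results = [ch.on_buffer(n) for n in (0, 1, 3)]  # buffer 2 never arrives
print(results, ch.errored)
```

In the normal channel the numbers advance in lockstep; in the abnormal channel the received number has diverged from the expected one, so the channel never makes progress.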


[jira] [Updated] (FLINK-20618) Some of the source operator subtasks will stuck when flink job in critical backpressure

2020-12-15 Thread zlzhang0122 (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zlzhang0122 updated FLINK-20618:

Attachment: 2020-12-16 11-48-30 的屏幕截图.png
2020-12-16 11-47-37 的屏幕截图.png


[GitHub] [flink] dianfu commented on a change in pull request #14389: [FLINK-20528][python] Add Python building blocks to make sure the basic functionality of Stream Group Table Aggregation could work

2020-12-15 Thread GitBox


dianfu commented on a change in pull request #14389:
URL: https://github.com/apache/flink/pull/14389#discussion_r544054227



##
File path: flink-python/pyflink/table/udf.py
##
@@ -647,3 +692,72 @@ def udaf(f: Union[Callable, UserDefinedFunction, Type] = 
None,
 else:
 return _create_udaf(f, input_types, result_type, accumulator_type, 
func_type,
 deterministic, name)
+
+
+def udtaf(f: Union[Callable, UserDefinedFunction, Type] = None,
+  input_types: Union[List[DataType], DataType] = None, result_type: 
DataType = None,
+  accumulator_type: DataType = None, deterministic: bool = None, name: 
str = None,
+  func_type: str = 'general') -> 
Union[UserDefinedAggregateFunctionWrapper, Callable]:
+"""
+Helper method for creating a user-defined table aggregate function.
+
+Example:
+::
+
+>>> # The input_types is optional.
+>>> class Top2(TableAggregateFunction):
+... def emit_value(self, accumulator):
+... yield Row(accumulator[0])
+... yield Row(accumulator[1])
+...
+... def create_accumulator(self):
+... return [None, None]
+...
+... def accumulate(self, accumulator, *args):
+... if args[0] is not None:
+... if accumulator[0] is None or args[0] > accumulator[0]:
+... accumulator[1] = accumulator[0]
+... accumulator[0] = args[0]
+... elif accumulator[1] is None or args[0] > 
accumulator[1]:
+... accumulator[1] = args[0]
+...
+... def retract(self, accumulator, *args):
+... accumulator[0] = accumulator[0] - 1
+...
+... def merge(self, accumulator, accumulators):
+... for other_acc in accumulators:
+... self.accumulate(accumulator, other_acc[0])
+... self.accumulate(accumulator, other_acc[1])
+...
+... def get_accumulator_type(self):
+... return DataTypes.ARRAY(DataTypes.BIGINT())
+...
+... def get_result_type(self):
+... return DataTypes.ROW(
+... [DataTypes.FIELD("a", DataTypes.BIGINT())])
+>>> top2 = udtaf(Top2())
+
+:param f: user-defined table aggregate function.
+:param input_types: optional, the input data types.
+:param result_type: the result data type.
+:param accumulator_type: optional, the accumulator data type.
+:param deterministic: the determinism of the function's results. True if 
and only if a call to
+  this function is guaranteed to always return the 
same result given the
+  same parameters. (default True)
+:param name: the function name.
+:param func_type: the type of the python function, available value: general
+ (default: general)
+:return: UserDefinedAggregateFunctionWrapper or function.
+
+.. versionadded:: 1.13.0
+"""
+if func_type != 'general':
+raise ValueError("The func_type must be one of 'general', got %s."

Review comment:
   ```suggestion
   raise ValueError("The func_type must be 'general', got %s."
   ```
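As a quick sanity check of the Top2 logic in the docstring under review, the same accumulate rule can be run as plain Python with no pyflink dependency (the helper function names here are illustrative, not part of any Flink API):

```python
def create_accumulator():
    # Two slots: acc[0] holds the largest value seen, acc[1] the second largest.
    return [None, None]

def accumulate(acc, value):
    # Same rule as the Top2 example in the udtaf docstring: ignore None,
    # promote a new maximum, otherwise try the second slot.
    if value is None:
        return
    if acc[0] is None or value > acc[0]:
        acc[1] = acc[0]
        acc[0] = value
    elif acc[1] is None or value > acc[1]:
        acc[1] = value

acc = create_accumulator()
for v in (3, None, 7, 5):
    accumulate(acc, v)
print(acc)  # [7, 5]
```

This confirms the docstring example keeps the two largest inputs in order, which is what `emit_value` then yields row by row.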

##
File path: flink-python/pyflink/table/udf.py
##
@@ -647,3 +692,72 @@ def udaf(f: Union[Callable, UserDefinedFunction, Type] = 
None,
 else:
 return _create_udaf(f, input_types, result_type, accumulator_type, 
func_type,
 deterministic, name)
+
+
+def udtaf(f: Union[Callable, UserDefinedFunction, Type] = None,

Review comment:
   ```suggestion
   def udtaf(f: Union[Callable, TableAggregateFunction, Type] = None,
   ```

##
File path: flink-python/pyflink/table/udf.py
##
@@ -647,3 +692,72 @@ def udaf(f: Union[Callable, UserDefinedFunction, Type] = 
None,
 else:
 return _create_udaf(f, input_types, result_type, accumulator_type, 
func_type,
 deterministic, name)
+
+
+def udtaf(f: Union[Callable, UserDefinedFunction, Type] = None,
+  input_types: Union[List[DataType], DataType] = None, result_type: 
DataType = None,
+  accumulator_type: DataType = None, deterministic: bool = None, name: 
str = None,
+  func_type: str = 'general') -> 
Union[UserDefinedAggregateFunctionWrapper, Callable]:
+"""
+Helper method for creating a user-defined table aggregate function.
+
+Example:
+::
+
+>>> # The input_types is optional.
+>>> class Top2(TableAggregateFunction):
+... def emit_value(self, accumulator):
+... yield Row(accumulator[0])
+... yield Row(accumulator[1])
+...
+... def create_accumulator(self):
+... return [None, None]
+...
+... def 

[jira] [Created] (FLINK-20618) Some of the source operator subtasks will stuck when flink job in critical backpressure

2020-12-15 Thread zlzhang0122 (Jira)
zlzhang0122 created FLINK-20618:
---

 Summary: Some of the source operator subtasks will stuck when 
flink job in critical backpressure
 Key: FLINK-20618
 URL: https://issues.apache.org/jira/browse/FLINK-20618
 Project: Flink
  Issue Type: Bug
  Components: API / Core
Affects Versions: 1.10.2, 1.11.0
Reporter: zlzhang0122


Under critical backpressure, some of the source subtasks block while 
requesting a buffer because the LocalBufferPool is full, so those tasks are 
stuck while the other tasks run normally.

Below is the jstack trace:
 
Legacy Source Thread - Source: TalosTableSource(hash, timestamp, step, 
isCustomize, hostname, endpoint, metric, dsType, orgId, idc, labels, pdl) - 
SourceConversion(table=[default_catalog.default_database.transfer_c5, source: 
[TalosTableSource(hash, timestamp, step, isCustomize, hostname, endpoint, 
metric, dsType, orgId, idc, labels, pdl)]], fields=[hash, timestamp, step, 
isCustomize, hostname, endpoint, metric, dsType, orgId, idc, labels, pdl]) 
- Calc(select=[hash, timestamp, step, isCustomize, hostname, endpoint, 
metric, dsType, orgId, idc, labels, pdl, () AS $12, (timestamp - ((timestamp + 
18000) MOD 86400)) AS $13]) (62/128) #113 prio=5 os_prio=0 
tid=0x7f43d07e1800 nid=0x1b1c waiting on condition [0x7f43b8488000]
java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for 0xdb234488 (a 
java.util.concurrent.CompletableFuture$Signaller)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1693)
at java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323)
at 
java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1729)
at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
at 
org.apache.flink.runtime.io.network.buffer.LocalBufferPool.requestMemorySegmentBlocking(LocalBufferPool.java:231)
at 
org.apache.flink.runtime.io.network.buffer.LocalBufferPool.requestBufferBuilderBlocking(LocalBufferPool.java:209)
at 
org.apache.flink.runtime.io.network.partition.ResultPartition.getBufferBuilder(ResultPartition.java:189)
at 
org.apache.flink.runtime.io.network.api.writer.ChannelSelectorRecordWriter.requestNewBufferBuilder(ChannelSelectorRecordWriter.java:103)
at 
org.apache.flink.runtime.io.network.api.writer.RecordWriter.copyFromSerializerToTargetChannel(RecordWriter.java:145)
at 
org.apache.flink.runtime.io.network.api.writer.RecordWriter.emit(RecordWriter.java:116)
at 
org.apache.flink.runtime.io.network.api.writer.ChannelSelectorRecordWriter.emit(ChannelSelectorRecordWriter.java:60)
at 
org.apache.flink.streaming.runtime.io.RecordWriterOutput.pushToRecordWriter(RecordWriterOutput.java:107)
at 
org.apache.flink.streaming.runtime.io.RecordWriterOutput.collect(RecordWriterOutput.java:89)
at 
org.apache.flink.streaming.runtime.io.RecordWriterOutput.collect(RecordWriterOutput.java:45)
at 
org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:732)
at 
org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:710)
at StreamExecCalc$33.processElement(Unknown Source)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:641)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:616)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:596)
at 
org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:732)
at 
org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:710)
at SourceConversion$4.processElement(Unknown Source)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:641)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:616)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:596)
at 
org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:732)
at 
org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:710)
at 
org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collect(StreamSourceContexts.java:104)
- locked 0xd8d50fa8 (a java.lang.Object)
at 

[jira] [Commented] (FLINK-17111) Support SHOW VIEWS in Flink SQL

2020-12-15 Thread Kurt Young (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17250137#comment-17250137
 ] 

Kurt Young commented on FLINK-17111:


[~knaufk] Could you try 1.11 with SQL CLI for this feature? I suspect that this 
feature was never supported in SQL CLI but only in TableEnvironment.

> Support SHOW VIEWS in Flink SQL 
> 
>
> Key: FLINK-17111
> URL: https://issues.apache.org/jira/browse/FLINK-17111
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API, Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: Zhenghua Gao
>Assignee: Zhenghua Gao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> SHOW TABLES and SHOW VIEWS are not SQL standard-compliant commands. 
> MySQL supports SHOW TABLES, which lists the non-TEMPORARY tables (and views) 
> in a given database, and doesn't support SHOW VIEWS.
> Oracle/SQL Server/PostgreSQL support neither SHOW TABLES nor SHOW VIEWS. A 
> workaround is to query a system table that stores the metadata of tables and 
> views.
> Hive supports both SHOW TABLES and SHOW VIEWS. 
> We follow the Hive style: SHOW TABLES lists all tables and views, while SHOW 
> VIEWS lists only views. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #14389: [FLINK-20528][python] Add Python building blocks to make sure the basic functionality of Stream Group Table Aggregation could work

2020-12-15 Thread GitBox


flinkbot edited a comment on pull request #14389:
URL: https://github.com/apache/flink/pull/14389#issuecomment-745192120


   
   ## CI report:
   
   * 3027c25e19dbb51b9f96738959e043411fabff57 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10909)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-2491) Checkpointing only works if all operators/tasks are still running

2020-12-15 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-2491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17250134#comment-17250134
 ] 

Yun Gao commented on FLINK-2491:


Hi [~swapnilkhante], very sorry for the late reply. For delaying two 
checkpoints, one reference might be [this 
test|https://github.com/apache/flink/blob/313e20e8e03953a5e1cec9daa467f561ccfbd599/flink-connectors/flink-connector-files/src/test/java/org/apache/flink/connector/file/sink/StreamingExecutionFileSinkITCase.java#L105],
 and a simpler way might be to just sleep for a long time after the source 
function has sent all the records.

However, based on my knowledge it might not be easy to add similar logic to an 
existing table source, so you might need to add some DataStream sources 
manually.
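The "sleep after emitting" idea above can be mocked in plain Python with no Flink dependency. The `run_source` function and the `collect` callback are invented names for illustration only:

```python
import time

# Mock of a source that emits all its records and then lingers, so that
# checkpoints triggered during the linger window still find it running.
def run_source(collect, records, linger_seconds):
    for r in records:
        collect(r)
    time.sleep(linger_seconds)  # keep the task alive past the last record

out = []
start = time.monotonic()
run_source(out.append, [1, 2, 3], 0.1)
elapsed = time.monotonic() - start
print(out, elapsed >= 0.1)
```

In a real job the linger would be placed in the source function's `run` method after the emit loop, giving later checkpoints a chance to complete while all operators are still alive.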

> Checkpointing only works if all operators/tasks are still running
> -
>
> Key: FLINK-2491
> URL: https://issues.apache.org/jira/browse/FLINK-2491
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 0.10.0
>Reporter: Robert Metzger
>Priority: Critical
> Attachments: fix_checkpoint_not_working_if_tasks_are_finished.patch
>
>
> While implementing a test case for the Kafka Consumer, I came across the 
> following bug:
> Consider the following topology, with the operator parallelism in parentheses:
> Source (2) --> Sink (1).
> In this setup, the {{snapshotState()}} method is called on the source, but 
> not on the sink.
> The sink receives the generated data, but only one of the two sources is 
> generating data.
> I've implemented a test case for this, you can find it here: 
> https://github.com/rmetzger/flink/blob/para_checkpoint_bug/flink-tests/src/test/java/org/apache/flink/test/checkpointing/ParallelismChangeCheckpoinedITCase.java



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] rmetzger commented on a change in pull request #14394: [FLINK-19013][state-backends] Add start/end logs for state restoration

2020-12-15 Thread GitBox


rmetzger commented on a change in pull request #14394:
URL: https://github.com/apache/flink/pull/14394#discussion_r544037229



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/state/heap/HeapRestoreOperation.java
##
@@ -164,6 +169,7 @@ public Void restore() throws Exception {
kvStatesById, restoredMetaInfos.size(),
serializationProxy.getReadVersion(),

serializationProxy.isUsingKeyGroupCompression());
+   LOG.info("Finish to restore from state handle: 
{}.", keyedStateHandle);

Review comment:
   ```suggestion
LOG.info("Finished restoring from state handle: 
{}.", keyedStateHandle);
   ```

##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/state/heap/HeapKeyedStateBackendBuilder.java
##
@@ -112,6 +112,7 @@ public HeapKeyedStateBackendBuilder(
keyContext);
try {
restoreOperation.restore();
+   logger.info("Finish to build heap keyed 
state-backend.");

Review comment:
   ```suggestion
logger.info("Finished to build heap keyed 
state-backend.");
   ```

##
File path: 
flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/restore/RocksDBFullRestoreOperation.java
##
@@ -162,11 +162,13 @@ public RocksDBRestoreResult restore()
private void restoreKeyGroupsInStateHandle()
throws IOException, StateMigrationException, RocksDBException {
try {
+   logger.info("Start to restore from state handle: {}.", 
currentKeyGroupsStateHandle);
currentStateHandleInStream = 
currentKeyGroupsStateHandle.openInputStream();

cancelStreamRegistry.registerCloseable(currentStateHandleInStream);
currentStateHandleInView = new 
DataInputViewStreamWrapper(currentStateHandleInStream);
restoreKVStateMetaData();
restoreKVStateData();
+   logger.info("Finish to restore from state handle: {}.", 
currentKeyGroupsStateHandle);

Review comment:
   ```suggestion
logger.info("Finished restoring from state handle: 
{}.", currentKeyGroupsStateHandle);
   ```

##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/state/heap/HeapRestoreOperation.java
##
@@ -122,6 +126,7 @@ public Void restore() throws Exception {
throw 
unexpectedStateHandleException(KeyGroupsStateHandle.class, 
keyedStateHandle.getClass());
}
 
+   LOG.info("Start to restore from state handle: {}.", 
keyedStateHandle);

Review comment:
   ```suggestion
LOG.info("Starting to restore from state handle: {}.", 
keyedStateHandle);
   ```

##
File path: 
flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/RocksDBKeyedStateBackendBuilder.java
##
@@ -317,21 +314,22 @@ private static void checkAndCreateDirectory(File 
directory) throws IOException {
try {
FileUtils.deleteDirectory(instanceBasePath);
} catch (Exception ex) {
-   LOG.warn("Failed to instance base path for 
RocksDB: " + instanceBasePath, ex);
+   logger.warn("Failed to instance base path for 
RocksDB: " + instanceBasePath, ex);

Review comment:
   ```suggestion
logger.warn("Failed to delete base path for 
RocksDB: " + instanceBasePath, ex);
   ```
   
   Not sure, maybe I don't understand what the code is doing here.

##
File path: 
flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/RocksDBKeyedStateBackendBuilder.java
##
@@ -317,21 +314,22 @@ private static void checkAndCreateDirectory(File 
directory) throws IOException {
try {
FileUtils.deleteDirectory(instanceBasePath);
} catch (Exception ex) {
-   LOG.warn("Failed to instance base path for 
RocksDB: " + instanceBasePath, ex);
+   logger.warn("Failed to instance base path for 
RocksDB: " + instanceBasePath, ex);
}
// Log and rethrow
if (e instanceof BackendBuildingException) {
throw (BackendBuildingException) e;
} else {
String errMsg = "Caught unexpected exception.";
-   

[jira] [Created] (FLINK-20617) Kafka Consumer Deserializer Exception on application mode

2020-12-15 Thread Georger (Jira)
Georger created FLINK-20617:
---

 Summary: Kafka Consumer Deserializer Exception on application mode
 Key: FLINK-20617
 URL: https://issues.apache.org/jira/browse/FLINK-20617
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kafka
Affects Versions: 1.11.2
 Environment: application mode

flink 1.11.2 with  hadoop 2.6.0-cdh5.15.0
Reporter: Georger


The Kafka source may have some issues in application mode.
 
When I run it in application mode on Flink 1.11.2, it can't start up.
The detailed exception is:
org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
at 
org.apache.kafka.clients.consumer.KafkaConsumer.(KafkaConsumer.java:789)
at 
org.apache.kafka.clients.consumer.KafkaConsumer.(KafkaConsumer.java:643)
at 
org.apache.kafka.clients.consumer.KafkaConsumer.(KafkaConsumer.java:623)
at 
org.apache.flink.streaming.connectors.kafka.internal.KafkaPartitionDiscoverer.initializeConnections(KafkaPartitionDiscoverer.java:58)
at 
org.apache.flink.streaming.connectors.kafka.internals.AbstractPartitionDiscoverer.open(AbstractPartitionDiscoverer.java:94)
at 
org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.open(FlinkKafkaConsumerBase.java:550)
at 
org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:36)
at 
org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:102)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain.initializeStateAndOpenOperators(OperatorChain.java:291)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$beforeInvoke$0(StreamTask.java:479)
at 
org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$SynchronizedStreamTaskActionExecutor.runThrowing(StreamTaskActionExecutor.java:92)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:475)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:528)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:721)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:546)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.kafka.common.KafkaException: 
org.apache.kafka.common.serialization.ByteArrayDeserializer is not an instance 
of org.apache.kafka.common.serialization.Deserializer
at 
org.apache.kafka.common.config.AbstractConfig.getConfiguredInstance(AbstractConfig.java:263)
at 
org.apache.kafka.clients.consumer.KafkaConsumer.(KafkaConsumer.java:688)
... 15 more
The pom is:

<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-kafka_2.11</artifactId>
    <version>${flink.version}</version>
    <exclusions>
        <exclusion>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-api</artifactId>
        </exclusion>
        <exclusion>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka-clients</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>1.0.1</version>
</dependency>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-20616) Support Row-based Operation Accept Function Name

2020-12-15 Thread Huang Xingbo (Jira)
Huang Xingbo created FLINK-20616:


 Summary: Support Row-based Operation Accept Function Name
 Key: FLINK-20616
 URL: https://issues.apache.org/jira/browse/FLINK-20616
 Project: Flink
  Issue Type: Sub-task
  Components: API / Python
Reporter: Huang Xingbo
 Fix For: 1.13.0


Usage
{code:java}
@udf(result_type=DataTypes.ROW([DataTypes.FIELD("a", DataTypes.INT()),
                                DataTypes.FIELD("b", DataTypes.INT())]))
def map_func(args):
args # Row(a:Int, b: Int)
return args

t = ...  # type: Table, table schema: [a: String, b: Int]
t.map(map_func){code}





[GitHub] [flink] xiaoHoly commented on pull request #14377: [FLINK-19905][Connector][jdbc] The Jdbc-connector's 'lookup.max-retries' option initial value is 1 in JdbcLookupFunction

2020-12-15 Thread GitBox


xiaoHoly commented on pull request #14377:
URL: https://github.com/apache/flink/pull/14377#issuecomment-745792340


   @wangxlong ,cc



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] xiaoHoly commented on pull request #14387: [FLINK-19691][Connector][jdbc] Expose `CONNECTION_CHECK_TIMEOUT_SECONDS` as a configurable option in Jdbc connector

2020-12-15 Thread GitBox


xiaoHoly commented on pull request #14387:
URL: https://github.com/apache/flink/pull/14387#issuecomment-745791259


   @wangxlong, please review again







[jira] [Assigned] (FLINK-19236) Optimize the performance of Python UDAF

2020-12-15 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu reassigned FLINK-19236:
---

Assignee: Huang Xingbo

> Optimize the performance of Python UDAF
> ---
>
> Key: FLINK-19236
> URL: https://issues.apache.org/jira/browse/FLINK-19236
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Python
>Reporter: Wei Zhong
>Assignee: Huang Xingbo
>Priority: Major
>






[GitHub] [flink] flinkbot edited a comment on pull request #14397: [docs] fix typo in savepoint doc

2020-12-15 Thread GitBox


flinkbot edited a comment on pull request #14397:
URL: https://github.com/apache/flink/pull/14397#issuecomment-745755749


   
   ## CI report:
   
   * a0536aec90b0c0681ecdc5934dd1dfc56e23a214 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10916)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #14396: Fix java demo code

2020-12-15 Thread GitBox


flinkbot edited a comment on pull request #14396:
URL: https://github.com/apache/flink/pull/14396#issuecomment-745755403


   
   ## CI report:
   
   * f6b0082b7556de34c8fbf466e617fc5b71bf2b8f Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10914)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #14387: [FLINK-19691][Connector][jdbc] Expose `CONNECTION_CHECK_TIMEOUT_SECONDS` as a configurable option in Jdbc connector

2020-12-15 Thread GitBox


flinkbot edited a comment on pull request #14387:
URL: https://github.com/apache/flink/pull/14387#issuecomment-745165624


   
   ## CI report:
   
   * 1f2f6e1075b11988e06013f648d1796b4e978a48 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10918)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Updated] (FLINK-20612) Add benchmarks for scheduler

2020-12-15 Thread Zhilong Hong (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhilong Hong updated FLINK-20612:
-
Description: 
With Flink 1.12, we failed to run large-scale jobs on our cluster. When we 
tried to run the jobs, we hit exceptions such as out-of-heap-memory errors and 
taskmanager heartbeat timeouts. Even after we increased the heap memory size 
and extended the heartbeat timeout, the jobs still failed. After 
troubleshooting, we found several performance bottlenecks in the jobmaster. 
These bottlenecks are highly related to the complexity of the topology.

We implemented several benchmarks on these bottlenecks based on 
flink-benchmark. The topology of the benchmarks is a simple graph, which 
consists of only two vertices: one source vertex and one sink vertex. They are 
both connected with all-to-all blocking edges. The parallelisms of the vertices 
are both 8000. The execution mode is batch. The results of the benchmarks are 
illustrated below:

Table 1: The result of benchmarks on bottlenecks in the jobmaster
|*Procedure*|*Time spent*|
|Build topology|19970.44 ms|
|Init scheduling strategy|38167.351 ms|
|Deploy tasks|15102.850 ms|
|Calculate failover region to restart|12080.271 ms|

We'd like to propose these benchmarks for procedures related to the scheduler. 
There are three main benefits:
 # They help us to understand the current status of task deployment performance 
and locate where the bottleneck is.
 # We can use the benchmarks to evaluate the optimization in the future.
 # As we run the benchmarks daily, they will help us to trace how the 
performance changes and locate the commit that introduces the performance 
regression if there is any.

In the first version of the benchmarks, we mainly focus on the procedures we 
mentioned above. The methods corresponding to the procedures are:
 # Building topology: {{ExecutionGraph#attachJobGraph}}
 # Initializing scheduling strategies: 
{{PipelinedRegionSchedulingStrategy#init}}
 # Deploying tasks: {{Execution#deploy}}
 # Calculating failover regions: 
{{RestartPipelinedRegionFailoverStrategy#getTasksNeedingRestart}}

In the benchmarks, the topology consists of two vertices: source -> sink. They 
are connected with all-to-all edges. The result partition type ({{PIPELINED}} 
and {{BLOCKING}}) should be considered separately.

  was:
With Flink 1.12, we failed to run large-scale jobs on our cluster. When we 
tried to run the jobs, we hit exceptions such as out-of-heap-memory errors and 
taskmanager heartbeat timeouts. Even after we increased the heap memory size 
and extended the heartbeat timeout, the jobs still failed. After 
troubleshooting, we found several performance bottlenecks in the jobmaster. 
These bottlenecks are highly related to the complexity of the topology.

We implemented several benchmarks on these bottlenecks based on 
flink-benchmark. The topology of the benchmarks is a simple graph, which 
consists of only two vertices: one source vertex and one sink vertex. They are 
both connected with all-to-all blocking edges. The parallelisms of the vertices 
are both 8000. The execution mode is batch. The results of the benchmarks are 
illustrated below:

Table 1: The result of benchmarks on bottlenecks in the jobmaster
|*Procedure*|*Time spent*|
|Build topology|19970.44 ms|
|Init scheduling strategy|41668.338 ms|
|Deploy tasks|15102.850 ms|
|Calculate failover region to restart|12080.271 ms|

We'd like to propose these benchmarks for procedures related to the scheduler. 
There are three main benefits:
 # They help us to understand the current status of task deployment performance 
and locate where the bottleneck is.
 # We can use the benchmarks to evaluate the optimization in the future.
 # As we run the benchmarks daily, they will help us to trace how the 
performance changes and locate the commit that introduces the performance 
regression if there is any.

In the first version of the benchmarks, we mainly focus on the procedures we 
mentioned above. The methods corresponding to the procedures are:
 # Building topology: {{ExecutionGraph#attachJobGraph}}
 # Initializing scheduling strategies: 
{{PipelinedRegionSchedulingStrategy#init}}
 # Deploying tasks: {{Execution#deploy}}
 # Calculating failover regions: 
{{RestartPipelinedRegionFailoverStrategy#getTasksNeedingRestart}}

In the benchmarks, the topology consists of two vertices: source -> sink. They 
are connected with all-to-all edges. The result partition type ({{PIPELINED}} 
and {{BLOCKING}}) should be considered separately.
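
A minimal sketch of the timing pattern behind such benchmarks (a hypothetical stand-in: the real benchmarks are built on flink-benchmark/JMH and invoke the Flink methods listed above, such as ExecutionGraph#attachJobGraph, which are not reproduced here):

```java
// Hypothetical stand-in for the proposed scheduler benchmarks; the workload
// below is a placeholder for procedures like "build topology" or
// "init scheduling strategy".
public class SchedulerBenchmarkSketch {

    /** Times a single invocation of the given procedure in milliseconds. */
    static long timeMillis(Runnable procedure) {
        long start = System.nanoTime();
        procedure.run();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        // Placeholder workload standing in for a scheduler procedure.
        long elapsed = timeMillis(() -> {
            long acc = 0;
            for (int i = 0; i < 1_000_000; i++) {
                acc += i;
            }
        });
        System.out.println("Elapsed: " + elapsed + " ms");
    }
}
```

A JMH-based version would replace the hand-rolled timer with `@Benchmark` methods plus warm-up and measurement iterations, which is what flink-benchmark provides.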


> Add benchmarks for scheduler
> 
>
> Key: FLINK-20612
> URL: https://issues.apache.org/jira/browse/FLINK-20612
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.13.0
>Reporter: Zhilong Hong
>

[jira] [Updated] (FLINK-20417) Handle "Too old resource version" exception in Kubernetes watch more gracefully

2020-12-15 Thread Yang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Wang updated FLINK-20417:
--
Fix Version/s: (was: 1.11.3)
   1.12.1

> Handle "Too old resource version" exception in Kubernetes watch more 
> gracefully
> ---
>
> Key: FLINK-20417
> URL: https://issues.apache.org/jira/browse/FLINK-20417
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / Kubernetes
>Affects Versions: 1.12.0, 1.11.2
>Reporter: Yang Wang
>Priority: Major
> Fix For: 1.13.0, 1.12.1
>
>
> Currently, when a watcher (pods watcher, configmap watcher) is closed with an 
> exception, we call {{WatchCallbackHandler#handleFatalError}}, which can cause 
> the JobManager to terminate and then fail over.
> In most cases this is correct, but not for the "too old resource version" 
> exception (see [1] for more information). This exception usually happens when 
> the APIServer is restarted; we just need to create a new watch and continue 
> watching the pods/configmap. That helps the Flink cluster reduce the impact 
> of a K8s cluster restart.
>  
> The issue was inspired by a technical article [2]; thanks to the folks from 
> Tencent for the debugging. Note that the article is written in Chinese.
>  
> [1]. 
> [https://stackoverflow.com/questions/61409596/kubernetes-too-old-resource-version]
> [2]. [https://cloud.tencent.com/developer/article/1731416]
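
The retry decision described above can be sketched as follows (illustrative names, not Flink's actual classes; the assumption is that a closed watch reports the HTTP status code of the failure):

```java
// Hypothetical sketch: treat HTTP 410 ("too old resource version") on a
// closed Kubernetes watch as retriable (recreate the watch) instead of
// escalating it through handleFatalError.
public class WatchCloseHandling {
    static final int HTTP_GONE = 410;

    /** Returns true when a closed watch should simply be recreated. */
    static boolean shouldRecreateWatch(Integer httpCode) {
        // 410 means the resourceVersion we resumed from was compacted away;
        // a fresh watch from the current state recovers without a failover.
        return httpCode != null && httpCode == HTTP_GONE;
    }

    public static void main(String[] args) {
        System.out.println(shouldRecreateWatch(410)); // recreate the watch
        System.out.println(shouldRecreateWatch(500)); // escalate as fatal
    }
}
```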





[GitHub] [flink] flinkbot edited a comment on pull request #14398: [FLINK-20516][table-planner-blink] Separate the implementation of BatchExecTableSourceScan and StreamExecTableSourceScan

2020-12-15 Thread GitBox


flinkbot edited a comment on pull request #14398:
URL: https://github.com/apache/flink/pull/14398#issuecomment-745762402


   
   ## CI report:
   
   * 8cd9624ca8461fad7ad51d1e9fde1ab8b9d62d9b Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10917)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Commented] (FLINK-16325) A connection check is required, and it needs to be reopened when the JDBC connection is interrupted

2020-12-15 Thread jiawen xiao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17250102#comment-17250102
 ] 

jiawen xiao commented on FLINK-16325:
-

[~jark],cc

>  A connection check is required, and it needs to be reopened when the JDBC 
> connection is interrupted
> 
>
> Key: FLINK-16325
> URL: https://issues.apache.org/jira/browse/FLINK-16325
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / JDBC
>Affects Versions: 1.10.0
>Reporter: renjianxu
>Priority: Minor
>
> JDBCOutputFormat#writeRecord.
> When writing data, if the JDBC connection has been disconnected, the data 
> will be lost. Therefore, a connectivity check is required in the 
> writeRecord method.
>  
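
A minimal sketch of such a check using only the standard JDBC API (the class and method name are hypothetical, not JDBCOutputFormat's actual API; `Connection#isValid` performs a lightweight liveness probe with the given timeout):

```java
import java.sql.Connection;
import java.sql.SQLException;

// Hypothetical sketch of the connectivity check suggested above: before
// writeRecord() uses the connection, decide whether it must be reopened.
public class JdbcConnectionCheck {

    /**
     * Returns true when the connection is absent, closed, or no longer
     * responding within the given validation timeout (in seconds), i.e.
     * when the caller should reopen it before writing.
     */
    static boolean needsReopen(Connection connection, int timeoutSeconds) throws SQLException {
        return connection == null
                || connection.isClosed()
                || !connection.isValid(timeoutSeconds);
    }
}
```

The caller would invoke this at the top of writeRecord() and re-establish the connection via the configured JDBC URL when it returns true.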





[GitHub] [flink] flinkbot edited a comment on pull request #14387: [FLINK-19691][Connector][jdbc] Expose `CONNECTION_CHECK_TIMEOUT_SECONDS` as a configurable option in Jdbc connector

2020-12-15 Thread GitBox


flinkbot edited a comment on pull request #14387:
URL: https://github.com/apache/flink/pull/14387#issuecomment-745165624


   
   ## CI report:
   
   * 30b6460816afa6e64767d5e7ebe61bdbfc0ff84c Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10912)
 
   * 1f2f6e1075b11988e06013f648d1796b4e978a48 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot commented on pull request #14398: [FLINK-20516][table-planner-blink] Separate the implementation of BatchExecTableSourceScan and StreamExecTableSourceScan

2020-12-15 Thread GitBox


flinkbot commented on pull request #14398:
URL: https://github.com/apache/flink/pull/14398#issuecomment-745762402


   
   ## CI report:
   
   * 8cd9624ca8461fad7ad51d1e9fde1ab8b9d62d9b UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #14396: Fix java demo code

2020-12-15 Thread GitBox


flinkbot edited a comment on pull request #14396:
URL: https://github.com/apache/flink/pull/14396#issuecomment-745755403


   
   ## CI report:
   
   * f6b0082b7556de34c8fbf466e617fc5b71bf2b8f Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10914)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #14397: [docs] fix typo in savepoint doc

2020-12-15 Thread GitBox


flinkbot edited a comment on pull request #14397:
URL: https://github.com/apache/flink/pull/14397#issuecomment-745755749


   
   ## CI report:
   
   * a0536aec90b0c0681ecdc5934dd1dfc56e23a214 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10916)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #13932: [FLINK-19947][Connectors / Common]Support sink parallelism configuration for Print connector

2020-12-15 Thread GitBox


flinkbot edited a comment on pull request #13932:
URL: https://github.com/apache/flink/pull/13932#issuecomment-722193609


   
   ## CI report:
   
   *  Unknown: [CANCELED](TBD) 
   * c1701d79d8f06f5c35eea38ca5ec82723fcfd194 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10913)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Updated] (FLINK-20516) Separate the implementation of BatchExecTableSourceScan and StreamExecTableSourceScan

2020-12-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-20516:
---
Labels: pull-request-available  (was: )

> Separate the implementation of BatchExecTableSourceScan and 
> StreamExecTableSourceScan
> -
>
> Key: FLINK-20516
> URL: https://issues.apache.org/jira/browse/FLINK-20516
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: godfrey he
>Assignee: godfrey he
>Priority: Major
>  Labels: pull-request-available
>






[GitHub] [flink] flinkbot commented on pull request #14398: [FLINK-20516][table-planner-blink] Separate the implementation of BatchExecTableSourceScan and StreamExecTableSourceScan

2020-12-15 Thread GitBox


flinkbot commented on pull request #14398:
URL: https://github.com/apache/flink/pull/14398#issuecomment-745757301


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 8cd9624ca8461fad7ad51d1e9fde1ab8b9d62d9b (Wed Dec 16 
04:33:13 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[GitHub] [flink] godfreyhe opened a new pull request #14398: [FLINK-20513][table-planner-blink] Separate the implementation of BatchExecTableSourceScan and StreamExecTableSourceScan

2020-12-15 Thread GitBox


godfreyhe opened a new pull request #14398:
URL: https://github.com/apache/flink/pull/14398


   ## What is the purpose of the change
   
   *Separate the implementation of BatchExecTableSourceScan and 
StreamExecTableSourceScan*
   
   
   ## Brief change log
   
 - *Introduce StreamPhysicalTableSourceScan, and make 
StreamExecTableSourceScan only extended from ExecNode*
 - *Introduce BatchPhysicalTableSourceScan, and make 
BatchExecTableSourceScan only extended from ExecNode*
   
   
   ## Verifying this change
   
   This change is a refactoring rework covered by existing tests.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / **no** / 
don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ **not documented**)
   







[GitHub] [flink] flinkbot commented on pull request #14397: [docs] fix typo in savepoint doc

2020-12-15 Thread GitBox


flinkbot commented on pull request #14397:
URL: https://github.com/apache/flink/pull/14397#issuecomment-745755749


   
   ## CI report:
   
   * a0536aec90b0c0681ecdc5934dd1dfc56e23a214 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot commented on pull request #14396: Fix java demo code

2020-12-15 Thread GitBox


flinkbot commented on pull request #14396:
URL: https://github.com/apache/flink/pull/14396#issuecomment-745755403


   
   ## CI report:
   
   * f6b0082b7556de34c8fbf466e617fc5b71bf2b8f UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #14387: [FLINK-19691][Connector][jdbc] Expose `CONNECTION_CHECK_TIMEOUT_SECONDS` as a configurable option in Jdbc connector

2020-12-15 Thread GitBox


flinkbot edited a comment on pull request #14387:
URL: https://github.com/apache/flink/pull/14387#issuecomment-745165624


   
   ## CI report:
   
   * 30b6460816afa6e64767d5e7ebe61bdbfc0ff84c Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10912)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Closed] (FLINK-20513) Separate the implementation of BatchExecExchange and StreamExecExchange

2020-12-15 Thread godfrey he (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

godfrey he closed FLINK-20513.
--
Resolution: Done

master: dca86b86..24d60659

> Separate the implementation of BatchExecExchange and StreamExecExchange
> ---
>
> Key: FLINK-20513
> URL: https://issues.apache.org/jira/browse/FLINK-20513
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: godfrey he
>Assignee: godfrey he
>Priority: Major
>  Labels: pull-request-available
>
> This issue will separate the implementation of {{Batch(/Stream)ExecExchange}}: 
> we will introduce {{Batch(/Stream)PhysicalExchange}}, which only extends 
> {{FlinkPhysicalRel}} and describes the physical info of {{Exchange}}. 
> Meanwhile, {{BatchExecExchange}} will be moved into the `nodes.exec.batch` 
> package and will implement {{ExecNode}} for {{Exchange}}; the same will be 
> done for {{StreamExecExchange}}. 





[GitHub] [flink] flinkbot edited a comment on pull request #13932: [FLINK-19947][Connectors / Common]Support sink parallelism configuration for Print connector

2020-12-15 Thread GitBox


flinkbot edited a comment on pull request #13932:
URL: https://github.com/apache/flink/pull/13932#issuecomment-722193609


   
   ## CI report:
   
   *  Unknown: [CANCELED](TBD) 
   * c1701d79d8f06f5c35eea38ca5ec82723fcfd194 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] godfreyhe closed pull request #14384: [FLINK-20513][table-planner-blink] Separate the implementation of BatchExecExchange and StreamExecExchange

2020-12-15 Thread GitBox


godfreyhe closed pull request #14384:
URL: https://github.com/apache/flink/pull/14384


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zhuzhurk commented on a change in pull request #13641: [FLINK-17760][tests] Rework tests to not rely on legacy scheduling codes in ExecutionGraph components

2020-12-15 Thread GitBox


zhuzhurk commented on a change in pull request #13641:
URL: https://github.com/apache/flink/pull/13641#discussion_r543925697



##
File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/executiongraph/metrics/RestartTimeGaugeTest.java
##
@@ -0,0 +1,83 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.executiongraph.metrics;
+
+import org.apache.flink.api.common.JobStatus;
+import org.apache.flink.runtime.executiongraph.TestingJobStatusProvider;
+import org.apache.flink.util.TestLogger;
+
+import org.junit.Test;
+
+import java.util.HashMap;
+import java.util.Map;
+
+import static org.hamcrest.Matchers.greaterThan;
+import static org.hamcrest.Matchers.is;
+import static org.junit.Assert.assertThat;
+
+/**
+ * Tests for {@link RestartTimeGauge}.
+ */
+public class RestartTimeGaugeTest extends TestLogger {
+
+   @Test
+   public void testNotRestarted() {
+   final RestartTimeGauge gauge = new RestartTimeGauge(new 
TestingJobStatusProvider(JobStatus.RUNNING, -1));
+   assertThat(gauge.getValue(), is(0L));
+   }
+
+   @Test
+   public void testInRestarting() {
+   final Map<JobStatus, Long> statusTimestampMap = new HashMap<>();
+   statusTimestampMap.put(JobStatus.RESTARTING, 1L);
+
+   final RestartTimeGauge gauge = new RestartTimeGauge(
+   new TestingJobStatusProvider(
+   JobStatus.RESTARTING,
+   status -> 
statusTimestampMap.getOrDefault(status, -1L)));
+   // System.currentTimeMillis() is surely to be larger than 123L

Review comment:
   removed.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zhuxiaoshang commented on pull request #13932: [FLINK-19947][Connectors / Common]Support sink parallelism configuration for Print connector

2020-12-15 Thread GitBox


zhuxiaoshang commented on pull request #13932:
URL: https://github.com/apache/flink/pull/13932#issuecomment-745750153


   @flinkbot run azure



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14395: [FLINK-16491][formats] Add compression support for ParquetAvroWriters

2020-12-15 Thread GitBox


flinkbot edited a comment on pull request #14395:
URL: https://github.com/apache/flink/pull/14395#issuecomment-745744218


   
   ## CI report:
   
   * c49e4e668d1ff384073fd39e10af994b19c27018 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10911)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14387: [FLINK-19691][Connector][jdbc] Expose `CONNECTION_CHECK_TIMEOUT_SECONDS` as a configurable option in Jdbc connector

2020-12-15 Thread GitBox


flinkbot edited a comment on pull request #14387:
URL: https://github.com/apache/flink/pull/14387#issuecomment-745165624


   
   ## CI report:
   
   * d27c9bafc7a71fd26dc3616acae075837f5eab2e Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10910)
 
   * 30b6460816afa6e64767d5e7ebe61bdbfc0ff84c UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #14397: [docs] fix typo in savepoint doc

2020-12-15 Thread GitBox


flinkbot commented on pull request #14397:
URL: https://github.com/apache/flink/pull/14397#issuecomment-745748437


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit a0536aec90b0c0681ecdc5934dd1dfc56e23a214 (Wed Dec 16 
03:59:53 UTC 2020)
   
   **Warnings:**
* Documentation files were touched, but no `.zh.md` files: Update Chinese 
documentation or file Jira ticket.
* **Invalid pull request title: No valid Jira ID provided**
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #14396: Fix java demo code

2020-12-15 Thread GitBox


flinkbot commented on pull request #14396:
URL: https://github.com/apache/flink/pull/14396#issuecomment-745747959


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit f6b0082b7556de34c8fbf466e617fc5b71bf2b8f (Wed Dec 16 
03:57:59 UTC 2020)
   
   **Warnings:**
* Documentation files were touched, but no `.zh.md` files: Update Chinese 
documentation or file Jira ticket.
* **Invalid pull request title: No valid Jira ID provided**
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] legendtkl opened a new pull request #14397: [docs] fix typo in savepoint doc

2020-12-15 Thread GitBox


legendtkl opened a new pull request #14397:
URL: https://github.com/apache/flink/pull/14397


   Before:
   > The exception are incremental checkpoints with the RocksDB state backend.
   
   After:
   > The exception is incremental checkpoints with the RocksDB state backend.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-20615) Local recovery and sticky scheduling end-to-end test timeout with "IOException: Stream Closed"

2020-12-15 Thread Huang Xingbo (Jira)
Huang Xingbo created FLINK-20615:


 Summary: Local recovery and sticky scheduling end-to-end test 
timeout with "IOException: Stream Closed"
 Key: FLINK-20615
 URL: https://issues.apache.org/jira/browse/FLINK-20615
 Project: Flink
  Issue Type: Bug
  Components: Runtime / State Backends
Affects Versions: 1.12.0, 1.13.0
Reporter: Huang Xingbo


[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=10905=logs=6caf31d6-847a-526e-9624-468e053467d6=0b23652f-b18b-5b6e-6eb6-a11070364610]

It tried to restart many times, and the final error was the following:
{code:java}
2020-12-15T23:54:00.5067862Z Dec 15 23:53:42 2020-12-15 23:53:41,538 ERROR 
org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackendBuilder [] - 
Caught unexpected exception.
2020-12-15T23:54:00.5068392Z Dec 15 23:53:42 java.io.IOException: Stream Closed
2020-12-15T23:54:00.5068767Z Dec 15 23:53:42at 
java.io.FileInputStream.readBytes(Native Method) ~[?:?]
2020-12-15T23:54:00.5069223Z Dec 15 23:53:42at 
java.io.FileInputStream.read(FileInputStream.java:279) ~[?:?]
2020-12-15T23:54:00.5070150Z Dec 15 23:53:42at 
org.apache.flink.core.fs.local.LocalDataInputStream.read(LocalDataInputStream.java:73)
 ~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
2020-12-15T23:54:00.5071217Z Dec 15 23:53:42at 
org.apache.flink.core.fs.FSDataInputStreamWrapper.read(FSDataInputStreamWrapper.java:61)
 ~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
2020-12-15T23:54:00.5072295Z Dec 15 23:53:42at 
org.apache.flink.runtime.util.ForwardingInputStream.read(ForwardingInputStream.java:51)
 ~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
2020-12-15T23:54:00.5072967Z Dec 15 23:53:42at 
java.io.DataInputStream.readFully(DataInputStream.java:200) ~[?:?]
2020-12-15T23:54:00.5073483Z Dec 15 23:53:42at 
java.io.DataInputStream.readFully(DataInputStream.java:170) ~[?:?]
2020-12-15T23:54:00.5074535Z Dec 15 23:53:42at 
org.apache.flink.api.common.typeutils.base.array.BytePrimitiveArraySerializer.deserialize(BytePrimitiveArraySerializer.java:85)
 ~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
2020-12-15T23:54:00.5075847Z Dec 15 23:53:42at 
org.apache.flink.contrib.streaming.state.restore.RocksDBFullRestoreOperation.restoreKVStateData(RocksDBFullRestoreOperation.java:222)
 ~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
2020-12-15T23:54:00.5077187Z Dec 15 23:53:42at 
org.apache.flink.contrib.streaming.state.restore.RocksDBFullRestoreOperation.restoreKeyGroupsInStateHandle(RocksDBFullRestoreOperation.java:169)
 ~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
2020-12-15T23:54:00.5078495Z Dec 15 23:53:42at 
org.apache.flink.contrib.streaming.state.restore.RocksDBFullRestoreOperation.restore(RocksDBFullRestoreOperation.java:152)
 ~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
2020-12-15T23:54:00.5079802Z Dec 15 23:53:42at 
org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackendBuilder.build(RocksDBKeyedStateBackendBuilder.java:269)
 ~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
2020-12-15T23:54:00.5081013Z Dec 15 23:53:42at 
org.apache.flink.contrib.streaming.state.RocksDBStateBackend.createKeyedStateBackend(RocksDBStateBackend.java:565)
 ~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
2020-12-15T23:54:00.5082215Z Dec 15 23:53:42at 
org.apache.flink.contrib.streaming.state.RocksDBStateBackend.createKeyedStateBackend(RocksDBStateBackend.java:94)
 ~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
2020-12-15T23:54:00.5083500Z Dec 15 23:53:42at 
org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.lambda$keyedStatedBackend$1(StreamTaskStateInitializerImpl.java:299)
 ~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
2020-12-15T23:54:00.5084899Z Dec 15 23:53:42at 
org.apache.flink.streaming.api.operators.BackendRestorerProcedure.attemptCreateAndRestore(BackendRestorerProcedure.java:142)
 ~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
2020-12-15T23:54:00.5086342Z Dec 15 23:53:42at 
org.apache.flink.streaming.api.operators.BackendRestorerProcedure.createAndRestore(BackendRestorerProcedure.java:121)
 ~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
2020-12-15T23:54:00.5087601Z Dec 15 23:53:42at 
org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.keyedStatedBackend(StreamTaskStateInitializerImpl.java:316)
 ~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
2020-12-15T23:54:00.5088924Z Dec 15 23:53:42at 
org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.streamOperatorStateContext(StreamTaskStateInitializerImpl.java:155)
 ~[flink-dist_2.11-1.13-SNAPSHOT.jar:1.13-SNAPSHOT]
2020-12-15T23:54:00.5090261Z Dec 15 23:53:42at 
org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:248)
 

[GitHub] [flink] atealxt opened a new pull request #14396: Fix java demo code

2020-12-15 Thread GitBox


atealxt opened a new pull request #14396:
URL: https://github.com/apache/flink/pull/14396


   The demo code is missing a parenthesis.
   
   
   
   ## What is the purpose of the change
   
   *(For example: This pull request makes task deployment go through the blob 
server, rather than through RPC. That way we avoid re-transferring them on each 
deployment (during recovery).)*
   
   
   ## Brief change log
   
   *(for example:)*
 - *The TaskInfo is stored in the blob store on job creation time as a 
persistent artifact*
 - *Deployments RPC transmits only the blob storage reference*
 - *TaskManagers retrieve the TaskInfo from the blob cache*
   
   
   ## Verifying this change
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
 - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
 - *Extended integration test for recovery after master (JobManager) 
failure*
 - *Added test that validates that TaskInfo is transferred only once across 
recoveries*
 - *Manually verified the change by running a 4 node cluser with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / no / don't 
know)
 - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #14395: [FLINK-16491][formats] Add compression support for ParquetAvroWriters

2020-12-15 Thread GitBox


flinkbot commented on pull request #14395:
URL: https://github.com/apache/flink/pull/14395#issuecomment-745744218


   
   ## CI report:
   
   * c49e4e668d1ff384073fd39e10af994b19c27018 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14387: [FLINK-19691][Connector][jdbc] Expose `CONNECTION_CHECK_TIMEOUT_SECONDS` as a configurable option in Jdbc connector

2020-12-15 Thread GitBox


flinkbot edited a comment on pull request #14387:
URL: https://github.com/apache/flink/pull/14387#issuecomment-745165624


   
   ## CI report:
   
   * d27c9bafc7a71fd26dc3616acae075837f5eab2e Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10910)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-16491) Add compression support for ParquetAvroWriters

2020-12-15 Thread Yao Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17250081#comment-17250081
 ] 

Yao Zhang commented on FLINK-16491:
---

Sorry, I did not notice that there is a duplicate issue. Recently I noticed 
that there were some conflicts in my previous PR, so I fixed them in the new 
PR #14395. The earlier PR is outdated and I have already closed it.

> Add compression support for ParquetAvroWriters
> --
>
> Key: FLINK-16491
> URL: https://issues.apache.org/jira/browse/FLINK-16491
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.10.0
>Reporter: Yao Zhang
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Add compression support for ParquetAvroWriters



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #14387: [FLINK-19691][Connector][jdbc] Expose `CONNECTION_CHECK_TIMEOUT_SECONDS` as a configurable option in Jdbc connector

2020-12-15 Thread GitBox


flinkbot edited a comment on pull request #14387:
URL: https://github.com/apache/flink/pull/14387#issuecomment-745165624


   
   ## CI report:
   
   * c1b6232ba6ce91e3f91297ca1e53c96f867c14c6 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10892)
 
   * d27c9bafc7a71fd26dc3616acae075837f5eab2e Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10910)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #14395: [FLINK-16491][formats] Add compression support for ParquetAvroWriters

2020-12-15 Thread GitBox


flinkbot commented on pull request #14395:
URL: https://github.com/apache/flink/pull/14395#issuecomment-745737861


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit c49e4e668d1ff384073fd39e10af994b19c27018 (Wed Dec 16 
03:20:53 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-16491).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] paul8263 opened a new pull request #14395: [FLINK-16491][formats] Add compression support for ParquetAvroWriters

2020-12-15 Thread GitBox


paul8263 opened a new pull request #14395:
URL: https://github.com/apache/flink/pull/14395


   
   
   ## What is the purpose of the change
   
   Add compression support for ParquetAvroWriters.
   
   
   ## Brief change log
   
 - Added overloaded versions of the methods `forSpecificRecord`, 
`forGenericRecord` and `forReflectRecord`, which take an extra 
CompressionCodecName parameter.
 - Added corresponding unit tests.
   
   
   ## Verifying this change
   
   This change added tests and can be verified as follows:
   
 - Added extra unit test cases, located in ParquetStreamingFileSinkITCase, 
named `testWriteParquetAvroSpecificWithCompression`, 
`testWriteParquetAvroGenericWithCompression` and 
`testWriteParquetAvroReflectWithCompression`.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? yes
 - If yes, how is the feature documented?  JavaDocs
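
   The overload pattern described in the change log can be sketched as below. This is a self-contained illustration, not the actual ParquetAvroWriters API: `CompressionCodec`, `WriterFactory`, and `AvroWriters` are simplified stand-ins invented for the sketch. It shows the design choice of keeping the existing no-codec methods as thin delegates that default to uncompressed output, so existing callers are unaffected.

```java
// Illustrative sketch of codec-parameter overloads delegating to a default.
// Names are stand-ins; the real PR adds a CompressionCodecName parameter
// to forSpecificRecord / forGenericRecord / forReflectRecord.

enum CompressionCodec { UNCOMPRESSED, SNAPPY, GZIP }

class WriterFactory {
    final String type;
    final CompressionCodec codec;
    WriterFactory(String type, CompressionCodec codec) { this.type = type; this.codec = codec; }
}

class AvroWriters {
    // Existing method: no codec parameter, keeps the old uncompressed behavior.
    static WriterFactory forReflectRecord(Class<?> type) {
        return forReflectRecord(type, CompressionCodec.UNCOMPRESSED);
    }
    // New overload: extra codec parameter, as in the change log above.
    static WriterFactory forReflectRecord(Class<?> type, CompressionCodec codec) {
        return new WriterFactory(type.getSimpleName(), codec);
    }
}

public class CompressionOverloadSketch {
    public static void main(String[] args) {
        WriterFactory plain = AvroWriters.forReflectRecord(String.class);
        WriterFactory snappy = AvroWriters.forReflectRecord(String.class, CompressionCodec.SNAPPY);
        System.out.println(plain.codec);  // UNCOMPRESSED
        System.out.println(snappy.codec); // SNAPPY
    }
}
```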
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-20389) UnalignedCheckpointITCase failure caused by NullPointerException

2020-12-15 Thread Huang Xingbo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17250072#comment-17250072
 ] 

Huang Xingbo commented on FLINK-20389:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=10907=logs=5c8e7682-d68f-54d1-16a2-a09310218a49=f508e270-48d6-5f1e-3138-42a17e0714f0

> UnalignedCheckpointITCase failure caused by NullPointerException
> 
>
> Key: FLINK-20389
> URL: https://issues.apache.org/jira/browse/FLINK-20389
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.12.0, 1.13.0
>Reporter: Matthias
>Assignee: Matthias
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.12.0
>
> Attachments: FLINK-20389-failure.log
>
>
> [Build|https://dev.azure.com/mapohl/flink/_build/results?buildId=118=results]
>  failed due to {{UnalignedCheckpointITCase}} caused by a 
> {{NullPointerException}}:
> {code:java}
> Test execute[Parallel cogroup, p = 
> 10](org.apache.flink.test.checkpointing.UnalignedCheckpointITCase) failed 
> with:
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
>   at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:147)
>   at 
> org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$2(MiniClusterJobClient.java:119)
>   at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
>   at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
>   at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$0(AkkaInvocationHandler.java:229)
>   at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
>   at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
>   at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
>   at 
> org.apache.flink.runtime.concurrent.FutureUtils$1.onComplete(FutureUtils.java:996)
>   at akka.dispatch.OnComplete.internal(Future.scala:264)
>   at akka.dispatch.OnComplete.internal(Future.scala:261)
>   at akka.dispatch.japi$CallbackBridge.apply(Future.scala:191)
>   at akka.dispatch.japi$CallbackBridge.apply(Future.scala:188)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
>   at 
> org.apache.flink.runtime.concurrent.Executors$DirectExecutionContext.execute(Executors.java:74)
>   at 
> scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
>   at 
> scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
>   at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:572)
>   at 
> akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:22)
>   at 
> akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:21)
>   at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:436)
>   at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:435)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
>   at 
> akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:91)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
>   at 
> scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:90)
>   at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
>   at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)
>   at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>   at 
> akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>   at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>   at 
> akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> Caused by: org.apache.flink.runtime.JobException: Recovery is suppressed by 
> 

[GitHub] [flink] flinkbot edited a comment on pull request #14389: [FLINK-20528][python] Add Python building blocks to make sure the basic functionality of Stream Group Table Aggregation could work

2020-12-15 Thread GitBox


flinkbot edited a comment on pull request #14389:
URL: https://github.com/apache/flink/pull/14389#issuecomment-745192120


   
   ## CI report:
   
   * 3385ab7b31e3d312cc4f98ab9b619c15ad400215 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10888)
 
   * 3027c25e19dbb51b9f96738959e043411fabff57 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10909)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14387: [FLINK-19691][Connector][jdbc] Expose `CONNECTION_CHECK_TIMEOUT_SECONDS` as a configurable option in Jdbc connector

2020-12-15 Thread GitBox


flinkbot edited a comment on pull request #14387:
URL: https://github.com/apache/flink/pull/14387#issuecomment-745165624


   
   ## CI report:
   
   * c1b6232ba6ce91e3f91297ca1e53c96f867c14c6 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10892)
 
   * d27c9bafc7a71fd26dc3616acae075837f5eab2e UNKNOWN
   
   




[GitHub] [flink] flinkbot edited a comment on pull request #14389: [FLINK-20528][python] Add Python building blocks to make sure the basic functionality of Stream Group Table Aggregation could work

2020-12-15 Thread GitBox


flinkbot edited a comment on pull request #14389:
URL: https://github.com/apache/flink/pull/14389#issuecomment-745192120


   
   ## CI report:
   
   * 3385ab7b31e3d312cc4f98ab9b619c15ad400215 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10888)
 
   * 3027c25e19dbb51b9f96738959e043411fabff57 UNKNOWN
   
   




[jira] [Commented] (FLINK-20562) Support ExplainDetails for EXPLAIN syntax

2020-12-15 Thread Fangliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17250057#comment-17250057
 ] 

Fangliang Liu commented on FLINK-20562:
---

[~jark] Can you assign this to me?

 

> Support ExplainDetails for EXPLAIN syntax
> -
>
> Key: FLINK-20562
> URL: https://issues.apache.org/jira/browse/FLINK-20562
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Reporter: Jark Wu
>Priority: Major
> Fix For: 1.13.0
>
>
> Currently, the {{EXPLAIN}} syntax only supports printing the default AST, the 
> logical plan, and the physical plan. However, it does not support printing 
> detailed information such as CHANGELOG_MODE, ESTIMATED_COST, and 
> JSON_EXECUTION_PLAN, which are defined in {{ExplainDetail}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] paul8263 closed pull request #11345: [FLINK-16491][formats] Add compression support for ParquetAvroWriters

2020-12-15 Thread GitBox


paul8263 closed pull request #11345:
URL: https://github.com/apache/flink/pull/11345


   







[jira] [Commented] (FLINK-20389) UnalignedCheckpointITCase failure caused by NullPointerException

2020-12-15 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17250052#comment-17250052
 ] 

Dian Fu commented on FLINK-20389:
-

Hi [~mapohl], it seems that this issue occurs quite frequently. Could you take a 
further look at it?

> UnalignedCheckpointITCase failure caused by NullPointerException
> 
>
> Key: FLINK-20389
> URL: https://issues.apache.org/jira/browse/FLINK-20389
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.12.0, 1.13.0
>Reporter: Matthias
>Assignee: Matthias
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.12.0
>
> Attachments: FLINK-20389-failure.log
>
>
> [Build|https://dev.azure.com/mapohl/flink/_build/results?buildId=118=results]
>  failed due to {{UnalignedCheckpointITCase}} caused by a 
> {{NullPointerException}}:
> {code:java}
> Test execute[Parallel cogroup, p = 10](org.apache.flink.test.checkpointing.UnalignedCheckpointITCase) failed with:
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
>   at org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:147)
>   at org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$2(MiniClusterJobClient.java:119)
>   at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
>   at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
>   at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
>   at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
>   at org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$0(AkkaInvocationHandler.java:229)
>   at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
>   at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
>   at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
>   at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
>   at org.apache.flink.runtime.concurrent.FutureUtils$1.onComplete(FutureUtils.java:996)
>   at akka.dispatch.OnComplete.internal(Future.scala:264)
>   at akka.dispatch.OnComplete.internal(Future.scala:261)
>   at akka.dispatch.japi$CallbackBridge.apply(Future.scala:191)
>   at akka.dispatch.japi$CallbackBridge.apply(Future.scala:188)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
>   at org.apache.flink.runtime.concurrent.Executors$DirectExecutionContext.execute(Executors.java:74)
>   at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
>   at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
>   at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:572)
>   at akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:22)
>   at akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:21)
>   at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:436)
>   at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:435)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
>   at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
>   at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:91)
>   at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
>   at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
>   at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
>   at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:90)
>   at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
>   at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)
>   at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>   at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>   at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>   at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> Caused by: org.apache.flink.runtime.JobException: Recovery is suppressed by FixedDelayRestartBackoffTimeStrategy(maxNumberRestartAttempts=5, 

[jira] [Updated] (FLINK-20601) Rework PyFlink CLI documentation

2020-12-15 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-20601:

Fix Version/s: 1.12.1
   1.13.0

> Rework PyFlink CLI documentation
> 
>
> Key: FLINK-20601
> URL: https://issues.apache.org/jira/browse/FLINK-20601
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Python, Documentation
>Reporter: Matthias
>Assignee: Shuiqiang Chen
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0, 1.12.1
>
>
> The CLI PyFlink section needs to be refactored as well. This issue covers 
> this work.





[GitHub] [flink] xiaoHoly commented on pull request #14387: [FLINK-19691][Connector][jdbc] Expose `CONNECTION_CHECK_TIMEOUT_SECONDS` as a configurable option in Jdbc connector

2020-12-15 Thread GitBox


xiaoHoly commented on pull request #14387:
URL: https://github.com/apache/flink/pull/14387#issuecomment-745708913


   @wangxlong, thanks for your suggestion, I will do it.







[GitHub] [flink] xiaoHoly removed a comment on pull request #14387: [FLINK-19691][Connector][jdbc] Expose `CONNECTION_CHECK_TIMEOUT_SECONDS` as a configurable option in Jdbc connector

2020-12-15 Thread GitBox


xiaoHoly removed a comment on pull request #14387:
URL: https://github.com/apache/flink/pull/14387#issuecomment-745708634


   > Thanks for your contribution @xiaoHoly.
   > We should expose a ConfigOption like JdbcDynamicTableFactory#PASSWORD.
   > The configOption key can be `connection.check.timeout` and the type is 
duration.
   > And we also should add a test to verify this configOption in 
JdbcDynamicTableFactoryTest.
   Thanks for your suggestion, I will do it.
   







[GitHub] [flink] xiaoHoly commented on pull request #14387: [FLINK-19691][Connector][jdbc] Expose `CONNECTION_CHECK_TIMEOUT_SECONDS` as a configurable option in Jdbc connector

2020-12-15 Thread GitBox


xiaoHoly commented on pull request #14387:
URL: https://github.com/apache/flink/pull/14387#issuecomment-745708634


   > Thanks for your contribution @xiaoHoly.
   > We should expose a ConfigOption like JdbcDynamicTableFactory#PASSWORD.
   > The configOption key can be `connection.check.timeout` and the type is 
duration.
   > And we also should add a test to verify this configOption in 
JdbcDynamicTableFactoryTest.
   Thanks for your suggestion, I will do it.
   







[GitHub] [flink] flinkbot edited a comment on pull request #14057: [FLINK-19681][checkpointing] Timeout aligned checkpoints

2020-12-15 Thread GitBox


flinkbot edited a comment on pull request #14057:
URL: https://github.com/apache/flink/pull/14057#issuecomment-726354432


   
   ## CI report:
   
   * c98b6606d03656b144f18c76a8af8b14c0b65b17 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10904)
 
   
   




[jira] [Commented] (FLINK-20614) Registered sql drivers not deregistered after task finished in session cluster

2020-12-15 Thread Chesnay Schepler (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17250028#comment-17250028
 ] 

Chesnay Schepler commented on FLINK-20614:
--

This issue was documented in FLINK-19005 
(https://ci.apache.org/projects/flink/flink-docs-release-1.12/ops/debugging/debugging_classloading.html#unloading-of-dynamically-loaded-classes-in-user-code).

The Tomcat approach pretty much does what I suggested in FLINK-19005. Looking 
at what they [actually have to do to make it 
work|https://github.com/apache/tomcat/blob/efc6af6778ff3c1605d8b053f6fd2a4d9fd8e0d3/java/org/apache/catalina/loader/WebappClassLoaderBase.java#L1673], 
I'd rather not go down that route.

> Registered sql drivers not deregistered after task finished in session cluster
> --
>
> Key: FLINK-20614
> URL: https://issues.apache.org/jira/browse/FLINK-20614
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC, Runtime / Task
>Affects Versions: 1.12.0, 1.13.0
>Reporter: Kezhu Wang
>Priority: Major
>
> {{DriverManager}} keeps registered drivers in its internal data structures, 
> which prevents them from being garbage collected after a task has finished. I 
> confirmed this in a standalone session cluster by observing that the 
> {{ChildFirstClassLoader}} could not be reclaimed after several {{GC.run}} 
> invocations; the issue should exist in all session clusters.
> Tomcat documents 
> [this|https://ci.apache.org/projects/tomcat/tomcat85/docs/jndi-datasource-examples-howto.html#DriverManager,_the_service_provider_mechanism_and_memory_leaks]
>  and fixes/circumvents this with 
> [JdbcLeakPrevention|https://github.com/apache/tomcat/blob/master/java/org/apache/catalina/loader/JdbcLeakPrevention.java#L30].
> Should we solve this in the runtime? Or treat it as the connectors' and 
> clients' responsibility to solve it using 
> {{RuntimeContext.registerUserCodeClassLoaderReleaseHookIfAbsent}} or similar?
> Personally, it would be nice to solve it in the runtime as a catch-all to 
> avoid memory leaks and provide consistent behavior to clients across per-job 
> and session modes.





[jira] [Created] (FLINK-20614) Registered sql drivers not deregistered after task finished in session cluster

2020-12-15 Thread Kezhu Wang (Jira)
Kezhu Wang created FLINK-20614:
--

 Summary: Registered sql drivers not deregistered after task 
finished in session cluster
 Key: FLINK-20614
 URL: https://issues.apache.org/jira/browse/FLINK-20614
 Project: Flink
  Issue Type: Bug
  Components: Connectors / JDBC, Runtime / Task
Affects Versions: 1.12.0, 1.13.0
Reporter: Kezhu Wang


{{DriverManager}} keeps registered drivers in its internal data structures, 
which prevents them from being garbage collected after a task has finished. I 
confirmed this in a standalone session cluster by observing that the 
{{ChildFirstClassLoader}} could not be reclaimed after several {{GC.run}} 
invocations; the issue should exist in all session clusters.

Tomcat documents 
[this|https://ci.apache.org/projects/tomcat/tomcat85/docs/jndi-datasource-examples-howto.html#DriverManager,_the_service_provider_mechanism_and_memory_leaks]
 and fixes/circumvents this with 
[JdbcLeakPrevention|https://github.com/apache/tomcat/blob/master/java/org/apache/catalina/loader/JdbcLeakPrevention.java#L30].

Should we solve this in the runtime? Or treat it as the connectors' and 
clients' responsibility to solve it using 
{{RuntimeContext.registerUserCodeClassLoaderReleaseHookIfAbsent}} or similar?

Personally, it would be nice to solve it in the runtime as a catch-all to avoid 
memory leaks and provide consistent behavior to clients across per-job and 
session modes.
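As an illustration of the Tomcat-style cleanup discussed here, below is a minimal, pure-JDK sketch (a hypothetical helper, not existing Flink code) that deregisters every JDBC driver whose class was loaded by a given user-code class loader, so that {{DriverManager}} no longer pins the loader. A connector could register such logic as a release hook via {{RuntimeContext.registerUserCodeClassLoaderReleaseHookIfAbsent}}.

```java
import java.sql.Driver;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.Enumeration;
import java.util.List;

// Hypothetical helper, modeled after Tomcat's JdbcLeakPrevention: walk the
// drivers registered with DriverManager and deregister those whose class was
// loaded by the given (user-code) class loader, so the loader can be reclaimed.
class JdbcDriverCleanup {
    public static List<String> deregisterDrivers(ClassLoader userCodeClassLoader) {
        List<String> deregistered = new ArrayList<>();
        for (Enumeration<Driver> drivers = DriverManager.getDrivers(); drivers.hasMoreElements(); ) {
            Driver driver = drivers.nextElement();
            if (driver.getClass().getClassLoader() != userCodeClassLoader) {
                continue; // driver belongs to another loader (e.g. the system class path)
            }
            try {
                DriverManager.deregisterDriver(driver);
                deregistered.add(driver.getClass().getName());
            } catch (SQLException e) {
                // best effort: skip a driver that refuses deregistration
            }
        }
        return deregistered;
    }
}
```

Note that this only removes the {{DriverManager}} reference; drivers that start background threads (as Tomcat's much longer implementation handles) would still need additional cleanup.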





[GitHub] [flink] flinkbot edited a comment on pull request #14394: [FLINK-19013][state-backends] Add start/end logs for state restoration

2020-12-15 Thread GitBox


flinkbot edited a comment on pull request #14394:
URL: https://github.com/apache/flink/pull/14394#issuecomment-745451922


   
   ## CI report:
   
   * 95f0ec70e5ded62603dcde72a4eac42ea1116b02 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10902)
 
   
   




[GitHub] [flink] flinkbot edited a comment on pull request #14393: [FLINK-20605][coordination] Rework cancellation of slot allocation futures

2020-12-15 Thread GitBox


flinkbot edited a comment on pull request #14393:
URL: https://github.com/apache/flink/pull/14393#issuecomment-745440486


   
   ## CI report:
   
   * e449ce26f5e0b80a6e46810bbbffc19d09f5f59b Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10901)
 
   
   




[jira] [Closed] (FLINK-20533) Add histogram support to Datadog reporter

2020-12-15 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-20533.

Resolution: Fixed

master:
38e424fd5fa7c4a6e6165921689a232a10e85bdd
313e20e8e03953a5e1cec9daa467f561ccfbd599

> Add histogram support to Datadog reporter
> -
>
> Key: FLINK-20533
> URL: https://issues.apache.org/jira/browse/FLINK-20533
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Metrics
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0
>
>
> The Datadog reporter currently ignores histograms. I think we just saved some 
> time when we added it, but we should rectify that.





[GitHub] [flink] zentol merged pull request #14340: [FLINK-20533][datadog] Add Histogram support

2020-12-15 Thread GitBox


zentol merged pull request #14340:
URL: https://github.com/apache/flink/pull/14340


   







[GitHub] [flink] flinkbot edited a comment on pull request #14057: [FLINK-19681][checkpointing] Timeout aligned checkpoints

2020-12-15 Thread GitBox


flinkbot edited a comment on pull request #14057:
URL: https://github.com/apache/flink/pull/14057#issuecomment-726354432


   
   ## CI report:
   
   * cbce4fc8b854e23dc7128b161336dc09e2d81e03 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10903)
 
   * c98b6606d03656b144f18c76a8af8b14c0b65b17 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10904)
 
   
   




[GitHub] [flink] rkhachatryan commented on a change in pull request #14057: [FLINK-19681][checkpointing] Timeout aligned checkpoints

2020-12-15 Thread GitBox


rkhachatryan commented on a change in pull request #14057:
URL: https://github.com/apache/flink/pull/14057#discussion_r543678627



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/RemoteInputChannel.java
##
@@ -451,11 +451,17 @@ public void onBuffer(Buffer buffer, int sequenceNumber, int backlog) throws IOEx
 			}
 			else {
 				receivedBuffers.add(sequenceBuffer);
-				channelStatePersister.maybePersist(buffer);
 				if (dataType.requiresAnnouncement()) {
 					firstPriorityEvent = addPriorityBuffer(announce(sequenceBuffer));
 				}
 			}
+			channelStatePersister.checkForBarrier(sequenceBuffer.buffer).ifPresent(id -> {
+				// checkpoint was not yet started by task thread,
+				// so remember the numbers of buffers to spill for the time when it will be started
+				lastBarrierSequenceNumber = sequenceBuffer.sequenceNumber;
+				lastBarrierId = id;
+			});
+			channelStatePersister.maybePersist(buffer);

Review comment:
   I didn't add a unit test because, after the other fixes in master (#14052), 
this change is not strictly necessary
   (though I think it's still less error-prone to not update the SQN unnecessarily).









[GitHub] [flink] rkhachatryan commented on a change in pull request #14057: [FLINK-19681][checkpointing] Timeout aligned checkpoints

2020-12-15 Thread GitBox


rkhachatryan commented on a change in pull request #14057:
URL: https://github.com/apache/flink/pull/14057#discussion_r543676863



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/LocalInputChannel.java
##
@@ -229,11 +229,8 @@ public void run() {
 
numBytesIn.inc(buffer.getSize());
numBuffersIn.inc();
-   if (buffer.getDataType().hasPriority()) {
-   channelStatePersister.checkForBarrier(buffer);
-   } else {
-   channelStatePersister.maybePersist(buffer);
-   }
+   channelStatePersister.checkForBarrier(buffer);
+   channelStatePersister.maybePersist(buffer);

Review comment:
   I've added 
`LocalInputChannelTest.testNoDataPersistedAfterReceivingAlignedBarrier`.









[GitHub] [flink] rkhachatryan commented on a change in pull request #14057: [FLINK-19681][checkpointing] Timeout aligned checkpoints

2020-12-15 Thread GitBox


rkhachatryan commented on a change in pull request #14057:
URL: https://github.com/apache/flink/pull/14057#discussion_r543676375



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/RemoteInputChannel.java
##
@@ -569,14 +569,23 @@ public void convertToPriorityEvent(int sequenceNumber) throws IOException {
 				"Attempted to convertToPriorityEvent an event [%s] that has already been prioritized [%s]",
 				toPrioritize,
 				numPriorityElementsBeforeRemoval);
+			// set the priority flag (checked on poll)
+			// don't convert the barrier itself (barrier controller might not have been switched yet)
+			AbstractEvent e = EventSerializer.fromBuffer(toPrioritize.buffer, this.getClass().getClassLoader());
+			toPrioritize.buffer.setReaderIndex(0);
+			toPrioritize = new SequenceBuffer(EventSerializer.toBuffer(e, true), toPrioritize.sequenceNumber);
 			firstPriorityEvent = addPriorityBuffer(toPrioritize);   // note that only position of the element is changed
 			// converting the event itself would require switching the controller sooner
 		}
 		if (firstPriorityEvent) {
-			notifyPriorityEvent(sequenceNumber);
+			notifyPriorityEventForce(); // use force here because the barrier SQN might be seen by gate during the announcement

Review comment:
   Rephrased as:
   ```
   // forcibly notify about the priority event
   // instead of passing barrier SQN to be checked
   // because this SQN might have been seen by the input gate during the announcement
   ```
   









[GitHub] [flink] rkhachatryan commented on a change in pull request #14057: [FLINK-19681][checkpointing] Timeout aligned checkpoints

2020-12-15 Thread GitBox


rkhachatryan commented on a change in pull request #14057:
URL: https://github.com/apache/flink/pull/14057#discussion_r543675851



##
File path: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/io/AlternatingController.java
##
@@ -114,6 +114,7 @@ public void barrierAnnouncement(
lastSeenBarrier = barrier.getId();
firstBarrierArrivalTime = getArrivalTime(barrier);
}
+   activeController = chooseController(barrier);

Review comment:
   I've added `AlternatingControllerTest.testSwitchToUnalignedByUpstream`.









[GitHub] [flink] rkhachatryan commented on a change in pull request #14057: [FLINK-19681][checkpointing] Timeout aligned checkpoints

2020-12-15 Thread GitBox


rkhachatryan commented on a change in pull request #14057:
URL: https://github.com/apache/flink/pull/14057#discussion_r543675153



##
File path: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/io/AlignedController.java
##
@@ -120,6 +122,12 @@ public void obsoleteBarrierReceived(
resumeConsumption(channelInfo);
}
 
+   protected void resetPendingCheckpoint(long cancelledId) {
+   for (final CheckpointableInput input : inputs) {
+   input.checkpointStopped(cancelledId);
+   }
+   }
+

Review comment:
   I've added `AlternatingControllerTest.testChannelResetOnNewBarrier`.









[GitHub] [flink] flinkbot edited a comment on pull request #14392: [FLINK-20606][connectors/hive, table sql] sql cli with hive catalog c…

2020-12-15 Thread GitBox


flinkbot edited a comment on pull request #14392:
URL: https://github.com/apache/flink/pull/14392#issuecomment-745400662


   
   ## CI report:
   
   * f0c76ebb8bd79c13788359ff50fdc4416a3bfe56 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10900)
 
   
   




[GitHub] [flink] flinkbot edited a comment on pull request #14057: [FLINK-19681][checkpointing] Timeout aligned checkpoints

2020-12-15 Thread GitBox


flinkbot edited a comment on pull request #14057:
URL: https://github.com/apache/flink/pull/14057#issuecomment-726354432


   
   ## CI report:
   
   * cbce4fc8b854e23dc7128b161336dc09e2d81e03 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10903)
 
   * c98b6606d03656b144f18c76a8af8b14c0b65b17 UNKNOWN
   
   




[GitHub] [flink] flinkbot edited a comment on pull request #14057: [FLINK-19681][checkpointing] Timeout aligned checkpoints

2020-12-15 Thread GitBox


flinkbot edited a comment on pull request #14057:
URL: https://github.com/apache/flink/pull/14057#issuecomment-726354432


   
   ## CI report:
   
   * cbce4fc8b854e23dc7128b161336dc09e2d81e03 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10903)
 
   
   




[GitHub] [flink] flinkbot edited a comment on pull request #14057: [FLINK-19681][checkpointing] Timeout aligned checkpoints

2020-12-15 Thread GitBox


flinkbot edited a comment on pull request #14057:
URL: https://github.com/apache/flink/pull/14057#issuecomment-726354432


   
   ## CI report:
   
   * 64f5b48580f748cc15bc7ee65db45dd937b4821a Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10522)
 
   * cbce4fc8b854e23dc7128b161336dc09e2d81e03 UNKNOWN
   
   




[GitHub] [flink] flinkbot edited a comment on pull request #13641: [FLINK-17760][tests] Rework tests to not rely on legacy scheduling codes in ExecutionGraph components

2020-12-15 Thread GitBox


flinkbot edited a comment on pull request #13641:
URL: https://github.com/apache/flink/pull/13641#issuecomment-708569491


   
   ## CI report:
   
   * 09d8deb89416f53dfe8b5c16fb9d723cbd98612c UNKNOWN
   * fe1562c5cda8ecb15f6af1afdf7b6217e6c20c42 UNKNOWN
   * 89bea5233d5efb9db88eacc21b445a617a8c3c27 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10899)
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #14391: [FLINK-20325][build] Move docs_404_check to CI stage

2020-12-15 Thread GitBox


flinkbot edited a comment on pull request #14391:
URL: https://github.com/apache/flink/pull/14391#issuecomment-745287984


   
   ## CI report:
   
   * dcf8df64f76cb4839fd5c1d54805ad307b96914b Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10898)
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #14394: [FLINK-19013][state-backends] Add start/end logs for state restoration

2020-12-15 Thread GitBox


flinkbot edited a comment on pull request #14394:
URL: https://github.com/apache/flink/pull/14394#issuecomment-745451922


   
   ## CI report:
   
   * 95f0ec70e5ded62603dcde72a4eac42ea1116b02 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10902)
   
   






