[GitHub] [flink] Myracle commented on pull request #13487: [FLINK-19409][Documentation] remove unrelated comment for getValue in ListView

2020-09-26 Thread GitBox


Myracle commented on pull request #13487:
URL: https://github.com/apache/flink/pull/13487#issuecomment-699595167


   @wuchong Please review the code. Thank you.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #13489: [hotfix][docs]Fix the FROM_UNIXTIME UTC sample time error

2020-09-26 Thread GitBox


flinkbot commented on pull request #13489:
URL: https://github.com/apache/flink/pull/13489#issuecomment-699594969


   
   ## CI report:
   
   * 085372c3425a2a9109a6edfca26cd3fe0e2c0358 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #13483: [FLINK-19403][python] Support Pandas Stream Group Window Aggregation

2020-09-26 Thread GitBox


flinkbot edited a comment on pull request #13483:
URL: https://github.com/apache/flink/pull/13483#issuecomment-698904241


   
   ## CI report:
   
   * 8516ccaf4a5c62eb02ae4a3f851f7661c63657c3 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6987)
 
   * 58c1c6581235e6d55eff19a9ce463c3ac7cdfad3 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6991)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] curcur commented on a change in pull request #13456: Single task add partial flag in buffer

2020-09-26 Thread GitBox


curcur commented on a change in pull request #13456:
URL: https://github.com/apache/flink/pull/13456#discussion_r494377293



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/buffer/BufferBuilder.java
##
@@ -86,11 +94,34 @@ public int append(ByteBuffer source) {
int available = getMaxCapacity() - positionMarker.getCached();
int toCopy = Math.min(needed, available);
 
+   // Each Data BufferBuilder starts with a 4-byte integer header.
+   // Since length can only be 0 or positive numbers, the first bit of the integer
+   // is used to identify whether the first record is partial (1) or not (0).
+   // The remaining 31 bits stand for the length of the remaining record.
+   // The data written is not made visible to the reader until {@link #commit()},
+   // so the BufferBuilder ends either with a complete record or a full buffer after append().
+   if (isEmptyBufferBuilder()) {
+   available = available - BUFFER_BUILDER_HEADER_SIZE;
+   toCopy = Math.min(needed, available);

Review comment:
   > Can `available` be less than `BUFFER_BUILDER_HEADER_SIZE`?
   > Then `toCopy` can be negative.
   
   The `if` clause checks that the builder is empty, and the minimum buffer size is 4096 bytes, so no.
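The header layout described in the diff comment can be sketched in isolation. Everything below (class and constant names, the exact bit layout) is an assumption reconstructed from the comment text — sign bit as the partial flag, low 31 bits as the remaining record length — not the PR's actual code:

```java
public class HeaderDemo {
    // Assumed from the diff comment: 4-byte header, top bit = partial flag.
    static final int HEADER_SIZE = 4;
    static final int PARTIAL_FLAG = 0x80000000;

    static int encodeHeader(boolean partial, int remainingLength) {
        // remainingLength is non-negative, so it fits in the low 31 bits.
        return (partial ? PARTIAL_FLAG : 0) | (remainingLength & 0x7FFFFFFF);
    }

    static boolean isPartial(int header) {
        return (header & PARTIAL_FLAG) != 0;
    }

    static int remainingLength(int header) {
        return header & 0x7FFFFFFF;
    }

    public static void main(String[] args) {
        int h = encodeHeader(true, 1024);
        System.out.println(isPartial(h));        // true
        System.out.println(remainingLength(h));  // 1024
    }
}
```

This also shows why the reviewer's question matters: the header consumes 4 of the available bytes, which is safe only because the minimum buffer size (4096 bytes) far exceeds the header size.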









[GitHub] [flink] flinkbot commented on pull request #13489: [hotfix][docs]Fix the FROM_UNIXTIME UTC sample time error

2020-09-26 Thread GitBox


flinkbot commented on pull request #13489:
URL: https://github.com/apache/flink/pull/13489#issuecomment-699593480


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 085372c3425a2a9109a6edfca26cd3fe0e2c0358 (Sun Sep 27 
06:38:24 UTC 2020)
   
   ✅ no warnings
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process.
   The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.
   
   Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[jira] [Updated] (FLINK-19432) Whether to capture the updates which don't change any monitored columns

2020-09-26 Thread tinny cat (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

tinny cat updated FLINK-19432:
--
Description: 
with `debezium-json` and `canal-json`: 

Whether to capture the updates which don't change any monitored columns. This 
may happen if the monitored columns (the columns defined in the Flink SQL DDL) are a 
subset of the columns in the database table. We can provide an optional option, 
default 'true', which means all the updates will be captured. You can set it to 
'false' to only capture changed updates.

  was:
with `debezium-json` and `canal-json`: 

Whether to capture the updates which don't change any monitored columns. This 
may happen if the monitored columns (columns defined in Flink SQL DDL) is a 
subset of the columns in database table.  We can provide an optional option, 
default 'true', which means all the updates will be captured. You can set to 
'false' to only capture changed updates, but note this may increase some 
comparison overhead for each update event.


> Whether to capture the updates which don't change any monitored columns
> ---
>
> Key: FLINK-19432
> URL: https://issues.apache.org/jira/browse/FLINK-19432
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.11.2
>Reporter: tinny cat
>Priority: Major
> Fix For: 1.11.3
>
>
> with `debezium-json` and `canal-json`: 
> Whether to capture the updates which don't change any monitored columns. This 
> may happen if the monitored columns (columns defined in Flink SQL DDL) is a 
> subset of the columns in database table.  We can provide an optional option, 
> default 'true', which means all the updates will be captured. You can set to 
> 'false' to only capture changed updates



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] Tartarus0zm commented on pull request #13489: [hotfix][docs]Fix the FROM_UNIXTIME UTC sample time error

2020-09-26 Thread GitBox


Tartarus0zm commented on pull request #13489:
URL: https://github.com/apache/flink/pull/13489#issuecomment-699593347


   @wuchong  please take a look, if you have time, thanks







[GitHub] [flink] flinkbot edited a comment on pull request #13483: [FLINK-19403][python] Support Pandas Stream Group Window Aggregation

2020-09-26 Thread GitBox


flinkbot edited a comment on pull request #13483:
URL: https://github.com/apache/flink/pull/13483#issuecomment-698904241


   
   ## CI report:
   
   * c89615b7eb8f36e5e8966ad6f0098aa4757b5206 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6964)
 
   * 8516ccaf4a5c62eb02ae4a3f851f7661c63657c3 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6987)
 
   * 58c1c6581235e6d55eff19a9ce463c3ac7cdfad3 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #13487: [FLINK-19409][Documentation] remove unrelated comment for getValue in ListView

2020-09-26 Thread GitBox


flinkbot edited a comment on pull request #13487:
URL: https://github.com/apache/flink/pull/13487#issuecomment-699575614


   
   ## CI report:
   
   * 9cbc32fe3fb52c05dce29124ac61c4a7c2952241 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6985)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] Tartarus0zm opened a new pull request #13489: [hotfix][docs]Fix the FROM_UNIXTIME UTC sample time error

2020-09-26 Thread GitBox


Tartarus0zm opened a new pull request #13489:
URL: https://github.com/apache/flink/pull/13489


   ## What is the purpose of the change
   
   Fix the FROM_UNIXTIME UTC sample time error
   
   ## Brief change log
   
   44 in UTC is `1970-01-01 00:00:44`
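The corrected value is easy to verify with `java.time`. This is a plain Java sketch of the UTC rendering, not Flink's actual `FROM_UNIXTIME` implementation (which formats in the session time zone — the source of the doc error being fixed):

```java
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

public class FromUnixTimeDemo {
    // Mimics FROM_UNIXTIME's default 'yyyy-MM-dd HH:mm:ss' rendering, pinned to UTC.
    static String fromUnixTime(long epochSeconds) {
        return Instant.ofEpochSecond(epochSeconds)
                .atZone(ZoneOffset.UTC)
                .format(DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss"));
    }

    public static void main(String[] args) {
        System.out.println(fromUnixTime(44)); // 1970-01-01 00:00:44
    }
}
```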
   
   ## Verifying this change
   
   no
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
   







[jira] [Commented] (FLINK-19432) Whether to capture the updates which don't change any monitored columns

2020-09-26 Thread tinny cat (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17202750#comment-17202750
 ] 

tinny cat commented on FLINK-19432:
---

In fact, I have already opened a pull request in 
[flink-cdc|https://github.com/ververica/flink-cdc-connectors/pull/41], 

but `canal-json` currently cannot rely on a simple equality check of `before` and 
`after`, because in canal, `before` does not carry the full set of fields. If a field 
was indeed null before the update, `before` and `after` will compare as equal.
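The pitfall can be illustrated with a small sketch. The maps and method below are illustrative only, not connector code: with full row images a `before`/`after` equality check detects no-op updates, but with canal's partial `before` image the comparison is unreliable.

```java
import java.util.HashMap;
import java.util.Map;

public class ChangedUpdateCheck {
    // Naive filter: emit the update only if the monitored columns changed.
    static boolean isChangedUpdate(Map<String, Object> before, Map<String, Object> after) {
        return !before.equals(after);
    }

    public static void main(String[] args) {
        // Full images (debezium-style): equality works.
        Map<String, Object> before = new HashMap<>();
        before.put("id", 1);
        before.put("name", "a");
        Map<String, Object> after = new HashMap<>(before);
        after.put("name", "b");
        System.out.println(isChangedUpdate(before, after)); // true

        // canal-style partial image: `before` may omit unchanged fields,
        // so the naive equality check no longer reflects what actually changed.
        Map<String, Object> partialBefore = new HashMap<>();
        partialBefore.put("name", "a");
        System.out.println(isChangedUpdate(partialBefore, after)); // true, but unreliable
    }
}
```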

> Whether to capture the updates which don't change any monitored columns
> ---
>
> Key: FLINK-19432
> URL: https://issues.apache.org/jira/browse/FLINK-19432
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.11.2
>Reporter: tinny cat
>Priority: Major
> Fix For: 1.11.3
>
>
> with `debezium-json` and `canal-json`: 
> Whether to capture the updates which don't change any monitored columns. This 
> may happen if the monitored columns (columns defined in Flink SQL DDL) is a 
> subset of the columns in database table.  We can provide an optional option, 
> default 'true', which means all the updates will be captured. You can set to 
> 'false' to only capture changed updates, but note this may increase some 
> comparison overhead for each update event.





[jira] [Commented] (FLINK-18828) Terminate jobmanager process with zero exit code to avoid unexpected restarting by K8s

2020-09-26 Thread Yang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17202749#comment-17202749
 ] 

Yang Wang commented on FLINK-18828:
---

[~uce] Makes sense. We could only do this in 1.12 or later. However, it seems 
that we have not reached a consensus yet, so maybe we need more input to make 
sure that we do not have too much impact on the downstream projects.

> Terminate jobmanager process with zero exit code to avoid unexpected 
> restarting by K8s
> --
>
> Key: FLINK-18828
> URL: https://issues.apache.org/jira/browse/FLINK-18828
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.10.1, 1.12.0, 1.11.1
>Reporter: Yang Wang
>Priority: Major
> Fix For: 1.12.0, 1.10.3, 1.11.3
>
>
> Currently, Flink jobmanager process terminates with a non-zero exit code if 
> the job reaches the {{ApplicationStatus.FAILED}}. It is not ideal in K8s 
> deployment, since non-zero exit code will cause unexpected restarting. Also 
> from a framework's perspective, a FAILED job does not mean that Flink has 
> failed and, hence, the return code could still be 0.
> > Note:
> This is a special case for standalone K8s deployment. For 
> standalone/Yarn/Mesos/native K8s, terminating with non-zero exit code is 
> harmless. And a non-zero exit code could help to check the job result quickly.
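A sketch of the policy under discussion follows. The enum and method names are illustrative, not Flink's actual code: on standalone K8s the process exit code would reflect whether the framework ran cleanly, not the job result, while other deployments keep the non-zero code as a quick job-result check.

```java
public class ExitCodePolicy {
    enum ApplicationStatus { SUCCEEDED, FAILED, CANCELED }

    // Illustrative mapping only: with the proposed behavior, a FAILED job on
    // standalone K8s still exits 0 so the kubelet does not restart the pod.
    static int processExitCode(ApplicationStatus status, boolean standaloneK8s) {
        if (standaloneK8s) {
            return 0; // framework terminated normally; job result is reported elsewhere
        }
        // Yarn/Mesos/native K8s: a non-zero code helps check the job result quickly.
        return status == ApplicationStatus.SUCCEEDED ? 0 : 1;
    }

    public static void main(String[] args) {
        System.out.println(processExitCode(ApplicationStatus.FAILED, true));  // 0
        System.out.println(processExitCode(ApplicationStatus.FAILED, false)); // 1
    }
}
```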





[jira] [Updated] (FLINK-18828) Terminate jobmanager process with zero exit code to avoid unexpected restarting by K8s

2020-09-26 Thread Yang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Wang updated FLINK-18828:
--
Fix Version/s: (was: 1.11.3)
   (was: 1.10.3)

> Terminate jobmanager process with zero exit code to avoid unexpected 
> restarting by K8s
> --
>
> Key: FLINK-18828
> URL: https://issues.apache.org/jira/browse/FLINK-18828
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.10.1, 1.12.0, 1.11.1
>Reporter: Yang Wang
>Priority: Major
> Fix For: 1.12.0
>
>
> Currently, Flink jobmanager process terminates with a non-zero exit code if 
> the job reaches the {{ApplicationStatus.FAILED}}. It is not ideal in K8s 
> deployment, since non-zero exit code will cause unexpected restarting. Also 
> from a framework's perspective, a FAILED job does not mean that Flink has 
> failed and, hence, the return code could still be 0.
> > Note:
> This is a special case for standalone K8s deployment. For 
> standalone/Yarn/Mesos/native K8s, terminating with non-zero exit code is 
> harmless. And a non-zero exit code could help to check the job result quickly.





[jira] [Created] (FLINK-19432) Whether to capture the updates which don't change any monitored columns

2020-09-26 Thread tinny cat (Jira)
tinny cat created FLINK-19432:
-

 Summary: Whether to capture the updates which don't change any 
monitored columns
 Key: FLINK-19432
 URL: https://issues.apache.org/jira/browse/FLINK-19432
 Project: Flink
  Issue Type: Improvement
  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
Affects Versions: 1.11.2
Reporter: tinny cat
 Fix For: 1.11.3


with `debezium-json` and `canal-json`: 

Whether to capture the updates which don't change any monitored columns. This 
may happen if the monitored columns (columns defined in Flink SQL DDL) is a 
subset of the columns in database table.  We can provide an optional option, 
default 'true', which means all the updates will be captured. You can set to 
'false' to only capture changed updates, but note this may increase some 
comparison overhead for each update event.





[jira] [Closed] (FLINK-19255) Add configuration to make AsyncWaitOperation Chainable

2020-09-26 Thread Kyle Bendickson (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kyle Bendickson closed FLINK-19255.
---
Resolution: Information Provided

I've received feedback that a new interface for source operators will be coming 
in 1.12, so I am marking this ticket as information provided. I am still 
hoping for any more info I can get about the new interface, such as a link to 
it, and specifically whether the DataStream Kafka source will be converted to / 
made available as the new interface.

 

Thanks again for your assistance.

> Add configuration to make AsyncWaitOperation Chainable
> --
>
> Key: FLINK-19255
> URL: https://issues.apache.org/jira/browse/FLINK-19255
> Project: Flink
>  Issue Type: Task
>  Components: API / Core
>Affects Versions: 1.10.2, 1.11.2
> Environment: Any flink job using Async IO post this PR: 
> [https://github.com/apache/flink/pull/11177/files#diff-00b95aec1fa804a531b553610ec74c27R117]
> (so I believe anything starting at either 1.9 or 1.10).
>  
>Reporter: Kyle Bendickson
>Priority: Major
>
> Currently, we no longer chain Async IO calls. Instead, anything using AsyncIO 
> starts the new head of an operator chain as a temporary workaround for this 
> issue: https://issues.apache.org/jira/browse/FLINK-13063
>  
> However, because this change can (and does in my customers' cases) have very 
> large impact on the job graph size, and because people were previously 
> accepting of their results, in the 1.10 release it was made so that 
> AsyncWaitOperator could be chained in this issue 
> https://issues.apache.org/jira/browse/FLINK-16219.
>  
> However, it's very complicated and not intuitive for users to call out to 
> operator factory methods. I have users who would very much like to not have 
> their AsyncIO calls generate a new chain, as it's ballooned the number of 
> state stores they have and they were accepting of their previous results. The 
> only example I could find was in the tests, and it's rather convoluted.
>  
> My proposal would be to add that config check just before the line of code in 
> AsyncWaitOperator.java that would not add the following line, which is 
> currently hardcoded into the operator and what requires one to use the 
> operator factory:
> {noformat}
> setChainingStrategy(ChainingStrategy.HEAD){noformat}
>   
> Given that this is considered potentially unsafe / legacy behavior, I would 
> suggest that we add a config, something that explicitly calls this out as 
> unsafe / legacy, so that users do not have to go through the unintuitive 
> process of using operator factories but that new users know not to use this 
> option or to use it at their own risk. We could also document that it is not 
> necessarily going to be supported in the future if need be.
>  
> My suggestion for config names that would avoid that setChainingStrategy line 
> include
> {noformat}
> taskmanager.async-io-operator.legacy-unsafe-chaining-strategy{noformat}
> which specifically calls this behavior out as legacy and unsafe.
>  
> Another possible name could be
> {noformat}
> pipeline.operator-chaining.async-io.legacy-inconsistent-chaining-strategy-always{noformat}
> (which would be more in line with the existing config of 
> pipeline.operator-chaining).
>  
>  
> Given that it is possible to stop operator chaining, it's just very 
> unintuitive and requires using operator factories, I think that this 
> configuration would be a good addition. I would be happy to submit a PR, with 
> tests, and updated documentation, so that power users who are looking to do 
> this could enable / disable this behavior without having to change their code 
> much.
>  
> I recognize that this might be an odd request as this has been deemed unsafe, 
> but this change has made it very difficult for some of my users to use 
> rocksdb, namely those with very large state that previously made very liberal 
> use of Async IO (especially for things like analytics events which can be 
> sent on a best effort basis) and who therefore have a very large job graph 
> after this change.
>  
> If anybody has any better suggestions for names, I'd be open to them. And 
> then as mentioned, I'd be happy to submit a PR with tests etc.
>  
> For reference, here are the tests where I found the ability to use the 
> operator factory and here is the utility function which is needed to create a 
> chained async io operator vertex. Note that this utility function is in the 
> test and not part of the public facing API. 
> [https://github.com/apache/flink/blob/3a04e179e09224b09c4ee656d31558844da83a26/flink-streaming-java/src/test/java/org/apache/flink/streaming/api/operators/async/AsyncWaitOperatorTest.java#L880-L912]
> If there is a simpler way to handle this, I'd be happy to
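The proposal above boils down to gating one hardcoded call behind a configuration flag. A minimal sketch, with invented names standing in for the proposed config key (this is not Flink API):

```java
public class ChainingConfigSketch {
    enum ChainingStrategy { ALWAYS, HEAD }

    // Hypothetical flag standing in for a config option such as the proposed
    // 'taskmanager.async-io-operator.legacy-unsafe-chaining-strategy'.
    static ChainingStrategy asyncOperatorStrategy(boolean legacyChainingEnabled) {
        // Today the operator hardcodes setChainingStrategy(ChainingStrategy.HEAD);
        // the proposal skips that when the legacy/unsafe option is enabled,
        // restoring the pre-FLINK-13063 chaining behavior.
        return legacyChainingEnabled ? ChainingStrategy.ALWAYS : ChainingStrategy.HEAD;
    }

    public static void main(String[] args) {
        System.out.println(asyncOperatorStrategy(false)); // HEAD
        System.out.println(asyncOperatorStrategy(true));  // ALWAYS
    }
}
```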

[GitHub] [flink] lirui-apache commented on pull request #13434: [FLINK-19292][hive] HiveCatalog should support specifying Hadoop conf dir with configuration

2020-09-26 Thread GitBox


lirui-apache commented on pull request #13434:
URL: https://github.com/apache/flink/pull/13434#issuecomment-699592081


   @SteNicholas Thanks for updating. LGTM overall, only have some minor 
comments.







[GitHub] [flink] lirui-apache commented on a change in pull request #13434: [FLINK-19292][hive] HiveCatalog should support specifying Hadoop conf dir with configuration

2020-09-26 Thread GitBox


lirui-apache commented on a change in pull request #13434:
URL: https://github.com/apache/flink/pull/13434#discussion_r495534885



##
File path: 
flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/catalog/hive/HiveCatalog.java
##
@@ -195,18 +202,19 @@ private static HiveConf createHiveConf(@Nullable String 
hiveConfDir) {
String.format("Failed to get hive-site.xml from 
%s", hiveConfDir), e);
}
 
-   // create HiveConf from hadoop configuration
-   Configuration hadoopConf = HadoopUtils.getHadoopConfiguration(new org.apache.flink.configuration.Configuration());
-
-   // Add mapred-site.xml. We need to read configurations like compression codec.

Review comment:
   This comment should be migrated to 
`HiveTableUtil::getHadoopConfiguration`









[jira] [Commented] (FLINK-19255) Add configuration to make AsyncWaitOperation Chainable

2020-09-26 Thread Kyle Bendickson (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17202747#comment-17202747
 ] 

Kyle Bendickson commented on FLINK-19255:
-

Hi [~AHeise],

This is very helpful. I have not noticed it work in 1.11.0+, but as you 
mentioned the new implementation of the sources has not been included in a 
stable release.

I am ok with waiting for now until more information is revealed about these new 
sources and their adoptability.

In 1.12.0, do you happen to know which provided sources (e.g. the Kafka source 
specifically) will support the new style? Or is there a link to the interface 
somewhere you could provide, so I could do my own digging?


I imagine it's possible that the initial implementation might only be released 
to SQL / table environment users (just a guess based on observed optimizations 
in the past), but these particular users are using the `DataStream` api.

 

I would be super grateful if you could point me to the new interface and any 
documentation about it that exists - or really just let me know if the out of 
the box Kafka Source should work assuming the newest supported Kafka version 
etc. Otherwise, I'll likely get to work on implementing the interface as a 
workaround for some of these teams once 1.12.0 comes out.

Thanks again for your response and my apologies for my delay in response. Work 
has been very busy! You may close this ticket as done for now as I will wait to 
see what happens in 1.12, but any further info you could provide me would be 
great!

 

> Add configuration to make AsyncWaitOperation Chainable
> --
>
> Key: FLINK-19255
> URL: https://issues.apache.org/jira/browse/FLINK-19255
> Project: Flink
>  Issue Type: Task
>  Components: API / Core
>Affects Versions: 1.10.2, 1.11.2
> Environment: Any flink job using Async IO post this PR: 
> [https://github.com/apache/flink/pull/11177/files#diff-00b95aec1fa804a531b553610ec74c27R117]
> (so I believe anything starting at either 1.9 or 1.10).
>  
>Reporter: Kyle Bendickson
>Priority: Major
>
> Currently, we no longer chain Async IO calls. Instead, anything using AsyncIO 
> starts the new head of an operator chain as a temporary workaround for this 
> issue: https://issues.apache.org/jira/browse/FLINK-13063
>  
> However, because this change can (and does in my customers' cases) have very 
> large impact on the job graph size, and because people were previously 
> accepting of their results, in the 1.10 release it was made so that 
> AsyncWaitOperator could be chained in this issue 
> https://issues.apache.org/jira/browse/FLINK-16219.
>  
> However, it's very complicated and not intuitive for users to call out to 
> operator factory methods. I have users who would very much like to not have 
> their AsyncIO calls generate a new chain, as it's ballooned the number of 
> state stores they have and they were accepting of their previous results. The 
> only example I could find was in the tests, and it's rather convoluted.
>  
> My proposal would be to add that config check just before the line of code in 
> AsyncWaitOperator.java that would not add the following line, which is 
> currently hardcoded into the operator and what requires one to use the 
> operator factory:
> {noformat}
> setChainingStrategy(ChainingStrategy.HEAD){noformat}
>   
> Given that this is considered potentially unsafe / legacy behavior, I would 
> suggest that we add a config, something that explicitly calls this out as 
> unsafe / legacy, so that users do not have to go through the unintuitive 
> process of using operator factories but that new users know not to use this 
> option or to use it at their own risk. We could also document that it is not 
> necessarily going to be supported in the future if need be.
>  
> My suggestion for config names that would avoid that setChainingStrategy line 
> include
> {noformat}
> taskmanager.async-io-operator.legacy-unsafe-chaining-strategy{noformat}
> which specifically calls this behavior out as legacy and unsafe.
>  
> Another possible name could be
> {noformat}
> pipeline.operator-chaining.async-io.legacy-inconsistent-chaining-strategy-always{noformat}
> (which would be more in line with the existing config of 
> pipeline.operator-chaining).
>  
>  
> Given that it is possible to stop operator chaining, it's just very 
> unintuitive and requires using operator factories, I think that this 
> configuration would be a good addition. I would be happy to submit a PR, with 
> tests, and updated documentation, so that power users who are looking to do 
> this could enable / disable this behavior without having to change their code 
> much.
>  
> I recognize that this might be an odd request as this has been deemed unsafe, 
> but this change has made it ver

[GitHub] [flink] flinkbot edited a comment on pull request #13488: [FLINK-19381][docs] Fix docs for savepoint relocation

2020-09-26 Thread GitBox


flinkbot edited a comment on pull request #13488:
URL: https://github.com/apache/flink/pull/13488#issuecomment-699590439


   
   ## CI report:
   
   * 065a966e973df30debc9f5e0fdb41919d2fdb866 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6990)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #13322: [FLINK-17480][kubernetes] Support running PyFlink on Kubernetes.

2020-09-26 Thread GitBox


flinkbot edited a comment on pull request #13322:
URL: https://github.com/apache/flink/pull/13322#issuecomment-686859976


   
   ## CI report:
   
   * ae9c1538fa1bcb1458006aa8f54979b720b64fd1 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6986)
 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6982)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] HuangXingBo commented on pull request #13483: [FLINK-19403][python] Support Pandas Stream Group Window Aggregation

2020-09-26 Thread GitBox


HuangXingBo commented on pull request #13483:
URL: https://github.com/apache/flink/pull/13483#issuecomment-699591756


   @dianfu Thanks a lot for the review. I have addressed the comments at the 
latest commit.







[GitHub] [flink] lirui-apache commented on a change in pull request #13434: [FLINK-19292][hive] HiveCatalog should support specifying Hadoop conf dir with configuration

2020-09-26 Thread GitBox


lirui-apache commented on a change in pull request #13434:
URL: https://github.com/apache/flink/pull/13434#discussion_r495534590



##
File path: 
flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/catalog/hive/util/HiveTableUtil.java
##
@@ -431,22 +431,23 @@ public static void checkAcidTable(CatalogTable catalogTable, ObjectPath tablePat
 * @return A Hadoop configuration instance.
 */
 public static Configuration getHadoopConfiguration(String hadoopConfDir) {
-   Configuration hadoopConfiguration = new Configuration();
if (new File(hadoopConfDir).exists()) {
+   Configuration hadoopConfiguration = new Configuration();
 if (new File(hadoopConfDir + "/core-site.xml").exists()) {

Review comment:
   ```suggestion
if (new File(hadoopConfDir, "core-site.xml").exists()) {
   ```
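The suggestion above swaps string concatenation for the two-argument `File` constructor, which inserts the platform path separator itself. A minimal, dependency-free sketch of the pattern (the directory paths here are hypothetical, not taken from Flink's code):

```java
import java.io.File;

public class FileJoin {
    // Mirrors the reviewed pattern: probe for a config file under a conf dir.
    static boolean hasCoreSite(String hadoopConfDir) {
        // new File(parent, child) joins the two parts with the separator,
        // so the caller never hand-builds "dir" + "/core-site.xml".
        return new File(hadoopConfDir, "core-site.xml").exists();
    }

    public static void main(String[] args) {
        // Both forms normalize to the same path.
        File concat = new File("/etc/hadoop/conf" + "/core-site.xml");
        File joined = new File("/etc/hadoop/conf", "core-site.xml");
        System.out.println(concat.getPath().equals(joined.getPath()));
        System.out.println(hasCoreSite("/no/such/conf-dir"));
    }
}
```

Besides readability, the two-argument form avoids accidental double or missing separators when the parent path comes from user configuration.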





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] HuangXingBo commented on a change in pull request #13483: [FLINK-19403][python] Support Pandas Stream Group Window Aggregation

2020-09-26 Thread GitBox


HuangXingBo commented on a change in pull request #13483:
URL: https://github.com/apache/flink/pull/13483#discussion_r495534368



##
File path: 
flink-python/src/main/java/org/apache/flink/table/runtime/operators/python/aggregate/arrow/stream/StreamArrowPythonGroupWindowAggregateFunctionOperator.java
##
@@ -0,0 +1,547 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.runtime.operators.python.aggregate.arrow.stream;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.api.common.ExecutionConfig;
+import org.apache.flink.api.common.state.ListState;
+import org.apache.flink.api.common.state.ListStateDescriptor;
+import org.apache.flink.api.common.state.State;
+import org.apache.flink.api.common.state.StateDescriptor;
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.api.java.tuple.Tuple2;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.metrics.MetricGroup;
+import org.apache.flink.runtime.state.internal.InternalListState;
+import org.apache.flink.streaming.api.operators.InternalTimer;
+import org.apache.flink.streaming.api.operators.InternalTimerService;
+import org.apache.flink.streaming.api.operators.Triggerable;
+import org.apache.flink.table.data.GenericRowData;
+import org.apache.flink.table.data.JoinedRowData;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.data.TimestampData;
+import org.apache.flink.table.data.binary.BinaryRowData;
+import org.apache.flink.table.data.binary.BinaryRowDataUtil;
+import org.apache.flink.table.data.util.RowDataUtil;
+import org.apache.flink.table.functions.AggregateFunction;
+import org.apache.flink.table.functions.python.PythonFunctionInfo;
+import org.apache.flink.table.runtime.operators.python.aggregate.arrow.AbstractArrowPythonAggregateFunctionOperator;
+import org.apache.flink.table.runtime.operators.window.TimeWindow;
+import org.apache.flink.table.runtime.operators.window.Window;
+import org.apache.flink.table.runtime.operators.window.assigners.WindowAssigner;
+import org.apache.flink.table.runtime.operators.window.internal.InternalWindowProcessFunction;
+import org.apache.flink.table.runtime.operators.window.triggers.Trigger;
+import org.apache.flink.table.runtime.typeutils.RowDataSerializer;
+import org.apache.flink.table.types.logical.RowType;
+import org.apache.flink.types.RowKind;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.LinkedList;
+import java.util.List;
+
+/**
+ * The Stream Arrow Python {@link AggregateFunction} Operator for Group Window Aggregation.
+ */
+@Internal
+public class StreamArrowPythonGroupWindowAggregateFunctionOperator
+   extends AbstractArrowPythonAggregateFunctionOperator implements Triggerable {
+
+   private static final long serialVersionUID = 1L;
+
+   /**
+* The Infos of the Window.
+* 0 -> start of the Window.
+* 1 -> end of the Window.
+* 2 -> row time of the Window.
+*/
+   private final int[] namedProperties;
+
+   /**
+* The row time index of the input data.
+*/
+   private final int inputTimeFieldIndex;
+
+   /**
+* A {@link WindowAssigner} assigns zero or more {@link Window Windows} to an element.
+*/
+   private final WindowAssigner windowAssigner;
+
+   /**
+* A {@link Trigger} determines when a pane of a window should be evaluated
+* to emit the results for that part of the window.
+*/
+   private final Trigger trigger;
+
+   /**
+* The allowed lateness for elements. This is used for:
+* 
+*  Deciding if an element should be dropped from a window due to lateness.
+*  Clearing the state of a window if the system time passes the
+*  {@code window.maxTimestamp + allowedLateness} landmark.
+* 
+*/
+   private final long allowedLateness;
+
+   /**
+* Interface for working with time and timers.
+*/
+   private transient InternalTimerService internalTimerService;
+
+   /**
+* Stores accumulat
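The allowed-lateness contract described in the Javadoc above can be sketched as a standalone event-time check. This is a simplification for illustration, not Flink's actual implementation; the method and parameter names are made up here:

```java
public class LatenessCheck {
    // A window's state may be cleared once the event-time watermark has
    // passed the window's max timestamp plus the allowed lateness.
    static boolean isWindowLate(long windowMaxTimestamp,
                                long allowedLateness,
                                long currentWatermark) {
        return windowMaxTimestamp + allowedLateness <= currentWatermark;
    }

    public static void main(String[] args) {
        // Window ends at t=1000, lateness 500: late once the watermark
        // reaches 1500, still on time at 1400.
        System.out.println(isWindowLate(1000L, 500L, 1600L));
        System.out.println(isWindowLate(1000L, 500L, 1400L));
    }
}
```

Elements whose window passes this check are dropped, and the window's state becomes eligible for cleanup, which is what the `allowedLateness` field in the operator above controls.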

[GitHub] [flink] flinkbot commented on pull request #13488: [FLINK-19381][docs] Fix docs for savepoint relocation

2020-09-26 Thread GitBox


flinkbot commented on pull request #13488:
URL: https://github.com/apache/flink/pull/13488#issuecomment-699590439


   
   ## CI report:
   
   * 065a966e973df30debc9f5e0fdb41919d2fdb866 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #13488: [FLINK-19381][docs] Fix docs for savepoint relocation

2020-09-26 Thread GitBox


flinkbot commented on pull request #13488:
URL: https://github.com/apache/flink/pull/13488#issuecomment-699588870


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 065a966e973df30debc9f5e0fdb41919d2fdb866 (Sun Sep 27 
05:38:00 UTC 2020)
   
✅no warnings
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-19381) Fix docs about relocatable savepoints

2020-09-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-19381:
---
Labels: pull-request-available  (was: )

> Fix docs about relocatable savepoints
> -
>
> Key: FLINK-19381
> URL: https://issues.apache.org/jira/browse/FLINK-19381
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.12.0, 1.11.2
>Reporter: Nico Kruber
>Assignee: Congxian Qiu(klion26)
>Priority: Major
>  Labels: pull-request-available
>
> Although savepoints are relocatable since Flink 1.11, the docs still state 
> otherwise, for example in 
> [https://ci.apache.org/projects/flink/flink-docs-stable/ops/state/savepoints.html#triggering-savepoints]
> The warning there, as well as the other changes from FLINK-15863, should be 
> removed again and potentially replaced with new constraints.
> One known constraint is that if task-owned state is used 
> (\{{GenericWriteAheadLog}} sink), savepoints are currently not relocatable 
> yet.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] klion26 opened a new pull request #13488: [FLINK-19381][docs] Fix docs for savepoint relocation

2020-09-26 Thread GitBox


klion26 opened a new pull request #13488:
URL: https://github.com/apache/flink/pull/13488


   ## What is the purpose of the change
   
   Fix doc for savepoint relocation
   
   ## Brief change log
   
   update docs:
  - ops/state/{savepoints.md, savepoints.zh.md}
  - ops/{upgrading.md, upgrading.zh.md}
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: ( no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): ( no )
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: ( no )
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? ( no)
 - If yes, how is the feature documented? (not applicable)
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] klion26 commented on pull request #13488: [FLINK-19381][docs] Fix docs for savepoint relocation

2020-09-26 Thread GitBox


klion26 commented on pull request #13488:
URL: https://github.com/apache/flink/pull/13488#issuecomment-699588723


   cc @NicoK 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13434: [FLINK-19292][hive] HiveCatalog should support specifying Hadoop conf dir with configuration

2020-09-26 Thread GitBox


flinkbot edited a comment on pull request #13434:
URL: https://github.com/apache/flink/pull/13434#issuecomment-695893232


   
   ## CI report:
   
   * f4033f16e326039a0f33958d096ba0c30ef26a6f Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6977)
 
   * 2c08acf8c251407d39f2628b970d5b7eb437af99 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6989)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13434: [FLINK-19292][hive] HiveCatalog should support specifying Hadoop conf dir with configuration

2020-09-26 Thread GitBox


flinkbot edited a comment on pull request #13434:
URL: https://github.com/apache/flink/pull/13434#issuecomment-695893232


   
   ## CI report:
   
   * f4033f16e326039a0f33958d096ba0c30ef26a6f Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6977)
 
   * 2c08acf8c251407d39f2628b970d5b7eb437af99 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] SteNicholas commented on pull request #13434: [FLINK-19292][hive] HiveCatalog should support specifying Hadoop conf dir with configuration

2020-09-26 Thread GitBox


SteNicholas commented on pull request #13434:
URL: https://github.com/apache/flink/pull/13434#issuecomment-699583396


   @lirui-apache Sorry for the previous misunderstanding about not relying on 
`HadoopUtils.getHadoopConfiguration`. I have followed up on your comments and 
modified the logic for loading the Hadoop configuration. Please help to 
review again.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-19419) "null-string-literal" does not work in HBaseSource decoder

2020-09-26 Thread CaoZhen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17202733#comment-17202733
 ] 

CaoZhen commented on FLINK-19419:
-

But this doesn't work for the string type, because an empty byte array is also 
a valid value.

---

I think that if this is the case, then the "null" string read from HBase should 
be returned downstream as the string "NULL" instead of Java null.

> "null-string-literal" does not work  in HBaseSource decoder 
> 
>
> Key: FLINK-19419
> URL: https://issues.apache.org/jira/browse/FLINK-19419
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / HBase
>Affects Versions: 1.11.0, 1.11.1, 1.11.2
>Reporter: CaoZhen
>Priority: Minor
>  Labels: pull-request-available
> Attachments: image-2020-09-25-21-11-36-418.png
>
>
>  
> When using HBaseSoucre, it is found that "null-string-literal" does not work.
> The current decoder processing logic is shown below.
> `nullStringBytes` should be used when the `value` is null.
>  
> !image-2020-09-25-21-11-36-418.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #13216: [FLINK-18999][table-planner-blink][hive] Temporary generic table does…

2020-09-26 Thread GitBox


flinkbot edited a comment on pull request #13216:
URL: https://github.com/apache/flink/pull/13216#issuecomment-678268420


   
   ## CI report:
   
   * f07cf90226588764811e5e5075d2bca96558aa40 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6782)
 
   * 555452d0580d6eeeced07936ec9947fff8f17b47 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6988)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-19408) Update flink-statefun-docker release scripts for cross release Java 8 and 11

2020-09-26 Thread Tzu-Li (Gordon) Tai (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai updated FLINK-19408:

Fix Version/s: (was: statefun-2.2.0)
   statefun-2.3.0

> Update flink-statefun-docker release scripts for cross release Java 8 and 11
> 
>
> Key: FLINK-19408
> URL: https://issues.apache.org/jira/browse/FLINK-19408
> Project: Flink
>  Issue Type: New Feature
>  Components: Stateful Functions
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: Tzu-Li (Gordon) Tai
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: statefun-2.3.0
>
>
> Currently, the {{add-version.sh}} script in the {{flink-statefun-docker}} 
> repo does not generate Dockerfiles for different Java versions.
> Since we have decided to cross-release images for Java 8 and 11, that script 
> needs to be updated as well.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19408) Update flink-statefun-docker release scripts for cross release Java 8 and 11

2020-09-26 Thread Tzu-Li (Gordon) Tai (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai updated FLINK-19408:

Issue Type: New Feature  (was: Task)

> Update flink-statefun-docker release scripts for cross release Java 8 and 11
> 
>
> Key: FLINK-19408
> URL: https://issues.apache.org/jira/browse/FLINK-19408
> Project: Flink
>  Issue Type: New Feature
>  Components: Stateful Functions
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: Tzu-Li (Gordon) Tai
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: statefun-2.2.0
>
>
> Currently, the {{add-version.sh}} script in the {{flink-statefun-docker}} 
> repo does not generate Dockerfiles for different Java versions.
> Since we have decided to cross-release images for Java 8 and 11, that script 
> needs to be updated as well.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-19192) Set higher limit on the HTTP connection pool

2020-09-26 Thread Tzu-Li (Gordon) Tai (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai closed FLINK-19192.
---
Release Note: The size of the connection pool used for remote function HTTP 
invocation requests have been increased to 1024.
  Resolution: Fixed

> Set higher limit on the HTTP connection pool
> 
>
> Key: FLINK-19192
> URL: https://issues.apache.org/jira/browse/FLINK-19192
> Project: Flink
>  Issue Type: Improvement
>  Components: Stateful Functions
>Reporter: Igal Shilman
>Assignee: Igal Shilman
>Priority: Major
>  Labels: pull-request-available
> Fix For: statefun-2.2.0
>
>
> The default size of the connection pool is too low, we should set it to a 
> higher value
> and let servers to decide if they will keep the connection alive or not.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19095) Add expire mode for remote function state TTL

2020-09-26 Thread Tzu-Li (Gordon) Tai (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai updated FLINK-19095:

Issue Type: New Feature  (was: Task)

> Add expire mode for remote function state TTL
> -
>
> Key: FLINK-19095
> URL: https://issues.apache.org/jira/browse/FLINK-19095
> Project: Flink
>  Issue Type: New Feature
>  Components: Stateful Functions
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: Tzu-Li (Gordon) Tai
>Priority: Major
>  Labels: pull-request-available
> Fix For: statefun-2.2.0
>
>
> We did not allow setting expire mode for each remote function state before 
> due to FLINK-17954. Now that remote function state is de-multiplexed, we can 
> now easily support this.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-19191) Reduce the default number for async operations

2020-09-26 Thread Tzu-Li (Gordon) Tai (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai closed FLINK-19191.
---
Release Note: The default value for "statefun.async.max-per-task" has been 
decreased to 1024.
  Resolution: Fixed

> Reduce the default number for async operations 
> ---
>
> Key: FLINK-19191
> URL: https://issues.apache.org/jira/browse/FLINK-19191
> Project: Flink
>  Issue Type: Improvement
>  Components: Stateful Functions
>Reporter: Igal Shilman
>Assignee: Igal Shilman
>Priority: Major
>  Labels: pull-request-available
> Fix For: statefun-2.2.0
>
>
> The default upper limit for async operations per task slot is currently set 
> to 10 million, which is unrealistically high; we should set it to a more 
> realistic value. A closer example would be the recommended value in Flink's 
> AsyncWait operator.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Reopened] (FLINK-19192) Set higher limit on the HTTP connection pool

2020-09-26 Thread Tzu-Li (Gordon) Tai (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai reopened FLINK-19192:
-

Reopening to add release note.

> Set higher limit on the HTTP connection pool
> 
>
> Key: FLINK-19192
> URL: https://issues.apache.org/jira/browse/FLINK-19192
> Project: Flink
>  Issue Type: Improvement
>  Components: Stateful Functions
>Reporter: Igal Shilman
>Assignee: Igal Shilman
>Priority: Major
>  Labels: pull-request-available
> Fix For: statefun-2.2.0
>
>
> The default size of the connection pool is too low, we should set it to a 
> higher value
> and let servers to decide if they will keep the connection alive or not.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19001) Add data-stream integration for stateful functions

2020-09-26 Thread Tzu-Li (Gordon) Tai (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai updated FLINK-19001:

Issue Type: New Feature  (was: Improvement)

> Add data-stream integration for stateful functions 
> ---
>
> Key: FLINK-19001
> URL: https://issues.apache.org/jira/browse/FLINK-19001
> Project: Flink
>  Issue Type: New Feature
>  Components: Stateful Functions
>Reporter: Igal Shilman
>Assignee: Igal Shilman
>Priority: Major
>  Labels: pull-request-available
> Fix For: statefun-2.2.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Reopened] (FLINK-19191) Reduce the default number for async operations

2020-09-26 Thread Tzu-Li (Gordon) Tai (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai reopened FLINK-19191:
-

Reopening to add release notes.

> Reduce the default number for async operations 
> ---
>
> Key: FLINK-19191
> URL: https://issues.apache.org/jira/browse/FLINK-19191
> Project: Flink
>  Issue Type: Improvement
>  Components: Stateful Functions
>Reporter: Igal Shilman
>Assignee: Igal Shilman
>Priority: Major
>  Labels: pull-request-available
> Fix For: statefun-2.2.0
>
>
> The default upper limit for async operations per task slot is currently set 
> to 10 million, which is unrealistically high; we should set it to a more 
> realistic value. A closer example would be the recommended value in Flink's 
> AsyncWait operator.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-18518) Add Async RequestReply handler for the Python SDK

2020-09-26 Thread Tzu-Li (Gordon) Tai (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai updated FLINK-18518:

Issue Type: New Feature  (was: Improvement)

> Add Async RequestReply handler for the Python SDK
> -
>
> Key: FLINK-18518
> URL: https://issues.apache.org/jira/browse/FLINK-18518
> Project: Flink
>  Issue Type: New Feature
>  Components: Stateful Functions
>Affects Versions: statefun-2.1.0
>Reporter: Igal Shilman
>Assignee: Igal Shilman
>Priority: Major
>  Labels: beginner-friendly, pull-request-available
> Fix For: statefun-2.2.0
>
>
> I/O-bound stateful functions can benefit from the built-in asyncio support 
> in Python, but the 
> RequestReply handler is not asyncio compatible. See 
> [this|https://stackoverflow.com/questions/62640283/flink-stateful-functions-async-calls-with-the-python-sdk]
>  question on stackoverflow.
>  
> Having an asyncio compatible handler will open the door to the usage of 
> aiohttp for example:
>  
> {code:java}
> import aiohttp
> import asyncio
> ...
> async def fetch(session, url):
>     async with session.get(url) as response:
>         return await response.text()
>
> @function.bind("example/hello")
> async def hello(context, message):
>     async with aiohttp.ClientSession() as session:
>         html = await fetch(session, 'http://python.org')
>         context.pack_and_reply(SomeProtobufMessage(html))
>
> from aiohttp import web
>
> handler = AsyncRequestReplyHandler(functions)
>
> async def handle(request):
>     req = await request.read()
>     res = await handler(req)
>     return web.Response(body=res, content_type="application/octet-stream")
>
> app = web.Application()
> app.add_routes([web.post('/statefun', handle)])
>
> if __name__ == '__main__':
>     web.run_app(app, port=5000)
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #13216: [FLINK-18999][table-planner-blink][hive] Temporary generic table does…

2020-09-26 Thread GitBox


flinkbot edited a comment on pull request #13216:
URL: https://github.com/apache/flink/pull/13216#issuecomment-678268420


   
   ## CI report:
   
   * f07cf90226588764811e5e5075d2bca96558aa40 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6782)
 
   * 555452d0580d6eeeced07936ec9947fff8f17b47 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-19431) Translate "Monitoring REST API" page of "Debugging & Monitoring" into Chinese

2020-09-26 Thread weizheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17202727#comment-17202727
 ] 

weizheng commented on FLINK-19431:
--

Hi [~jark],

I did not find an existing issue that duplicates this one. If that is the case, 
thank you for assigning it to me.

> Translate "Monitoring REST API" page of "Debugging & Monitoring" into Chinese
> -
>
> Key: FLINK-19431
> URL: https://issues.apache.org/jira/browse/FLINK-19431
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation, Documentation
>Reporter: weizheng
>Priority: Major
>
> The page url is 
> https://ci.apache.org/projects/flink/flink-docs-master/zh/monitoring/rest_api.html
> The markdown file is located in {{flink/docs/monitoring/rest_api.zh.md}}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19431) Translate "Monitoring REST API" page of "Debugging & Monitoring" into Chinese

2020-09-26 Thread weizheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

weizheng updated FLINK-19431:
-
Component/s: Documentation
 chinese-translation

> Translate "Monitoring REST API" page of "Debugging & Monitoring" into Chinese
> -
>
> Key: FLINK-19431
> URL: https://issues.apache.org/jira/browse/FLINK-19431
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation, Documentation
>Reporter: weizheng
>Priority: Major
>
> The page url is 
> https://ci.apache.org/projects/flink/flink-docs-master/zh/monitoring/rest_api.html
> The markdown file is located in {{flink/docs/monitoring/rest_api.zh.md}}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-19431) Translate "Monitoring REST API" page of "Debugging & Monitoring" into Chinese

2020-09-26 Thread weizheng (Jira)
weizheng created FLINK-19431:


 Summary: Translate "Monitoring REST API" page of "Debugging & 
Monitoring" into Chinese
 Key: FLINK-19431
 URL: https://issues.apache.org/jira/browse/FLINK-19431
 Project: Flink
  Issue Type: Improvement
Reporter: weizheng


The page url is 
https://ci.apache.org/projects/flink/flink-docs-master/zh/monitoring/rest_api.html

The markdown file is located in {{flink/docs/monitoring/rest_api.zh.md}}

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #13483: [FLINK-19403][python] Support Pandas Stream Group Window Aggregation

2020-09-26 Thread GitBox


flinkbot edited a comment on pull request #13483:
URL: https://github.com/apache/flink/pull/13483#issuecomment-698904241


   
   ## CI report:
   
   * c89615b7eb8f36e5e8966ad6f0098aa4757b5206 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6964)
 
   * 8516ccaf4a5c62eb02ae4a3f851f7661c63657c3 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6987)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13322: [FLINK-17480][kubernetes] Support running PyFlink on Kubernetes.

2020-09-26 Thread GitBox


flinkbot edited a comment on pull request #13322:
URL: https://github.com/apache/flink/pull/13322#issuecomment-686859976


   
   ## CI report:
   
   * ae9c1538fa1bcb1458006aa8f54979b720b64fd1 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6986)
 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6982)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] lirui-apache commented on a change in pull request #13434: [FLINK-19292][hive] HiveCatalog should support specifying Hadoop conf dir with configuration

2020-09-26 Thread GitBox


lirui-apache commented on a change in pull request #13434:
URL: https://github.com/apache/flink/pull/13434#discussion_r495519793



##
File path: 
flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/catalog/hive/util/HiveTableUtil.java
##
@@ -421,6 +424,31 @@ public static void checkAcidTable(CatalogTable 
catalogTable, ObjectPath tablePat
}
}
 
+   /**
+* Returns a new Hadoop Configuration object using the path to the 
hadoop conf configured.
+*
+* @param hadoopConfDir Hadoop conf directory path.
+* @return A Hadoop configuration instance.
+*/
+   public static Configuration getHadoopConfiguration(String 
hadoopConfDir) {
+   Configuration hadoopConfiguration = new Configuration();
+   if (new File(hadoopConfDir).exists()) {
+   if (new File(hadoopConfDir + 
"/core-site.xml").exists()) {
+   hadoopConfiguration.addResource(new 
Path(hadoopConfDir + "/core-site.xml"));
+   }
+   if (new File(hadoopConfDir + 
"/hdfs-default.xml").exists()) {

Review comment:
   Why do we need hdfs-default.xml?

##
File path: 
flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/catalog/hive/util/HiveTableUtil.java
##
@@ -421,6 +424,31 @@ public static void checkAcidTable(CatalogTable 
catalogTable, ObjectPath tablePat
}
}
 
+   /**
+* Returns a new Hadoop Configuration object using the path to the 
hadoop conf configured.
+*
+* @param hadoopConfDir Hadoop conf directory path.
+* @return A Hadoop configuration instance.
+*/
+   public static Configuration getHadoopConfiguration(String 
hadoopConfDir) {
+   Configuration hadoopConfiguration = new Configuration();
+   if (new File(hadoopConfDir).exists()) {
+   if (new File(hadoopConfDir + 
"/core-site.xml").exists()) {
+   hadoopConfiguration.addResource(new 
Path(hadoopConfDir + "/core-site.xml"));
+   }
+   if (new File(hadoopConfDir + 
"/hdfs-default.xml").exists()) {
+   hadoopConfiguration.addResource(new 
Path(hadoopConfDir + "/hdfs-default.xml"));
+   }
+   if (new File(hadoopConfDir + 
"/hdfs-site.xml").exists()) {
+   hadoopConfiguration.addResource(new 
Path(hadoopConfDir + "/hdfs-site.xml"));
+   }
+   if (new File(hadoopConfDir + 
"/mapred-site.xml").exists()) {

Review comment:
   We also need yarn-site.xml, which is needed to generate Parquet splits 
in a kerberized environment.

##
File path: 
flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/catalog/hive/HiveCatalog.java
##
@@ -196,15 +204,21 @@ private static HiveConf createHiveConf(@Nullable String 
hiveConfDir) {
}
 
// create HiveConf from hadoop configuration
-   Configuration hadoopConf = 
HadoopUtils.getHadoopConfiguration(new 
org.apache.flink.configuration.Configuration());
+   Configuration hadoopConf;
 
-   // Add mapred-site.xml. We need to read configurations like 
compression codec.
-   for (String possibleHadoopConfPath : 
HadoopUtils.possibleHadoopConfPaths(new 
org.apache.flink.configuration.Configuration())) {
-   File mapredSite = new File(new 
File(possibleHadoopConfPath), "mapred-site.xml");
-   if (mapredSite.exists()) {
-   hadoopConf.addResource(new 
Path(mapredSite.getAbsolutePath()));
-   break;
+   if (isNullOrWhitespaceOnly(hadoopConfDir)) {
+   hadoopConf = HadoopUtils.getHadoopConfiguration(new 
org.apache.flink.configuration.Configuration());

Review comment:
   Let's not rely on `HadoopUtils.getHadoopConfiguration` to get the 
configuration. We can just call `HadoopUtils.possibleHadoopConfPaths` to get 
the paths and load the files by ourselves. And the loading logic should be 
consistent with `HiveTableUtil.getHadoopConfiguration`.
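
The loading logic discussed in this review — probe candidate Hadoop conf directories and add each site file that actually exists, in a fixed order — can be sketched outside Hadoop like this (the helper name and the Python rendering are illustrative, not Flink or Hadoop code):

```python
import os

# Site files the review wants loaded, in a fixed order. yarn-site.xml is
# included per the comment about kerberized Parquet split generation.
SITE_FILES = ["core-site.xml", "hdfs-site.xml", "mapred-site.xml", "yarn-site.xml"]

def collect_hadoop_conf_resources(conf_dirs):
    """Return paths of the existing site files from the first dir that has any."""
    for conf_dir in conf_dirs:
        found = [os.path.join(conf_dir, name)
                 for name in SITE_FILES
                 if os.path.isfile(os.path.join(conf_dir, name))]
        if found:
            return found
    return []
```

A Java implementation would then call `hadoopConfiguration.addResource(new Path(...))` for each returned file, as the diff above does for each `if (... .exists())` branch.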









[GitHub] [flink] flinkbot edited a comment on pull request #13483: [FLINK-19403][python] Support Pandas Stream Group Window Aggregation

2020-09-26 Thread GitBox


flinkbot edited a comment on pull request #13483:
URL: https://github.com/apache/flink/pull/13483#issuecomment-698904241


   
   ## CI report:
   
   * c89615b7eb8f36e5e8966ad6f0098aa4757b5206 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6964)
 
   * 8516ccaf4a5c62eb02ae4a3f851f7661c63657c3 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #13487: [FLINK-19409][Documentation] remove unrelated comment for getValue in ListView

2020-09-26 Thread GitBox


flinkbot edited a comment on pull request #13487:
URL: https://github.com/apache/flink/pull/13487#issuecomment-699575614


   
   ## CI report:
   
   * 9cbc32fe3fb52c05dce29124ac61c4a7c2952241 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6985)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #13322: [FLINK-17480][kubernetes] Support running PyFlink on Kubernetes.

2020-09-26 Thread GitBox


flinkbot edited a comment on pull request #13322:
URL: https://github.com/apache/flink/pull/13322#issuecomment-686859976


   
   ## CI report:
   
   * ae9c1538fa1bcb1458006aa8f54979b720b64fd1 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6982)
 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6986)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] shuiqiangchen commented on pull request #13322: [FLINK-17480][kubernetes] Support running PyFlink on Kubernetes.

2020-09-26 Thread GitBox


shuiqiangchen commented on pull request #13322:
URL: https://github.com/apache/flink/pull/13322#issuecomment-699576731


   @flinkbot run azure







[GitHub] [flink] dianfu commented on a change in pull request #13483: [FLINK-19403][python] Support Pandas Stream Group Window Aggregation

2020-09-26 Thread GitBox


dianfu commented on a change in pull request #13483:
URL: https://github.com/apache/flink/pull/13483#discussion_r495514707



##
File path: flink-python/pyflink/table/tests/test_pandas_udaf.py
##
@@ -259,6 +261,128 @@ def test_over_window_aggregate_function(self):
 "3,2.0,3,2.0,1.0,1.0,2.0,2.0,1.0,1.0"])
 
 
+class StreamPandasUDAFITTests(PyFlinkBlinkStreamTableTestCase):
+def test_group_window_aggregate_function_over_time(self):

Review comment:
   ```suggestion
   def test_sliding_group_window_over_time(self):
   ```

##
File path: flink-python/pyflink/table/tests/test_pandas_udaf.py
##
@@ -259,6 +261,128 @@ def test_over_window_aggregate_function(self):
 "3,2.0,3,2.0,1.0,1.0,2.0,2.0,1.0,1.0"])
 
 
+class StreamPandasUDAFITTests(PyFlinkBlinkStreamTableTestCase):
+def test_group_window_aggregate_function_over_time(self):
+# create source file path
+import tempfile
+import os
+tmp_dir = tempfile.gettempdir()
+data = [
+'1,2,2018-03-11 03:10:00',
+'3,2,2018-03-11 03:10:00',
+'2,1,2018-03-11 03:10:00',
+'1,3,2018-03-11 03:40:00',
+'1,8,2018-03-11 04:20:00',
+'2,3,2018-03-11 03:30:00'
+]
+source_path = tmp_dir + 
'/test_group_window_aggregate_function_over_time.csv'
+with open(source_path, 'w') as fd:
+for ele in data:
+fd.write(ele + '\n')
+
+from pyflink.table.window import Slide
+self.env.set_stream_time_characteristic(TimeCharacteristic.EventTime)
+self.t_env.register_function("mean_udaf", mean_udaf)
+
+source_table = """
+create table source_table(
+a TINYINT,
+b SMALLINT,
+rowtime TIMESTAMP(3),
+WATERMARK FOR rowtime AS rowtime - INTERVAL '60' MINUTE
+) with(
+'connector.type' = 'filesystem',
+'format.type' = 'csv',
+'connector.path' = '%s',
+'format.ignore-first-line' = 'false',
+'format.field-delimiter' = ','
+)
+""" % source_path
+self.t_env.execute_sql(source_table)
+t = self.t_env.from_path("source_table")
+
+table_sink = source_sink_utils.TestAppendSink(
+['a', 'b', 'c', 'd'],
+[
+DataTypes.TINYINT(),
+DataTypes.TIMESTAMP(3),
+DataTypes.TIMESTAMP(3),
+DataTypes.FLOAT()])
+self.t_env.register_table_sink("Results", table_sink)
+
t.window(Slide.over("1.hours").every("30.minutes").on("rowtime").alias("w")) \
+.group_by("a, w") \
+.select("a, w.start, w.end, mean_udaf(b) as b") \
+.execute_insert("Results") \
+.wait()
+actual = source_sink_utils.results()
+self.assert_equals(actual,
+   ["1,2018-03-11 02:30:00.0,2018-03-11 
03:30:00.0,2.0",
+"1,2018-03-11 03:00:00.0,2018-03-11 
04:00:00.0,2.5",
+"1,2018-03-11 03:30:00.0,2018-03-11 
04:30:00.0,5.5",
+"1,2018-03-11 04:00:00.0,2018-03-11 
05:00:00.0,8.0",
+"2,2018-03-11 02:30:00.0,2018-03-11 
03:30:00.0,1.0",
+"2,2018-03-11 03:00:00.0,2018-03-11 
04:00:00.0,2.0",
+"2,2018-03-11 03:30:00.0,2018-03-11 
04:30:00.0,3.0",
+"3,2018-03-11 03:00:00.0,2018-03-11 
04:00:00.0,2.0",
+"3,2018-03-11 02:30:00.0,2018-03-11 
03:30:00.0,2.0"])
+os.remove(source_path)
+
+def test_group_window_aggregate_function_over_count(self):

Review comment:
   ```suggestion
   def test_sliding_group_window_over_count(self):
   ```
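
As a hedged sketch of the semantics the tests above exercise — `Slide.over("1.hours").every("30.minutes")` assigns each row to every window whose range covers its timestamp — the window assignment can be written as follows (function name and epoch-second timestamps are illustrative, not PyFlink API):

```python
def sliding_windows(ts, size=3600, slide=1800):
    """All (start, end) windows of `size` seconds, sliding by `slide`, that cover ts."""
    windows = []
    start = ts - ts % slide          # latest window start at or before ts
    while start > ts - size:         # walk back while the window still covers ts
        windows.append((start, start + size))
        start -= slide
    return sorted(windows)
```

Each element therefore lands in size / slide = 2 windows, which matches the two result rows emitted per key and timestamp bucket in the expected output above.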

##
File path: 
flink-python/src/main/java/org/apache/flink/table/runtime/operators/python/aggregate/arrow/stream/StreamArrowPythonGroupWindowAggregateFunctionOperator.java
##
@@ -0,0 +1,547 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package

[jira] [Closed] (FLINK-19428) Translate "Elasticsearch Connector" page of "DataStream Connectors" into Chinese

2020-09-26 Thread weizheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

weizheng closed FLINK-19428.

Resolution: Duplicate

> Translate "Elasticsearch Connector" page of "DataStream Connectors" into 
> Chinese
> 
>
> Key: FLINK-19428
> URL: https://issues.apache.org/jira/browse/FLINK-19428
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation, Documentation
>Reporter: weizheng
>Priority: Major
>
> The page url is 
> https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/connectors/elasticsearch.html
> The markdown file is located in 
> {{flink/docs/dev/connectors/elasticsearch.zh.md}}





[jira] [Commented] (FLINK-19428) Translate "Elasticsearch Connector" page of "DataStream Connectors" into Chinese

2020-09-26 Thread weizheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17202721#comment-17202721
 ] 

weizheng commented on FLINK-19428:
--

ok thanks

> Translate "Elasticsearch Connector" page of "DataStream Connectors" into 
> Chinese
> 
>
> Key: FLINK-19428
> URL: https://issues.apache.org/jira/browse/FLINK-19428
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation, Documentation
>Reporter: weizheng
>Priority: Major
>
> The page url is 
> https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/connectors/elasticsearch.html
> The markdown file is located in 
> {{flink/docs/dev/connectors/elasticsearch.zh.md}}





[jira] [Commented] (FLINK-19425) Correct the usage of BulkWriter#flush and BulkWriter#finish

2020-09-26 Thread hailong wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17202723#comment-17202723
 ] 

hailong wang commented on FLINK-19425:
--

Hi [~jark] [~lzljs3620320], what do you think of this?

> Correct the usage of BulkWriter#flush and BulkWriter#finish
> ---
>
> Key: FLINK-19425
> URL: https://issues.apache.org/jira/browse/FLINK-19425
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Common
>Affects Versions: 1.11.0
>Reporter: hailong wang
>Priority: Major
> Fix For: 1.11.0, 1.12.0
>
>
> From the comments, the BulkWriter#finish method should flush all buffered 
> data before close.
> But some subclasses of it do not flush data. These classes are as follows:
> 1.AvroBulkWriter#finish
> 2.HadoopCompressionBulkWriter#finish
> 3.NoCompressionBulkWriter#finish
> 4.SequenceFileWriter#finish
> We should invoke BulkWriter#flush in this finish methods.
> On the other hand, we don't have to invoke BulkWriter#flush in the close 
> method, since BulkWriter#finish will flush all data.
> 1. HadoopPathBasedPartFileWriter#closeForCommit
> 2. BulkPartWriter#closeForCommit
> 3. FileSystemTableSink#OutputFormat#close
>  
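
The flush-before-close contract described in this issue can be sketched with a toy writer (illustrative Python, not Flink's actual BulkWriter interface): finish() flushes whatever is still buffered, so callers need no extra flush() before close.

```python
class BufferedBulkWriter:
    """Toy stand-in for a BulkWriter implementation that buffers elements."""

    def __init__(self, out):
        self.out = out        # destination list standing in for an output stream
        self.buffer = []

    def add_element(self, element):
        self.buffer.append(element)

    def flush(self):
        # Move everything buffered to the output.
        self.out.extend(self.buffer)
        self.buffer.clear()

    def finish(self):
        # Per the contract above, finish() must flush remaining buffered data;
        # the caller then does not need an extra flush() before close.
        self.flush()
```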





[GitHub] [flink] caozhen1937 closed pull request #13484: [FLINK-19419][Connectors][HBase] "null-string-literal" does not work in HBaseSource decoder

2020-09-26 Thread GitBox


caozhen1937 closed pull request #13484:
URL: https://github.com/apache/flink/pull/13484


   







[GitHub] [flink] flinkbot commented on pull request #13487: [FLINK-19409][Documentation] remove unrelated comment for getValue in ListView

2020-09-26 Thread GitBox


flinkbot commented on pull request #13487:
URL: https://github.com/apache/flink/pull/13487#issuecomment-699575614


   
   ## CI report:
   
   * 9cbc32fe3fb52c05dce29124ac61c4a7c2952241 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Commented] (FLINK-18694) Add unaligned checkpoint config to web ui

2020-09-26 Thread Yadong Xie (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17202717#comment-17202717
 ] 

Yadong Xie commented on FLINK-18694:


Hi, it seems that the WEB UI is not updated yet, is anyone working on this?

> Add unaligned checkpoint config to web ui
> -
>
> Key: FLINK-18694
> URL: https://issues.apache.org/jira/browse/FLINK-18694
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / REST, Runtime / Web Frontend
>Reporter: Kboh
>Assignee: Kboh
>Priority: Trivial
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> h2. What is the purpose of the change
>  * Show in web ui if unaligned checkpoints are enabled.
> h2. Brief change log
>  * Adds unaligned checkpoint config to REST endpoint, and web ui.
>  
> [https://github.com/apache/flink/pull/12962]





[jira] [Commented] (FLINK-19428) Translate "Elasticsearch Connector" page of "DataStream Connectors" into Chinese

2020-09-26 Thread hailong wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17202716#comment-17202716
 ] 

hailong wang commented on FLINK-19428:
--

Is it a duplicate of https://issues.apache.org/jira/browse/FLINK-12942?

> Translate "Elasticsearch Connector" page of "DataStream Connectors" into 
> Chinese
> 
>
> Key: FLINK-19428
> URL: https://issues.apache.org/jira/browse/FLINK-19428
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation, Documentation
>Reporter: weizheng
>Priority: Major
>
> The page url is 
> https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/connectors/elasticsearch.html
> The markdown file is located in 
> {{flink/docs/dev/connectors/elasticsearch.zh.md}}





[GitHub] [flink] flinkbot commented on pull request #13487: [FLINK-19409][Documentation] remove unrelated comment for getValue in ListView

2020-09-26 Thread GitBox


flinkbot commented on pull request #13487:
URL: https://github.com/apache/flink/pull/13487#issuecomment-699575156


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 9cbc32fe3fb52c05dce29124ac61c4a7c2952241 (Sun Sep 27 
02:31:28 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[jira] [Created] (FLINK-19430) Translate page 'datastream_tutorial' into Chinese

2020-09-26 Thread hailong wang (Jira)
hailong wang created FLINK-19430:


 Summary: Translate page 'datastream_tutorial' into Chinese
 Key: FLINK-19430
 URL: https://issues.apache.org/jira/browse/FLINK-19430
 Project: Flink
  Issue Type: Improvement
  Components: chinese-translation, Documentation
Affects Versions: 1.11.0
Reporter: hailong wang
 Fix For: 1.12.0


The page url 
[datastream_tutorial|https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/python/datastream_tutorial.html]
 
[datastream_tutorial|https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/python/datastream_tutorial.html]

The doc is located at /dev/python/user-guide/datastream_tutorial.zh.md





[jira] [Updated] (FLINK-19430) Translate page 'datastream_tutorial' into Chinese

2020-09-26 Thread hailong wang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hailong wang updated FLINK-19430:
-
Description: 
The page url 
[datastream_tutorial|https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/python/datastream_tutorial.html]

The doc is located at /dev/python/user-guide/datastream_tutorial.zh.md

  was:
The page url 
[datastream_tutorial|https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/python/datastream_tutorial.html]
 
[datastream_tutorial|https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/python/datastream_tutorial.html]

The doc is located at /dev/python/user-guide/datastream_tutorial.zh.md


> Translate page 'datastream_tutorial' into Chinese
> -
>
> Key: FLINK-19430
> URL: https://issues.apache.org/jira/browse/FLINK-19430
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation, Documentation
>Affects Versions: 1.11.0
>Reporter: hailong wang
>Priority: Minor
> Fix For: 1.12.0
>
>
> The page url 
> [datastream_tutorial|https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/python/datastream_tutorial.html]
> The doc is located at /dev/python/user-guide/datastream_tutorial.zh.md





[jira] [Updated] (FLINK-19409) The comment for getValue has wrong code in class ListView

2020-09-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-19409:
---
Labels: pull-request-available  (was: )

> The comment for getValue has wrong code in class ListView
> -
>
> Key: FLINK-19409
> URL: https://issues.apache.org/jira/browse/FLINK-19409
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Table SQL / API
>Reporter: Liu
>Assignee: Liu
>Priority: Minor
>  Labels: pull-request-available
>
> The comment for getValue is as following currently:
> {code:java}
> *    @Override  
> *    public Long getValue(MyAccumulator accumulator) {  
> *        accumulator.list.add(id);  
> *        ... ...  
> *        accumulator.list.get()  
> *         ... ...  
> *        return accumulator.count;  
> *    }  
> {code}
> Users may be confused by the code "accumulator.list.add(id);". It should 
> be removed. 
>  





[GitHub] [flink] Myracle opened a new pull request #13487: [FLINK-19409][Documentation] remove unrelated comment for getValue in ListView

2020-09-26 Thread GitBox


Myracle opened a new pull request #13487:
URL: https://github.com/apache/flink/pull/13487


   ## What is the purpose of the change
   
   *In class ListView, the comment for method getValue contains unrelated code 
'accumulator.list.add(id);'. Users and developers may be confused by the 
comment. It should be removed.*
   
   
   ## Brief change log
   
 - *Remove the unrelated code comment.*
   
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   







[GitHub] [flink] vthinkxie commented on pull request #13458: FLINK-18851 [runtime] Add checkpoint type to checkpoint history entries in Web UI

2020-09-26 Thread GitBox


vthinkxie commented on pull request #13458:
URL: https://github.com/apache/flink/pull/13458#issuecomment-699574773


   > Thanks @gm7y8 for your changes. There are only two minor formatting things 
left which need to be addressed. Additionally, please update the commit message 
to comply to [Flink's commit 
format](https://flink.apache.org/contributing/contribute-documentation.html#submit-your-contribution).
 It should look like `[FLINK-18851][runtime-web] ...`.
   > 
   > I'm gonna ask @vthinkxie to review the `web-dashboard` changes in the 
meantime. For me, they look good.
   
   The frontend code looks good to me







[jira] [Commented] (FLINK-19429) Translate page 'Data Types' into Chinese

2020-09-26 Thread hailong wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17202715#comment-17202715
 ] 

hailong wang commented on FLINK-19429:
--

I did not find an existing issue that duplicates this one. If that is the case, 
thank you for assigning it to me.

> Translate page 'Data Types' into Chinese
> 
>
> Key: FLINK-19429
> URL: https://issues.apache.org/jira/browse/FLINK-19429
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation, Documentation
>Affects Versions: 1.11.0
>Reporter: hailong wang
>Priority: Minor
> Fix For: 1.12.0
>
>
> Translate the page 
> [data_types|https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/python/datastream-api-users-guide/data_types.html].
> The doc located in 
> "flink/docs/dev/python/datastream-api-users-guide/data_types.zh.md"





[jira] [Created] (FLINK-19429) Translate page 'Data Types' into Chinese

2020-09-26 Thread hailong wang (Jira)
hailong wang created FLINK-19429:


 Summary: Translate page 'Data Types' into Chinese
 Key: FLINK-19429
 URL: https://issues.apache.org/jira/browse/FLINK-19429
 Project: Flink
  Issue Type: Improvement
  Components: chinese-translation, Documentation
Affects Versions: 1.11.0
Reporter: hailong wang
 Fix For: 1.12.0


Translate the page 
[data_types|https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/python/datastream-api-users-guide/data_types.html].

The doc located in 
"flink/docs/dev/python/datastream-api-users-guide/data_types.zh.md"





[jira] [Assigned] (FLINK-16267) Flink uses more memory than taskmanager.memory.process.size in Kubernetes

2020-09-26 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song reassigned FLINK-16267:


Assignee: (was: Xintong Song)

> Flink uses more memory than taskmanager.memory.process.size in Kubernetes
> -
>
> Key: FLINK-16267
> URL: https://issues.apache.org/jira/browse/FLINK-16267
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Task
>Affects Versions: 1.10.0
>Reporter: ChangZhuo Chen (陳昌倬)
>Priority: Major
> Attachments: flink-conf_1.10.0.yaml, flink-conf_1.9.1.yaml, 
> oomkilled_taskmanager.log
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This issue is from 
> [https://stackoverflow.com/questions/60336764/flink-uses-more-memory-than-taskmanager-memory-process-size-in-kubernetes]
> h1. Description
>  * In Flink 1.10.0, we try to use `taskmanager.memory.process.size` to limit 
> the resource used by taskmanager to ensure they are not killed by Kubernetes. 
> However, we still get lots of taskmanager `OOMKilled`. The setup is in the 
> following section.
>  * The taskmanager log is in attachment [^oomkilled_taskmanager.log].
> h2. Kubernete
>  * The Kubernetes setup is the same as described in 
> [https://ci.apache.org/projects/flink/flink-docs-release-1.10/ops/deployment/kubernetes.html].
>  * The following is resource configuration for taskmanager deployment in 
> Kubernetes:
> {{resources:}}
>  {{  requests:}}
>  {{    cpu: 1000m}}
>  {{    memory: 4096Mi}}
>  {{  limits:}}
>  {{    cpu: 1000m}}
>  {{    memory: 4096Mi}}
> h2. Flink Docker
>  * The Flink docker is built by the following Docker file.
> {{FROM flink:1.10-scala_2.11}}
> RUN mkdir -p /opt/flink/plugins/s3 &&
> ln -s /opt/flink/opt/flink-s3-fs-presto-1.10.0.jar /opt/flink/plugins/s3/
>  {{RUN ln -s /opt/flink/opt/flink-metrics-prometheus-1.10.0.jar 
> /opt/flink/lib/}}
> h2. Flink Configuration
>  * The following are all memory related configurations in `flink-conf.yaml` 
> in 1.10.0:
> {{jobmanager.heap.size: 820m}}
>  {{taskmanager.memory.jvm-metaspace.size: 128m}}
>  {{taskmanager.memory.process.size: 4096m}}
>  * We use RocksDB and we don't set `state.backend.rocksdb.memory.managed` in 
> `flink-conf.yaml`.
>  ** Use S3 as checkpoint storage.
>  * The code uses DataStream API
>  ** input/output are both Kafka.
> h2. Project Dependencies
>  * The following is our dependencies.
> {{val flinkVersion = "1.10.0"}}{{libraryDependencies += 
> "com.squareup.okhttp3" % "okhttp" % "4.2.2"}}
>  {{libraryDependencies += "com.typesafe" % "config" % "1.4.0"}}
>  {{libraryDependencies += "joda-time" % "joda-time" % "2.10.5"}}
>  {{libraryDependencies += "org.apache.flink" %% "flink-connector-kafka" % 
> flinkVersion}}
>  {{libraryDependencies += "org.apache.flink" % "flink-metrics-dropwizard" % 
> flinkVersion}}
>  {{libraryDependencies += "org.apache.flink" %% "flink-scala" % flinkVersion 
> % "provided"}}
>  {{libraryDependencies += "org.apache.flink" %% "flink-statebackend-rocksdb" 
> % flinkVersion % "provided"}}
>  {{libraryDependencies += "org.apache.flink" %% "flink-streaming-scala" % 
> flinkVersion % "provided"}}
>  {{libraryDependencies += "org.json4s" %% "json4s-jackson" % "3.6.7"}}
>  {{libraryDependencies += "org.log4s" %% "log4s" % "1.8.2"}}
>  {{libraryDependencies += "org.rogach" %% "scallop" % "3.3.1"}}
> h2. Previous Flink 1.9.1 Configuration
>  * The configuration we used in Flink 1.9.1 is the following. It did not 
> result in `OOMKilled`.
> h3. Kubernetes
> {{resources:}}
>  {{  requests:}}
>  {{    cpu: 1200m}}
>  {{    memory: 2G}}
>  {{  limits:}}
>  {{    cpu: 1500m}}
>  {{    memory: 2G}}
> h3. Flink 1.9.1
> {{jobmanager.heap.size: 820m}}
>  {{taskmanager.heap.size: 1024m}}
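For reference, the 1.10 memory model described above can be sanity-checked with a quick back-of-the-envelope calculation. This is a minimal sketch, assuming the documented 1.10 defaults (JVM overhead fraction 0.1 clamped to [192 MiB, 1024 MiB], managed fraction 0.4, network fraction 0.1 clamped to [64 MiB, 1024 MiB]); the 4096 MiB / 128 MiB inputs are the reporter's settings:

```python
# Back-of-the-envelope breakdown of taskmanager.memory.process.size in Flink 1.10.
# Assumes the documented 1.10 default fractions; all values in MiB.

def clamp(value, lo, hi):
    return min(max(value, lo), hi)

def breakdown(process_size_mib, metaspace_mib):
    # JVM overhead: a fraction of the total process size, clamped to [192, 1024] MiB.
    overhead = clamp(process_size_mib * 0.1, 192, 1024)
    # Total Flink memory: everything Flink itself budgets (heap, managed, network, ...).
    total_flink = process_size_mib - metaspace_mib - overhead
    managed = total_flink * 0.4                      # RocksDB lives here when managed memory is on
    network = clamp(total_flink * 0.1, 64, 1024)
    return {"overhead": overhead, "total_flink": total_flink,
            "managed": managed, "network": network}

b = breakdown(4096, 128)  # the reporter's taskmanager.memory.process.size / jvm-metaspace.size
# Because the Kubernetes limit equals the process size exactly, any native
# allocation outside these budgets (e.g. RocksDB overshooting its block-cache
# and write-buffer limits, glibc malloc arenas) pushes the container past
# 4096 MiB and gets it OOMKilled.
```

This also illustrates why leaving some headroom between `taskmanager.memory.process.size` and the container limit can reduce OOMKills.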



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-16267) Flink uses more memory than taskmanager.memory.process.size in Kubernetes

2020-09-26 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song reassigned FLINK-16267:


Assignee: Xintong Song

> Flink uses more memory than taskmanager.memory.process.size in Kubernetes
> -
>
> Key: FLINK-16267
> URL: https://issues.apache.org/jira/browse/FLINK-16267
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Task
>Affects Versions: 1.10.0
>Reporter: ChangZhuo Chen (陳昌倬)
>Assignee: Xintong Song
>Priority: Major



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-19428) Translate "Elasticsearch Connector" page of "DataStream Connectors" into Chinese

2020-09-26 Thread weizheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17202708#comment-17202708
 ] 

weizheng commented on FLINK-19428:
--

Hi [~jark], could you please assign it to me?

> Translate "Elasticsearch Connector" page of "DataStream Connectors" into 
> Chinese
> 
>
> Key: FLINK-19428
> URL: https://issues.apache.org/jira/browse/FLINK-19428
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation, Documentation
>Reporter: weizheng
>Priority: Major
>
> The page url is 
> https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/connectors/elasticsearch.html
> The markdown file is located in 
> {{flink/docs/dev/connectors/elasticsearch.zh.md}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-19428) Translate "Elasticsearch Connector" page of "DataStream Connectors" into Chinese

2020-09-26 Thread weizheng (Jira)
weizheng created FLINK-19428:


 Summary: Translate "Elasticsearch Connector" page of "DataStream 
Connectors" into Chinese
 Key: FLINK-19428
 URL: https://issues.apache.org/jira/browse/FLINK-19428
 Project: Flink
  Issue Type: Improvement
  Components: chinese-translation, Documentation
Reporter: weizheng


The page url is 
https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/connectors/elasticsearch.html

The markdown file is located in 
{{flink/docs/dev/connectors/elasticsearch.zh.md}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-16788) Support username and password options for Elasticsearch SQL connector

2020-09-26 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu closed FLINK-16788.
--
Release Note: Resolved in https://issues.apache.org/jira/browse/FLINK-18361
  Resolution: Resolved

> Support username and password options for Elasticsearch SQL connector
> -
>
> Key: FLINK-16788
> URL: https://issues.apache.org/jira/browse/FLINK-16788
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / ElasticSearch, Table SQL / API
>Affects Versions: 1.10.0
>Reporter: zhisheng
>Assignee: zhisheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> In a production environment, accessing Elasticsearch usually requires 
> authentication with a username and password, but the current version of the 
> SQL DDL does not allow users to configure these parameters.
>  
> I have improved it in our company, and we use it as follows:
>  
> CREATE TABLE user_behavior_es (
>     user_id BIGINT,
>     item_id BIGINT
> ) WITH (
>     'connector.type'='elasticsearch',
>     'connector.version'='7',
>     'connector.hosts'='http://localhost:9200',
>     'connector.index'='user_behavior_es',
>     'connector.document-type'='user_behavior_es',
>     'connector.enable-auth'='true',
>     'connector.username'='zhisheng',
>     'connector.password'='123456',
>     'format.type'='json',
>     'update-mode'='append',
>     'connector.bulk-flush.max-actions'='10'
> )
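Under the hood, such username/password options amount to HTTP Basic authentication on the connector's REST requests. A minimal, hypothetical sketch of how the `Authorization` header would be built (using only the Python standard library; the credentials are the example values from the DDL above):

```python
# Sketch: constructing the HTTP Basic auth header an Elasticsearch client
# sends when username/password options are configured. The credentials are
# the illustrative values from the DDL example, not real ones.
import base64

def basic_auth_header(username: str, password: str) -> str:
    # Basic auth is base64("username:password") prefixed with "Basic ".
    token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
    return f"Basic {token}"

header = basic_auth_header("zhisheng", "123456")
# A connector would attach this as the "Authorization" header on every
# bulk-flush request to http://localhost:9200.
```

This is why the options only need to be plumbed from the DDL through to the underlying REST client configuration.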



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19420) Translate "Program Packaging" page of "Managing Execution" into Chinese

2020-09-26 Thread Xiao Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Huang updated FLINK-19420:
---
Description: 
The page url is 
[https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/packaging.html]

The markdown file is located in {{flink/docs/dev/packaging.zh.md}}

  was:
The page url is 
[https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/packaging.html]

The markdown file is located in {{flink/docs/packaging.zh.md}}


> Translate "Program Packaging" page of "Managing Execution" into Chinese
> ---
>
> Key: FLINK-19420
> URL: https://issues.apache.org/jira/browse/FLINK-19420
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation, Documentation
>Affects Versions: 1.11.1, 1.11.2
>Reporter: Xiao Huang
>Assignee: Xiao Huang
>Priority: Major
>
> The page url is 
> [https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/packaging.html]
> The markdown file is located in {{flink/docs/dev/packaging.zh.md}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-19427) SplitFetcherTest.testNotifiesWhenGoingIdleConcurrent is instable

2020-09-26 Thread Dian Fu (Jira)
Dian Fu created FLINK-19427:
---

 Summary: SplitFetcherTest.testNotifiesWhenGoingIdleConcurrent is 
instable
 Key: FLINK-19427
 URL: https://issues.apache.org/jira/browse/FLINK-19427
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Common
Affects Versions: 1.12.0
Reporter: Dian Fu


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=6983&view=logs&j=8fd975ef-f478-511d-4997-6f15fe8a1fd3&t=ac0fa443-5d45-5a6b-3597-0310ecc1d2ab

{code}
2020-09-26T21:27:46.6223579Z [ERROR] 
testNotifiesWhenGoingIdleConcurrent(org.apache.flink.connector.base.source.reader.fetcher.SplitFetcherTest)
  Time elapsed: 0.602 s  <<< FAILURE!
2020-09-26T21:27:46.6224448Z java.lang.AssertionError
2020-09-26T21:27:46.6224804Zat org.junit.Assert.fail(Assert.java:86)
2020-09-26T21:27:46.6225136Zat org.junit.Assert.assertTrue(Assert.java:41)
2020-09-26T21:27:46.6225498Zat org.junit.Assert.assertTrue(Assert.java:52)
2020-09-26T21:27:46.6225984Zat 
org.apache.flink.connector.base.source.reader.fetcher.SplitFetcherTest.testNotifiesWhenGoingIdleConcurrent(SplitFetcherTest.java:129)
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19427) SplitFetcherTest.testNotifiesWhenGoingIdleConcurrent is instable

2020-09-26 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-19427:

Labels: test-stability  (was: )

> SplitFetcherTest.testNotifiesWhenGoingIdleConcurrent is instable
> 
>
> Key: FLINK-19427
> URL: https://issues.apache.org/jira/browse/FLINK-19427
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Priority: Major
>  Labels: test-stability
>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19388) Streaming bucketing end-to-end test failed with "Number of running task managers has not reached 4 within a timeout of 40 sec"

2020-09-26 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-19388:

Fix Version/s: 1.12.0

> Streaming bucketing end-to-end test failed with "Number of running task 
> managers has not reached 4 within a timeout of 40 sec"
> --
>
> Key: FLINK-19388
> URL: https://issues.apache.org/jira/browse/FLINK-19388
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem, Runtime / Coordination
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=6876&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=ff888d9b-cd34-53cc-d90f-3e446d355529
> {code}
> 2020-09-24T03:13:12.1370987Z Starting taskexecutor daemon on host fv-az661.
> 2020-09-24T03:13:12.1773280Z Number of running task managers 2 is not yet 4.
> 2020-09-24T03:13:16.4342638Z Number of running task managers 3 is not yet 4.
> 2020-09-24T03:13:20.4570976Z Number of running task managers 0 is not yet 4.
> 2020-09-24T03:13:24.4762428Z Number of running task managers 0 is not yet 4.
> 2020-09-24T03:13:28.4955622Z Number of running task managers 0 is not yet 4.
> 2020-09-24T03:13:32.5110079Z Number of running task managers 0 is not yet 4.
> 2020-09-24T03:13:36.5272551Z Number of running task managers 0 is not yet 4.
> 2020-09-24T03:13:40.5672343Z Number of running task managers 0 is not yet 4.
> 2020-09-24T03:13:44.5857760Z Number of running task managers 0 is not yet 4.
> 2020-09-24T03:13:48.6039181Z Number of running task managers 0 is not yet 4.
> 2020-09-24T03:13:52.6056222Z Number of running task managers has not reached 
> 4 within a timeout of 40 sec
> 2020-09-24T03:13:52.9275629Z Stopping taskexecutor daemon (pid: 13694) on 
> host fv-az661.
> 2020-09-24T03:13:53.1753734Z No standalonesession daemon (pid: 10610) is 
> running anymore on fv-az661.
> 2020-09-24T03:13:53.5812242Z Skipping taskexecutor daemon (pid: 10912), 
> because it is not running anymore on fv-az661.
> 2020-09-24T03:13:53.5813449Z Stopping taskexecutor daemon (pid: 11330) on 
> host fv-az661.
> 2020-09-24T03:13:53.5818053Z Stopping taskexecutor daemon (pid: 11632) on 
> host fv-az661.
> 2020-09-24T03:13:53.5819341Z Skipping taskexecutor daemon (pid: 11965), 
> because it is not running anymore on fv-az661.
> 2020-09-24T03:13:53.5820870Z Skipping taskexecutor daemon (pid: 12906), 
> because it is not running anymore on fv-az661.
> 2020-09-24T03:13:53.5821698Z Stopping taskexecutor daemon (pid: 13392) on 
> host fv-az661.
> 2020-09-24T03:13:53.5839544Z [FAIL] Test script contains errors.
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-19388) Streaming bucketing end-to-end test failed with "Number of running task managers has not reached 4 within a timeout of 40 sec"

2020-09-26 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17202707#comment-17202707
 ] 

Dian Fu commented on FLINK-19388:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=6983&view=logs&j=91bf6583-3fb2-592f-e4d4-d79d79c3230a&t=3425d8ba-5f03-540a-c64b-51b8481bf7d6

> Streaming bucketing end-to-end test failed with "Number of running task 
> managers has not reached 4 within a timeout of 40 sec"
> --
>
> Key: FLINK-19388
> URL: https://issues.apache.org/jira/browse/FLINK-19388
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem, Runtime / Coordination
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Priority: Blocker
>  Labels: test-stability
>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19426) Streaming File Sink end-to-end test is instable

2020-09-26 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-19426:

Component/s: Tests
 Connectors / FileSystem

> Streaming File Sink end-to-end test is instable
> ---
>
> Key: FLINK-19426
> URL: https://issues.apache.org/jira/browse/FLINK-19426
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem, Tests
>Reporter: Dian Fu
>Priority: Major
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=6983&view=logs&j=68a897ab-3047-5660-245a-cce8f83859f6&t=16ca2cca-2f63-5cce-12d2-d519b930a729
> {code}
> 2020-09-26T22:16:26.9856525Z 
> org.apache.flink.runtime.io.network.partition.consumer.PartitionConnectionException:
>  Connection for partition 
> 619775973ed0f282e20f9d55d13913ab#0@bc764cd8ddf7a0cff126f51c16239658_0_1 not 
> reachable.
> 2020-09-26T22:16:26.9857848Z  at 
> org.apache.flink.runtime.io.network.partition.consumer.RemoteInputChannel.requestSubpartition(RemoteInputChannel.java:159)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-09-26T22:16:26.9859168Z  at 
> org.apache.flink.runtime.io.network.partition.consumer.SingleInputGate.internalRequestPartitions(SingleInputGate.java:336)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-09-26T22:16:26.9860449Z  at 
> org.apache.flink.runtime.io.network.partition.consumer.SingleInputGate.requestPartitions(SingleInputGate.java:308)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-09-26T22:16:26.9861677Z  at 
> org.apache.flink.runtime.taskmanager.InputGateWithMetrics.requestPartitions(InputGateWithMetrics.java:95)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-09-26T22:16:26.9862861Z  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.requestPartitions(StreamTask.java:542)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-09-26T22:16:26.9864018Z  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.readRecoveredChannelState(StreamTask.java:507)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-09-26T22:16:26.9865284Z  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$beforeInvoke$0(StreamTask.java:498)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-09-26T22:16:26.9866415Z  at 
> org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.runThrowing(StreamTaskActionExecutor.java:47)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-09-26T22:16:26.9867500Z  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:492)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-09-26T22:16:26.9868514Z  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:550)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-09-26T22:16:26.9869450Z  at 
> org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:722) 
> [flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-09-26T22:16:26.9870339Z  at 
> org.apache.flink.runtime.taskmanager.Task.run(Task.java:547) 
> [flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-09-26T22:16:26.9870869Z  at java.lang.Thread.run(Thread.java:748) 
> [?:1.8.0_265]
> 2020-09-26T22:16:26.9872060Z Caused by: java.io.IOException: 
> java.util.concurrent.ExecutionException: 
> org.apache.flink.runtime.io.network.netty.exception.RemoteTransportException: 
> Connecting to remote task manager '/10.1.0.4:38905' has failed. This might 
> indicate that the remote task manager has been lost.
> 2020-09-26T22:16:26.9873511Z  at 
> org.apache.flink.runtime.io.network.netty.PartitionRequestClientFactory.createPartitionRequestClient(PartitionRequestClientFactory.java:85)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-09-26T22:16:26.9874788Z  at 
> org.apache.flink.runtime.io.network.netty.NettyConnectionManager.createPartitionRequestClient(NettyConnectionManager.java:67)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-09-26T22:16:26.9876084Z  at 
> org.apache.flink.runtime.io.network.partition.consumer.RemoteInputChannel.requestSubpartition(RemoteInputChannel.java:156)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-09-26T22:16:26.9876567Z  ... 12 more
> 2020-09-26T22:16:26.9877477Z Caused by: 
> java.util.concurrent.ExecutionException: 
> org.apache.flink.runtime.io.network.netty.exception.RemoteTransportException: 
> Connecting to remote task manager '/10.1.0.4:38905' has failed. This might 
> indicate that the remote task manager has been lost.
> 2020-09-26T22:16:26.9878503Z  at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357) 
> ~[?:1.8.0_265]
> 2020-09-26T22:16:26.9879061Z  at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908) 
> ~[?:1.8.0_265]
> 2020-0

[jira] [Updated] (FLINK-19426) Streaming File Sink end-to-end test is instable

2020-09-26 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-19426:

Affects Version/s: 1.12.0

> Streaming File Sink end-to-end test is instable
> ---
>
> Key: FLINK-19426
> URL: https://issues.apache.org/jira/browse/FLINK-19426
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem, Tests
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Priority: Major
>  Labels: test-stability
>

[jira] [Updated] (FLINK-19426) Streaming File Sink end-to-end test is instable

2020-09-26 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-19426:

Labels: test-stability  (was: )

> Streaming File Sink end-to-end test is instable
> ---
>
> Key: FLINK-19426
> URL: https://issues.apache.org/jira/browse/FLINK-19426
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem, Tests
>Reporter: Dian Fu
>Priority: Major
>  Labels: test-stability
>

[jira] [Created] (FLINK-19426) Streaming File Sink end-to-end test is instable

2020-09-26 Thread Dian Fu (Jira)
Dian Fu created FLINK-19426:
---

 Summary: Streaming File Sink end-to-end test is instable
 Key: FLINK-19426
 URL: https://issues.apache.org/jira/browse/FLINK-19426
 Project: Flink
  Issue Type: Bug
Reporter: Dian Fu


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=6983&view=logs&j=68a897ab-3047-5660-245a-cce8f83859f6&t=16ca2cca-2f63-5cce-12d2-d519b930a729

{code}
2020-09-26T22:16:26.9856525Z 
org.apache.flink.runtime.io.network.partition.consumer.PartitionConnectionException:
 Connection for partition 
619775973ed0f282e20f9d55d13913ab#0@bc764cd8ddf7a0cff126f51c16239658_0_1 not 
reachable.
2020-09-26T22:16:26.9857848Zat 
org.apache.flink.runtime.io.network.partition.consumer.RemoteInputChannel.requestSubpartition(RemoteInputChannel.java:159)
 ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
2020-09-26T22:16:26.9859168Zat 
org.apache.flink.runtime.io.network.partition.consumer.SingleInputGate.internalRequestPartitions(SingleInputGate.java:336)
 ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
2020-09-26T22:16:26.9860449Zat 
org.apache.flink.runtime.io.network.partition.consumer.SingleInputGate.requestPartitions(SingleInputGate.java:308)
 ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
2020-09-26T22:16:26.9861677Zat 
org.apache.flink.runtime.taskmanager.InputGateWithMetrics.requestPartitions(InputGateWithMetrics.java:95)
 ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
2020-09-26T22:16:26.9862861Zat 
org.apache.flink.streaming.runtime.tasks.StreamTask.requestPartitions(StreamTask.java:542)
 ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
2020-09-26T22:16:26.9864018Zat 
org.apache.flink.streaming.runtime.tasks.StreamTask.readRecoveredChannelState(StreamTask.java:507)
 ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
2020-09-26T22:16:26.9865284Zat 
org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$beforeInvoke$0(StreamTask.java:498)
 ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
2020-09-26T22:16:26.9866415Zat 
org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.runThrowing(StreamTaskActionExecutor.java:47)
 ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
2020-09-26T22:16:26.9867500Zat 
org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:492)
 ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
2020-09-26T22:16:26.9868514Zat 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:550) 
~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
2020-09-26T22:16:26.9869450Zat 
org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:722) 
[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
2020-09-26T22:16:26.9870339Zat 
org.apache.flink.runtime.taskmanager.Task.run(Task.java:547) 
[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
2020-09-26T22:16:26.9870869Zat java.lang.Thread.run(Thread.java:748) 
[?:1.8.0_265]
2020-09-26T22:16:26.9872060Z Caused by: java.io.IOException: 
java.util.concurrent.ExecutionException: 
org.apache.flink.runtime.io.network.netty.exception.RemoteTransportException: 
Connecting to remote task manager '/10.1.0.4:38905' has failed. This might 
indicate that the remote task manager has been lost.
2020-09-26T22:16:26.9873511Zat 
org.apache.flink.runtime.io.network.netty.PartitionRequestClientFactory.createPartitionRequestClient(PartitionRequestClientFactory.java:85)
 ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
2020-09-26T22:16:26.9874788Zat 
org.apache.flink.runtime.io.network.netty.NettyConnectionManager.createPartitionRequestClient(NettyConnectionManager.java:67)
 ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
2020-09-26T22:16:26.9876084Zat 
org.apache.flink.runtime.io.network.partition.consumer.RemoteInputChannel.requestSubpartition(RemoteInputChannel.java:156)
 ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
2020-09-26T22:16:26.9876567Z... 12 more
2020-09-26T22:16:26.9877477Z Caused by: 
java.util.concurrent.ExecutionException: 
org.apache.flink.runtime.io.network.netty.exception.RemoteTransportException: 
Connecting to remote task manager '/10.1.0.4:38905' has failed. This might 
indicate that the remote task manager has been lost.
2020-09-26T22:16:26.9878503Zat 
java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357) 
~[?:1.8.0_265]
2020-09-26T22:16:26.9879061Zat 
java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908) 
~[?:1.8.0_265]
2020-09-26T22:16:26.9880244Zat 
org.apache.flink.runtime.io.network.netty.PartitionRequestClientFactory.createPartitionRequestClient(PartitionRequestClientFactory.java:83)
 ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
2020-09-26T22:16:26.9884461Zat 
org.apache.flink.runtime.io.network.netty.NettyConnectionManager.createPartitionRequestClient(NettyConnectionManager.java:67)
 ~[flink-dist_2.11-1.12-SNAPSHOT

[jira] [Closed] (FLINK-18732) Update the hyperlink to the latest version

2020-09-26 Thread weizheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

weizheng closed FLINK-18732.

Resolution: Not A Problem

> Update the hyperlink to the latest version
> --
>
> Key: FLINK-18732
> URL: https://issues.apache.org/jira/browse/FLINK-18732
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation, Documentation
>Affects Versions: 1.11.0
>Reporter: weizheng
>Priority: Major
>  Labels: pull-request-available, starter
> Fix For: 1.11.0
>
> Attachments: 1.png
>
>
> In 
> https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/connectors/formats/parquet.html
> Update the hyperlink to the latest version



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-17159) ES6 ElasticsearchSinkITCase unstable

2020-09-26 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17202706#comment-17202706
 ] 

Dian Fu commented on FLINK-17159:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=6984&view=logs&j=d44f43ce-542c-597d-bf94-b0718c71e5e8&t=34f486e1-e1e4-5dd2-9c06-bfdd9b9c74a8

> ES6 ElasticsearchSinkITCase unstable
> 
>
> Key: FLINK-17159
> URL: https://issues.apache.org/jira/browse/FLINK-17159
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch, Tests
>Affects Versions: 1.11.0, 1.12.0
>Reporter: Chesnay Schepler
>Assignee: Aljoscha Krettek
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.12.0, 1.11.3
>
>
> [https://dev.azure.com/rmetzger/Flink/_build/results?buildId=7482&view=logs&j=64110e28-73be-50d7-9369-8750330e0bf1&t=aa84fb9a-59ae-5696-70f7-011bc086e59b]
> {code:java}
> 2020-04-15T02:37:04.4289477Z [ERROR] 
> testElasticsearchSinkWithSmile(org.apache.flink.streaming.connectors.elasticsearch6.ElasticsearchSinkITCase)
>   Time elapsed: 0.145 s  <<< ERROR!
> 2020-04-15T02:37:04.4290310Z 
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
> 2020-04-15T02:37:04.4290790Z  at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:147)
> 2020-04-15T02:37:04.4291404Z  at 
> org.apache.flink.runtime.minicluster.MiniCluster.executeJobBlocking(MiniCluster.java:659)
> 2020-04-15T02:37:04.4291956Z  at 
> org.apache.flink.streaming.util.TestStreamEnvironment.execute(TestStreamEnvironment.java:77)
> 2020-04-15T02:37:04.4292548Z  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1643)
> 2020-04-15T02:37:04.4293254Z  at 
> org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkTestBase.runElasticSearchSinkTest(ElasticsearchSinkTestBase.java:128)
> 2020-04-15T02:37:04.4293990Z  at 
> org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkTestBase.runElasticsearchSinkSmileTest(ElasticsearchSinkTestBase.java:106)
> 2020-04-15T02:37:04.4295096Z  at 
> org.apache.flink.streaming.connectors.elasticsearch6.ElasticsearchSinkITCase.testElasticsearchSinkWithSmile(ElasticsearchSinkITCase.java:45)
> 2020-04-15T02:37:04.4295923Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-04-15T02:37:04.4296489Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-04-15T02:37:04.4297076Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-04-15T02:37:04.4297513Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2020-04-15T02:37:04.4297951Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-04-15T02:37:04.4298688Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2020-04-15T02:37:04.4299374Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2020-04-15T02:37:04.4300069Z  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2020-04-15T02:37:04.4300960Z  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> 2020-04-15T02:37:04.4301705Z  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2020-04-15T02:37:04.4302204Z  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> 2020-04-15T02:37:04.4302661Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> 2020-04-15T02:37:04.4303234Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> 2020-04-15T02:37:04.4303706Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2020-04-15T02:37:04.4304127Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2020-04-15T02:37:04.4304716Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2020-04-15T02:37:04.4305394Z  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2020-04-15T02:37:04.4305965Z  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2020-04-15T02:37:04.4306425Z  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2020-04-15T02:37:04.4306942Z  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> 2020-04-15T02:37:04.4307466Z  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> 2020-04-15T02:37:04.4307920Z  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> 2020-04-15T02:37:04.4308375Z  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalRes

[GitHub] [flink] flinkbot edited a comment on pull request #13322: [FLINK-17480][kubernetes] Support running PyFlink on Kubernetes.

2020-09-26 Thread GitBox


flinkbot edited a comment on pull request #13322:
URL: https://github.com/apache/flink/pull/13322#issuecomment-686859976


   
   ## CI report:
   
   * ae9c1538fa1bcb1458006aa8f54979b720b64fd1 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6982)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] walterddr commented on a change in pull request #13356: [FLINK-16789][runtime][rest] Enable JMX RMI port retrieval via REST API

2020-09-26 Thread GitBox


walterddr commented on a change in pull request #13356:
URL: https://github.com/apache/flink/pull/13356#discussion_r495482128



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/resourcemanager/TaskExecutorRegistration.java
##
@@ -72,13 +77,15 @@ public TaskExecutorRegistration(
final String taskExecutorAddress,
final ResourceID resourceId,
final int dataPort,
+   final int jmxPort,
final HardwareDescription hardwareDescription,
final TaskExecutorMemoryConfiguration 
memoryConfiguration,
final ResourceProfile defaultSlotResourceProfile,
final ResourceProfile totalResourceProfile) {
this.taskExecutorAddress = checkNotNull(taskExecutorAddress);
this.resourceId = checkNotNull(resourceId);
this.dataPort = dataPort;
+   this.jmxPort = jmxPort;

Review comment:
   After some experimentation, it seems this change touches a whole lot of 
public classes, including headers/messages in the REST API as well as the 
TaskExecutor. I think separating that into a refactoring PR might have been a 
better idea. What do you think? 





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13322: [FLINK-17480][kubernetes] Support running PyFlink on Kubernetes.

2020-09-26 Thread GitBox


flinkbot edited a comment on pull request #13322:
URL: https://github.com/apache/flink/pull/13322#issuecomment-686859976


   
   ## CI report:
   
   * 8d1c17e7fdfddb482627ebeaa5330ad6a9e6a80d Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6981)
 
   * ae9c1538fa1bcb1458006aa8f54979b720b64fd1 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6982)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13322: [FLINK-17480][kubernetes] Support running PyFlink on Kubernetes.

2020-09-26 Thread GitBox


flinkbot edited a comment on pull request #13322:
URL: https://github.com/apache/flink/pull/13322#issuecomment-686859976


   
   ## CI report:
   
   * c86880fdde55700b120aa62eb463f3b6977546f7 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6971)
 
   * 8d1c17e7fdfddb482627ebeaa5330ad6a9e6a80d Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6981)
 
   * ae9c1538fa1bcb1458006aa8f54979b720b64fd1 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6982)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Assigned] (FLINK-19420) Translate "Program Packaging" page of "Managing Execution" into Chinese

2020-09-26 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-19420:
---

Assignee: Xiao Huang

> Translate "Program Packaging" page of "Managing Execution" into Chinese
> ---
>
> Key: FLINK-19420
> URL: https://issues.apache.org/jira/browse/FLINK-19420
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation, Documentation
>Affects Versions: 1.11.1, 1.11.2
>Reporter: Xiao Huang
>Assignee: Xiao Huang
>Priority: Major
>
> The page url is 
> [https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/packaging.html]
> The markdown file is located in {{flink/docs/packaging.zh.md}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-19409) The comment for getValue has wrong code in class ListView

2020-09-26 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-19409:
---

Assignee: Liu

> The comment for getValue has wrong code in class ListView
> -
>
> Key: FLINK-19409
> URL: https://issues.apache.org/jira/browse/FLINK-19409
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Table SQL / API
>Reporter: Liu
>Assignee: Liu
>Priority: Minor
>
> The comment for getValue currently reads as follows:
> {code:java}
> *    @Override  
> *    public Long getValue(MyAccumulator accumulator) {  
> *        accumulator.list.add(id);  
> *        ... ...  
> *        accumulator.list.get()  
> *         ... ...  
> *        return accumulator.count;  
> *    }  
> {code}
>  Users may be confused by the line "accumulator.list.add(id);". It should 
> be removed. 
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-19419) "null-string-literal" does not work in HBaseSource decoder

2020-09-26 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17202623#comment-17202623
 ] 

Jark Wu commented on FLINK-19419:
-

This has been explained in the docs: 
https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/connectors/hbase.html#null-string-literal

The value of {{null-string-literal}} is the representation of null values: 
null values are encoded as the {{null-string-literal}} string when writing data 
into HBase. The HBase source and sink encode/decode empty bytes as null values by 
default, but this doesn't work for the string type, because an empty byte array 
is also a valid string value. That's why we have this configuration option.
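
The rule above can be sketched in Python (a hypothetical illustration of the 
decode logic only, not the actual HBase connector code; the function name and 
byte handling are assumptions):

```python
NULL_STRING_LITERAL = "null"  # the configured 'null-string-literal' value

def decode_string(cell_bytes):
    """Sketch: decode an HBase string cell to a value.

    For non-string types an empty byte array can unambiguously mean NULL,
    but for strings an empty byte array is itself a valid value, so the
    configured literal represents NULL instead.
    """
    if cell_bytes is None:
        return None  # the cell is missing entirely
    text = cell_bytes.decode("utf-8")
    if text == NULL_STRING_LITERAL:
        return None  # the literal stands in for SQL NULL
    return text

# Empty bytes decode to a valid empty string, not NULL:
assert decode_string(b"") == ""
# The configured literal decodes to NULL:
assert decode_string(b"null") is None
```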

> "null-string-literal" does not work  in HBaseSource decoder 
> 
>
> Key: FLINK-19419
> URL: https://issues.apache.org/jira/browse/FLINK-19419
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / HBase
>Affects Versions: 1.11.0, 1.11.1, 1.11.2
>Reporter: CaoZhen
>Priority: Minor
>  Labels: pull-request-available
> Attachments: image-2020-09-25-21-11-36-418.png
>
>
>  
> When using HBaseSource, we found that "null-string-literal" does not work.
> The current decoder processing logic is shown below.
> `nullStringBytes` should be used when the `value` is null.
>  
> !image-2020-09-25-21-11-36-418.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19423) Primary key position cause JDBC SQL upsert sink ArrayIndexOutOfBoundsException

2020-09-26 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-19423:

Component/s: Table SQL / Ecosystem

> Primary key position cause JDBC SQL upsert sink ArrayIndexOutOfBoundsException
> --
>
> Key: FLINK-19423
> URL: https://issues.apache.org/jira/browse/FLINK-19423
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC, Table SQL / Ecosystem
>Affects Versions: 1.11.1
>Reporter: limbo
>Priority: Major
>
> We found that the primary key position can cause an 
> ArrayIndexOutOfBoundsException.
> The sink is defined like this (the primary key selects the columns at positions 1 and 3):
> {code:java}
> CREATE TABLE `test`(
>   col1 STRING, 
>   col2 STRING, 
>   col3 STRING, 
>   PRIMARY KEY (col1, col3) NOT ENFORCED ) WITH (
>   'connector' = 'jdbc',
>   ...
> ){code}
> When a DELETE (CDC message) arrives, it raises an 
> ArrayIndexOutOfBoundsException:
> {code:java}
> Caused by: java.lang.RuntimeException: Writing records to JDBC failed.
> ... 10 more
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 2
> at 
> org.apache.flink.table.data.GenericRowData.getString(GenericRowData.java:169)
> at 
> org.apache.flink.table.data.RowData.lambda$createFieldGetter$245ca7d1$1(RowData.java:310)
> at 
> org.apache.flink.connector.jdbc.table.JdbcDynamicOutputFormatBuilder.getPrimaryKey(JdbcDynamicOutputFormatBuilder.java:216)
> at 
> org.apache.flink.connector.jdbc.table.JdbcDynamicOutputFormatBuilder.lambda$createRowKeyExtractor$7(JdbcDynamicOutputFormatBuilder.java:193)
> at 
> org.apache.flink.connector.jdbc.table.JdbcDynamicOutputFormatBuilder.lambda$createKeyedRowExecutor$3fd497bb$1(JdbcDynamicOutputFormatBuilder.java:128)
> at 
> org.apache.flink.connector.jdbc.internal.executor.KeyedBatchStatementExecutor.executeBatch(KeyedBatchStatementExecutor.java:71)
> at 
> org.apache.flink.connector.jdbc.internal.executor.BufferReduceStatementExecutor.executeBatch(BufferReduceStatementExecutor.java:99)
> at 
> org.apache.flink.connector.jdbc.internal.JdbcBatchingOutputFormat.attemptFlush(JdbcBatchingOutputFormat.java:200)
> at 
> org.apache.flink.connector.jdbc.internal.JdbcBatchingOutputFormat.flush(JdbcBatchingOutputFormat.java:171)
> ... 8 more
> {code}
>  
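> 
The failure mode in the description can be sketched in Python (a hypothetical 
illustration; the function names and the projection behavior are assumptions, 
not the actual JdbcDynamicOutputFormatBuilder logic):

```python
# Full schema: col1 (index 0), col2 (index 1), col3 (index 2);
# PRIMARY KEY (col1, col3) gives key field indices [0, 2].
pk_indices = [0, 2]

def extract_key_buggy(row):
    """Applies the schema positions directly to the row; this fails when
    the row has already been projected down to only the key fields."""
    return tuple(row[i] for i in pk_indices)

def extract_key_fixed(key_row):
    """After projection, the key fields sit at positions 0..len(pk)-1."""
    return tuple(key_row[i] for i in range(len(pk_indices)))

key_row = ("a", "c")  # a DELETE message carrying only (col1, col3)
assert extract_key_fixed(key_row) == ("a", "c")
try:
    extract_key_buggy(key_row)
except IndexError:
    pass  # index 2 is out of bounds for a 2-field row, as in the report
```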



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #13322: [FLINK-17480][kubernetes] Support running PyFlink on Kubernetes.

2020-09-26 Thread GitBox


flinkbot edited a comment on pull request #13322:
URL: https://github.com/apache/flink/pull/13322#issuecomment-686859976


   
   ## CI report:
   
   * c86880fdde55700b120aa62eb463f3b6977546f7 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6971)
 
   * 8d1c17e7fdfddb482627ebeaa5330ad6a9e6a80d Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6981)
 
   * ae9c1538fa1bcb1458006aa8f54979b720b64fd1 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13322: [FLINK-17480][kubernetes] Support running PyFlink on Kubernetes.

2020-09-26 Thread GitBox


flinkbot edited a comment on pull request #13322:
URL: https://github.com/apache/flink/pull/13322#issuecomment-686859976


   
   ## CI report:
   
   * c86880fdde55700b120aa62eb463f3b6977546f7 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6971)
 
   * 8d1c17e7fdfddb482627ebeaa5330ad6a9e6a80d Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6981)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-19417) Fix the bug of the method from_data_stream in table_environement

2020-09-26 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17202620#comment-17202620
 ] 

Dian Fu commented on FLINK-19417:
-

[~nicholasjiang] Thanks a lot! I have assigned this JIRA to you!

> Fix the bug of the method from_data_stream in table_environement
> 
>
> Key: FLINK-19417
> URL: https://issues.apache.org/jira/browse/FLINK-19417
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Reporter: Huang Xingbo
>Assignee: Nicholas Jiang
>Priority: Major
> Fix For: 1.12.0
>
>
> The parameter fields should be varargs of str or Expression, not the current 
> list of str. And the table_env object passed to the Table object should be 
> Python's TableEnvironment, not Java's TableEnvironment.
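
The reported fix can be sketched as a signature change (a minimal hypothetical 
stand-in for the PyFlink API; the real method builds a Table via the Java 
gateway and is more involved):

```python
class TableEnvironment:
    """Hypothetical stand-in showing only the signature change:
    fields become varargs of str (or Expression), not a list of str."""

    def from_data_stream(self, data_stream, *fields):
        # Each positional argument after data_stream is one field name
        # (or expression); callers no longer pass a single list.
        return [str(f) for f in fields]

t_env = TableEnvironment()
# Field names are now passed as separate arguments:
assert t_env.from_data_stream(object(), "a", "b") == ["a", "b"]
```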



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #13322: [FLINK-17480][kubernetes] Support running PyFlink on Kubernetes.

2020-09-26 Thread GitBox


flinkbot edited a comment on pull request #13322:
URL: https://github.com/apache/flink/pull/13322#issuecomment-686859976


   
   ## CI report:
   
   * c86880fdde55700b120aa62eb463f3b6977546f7 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=6971)
 
   * 8d1c17e7fdfddb482627ebeaa5330ad6a9e6a80d UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Assigned] (FLINK-19417) Fix the bug of the method from_data_stream in table_environement

2020-09-26 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu reassigned FLINK-19417:
---

Assignee: Nicholas Jiang

> Fix the bug of the method from_data_stream in table_environement
> 
>
> Key: FLINK-19417
> URL: https://issues.apache.org/jira/browse/FLINK-19417
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Reporter: Huang Xingbo
>Assignee: Nicholas Jiang
>Priority: Major
> Fix For: 1.12.0
>
>
> The parameter fields should be varargs of str or Expression, not the current 
> list of str. And the table_env object passed to the Table object should be 
> Python's TableEnvironment, not Java's TableEnvironment.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] godfreyhe merged pull request #13283: [FLINK-18759][tests] Add readme.md for TPC-DS tools

2020-09-26 Thread GitBox


godfreyhe merged pull request #13283:
URL: https://github.com/apache/flink/pull/13283


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-19417) Fix the bug of the method from_data_stream in table_environement

2020-09-26 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17202618#comment-17202618
 ] 

Nicholas Jiang commented on FLINK-19417:


[~hxbks2ks], could you please assign this to me? I can fix the bug in the 
from_data_stream method of table_environment.

> Fix the bug of the method from_data_stream in table_environement
> 
>
> Key: FLINK-19417
> URL: https://issues.apache.org/jira/browse/FLINK-19417
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Reporter: Huang Xingbo
>Priority: Major
> Fix For: 1.12.0
>
>
> The parameter fields should be varargs of str or Expression, not the current 
> list of str. And the table_env object passed to the Table object should be 
> Python's TableEnvironment, not Java's TableEnvironment.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


  1   2   >