[GitHub] [flink] libenchao commented on pull request #11191: [FLINK-16094][docs] Translate /dev/table/functions/udfs.zh.md

2020-05-18 Thread GitBox


libenchao commented on pull request #11191:
URL: https://github.com/apache/flink/pull/11191#issuecomment-630598222


   @wuchong Thanks for the reminder and merging.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-16076) Translate "Queryable State" page into Chinese

2020-05-18 Thread Yu Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li closed FLINK-16076.
-
Resolution: Done

Merged into master via 4cfd23a8a0454b321616e9acd91faebbda42ca27

> Translate "Queryable State" page into Chinese
> -
>
> Key: FLINK-16076
> URL: https://issues.apache.org/jira/browse/FLINK-16076
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Affects Versions: 1.11.0
>Reporter: Yu Li
>Assignee: PengFei Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> Complete the translation in `docs/dev/stream/state/queryable_state.zh.md`



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on pull request #12238: [FLINK-17626][fs-connector] Fs connector should use FLIP-122 format options style

2020-05-18 Thread GitBox


flinkbot commented on pull request #12238:
URL: https://github.com/apache/flink/pull/12238#issuecomment-630595564


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 50937916a7724ea232c71a6a42510984187adb19 (Tue May 19 
05:49:19 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.

   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[GitHub] [flink] carp84 closed pull request #12139: [FLINK-16076] Translate "Queryable State" page into Chinese

2020-05-18 Thread GitBox


carp84 closed pull request #12139:
URL: https://github.com/apache/flink/pull/12139


   







[GitHub] [flink] flinkbot edited a comment on pull request #12138: [FLINK-16611] [datadog-metrics] Make number of metrics per request configurable

2020-05-18 Thread GitBox


flinkbot edited a comment on pull request #12138:
URL: https://github.com/apache/flink/pull/12138#issuecomment-628319069


   
   ## CI report:
   
   * 3fa364afeda36c6e3de19804317d7470ecfcbd21 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1795)
 
   
   
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12181: [FLINK-17645][runtime] Fix SafetyNetCloseableRegistry constructor bug.

2020-05-18 Thread GitBox


flinkbot edited a comment on pull request #12181:
URL: https://github.com/apache/flink/pull/12181#issuecomment-629344595


   
   ## CI report:
   
   * 757a2307f027acba2aca26f7f5ec6f9639ce9079 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1721)
 
   * caf4400401fbaffaaeb0e16fafe3de1bcee1cc8b Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1800)
 
   * 97f9193f189afae6147d9d1bdd68beb132bc63ac UNKNOWN
   
   
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] wangyang0918 commented on pull request #12236: [FLINK-17527][flink-dist] Make kubernetes-session.sh use session log4j/logback configuration files

2020-05-18 Thread GitBox


wangyang0918 commented on pull request #12236:
URL: https://github.com/apache/flink/pull/12236#issuecomment-630590441


   cc @tillrohrmann Could you please help to review this change?







[GitHub] [flink] JingsongLi opened a new pull request #12238: [FLINK-17626][fs-connector] Fs connector should use FLIP-122 format options style

2020-05-18 Thread GitBox


JingsongLi opened a new pull request #12238:
URL: https://github.com/apache/flink/pull/12238


   ## What is the purpose of the change
   
   This is 1.11 cherry-pick for https://github.com/apache/flink/pull/12212
   
   Fs connector should use FLIP-122 format options style. Like:
   ```
   create table t (...) with (
 'connector'='filesystem',
 'path'='...',
 'format'='csv',
 'csv.field-delimiter'=';'
   )
   ```
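   For contrast, a rough sketch of the legacy, pre-FLIP-122 property style that this change moves away from. The exact legacy keys below are recalled from the 1.10-era descriptor properties, not quoted from this PR:
   ```sql
   -- Legacy descriptor-style properties (illustrative, pre-FLIP-122):
   -- every format option was nested under 'format.*' and the connector
   -- was selected via 'connector.type' instead of 'connector'.
   create table t (...) with (
     'connector.type' = 'filesystem',
     'connector.path' = '...',
     'format.type' = 'csv',
     'format.field-delimiter' = ';'
   )
   ```
   The FLIP-122 style above keeps the `format` name as a single option and prefixes format-specific options with the format name (`csv.field-delimiter`).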
   
   ## Brief change log
   
   - FileSystemFormatFactory implements FLIP-95 Factory
   - Update formats
   
   ## Verifying this change
   
   This change is already covered by existing tests.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes)
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no







[jira] [Closed] (FLINK-16094) Translate "User-defined Functions" page of "Functions" into Chinese

2020-05-18 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu closed FLINK-16094.
---
Fix Version/s: 1.11.0
   Resolution: Fixed

- master (1.12.0): 92f2ae1585ec6e98666b86c72242666ee29ee65f
- 1.11.0: 4314e7af4bfc46f8f61dd58a1b2a9350e91c38fb

> Translate "User-defined Functions" page of "Functions" into Chinese 
> 
>
> Key: FLINK-16094
> URL: https://issues.apache.org/jira/browse/FLINK-16094
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Assignee: Benchao Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The page url is 
> https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/table/functions/udfs.html
> The markdown file is located in {{flink/docs/dev/table/functions/udfs.zh.md}}





[GitHub] [flink] yangyichao-mango edited a comment on pull request #12196: [FLINK-17353][docs] Fix Broken links in Flink docs master

2020-05-18 Thread GitBox


yangyichao-mango edited a comment on pull request #12196:
URL: https://github.com/apache/flink/pull/12196#issuecomment-630550918


   That is the result of check_links.sh:
   Found 1 dead link.
   
   http://localhost:4000/zh/ops/memory/mem_detail.html
   
   When I try to fix http://localhost:4000/zh/ops/memory/mem_detail.html, I'm blocked by three mds that have not been updated to match the English md.
   
   **ops/memory/mem_setup_tm.md**
   **ops/memory/mem_trouble.md**
   **ops/memory/mem_migration.md**
   
   I think we should create a new jira issue to update those mds. When that new 
issue is done, I can continue to fix this broken link 
(http://localhost:4000/zh/ops/memory/mem_detail.html).
   
   cc @wuchong 







[GitHub] [flink] yangyichao-mango edited a comment on pull request #12196: [FLINK-17353][docs] Fix Broken links in Flink docs master

2020-05-18 Thread GitBox


yangyichao-mango edited a comment on pull request #12196:
URL: https://github.com/apache/flink/pull/12196#issuecomment-630550918


   That is the result of check_links.sh:
   Found 1 dead link.
   
   http://localhost:4000/zh/ops/memory/mem_detail.html
   
   When I try to fix http://localhost:4000/zh/ops/memory/mem_detail.html, I'm blocked by three mds that have not been updated.
   
   **ops/memory/mem_setup_tm.md**
   **ops/memory/mem_trouble.md**
   **ops/memory/mem_migration.md**
   
   I think we should create a new jira issue to update those mds. When that new 
issue is done, I can continue to fix this broken link 
(http://localhost:4000/zh/ops/memory/mem_detail.html).
   
   cc @wuchong 







[GitHub] [flink] flinkbot edited a comment on pull request #12212: [FLINK-17626][fs-connector] Fs connector should use FLIP-122 format options style

2020-05-18 Thread GitBox


flinkbot edited a comment on pull request #12212:
URL: https://github.com/apache/flink/pull/12212#issuecomment-630019032


   
   ## CI report:
   
   * 0ec02542ed4721376e60ea71090cbb335885e6b0 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1698)
 
   * 14ab5f81d7f9d7324e3a0f7ab51d47bf9242abee Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1801)
 
   * 7a972db935a0bcc39d7979358962538fd8790a9c UNKNOWN
   
   
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12221: [FLINK-17797][FLINK-17798][hbase][jdbc] Align the behavior between the new and legacy HBase/JDBC table source

2020-05-18 Thread GitBox


flinkbot edited a comment on pull request #12221:
URL: https://github.com/apache/flink/pull/12221#issuecomment-630123655


   
   ## CI report:
   
   * bd621a07015e49f48360dd5f60209c0ef0cdaa32 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1796)
 
   
   
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Commented] (FLINK-17378) KafkaProducerExactlyOnceITCase>KafkaProducerTestBase.testExactlyOnceCustomOperator unstable

2020-05-18 Thread lining (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17110852#comment-17110852
 ] 

lining commented on FLINK-17378:


Another instance: 
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1256=logs=c5f0071e-1851-543e-9a45-9ac140befc32=684b1416-4c17-504e-d5ab-97ee44e08a20]

> KafkaProducerExactlyOnceITCase>KafkaProducerTestBase.testExactlyOnceCustomOperator
>  unstable
> ---
>
> Key: FLINK-17378
> URL: https://issues.apache.org/jira/browse/FLINK-17378
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Tests
>Affects Versions: 1.11.0
>Reporter: Robert Metzger
>Priority: Major
>  Labels: test-stability
>
> CI run: 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=221=logs=c5f0071e-1851-543e-9a45-9ac140befc32=684b1416-4c17-504e-d5ab-97ee44e08a20
> {code}
> 2020-04-25T00:41:01.4191956Z 00:41:01,418 [Source: Custom Source -> Map -> 
> Sink: Unnamed (1/1)] INFO  
> org.apache.flink.streaming.connectors.kafka.internal.FlinkKafkaInternalProducer
>  [] - Flushing new partitions
> 2020-04-25T00:41:01.4194268Z 00:41:01,418 [FailingIdentityMapper Status 
> Printer] INFO  
> org.apache.flink.streaming.connectors.kafka.testutils.FailingIdentityMapper 
> [] - > Failing mapper  0: count=690, 
> totalCount=1000
> 2020-04-25T00:41:01.4589519Z 
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
> 2020-04-25T00:41:01.4590089Z  at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:147)
> 2020-04-25T00:41:01.4590748Z  at 
> org.apache.flink.runtime.minicluster.MiniCluster.executeJobBlocking(MiniCluster.java:659)
> 2020-04-25T00:41:01.4591524Z  at 
> org.apache.flink.streaming.util.TestStreamEnvironment.execute(TestStreamEnvironment.java:77)
> 2020-04-25T00:41:01.4592062Z  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1643)
> 2020-04-25T00:41:01.4592597Z  at 
> org.apache.flink.test.util.TestUtils.tryExecute(TestUtils.java:35)
> 2020-04-25T00:41:01.4593092Z  at 
> org.apache.flink.streaming.connectors.kafka.KafkaProducerTestBase.testExactlyOnce(KafkaProducerTestBase.java:370)
> 2020-04-25T00:41:01.4593680Z  at 
> org.apache.flink.streaming.connectors.kafka.KafkaProducerTestBase.testExactlyOnceCustomOperator(KafkaProducerTestBase.java:317)
> 2020-04-25T00:41:01.4594450Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-04-25T00:41:01.4595076Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-04-25T00:41:01.4595794Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-04-25T00:41:01.4596622Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2020-04-25T00:41:01.4597501Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-04-25T00:41:01.4598396Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2020-04-25T00:41:01.460Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2020-04-25T00:41:01.4603082Z  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2020-04-25T00:41:01.4604023Z  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> 2020-04-25T00:41:01.4604590Z  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2020-04-25T00:41:01.4605225Z  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> 2020-04-25T00:41:01.4605902Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> 2020-04-25T00:41:01.4606591Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> 2020-04-25T00:41:01.4607468Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2020-04-25T00:41:01.4608577Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2020-04-25T00:41:01.4609030Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2020-04-25T00:41:01.4609460Z  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2020-04-25T00:41:01.4609842Z  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2020-04-25T00:41:01.4610270Z  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2020-04-25T00:41:01.4610727Z  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> 2020-04-25T00:41:01.4611147Z  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> 2020-04-25T00:41:01.4611628Z  at 
> 

[GitHub] [flink] flinkbot edited a comment on pull request #12217: [FLINK-17786][sql-client] Remove dialect from ExecutionEntry

2020-05-18 Thread GitBox


flinkbot edited a comment on pull request #12217:
URL: https://github.com/apache/flink/pull/12217#issuecomment-630047579


   
   ## CI report:
   
   * 35ccaf955d2d6487f6c5199e337208e2281dc417 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1794)
 
   
   
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12212: [FLINK-17626][fs-connector] Fs connector should use FLIP-122 format options style

2020-05-18 Thread GitBox


flinkbot edited a comment on pull request #12212:
URL: https://github.com/apache/flink/pull/12212#issuecomment-630019032


   
   ## CI report:
   
   * 0ec02542ed4721376e60ea71090cbb335885e6b0 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1698)
 
   * 14ab5f81d7f9d7324e3a0f7ab51d47bf9242abee Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1801)
 
   
   
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12181: [FLINK-17645][runtime] Fix SafetyNetCloseableRegistry constructor bug.

2020-05-18 Thread GitBox


flinkbot edited a comment on pull request #12181:
URL: https://github.com/apache/flink/pull/12181#issuecomment-629344595


   
   ## CI report:
   
   * 757a2307f027acba2aca26f7f5ec6f9639ce9079 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1721)
 
   * caf4400401fbaffaaeb0e16fafe3de1bcee1cc8b Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1800)
 
   
   
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12196: [FLINK-17353][docs] Fix Broken links in Flink docs master

2020-05-18 Thread GitBox


flinkbot edited a comment on pull request #12196:
URL: https://github.com/apache/flink/pull/12196#issuecomment-629757606


   
   ## CI report:
   
   * 19f5ffd8eea4707770617a5349a3ef4275708a30 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1793)
 
   
   
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] wuchong closed pull request #11191: [FLINK-16094][docs] Translate /dev/table/functions/udfs.zh.md

2020-05-18 Thread GitBox


wuchong closed pull request #11191:
URL: https://github.com/apache/flink/pull/11191


   







[jira] [Comment Edited] (FLINK-14255) Integrate hive to streaming file sink

2020-05-18 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17109820#comment-17109820
 ] 

Jingsong Lee edited comment on FLINK-14255 at 5/19/20, 4:36 AM:


master(parquet and orc): 1f668dd3df1a3d9bb8837c05ebdd5e473c55b1ea

master(RecordWriter): 4ba1d37a14f9bd7b3596bfe8d5dbe3056bb6214f

release-1.11(RecordWriter): 568f0e68a3c1ce1db64b0d3be50e399fd9c8ea29


was (Author: lzljs3620320):
master(parquet and orc): 1f668dd3df1a3d9bb8837c05ebdd5e473c55b1ea

> Integrate hive to streaming file sink
> -
>
> Key: FLINK-14255
> URL: https://issues.apache.org/jira/browse/FLINK-14255
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Based on StreamingFileSink.
> Extends format support and partition commit support.





[jira] [Closed] (FLINK-14255) Integrate hive to streaming file sink

2020-05-18 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee closed FLINK-14255.

Resolution: Implemented

> Integrate hive to streaming file sink
> -
>
> Key: FLINK-14255
> URL: https://issues.apache.org/jira/browse/FLINK-14255
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Based on StreamingFileSink.
> Extends format support and partition commit support.





[GitHub] [flink] JingsongLi closed pull request #12220: [FLINK-14255][hive] Integrate mapred record writer to hive streaming sink

2020-05-18 Thread GitBox


JingsongLi closed pull request #12220:
URL: https://github.com/apache/flink/pull/12220


   







[GitHub] [flink] wuchong commented on a change in pull request #11906: [FLINK-17356][jdbc][postgres] Support PK and Unique constraints

2020-05-18 Thread GitBox


wuchong commented on a change in pull request #11906:
URL: https://github.com/apache/flink/pull/11906#discussion_r427022575



##
File path: 
flink-connectors/flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/catalog/AbstractJdbcCatalog.java
##
@@ -116,6 +122,36 @@ public String getBaseUrl() {
return baseUrl;
}
 
+   // -- Postgres default objects that shouldn't be exposed to users 
--
+
+   /**
+* Retrieve the list of system schemas to ignore.
+*/
+   public abstract Set<String> getBuiltinSchemas();
+
+   /**
+* Retrieve the list of system database to ignore.
+*/
+   public abstract Set<String> getBuiltinDatabases();
+
+   // -- retrieve PK constraint --
+
+   protected Map.Entry<String, List<String>> getPrimaryKey(DatabaseMetaData metaData, PostgresTablePath pgPath) throws SQLException {

Review comment:
   If we want to move this method into the base class, I would suggest changing the signature to
   
   ```java
   UniqueConstraint getPrimaryKey(DatabaseMetaData metaData, String schema, 
String table)
   ```
   
   1. `UniqueConstraint` is a standard public API to describe primary key which is exposed by Table API.
   2. `PostgresTablePath` is a postgres-specific class which shouldn't be in the base class.
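   A minimal sketch of what such a driver-agnostic helper could assemble from the `DatabaseMetaData.getPrimaryKeys` rows. All names here (`PrimaryKeySketch`, `PkRow`, `assemblePrimaryKey`) are hypothetical stand-ins, not Flink API, and the result is modeled as a `Map.Entry` rather than `UniqueConstraint` so the example stays self-contained:

   ```java
   import java.util.AbstractMap;
   import java.util.ArrayList;
   import java.util.Arrays;
   import java.util.Collections;
   import java.util.Comparator;
   import java.util.List;
   import java.util.Map;
   import java.util.Set;

   // Illustrative sketch only: PkRow stands in for one row of the
   // DatabaseMetaData.getPrimaryKeys result set.
   class PrimaryKeySketch {

       static final class PkRow {
           final String schema;
           final String columnName;
           final String pkName;
           final int keySeq; // KEY_SEQ: 1-based position of the column in the key

           PkRow(String schema, String columnName, String pkName, int keySeq) {
               this.schema = schema;
               this.columnName = columnName;
               this.pkName = pkName;
               this.keySeq = keySeq;
           }
       }

       /**
        * Assembles (pk name -> ordered pk columns) from getPrimaryKeys rows,
        * skipping system schemas, mirroring the loop quoted in this review.
        */
       static Map.Entry<String, List<String>> assemblePrimaryKey(
               List<PkRow> rows, Set<String> builtinSchemas) {
           // The JDBC javadoc orders getPrimaryKeys by COLUMN_NAME, not KEY_SEQ,
           // so sort explicitly to recover the declared key order.
           List<PkRow> sorted = new ArrayList<>(rows);
           sorted.sort(Comparator.comparingInt((PkRow r) -> r.keySeq));

           String pkName = null;
           List<String> columns = new ArrayList<>();
           for (PkRow r : sorted) {
               if (builtinSchemas.contains(r.schema)) {
                   continue; // ignore primary keys of system objects
               }
               pkName = r.pkName;
               columns.add(r.columnName);
           }
           return pkName == null ? null : new AbstractMap.SimpleEntry<>(pkName, columns);
       }

       public static void main(String[] args) {
           Map.Entry<String, List<String>> pk = assemblePrimaryKey(
                   Arrays.asList(
                           new PkRow("public", "ts", "t_pkey", 2),
                           new PkRow("public", "id", "t_pkey", 1)),
                   Collections.singleton("pg_catalog"));
           System.out.println(pk.getKey() + " -> " + pk.getValue()); // t_pkey -> [id, ts]
       }
   }
   ```

   A real implementation would wrap the resulting name and column list in a `UniqueConstraint` as suggested above.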

##
File path: 
flink-connectors/flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/catalog/AbstractJdbcCatalog.java
##
@@ -116,6 +122,36 @@ public String getBaseUrl() {
return baseUrl;
}
 
+   // -- Postgres default objects that shouldn't be exposed to users 
--
+
+   /**
+* Retrieve the list of system schemas to ignore.
+*/
+   public abstract Set<String> getBuiltinSchemas();
+
+   /**
+* Retrieve the list of system database to ignore.
+*/
+   public abstract Set<String> getBuiltinDatabases();

Review comment:
   I think we don't need to move these interfaces into the base classes and `JdbcCatalog`, because they are not needed in `getPrimaryKey`.

##
File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/api/constraints/Constraint.java
##
@@ -61,6 +61,6 @@
 */
enum ConstraintType {
PRIMARY_KEY,
-   UNIQUE_KEY
+   UNIQUE

Review comment:
   Could you revert this? We can discuss this and change it when we support 
UNIQUE.. 

##
File path: 
flink-connectors/flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/catalog/AbstractJdbcCatalog.java
##
@@ -116,6 +122,36 @@ public String getBaseUrl() {
return baseUrl;
}
 
+   // -- Postgres default objects that shouldn't be exposed to users 
--
+
+   /**
+* Retrieve the list of system schemas to ignore.
+*/
+   public abstract Set<String> getBuiltinSchemas();
+
+   /**
+* Retrieve the list of system database to ignore.
+*/
+   public abstract Set<String> getBuiltinDatabases();
+
+   // -- retrieve PK constraint --
+
+   protected Map.Entry<String, List<String>> getPrimaryKey(DatabaseMetaData metaData, PostgresTablePath pgPath) throws SQLException {
+   ResultSet rs = metaData.getPrimaryKeys(null, pgPath.getPgSchemaName(), pgPath.getPgTableName());
+   Map.Entry<String, List<String>> ret = null;
+   while (rs.next()) {
+   String schema = rs.getString("table_schem");
+   String columnName = rs.getString("column_name");
+   String pkName = rs.getString("pk_name");

Review comment:
   use upper case?

##
File path: 
flink-connectors/flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/catalog/AbstractJdbcCatalog.java
##
@@ -116,6 +122,36 @@ public String getBaseUrl() {
return baseUrl;
}
 
+   // -- Postgres default objects that shouldn't be exposed to users 
--
+
+   /**
+* Retrieve the list of system schemas to ignore.
+*/
+   public abstract Set<String> getBuiltinSchemas();
+
+   /**
+* Retrieve the list of system database to ignore.
+*/
+   public abstract Set<String> getBuiltinDatabases();
+
+   // -- retrieve PK constraint --
+
+   protected Map.Entry<String, List<String>> getPrimaryKey(DatabaseMetaData metaData, PostgresTablePath pgPath) throws SQLException {
+   ResultSet rs = metaData.getPrimaryKeys(null, pgPath.getPgSchemaName(), pgPath.getPgTableName());
+   Map.Entry<String, List<String>> ret = null;
+   while (rs.next()) {
+   String schema = rs.getString("table_schem");
+   String columnName = rs.getString("column_name");
+   String pkName = rs.getString("pk_name");
+   if (!getBuiltinSchemas().contains(schema)) {
+   if (ret == null) {
+   

[jira] [Updated] (FLINK-17803) Flink batch sql job read hive table which contains map type, raise exception: " scala.MatchError: MAP"

2020-05-18 Thread zouyunhe (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zouyunhe updated FLINK-17803:
-
Description: 
We use Flink 1.10.0 with the blink planner to submit a batch SQL job that reads from a hive table containing map type fields and then aggregates. The SQL is as below:

```
create view aaa
as select * from table1 where event_id = '0103002' and `day`='2020-05-13'
and `hour`='13';

create view view_1
as
select
  `day`,
  a.rtime as itime,
  a.uid as uid,
  trim(BOTH a.event.log_1['scene']) as refer_list,
  T.s as abflags,
  a.hdid as hdid,
  a.country as country
from aaa as a
left join LATERAL TABLE(splitByChar(trim(BOTH a.event.log_1['abflag']),
  ',')) as T(s) on true;

CREATE VIEW view_6 as
SELECT
  `uid`,
  `refer_list`,
  `abflag`,
  last_value(country)
FROM view_1
where `refer_list` in ('WELOG_NEARBY', 'WELOG_FOLLOW', 'WELOG_POPULAR')
GROUP BY `uid`, `refer_list`, abflag;

insert into
```

When submitting the job, the following exception occurs:
 org.apache.flink.client.program.ProgramInvocationException: The main method 
caused an error: scala.MatchError: MAP (of class 
org.apache.flink.table.types.logical.LogicalTypeRoot)
         at 
org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:335)
         at 
org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:205)
         at 
org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:138)
         at 
org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:664)
         at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:213)
         at 
org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:895)
         at 
org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:968)
         at java.security.AccessController.doPrivileged(Native Method)
         at javax.security.auth.Subject.doAs(Subject.java:422)
         at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
         at 
org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
         at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:968)
 Caused by: java.lang.RuntimeException: scala.MatchError: MAP (of class 
org.apache.flink.table.types.logical.LogicalTypeRoot)
         at 
sg.bigo.streaming.sql.StreamingSqlRunner.main(StreamingSqlRunner.java:143)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
         at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
         at java.lang.reflect.Method.invoke(Method.java:498)
         at 
org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:321)
         ... 11 more
 Caused by: scala.MatchError: MAP (of class 
org.apache.flink.table.types.logical.LogicalTypeRoot)
         at 
org.apache.flink.table.planner.codegen.CodeGenUtils$.hashCodeForType(CodeGenUtils.scala:212)
         at 
org.apache.flink.table.planner.codegen.HashCodeGenerator$.$anonfun$generateCodeBody$1(HashCodeGenerator.scala:97)
         at 
scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:238)
         at 
scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
  
 and then we found that the hashCodeForType method in the CodeGenUtils class does 
not match the MAP type, and we fixed it as below:
```
 def hashCodeForType(
     ctx: CodeGeneratorContext, t: LogicalType, term: String): String = t.getTypeRoot match {
   case BOOLEAN => s"${className[JBoolean]}.hashCode($term)"
   case MAP => s"${className[BaseMap]}.getHashCode($term)"  // the case we add
   case TINYINT => s"${className[JByte]}.hashCode($term)"
```
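For illustration, the failure mode can be reproduced outside Flink with a minimal, 
self-contained sketch. The type roots and generator below are simplified stand-ins, 
not Flink's actual `LogicalTypeRoot` or `CodeGenUtils` classes: a match that omits 
the MAP case throws `scala.MatchError` at code-generation time, and adding the 
branch fixes it.

```scala
// Simplified, hypothetical stand-ins for Flink's LogicalTypeRoot values.
sealed trait TypeRoot
case object BOOLEAN extends TypeRoot
case object TINYINT extends TypeRoot
case object MAP extends TypeRoot

object HashCodeGen {
  // Before the fix: no MAP case, so generating a hash expression for a
  // map-typed field throws scala.MatchError, as in the stack trace above.
  def hashCodeForTypeBroken(t: TypeRoot, term: String): String = t match {
    case BOOLEAN => s"java.lang.Boolean.hashCode($term)"
    case TINYINT => s"java.lang.Byte.hashCode($term)"
  }

  // After the fix: an explicit MAP branch delegates to a map-aware helper
  // (the name BaseMap.getHashCode here is a placeholder in generated code).
  def hashCodeForType(t: TypeRoot, term: String): String = t match {
    case BOOLEAN => s"java.lang.Boolean.hashCode($term)"
    case MAP     => s"BaseMap.getHashCode($term)"
    case TINYINT => s"java.lang.Byte.hashCode($term)"
  }
}
```

Calling `HashCodeGen.hashCodeForTypeBroken(MAP, "f0")` throws `scala.MatchError`, 
while the fixed `hashCodeForType` returns the generated expression string.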


 then the job can be submitted; it ran for a while, then another exception occurred:
 java.lang.RuntimeException: Could not instantiate generated class 
'HashAggregateWithKeys$1543'
 at 
org.apache.flink.table.runtime.generated.GeneratedClass.newInstance(GeneratedClass.java:67)
 at 
org.apache.flink.table.runtime.operators.CodeGenOperatorFactory.createStreamOperator(CodeGenOperatorFactory.java:46)
 at 
org.apache.flink.streaming.api.operators.StreamOperatorFactoryUtil.createOperator(StreamOperatorFactoryUtil.java:48)
 at 
org.apache.flink.streaming.runtime.tasks.OperatorChain.<init>(OperatorChain.java:156)
 at 
org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:433)
 at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:461)
 at 

[GitHub] [flink] flinkbot edited a comment on pull request #12212: [FLINK-17626][fs-connector] Fs connector should use FLIP-122 format options style

2020-05-18 Thread GitBox


flinkbot edited a comment on pull request #12212:
URL: https://github.com/apache/flink/pull/12212#issuecomment-630019032


   
   ## CI report:
   
   * 0ec02542ed4721376e60ea71090cbb335885e6b0 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1698)
 
   * 14ab5f81d7f9d7324e3a0f7ab51d47bf9242abee UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12237: [FLINK-17290] [chinese-translation, Documentation / Training] Transla…

2020-05-18 Thread GitBox


flinkbot edited a comment on pull request #12237:
URL: https://github.com/apache/flink/pull/12237#issuecomment-630566246


   
   ## CI report:
   
   * 0c911fb0b8ae6fc7dabf929ae10b27b24e0b604d Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1797)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12221: [FLINK-17797][FLINK-17798][hbase][jdbc] Align the behavior between the new and legacy HBase/JDBC table source

2020-05-18 Thread GitBox


flinkbot edited a comment on pull request #12221:
URL: https://github.com/apache/flink/pull/12221#issuecomment-630123655


   
   ## CI report:
   
   * 129d85d4bcc2b032ff5be1999a8a0277f31a2848 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1726)
 
   * bd621a07015e49f48360dd5f60209c0ef0cdaa32 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1796)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-17803) Flink batch sql job read hive table which contains map type, raise exception: " scala.MatchError: MAP"

2020-05-18 Thread zouyunhe (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zouyunhe updated FLINK-17803:
-
Description: 
We use Flink 1.10.0 with the Blink planner to submit a batch SQL job that reads 
from a Hive table containing map-type fields and then aggregates. The SQL is as 
below:

```
 create view aaa
 as select * from table1 where event_id = '0103002' and `day`='2020-05-13'
 and `hour`='13';
 create view view_1
 as
 select
 `day`,
 a.rtime as itime,
 a.uid as uid,
 trim(BOTH a.event.log_1['scene']) as refer_list,
 T.s as abflags,
 a.hdid as hdid,
 a.country as country
 from aaa as a
 left join LATERAL TABLE(splitByChar(trim(BOTH a.event.log_1['abflag']),
 ',')) as T(s) on true;

CREATE VIEW view_6 as
 SELECT
 `uid`,
 `refer_list`,
 `abflag`,
 last_value(country)
 FROM view_1
 where `refer_list` in ('WELOG_NEARBY', 'WELOG_FOLLOW', 'WELOG_POPULAR')
 GROUP BY `uid`, `refer_list`, abflag;
 insert into 
``` 


 when submitting the job, the exception occurs as below:
 org.apache.flink.client.program.ProgramInvocationException: The main method 
caused an error: scala.MatchError: MAP (of class 
org.apache.flink.table.types.logical.LogicalTypeRoot)
         at 
org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:335)
         at 
org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:205)
         at 
org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:138)
         at 
org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:664)
         at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:213)
         at 
org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:895)
         at 
org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:968)
         at java.security.AccessController.doPrivileged(Native Method)
         at javax.security.auth.Subject.doAs(Subject.java:422)
         at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
         at 
org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
         at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:968)
 Caused by: java.lang.RuntimeException: scala.MatchError: MAP (of class 
org.apache.flink.table.types.logical.LogicalTypeRoot)
         at 
sg.bigo.streaming.sql.StreamingSqlRunner.main(StreamingSqlRunner.java:143)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
         at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
         at java.lang.reflect.Method.invoke(Method.java:498)
         at 
org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:321)
         ... 11 more
 Caused by: scala.MatchError: MAP (of class 
org.apache.flink.table.types.logical.LogicalTypeRoot)
         at 
org.apache.flink.table.planner.codegen.CodeGenUtils$.hashCodeForType(CodeGenUtils.scala:212)
         at 
org.apache.flink.table.planner.codegen.HashCodeGenerator$.$anonfun$generateCodeBody$1(HashCodeGenerator.scala:97)
         at 
scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:238)
         at 
scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
  
 and then we found that the hashCodeForType method in the CodeGenUtils does not 
match the MAP type, and we fixed it as below
  
 def hashCodeForType(
     ctx: CodeGeneratorContext, t: LogicalType, term: String): String = t.getTypeRoot match {
   case BOOLEAN => s"${className[JBoolean]}.hashCode($term)"
   case MAP => s"${className[BaseMap]}.getHashCode($term)"
   case TINYINT => s"${className[JByte]}.hashCode($term)"
  
 then the job can be submitted; it ran for a while, then another exception occurred:
 java.lang.RuntimeException: Could not instantiate generated class 
'HashAggregateWithKeys$1543'
 at 
org.apache.flink.table.runtime.generated.GeneratedClass.newInstance(GeneratedClass.java:67)
 at 
org.apache.flink.table.runtime.operators.CodeGenOperatorFactory.createStreamOperator(CodeGenOperatorFactory.java:46)
 at 
org.apache.flink.streaming.api.operators.StreamOperatorFactoryUtil.createOperator(StreamOperatorFactoryUtil.java:48)
 at 
org.apache.flink.streaming.runtime.tasks.OperatorChain.<init>(OperatorChain.java:156)
 at 
org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:433)
 at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:461)
 at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:707)
 at 

[GitHub] [flink] flinkbot edited a comment on pull request #12181: [FLINK-17645][runtime] Fix SafetyNetCloseableRegistry constructor bug.

2020-05-18 Thread GitBox


flinkbot edited a comment on pull request #12181:
URL: https://github.com/apache/flink/pull/12181#issuecomment-629344595


   
   ## CI report:
   
   * 757a2307f027acba2aca26f7f5ec6f9639ce9079 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1721)
 
   * caf4400401fbaffaaeb0e16fafe3de1bcee1cc8b UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-17803) Flink batch sql job read hive table with map type, raise exception: " scala.MatchError: MAP"

2020-05-18 Thread zouyunhe (Jira)
zouyunhe created FLINK-17803:


 Summary: Flink batch sql job read hive table with map type, raise 
exception: " scala.MatchError: MAP"
 Key: FLINK-17803
 URL: https://issues.apache.org/jira/browse/FLINK-17803
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.10.0
Reporter: zouyunhe


We use Flink 1.10.0 with the Blink planner to submit a batch SQL job that reads 
from a Hive table containing map-type fields and then aggregates. The SQL is as 
below:

 
create view aaa
as select * from table1 where event_id = '0103002' and `day`='2020-05-13'
and `hour`='13';
create view view_1
as
select
`day`,
a.rtime as itime,
a.uid as uid,
trim(BOTH a.event.log_1['scene']) as refer_list,
T.s as abflags,
a.hdid as hdid,
a.country as country
from aaa as a
left join LATERAL TABLE(splitByChar(trim(BOTH a.event.log_1['abflag']),
',')) as T(s) on true;


CREATE VIEW view_6 as
SELECT
`uid`,
`refer_list`,
`abflag`,
last_value(country)
FROM view_1
where `refer_list` in ('WELOG_NEARBY', 'WELOG_FOLLOW', 'WELOG_POPULAR')
GROUP BY `uid`, `refer_list`, abflag;
 
insert into 
 
when submitting the job, the exception occurs as below:
org.apache.flink.client.program.ProgramInvocationException: The main method 
caused an error: scala.MatchError: MAP (of class 
org.apache.flink.table.types.logical.LogicalTypeRoot)
        at 
org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:335)
        at 
org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:205)
        at 
org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:138)
        at 
org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:664)
        at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:213)
        at 
org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:895)
        at 
org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:968)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
        at 
org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
        at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:968)
Caused by: java.lang.RuntimeException: scala.MatchError: MAP (of class 
org.apache.flink.table.types.logical.LogicalTypeRoot)
        at 
sg.bigo.streaming.sql.StreamingSqlRunner.main(StreamingSqlRunner.java:143)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at 
org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:321)
        ... 11 more
Caused by: scala.MatchError: MAP (of class 
org.apache.flink.table.types.logical.LogicalTypeRoot)
        at 
org.apache.flink.table.planner.codegen.CodeGenUtils$.hashCodeForType(CodeGenUtils.scala:212)
        at 
org.apache.flink.table.planner.codegen.HashCodeGenerator$.$anonfun$generateCodeBody$1(HashCodeGenerator.scala:97)
        at 
scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:238)
        at 
scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
 
and then we found that the hashCodeForType method in the CodeGenUtils does not 
match the MAP type, and we fixed it as below
 
def hashCodeForType(
 ctx: CodeGeneratorContext, t: LogicalType, term: String): String = 
t.getTypeRoot match {
 case BOOLEAN => s"${className[JBoolean]}.hashCode($term)"
 case MAP => s"${className[BaseMap]}.getHashCode($term)"
 case TINYINT => s"${className[JByte]}.hashCode($term)"
 
then the job can be submitted; it ran for a while, then another exception occurred:
java.lang.RuntimeException: Could not instantiate generated class 
'HashAggregateWithKeys$1543'
 at 
org.apache.flink.table.runtime.generated.GeneratedClass.newInstance(GeneratedClass.java:67)
 at 
org.apache.flink.table.runtime.operators.CodeGenOperatorFactory.createStreamOperator(CodeGenOperatorFactory.java:46)
 at 
org.apache.flink.streaming.api.operators.StreamOperatorFactoryUtil.createOperator(StreamOperatorFactoryUtil.java:48)
 at 
org.apache.flink.streaming.runtime.tasks.OperatorChain.<init>(OperatorChain.java:156)
 at 
org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:433)
 

[GitHub] [flink] flinkbot edited a comment on pull request #12138: [FLINK-16611] [datadog-metrics] Make number of metrics per request configurable

2020-05-18 Thread GitBox


flinkbot edited a comment on pull request #12138:
URL: https://github.com/apache/flink/pull/12138#issuecomment-628319069


   
   ## CI report:
   
   * a6af75021894504c6515472be86f1e490ae32e27 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1218)
 
   * 3fa364afeda36c6e3de19804317d7470ecfcbd21 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1795)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-17803) Flink batch sql job read hive table which contains map type, raise exception: " scala.MatchError: MAP"

2020-05-18 Thread zouyunhe (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zouyunhe updated FLINK-17803:
-
Summary: Flink batch sql job read hive table which contains map type, raise 
exception: " scala.MatchError: MAP"  (was: Flink batch sql job read hive table 
with map type, raise exception: " scala.MatchError: MAP")

> Flink batch sql job read hive table which contains map type, raise exception: 
> " scala.MatchError: MAP"
> --
>
> Key: FLINK-17803
> URL: https://issues.apache.org/jira/browse/FLINK-17803
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: zouyunhe
>Priority: Major
>
> We use Flink 1.10.0 with the Blink planner to submit a batch SQL job that reads 
> from a Hive table containing map-type fields and then aggregates. The SQL is as 
> below:
>  
> create view aaa
> as select * from table1 where event_id = '0103002' and `day`='2020-05-13'
> and `hour`='13';
> create view view_1
> as
> select
> `day`,
> a.rtime as itime,
> a.uid as uid,
> trim(BOTH a.event.log_1['scene']) as refer_list,
> T.s as abflags,
> a.hdid as hdid,
> a.country as country
> from aaa as a
> left join LATERAL TABLE(splitByChar(trim(BOTH a.event.log_1['abflag']),
> ',')) as T(s) on true;
> CREATE VIEW view_6 as
> SELECT
> `uid`,
> `refer_list`,
> `abflag`,
> last_value(country)
> FROM view_1
> where `refer_list` in ('WELOG_NEARBY', 'WELOG_FOLLOW', 'WELOG_POPULAR')
> GROUP BY `uid`, `refer_list`, abflag;
>  
> insert into 
>  
> when submitting the job, the exception occurs as below:
> org.apache.flink.client.program.ProgramInvocationException: The main method 
> caused an error: scala.MatchError: MAP (of class 
> org.apache.flink.table.types.logical.LogicalTypeRoot)
>         at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:335)
>         at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:205)
>         at 
> org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:138)
>         at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:664)
>         at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:213)
>         at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:895)
>         at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:968)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
>         at 
> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>         at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:968)
> Caused by: java.lang.RuntimeException: scala.MatchError: MAP (of class 
> org.apache.flink.table.types.logical.LogicalTypeRoot)
>         at 
> sg.bigo.streaming.sql.StreamingSqlRunner.main(StreamingSqlRunner.java:143)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:321)
>         ... 11 more
> Caused by: scala.MatchError: MAP (of class 
> org.apache.flink.table.types.logical.LogicalTypeRoot)
>         at 
> org.apache.flink.table.planner.codegen.CodeGenUtils$.hashCodeForType(CodeGenUtils.scala:212)
>         at 
> org.apache.flink.table.planner.codegen.HashCodeGenerator$.$anonfun$generateCodeBody$1(HashCodeGenerator.scala:97)
>         at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:238)
>         at 
> scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
>  
> and then we found that the hashCodeForType method in the CodeGenUtils does not 
> match the MAP type, and we fixed it as below
>  
> def hashCodeForType(
>  ctx: CodeGeneratorContext, t: LogicalType, term: String): String = 
> t.getTypeRoot match {
>  case BOOLEAN => s"${className[JBoolean]}.hashCode($term)"
>  case MAP => s"${className[BaseMap]}.getHashCode($term)"
>  case TINYINT => s"${className[JByte]}.hashCode($term)"
>  
> then the job can be submitted, 

[jira] [Issue Comment Deleted] (FLINK-14255) Integrate hive to streaming file sink

2020-05-18 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee updated FLINK-14255:
-
Comment: was deleted

(was: master: 547c168a8978a80072187a80d84d53d6e7f02260)

> Integrate hive to streaming file sink
> -
>
> Key: FLINK-14255
> URL: https://issues.apache.org/jira/browse/FLINK-14255
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Based on StreamingFileSink.
> Extends format support and partition commit support.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-17594) Support Hadoop path-based PartFileWriter with renaming committer

2020-05-18 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17109573#comment-17109573
 ] 

Jingsong Lee edited comment on FLINK-17594 at 5/19/20, 4:11 AM:


master: 3b3ad3d25c79a4aa6e7daeb947654256efd4457b

release-1.11: 01852d7655898e14eec3452967862b924eac2b2f


was (Author: lzljs3620320):
master: 3b3ad3d25c79a4aa6e7daeb947654256efd4457b

> Support Hadoop path-based PartFileWriter with renaming committer
> 
>
> Key: FLINK-17594
> URL: https://issues.apache.org/jira/browse/FLINK-17594
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive
>Reporter: Yun Gao
>Assignee: Yun Gao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> To support streaming hive connector with StreamingFileSink, we need to 
> support path-based writer that could write to a specified Hadoop path. At the 
> first step we could use rename mechanism to achieve the file commit.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-17594) Support Hadoop path-based PartFileWriter with renaming committer

2020-05-18 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17109573#comment-17109573
 ] 

Jingsong Lee edited comment on FLINK-17594 at 5/19/20, 4:11 AM:


master: 3b3ad3d25c79a4aa6e7daeb947654256efd4457b


was (Author: lzljs3620320):
master: f850ec78b1554989f39053569ea821b19a2adc34

> Support Hadoop path-based PartFileWriter with renaming committer
> 
>
> Key: FLINK-17594
> URL: https://issues.apache.org/jira/browse/FLINK-17594
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive
>Reporter: Yun Gao
>Assignee: Yun Gao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> To support streaming hive connector with StreamingFileSink, we need to 
> support path-based writer that could write to a specified Hadoop path. At the 
> first step we could use rename mechanism to achieve the file commit.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] JingsongLi merged pull request #12224: [FLINK-17594][filesystem] Support Hadoop path-based part-file writer.

2020-05-18 Thread GitBox


JingsongLi merged pull request #12224:
URL: https://github.com/apache/flink/pull/12224


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] JingsongLi closed pull request #12235: [FLINK-14255][hive] Integrate mapred record writer to hive streaming sink

2020-05-18 Thread GitBox


JingsongLi closed pull request #12235:
URL: https://github.com/apache/flink/pull/12235


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12221: [FLINK-17797][FLINK-17798][hbase][jdbc] Align the behavior between the new and legacy HBase/JDBC table source

2020-05-18 Thread GitBox


flinkbot edited a comment on pull request #12221:
URL: https://github.com/apache/flink/pull/12221#issuecomment-630123655


   
   ## CI report:
   
   * 129d85d4bcc2b032ff5be1999a8a0277f31a2848 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1726)
 
   * bd621a07015e49f48360dd5f60209c0ef0cdaa32 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12138: [FLINK-16611] [datadog-metrics] Make number of metrics per request configurable

2020-05-18 Thread GitBox


flinkbot edited a comment on pull request #12138:
URL: https://github.com/apache/flink/pull/12138#issuecomment-628319069


   
   ## CI report:
   
   * a6af75021894504c6515472be86f1e490ae32e27 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1218)
 
   * 3fa364afeda36c6e3de19804317d7470ecfcbd21 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12139: [FLINK-16076] Translate "Queryable State" page into Chinese

2020-05-18 Thread GitBox


flinkbot edited a comment on pull request #12139:
URL: https://github.com/apache/flink/pull/12139#issuecomment-628360660


   
   ## CI report:
   
   * 1de151b291c4a24fe9cd32304d033f65d8f465f6 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1783)
 
   * 8ef0fd613e57d689c777b2d0473e402f4ea24d4f UNKNOWN
   * a659c9220a856519dc60a18b79438c2ca485fa72 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1792)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #12237: [FLINK-17290] [chinese-translation, Documentation / Training] Transla…

2020-05-18 Thread GitBox


flinkbot commented on pull request #12237:
URL: https://github.com/apache/flink/pull/12237#issuecomment-630566246


   
   ## CI report:
   
   * 0c911fb0b8ae6fc7dabf929ae10b27b24e0b604d UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-17802) Set offset commit only if group id is configured for new Kafka Table source

2020-05-18 Thread Leonard Xu (Jira)
Leonard Xu created FLINK-17802:
--

 Summary: Set offset commit only if group id is configured for new 
Kafka Table source
 Key: FLINK-17802
 URL: https://issues.apache.org/jira/browse/FLINK-17802
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kafka
Affects Versions: 1.11.0
Reporter: Leonard Xu
 Fix For: 1.11.0


As https://issues.apache.org/jira/browse/FLINK-17619 described,

the new Kafka Table source exits same problem and should fix too.

note: this  fix both for master and release-1.11

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wuchong commented on pull request #12221: [FLINK-17797][FLINK-17798][hbase][jdbc] Align the behavior between the new and legacy HBase/JDBC table source

2020-05-18 Thread GitBox


wuchong commented on pull request #12221:
URL: https://github.com/apache/flink/pull/12221#issuecomment-630563803


   Rebased to resolve compile problem and updated.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wanglijie95 commented on a change in pull request #12181: [FLINK-17645][runtime] Fix SafetyNetCloseableRegistry constructor bug.

2020-05-18 Thread GitBox


wanglijie95 commented on a change in pull request #12181:
URL: https://github.com/apache/flink/pull/12181#discussion_r427015357



##
File path: 
flink-core/src/main/java/org/apache/flink/core/fs/SafetyNetCloseableRegistry.java
##
@@ -186,6 +212,15 @@ private CloseableReaperThread() {
this.running = true;
}
 
+   @VisibleForTesting
+   CloseableReaperThread(String name) {

Review comment:
   OK, I will change it to protected.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12224: [FLINK-17594][filesystem] Support Hadoop path-based part-file writer.

2020-05-18 Thread GitBox


flinkbot edited a comment on pull request #12224:
URL: https://github.com/apache/flink/pull/12224#issuecomment-630123879


   
   ## CI report:
   
   * 354620f64883486ddf2c02dddf0633b51d0444d9 UNKNOWN
   * 2712496dd23e43f1660d98dee5bf622a590f1357 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1781)
 
   * 64b01f03d9a850df4e21c014abfad92cfb845de8 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1784)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12139: [FLINK-16076] Translate "Queryable State" page into Chinese

2020-05-18 Thread GitBox


flinkbot edited a comment on pull request #12139:
URL: https://github.com/apache/flink/pull/12139#issuecomment-628360660


   
   ## CI report:
   
   * 1de151b291c4a24fe9cd32304d033f65d8f465f6 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1783)
 
   * 8ef0fd613e57d689c777b2d0473e402f4ea24d4f UNKNOWN
   * a659c9220a856519dc60a18b79438c2ca485fa72 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12196: [FLINK-17353][docs] Fix Broken links in Flink docs master

2020-05-18 Thread GitBox


flinkbot edited a comment on pull request #12196:
URL: https://github.com/apache/flink/pull/12196#issuecomment-629757606


   
   ## CI report:
   
   * 8a039dbefa98196466e3226609de0bdcb95bc19a Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1744)
 
   * 19f5ffd8eea4707770617a5349a3ef4275708a30 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1793)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12217: [FLINK-17786][sql-client] Remove dialect from ExecutionEntry

2020-05-18 Thread GitBox


flinkbot edited a comment on pull request #12217:
URL: https://github.com/apache/flink/pull/12217#issuecomment-630047579


   
   ## CI report:
   
   * a9407fa236d3cb710f131c04ced0ae449bbc15a9 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1710)
 
   * 35ccaf955d2d6487f6c5199e337208e2281dc417 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1794)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zhuzhurk commented on a change in pull request #12181: [FLINK-17645][runtime] Fix SafetyNetCloseableRegistry constructor bug.

2020-05-18 Thread GitBox


zhuzhurk commented on a change in pull request #12181:
URL: https://github.com/apache/flink/pull/12181#discussion_r427012948



##
File path: 
flink-core/src/main/java/org/apache/flink/core/fs/SafetyNetCloseableRegistry.java
##
@@ -186,6 +212,15 @@ private CloseableReaperThread() {
this.running = true;
}
 
+   @VisibleForTesting
+   CloseableReaperThread(String name) {

Review comment:
   Why adding a new constructor here?
   If you are blocked by the existing private constructor, you can just change 
it to protected.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] swhelan091 commented on a change in pull request #12138: [FLINK-16611] [datadog-metrics] Make number of metrics per request configurable

2020-05-18 Thread GitBox


swhelan091 commented on a change in pull request #12138:
URL: https://github.com/apache/flink/pull/12138#discussion_r427013055



##
File path: 
flink-metrics/flink-metrics-datadog/src/main/java/org/apache/flink/metrics/datadog/DatadogHttpReporter.java
##
@@ -137,15 +140,21 @@ public void report() {
counters.values().forEach(request::addCounter);
meters.values().forEach(request::addMeter);
 
-   try {
-   client.send(request);
-   counters.values().forEach(DCounter::ackReport);
-   LOGGER.debug("Reported series with size {}.", 
request.getSeries().size());
-   } catch (SocketTimeoutException e) {
-   LOGGER.warn("Failed reporting metrics to Datadog 
because of socket timeout: {}", e.getMessage());
-   } catch (Exception e) {
-   LOGGER.warn("Failed reporting metrics to Datadog.", e);
+   int totalMetrics = request.getSeries().size();
+   int fromIndex = 0;
+   while (fromIndex < totalMetrics) {
+   int toIndex = Math.min(fromIndex + 
maxMetricsPerRequestValue, totalMetrics);
+   try {
+   client.send(new 
DSeries(request.getSeries().subList(fromIndex, toIndex)));
+   } catch (SocketTimeoutException e) {
+   LOGGER.warn("Failed reporting metrics to 
Datadog because of socket timeout: {}", e.getMessage());
+   } catch (Exception e) {
+   LOGGER.warn("Failed reporting metrics to 
Datadog.", e);
+   }
+   fromIndex = toIndex;
}
+   LOGGER.debug("Reported series with size {}.", totalMetrics);
+   counters.values().forEach(DCounter::ackReport);

Review comment:
   One question here. Should `ackReport` only be called on counters that 
were successfully emitted to Datadog?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wuchong commented on a change in pull request #12221: [FLINK-17797][FLINK-17798][hbase][jdbc] Align the behavior between the new and legacy HBase/JDBC table source

2020-05-18 Thread GitBox


wuchong commented on a change in pull request #12221:
URL: https://github.com/apache/flink/pull/12221#discussion_r427012826



##
File path: 
flink-connectors/flink-connector-jdbc/src/test/java/org/apache/flink/connector/jdbc/table/JdbcLookupTableITCase.java
##
@@ -46,24 +46,28 @@
 import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.Collection;
+import java.util.Collections;
+import java.util.Iterator;
 import java.util.List;
+import java.util.stream.Collectors;
 
 import static 
org.apache.flink.connector.jdbc.JdbcTestFixture.DERBY_EBOOKSHOP_DB;
 import static org.apache.flink.table.api.Expressions.$;
+import static org.junit.Assert.assertEquals;
 
 /**
- * IT case for {@link JdbcLookupFunction}.
+ * IT case for lookup source of JDBC connector.
  */
 @RunWith(Parameterized.class)
-public class JdbcLookupFunctionITCase extends AbstractTestBase {
+public class JdbcLookupTableITCase extends AbstractTestBase {

Review comment:
   We don't support projection for lookup source.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zhuzhurk commented on a change in pull request #12181: [FLINK-17645][runtime] Fix SafetyNetCloseableRegistry constructor bug.

2020-05-18 Thread GitBox


zhuzhurk commented on a change in pull request #12181:
URL: https://github.com/apache/flink/pull/12181#discussion_r427010925



##
File path: 
flink-core/src/main/java/org/apache/flink/core/fs/SafetyNetCloseableRegistry.java
##
@@ -73,8 +73,34 @@
synchronized (REAPER_THREAD_LOCK) {
if (0 == GLOBAL_SAFETY_NET_REGISTRY_COUNT) {
Preconditions.checkState(null == REAPER_THREAD);
-   REAPER_THREAD = new CloseableReaperThread();
-   REAPER_THREAD.start();
+   try {
+   REAPER_THREAD = new 
CloseableReaperThread();
+   REAPER_THREAD.start();
+   } catch (Throwable throwable) {
+   // thread create or start error.
+   REAPER_THREAD = null;
+   throw throwable;
+   }
+   }
+   ++GLOBAL_SAFETY_NET_REGISTRY_COUNT;
+   }
+   }
+
+   @VisibleForTesting
+   SafetyNetCloseableRegistry(CloseableReaperThread reaperThread) {

Review comment:
   We should avoid duplicating the codes.
   Below is a possible way to make it.
   
   ```
SafetyNetCloseableRegistry() {
this(() -> new CloseableReaperThread());
}
   
@VisibleForTesting
SafetyNetCloseableRegistry(Supplier 
reaperThreadSupplier) {
super(new IdentityHashMap<>());
   
synchronized (REAPER_THREAD_LOCK) {
if (0 == GLOBAL_SAFETY_NET_REGISTRY_COUNT) {
Preconditions.checkState(null == REAPER_THREAD);
try {
REAPER_THREAD = 
reaperThreadSupplier.get();
REAPER_THREAD.start();
} catch (Throwable throwable) {
REAPER_THREAD = null;
throw throwable;
}
}
++GLOBAL_SAFETY_NET_REGISTRY_COUNT;
}
}
   ```





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zhuzhurk commented on a change in pull request #12181: [FLINK-17645][runtime] Fix SafetyNetCloseableRegistry constructor bug.

2020-05-18 Thread GitBox


zhuzhurk commented on a change in pull request #12181:
URL: https://github.com/apache/flink/pull/12181#discussion_r427010925



##
File path: 
flink-core/src/main/java/org/apache/flink/core/fs/SafetyNetCloseableRegistry.java
##
@@ -73,8 +73,34 @@
synchronized (REAPER_THREAD_LOCK) {
if (0 == GLOBAL_SAFETY_NET_REGISTRY_COUNT) {
Preconditions.checkState(null == REAPER_THREAD);
-   REAPER_THREAD = new CloseableReaperThread();
-   REAPER_THREAD.start();
+   try {
+   REAPER_THREAD = new 
CloseableReaperThread();
+   REAPER_THREAD.start();
+   } catch (Throwable throwable) {
+   // thread create or start error.
+   REAPER_THREAD = null;
+   throw throwable;
+   }
+   }
+   ++GLOBAL_SAFETY_NET_REGISTRY_COUNT;
+   }
+   }
+
+   @VisibleForTesting
+   SafetyNetCloseableRegistry(CloseableReaperThread reaperThread) {

Review comment:
   We should avoid duplicating the codes.
   Below is a possible way to make it.
   
   ```
SafetyNetCloseableRegistry() {
this(() -> new CloseableReaperThread());
}
   
@VisibleForTesting
SafetyNetCloseableRegistry(Supplier 
reaperThreadSupplier) {
super(new IdentityHashMap<>());
   
synchronized (REAPER_THREAD_LOCK) {
if (0 == GLOBAL_SAFETY_NET_REGISTRY_COUNT) {
Preconditions.checkState(null == REAPER_THREAD);
try {
REAPER_THREAD = 
reaperThreadSupplier.get();
REAPER_THREAD.start();
} catch (Throwable throwable) {
// thread create or start error.
REAPER_THREAD = null;
throw throwable;
}
}
++GLOBAL_SAFETY_NET_REGISTRY_COUNT;
}
}
   ```





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wuchong commented on a change in pull request #12221: [FLINK-17797][FLINK-17798][hbase][jdbc] Align the behavior between the new and legacy HBase/JDBC table source

2020-05-18 Thread GitBox


wuchong commented on a change in pull request #12221:
URL: https://github.com/apache/flink/pull/12221#discussion_r427009783



##
File path: 
flink-connectors/flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/table/JdbcDynamicTableSource.java
##
@@ -126,9 +128,20 @@ public ChangelogMode getChangelogMode() {
return ChangelogMode.insertOnly();
}
 
+   @Override
+   public boolean supportsNestedProjection() {
+   // JDBC doesn't support nested projection
+   return false;
+   }
+
+   @Override
+   public void applyProjection(int[][] projectedFields) {
+   this.physicalSchema = 
TableSchemaUtils.projectSchema(physicalSchema, projectedFields);

Review comment:
   Removed `selectFields` because we don't need it. The `physicalSchema` is 
the selected schema. 





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #12237: [FLINK-17290] [chinese-translation, Documentation / Training] Transla…

2020-05-18 Thread GitBox


flinkbot commented on pull request #12237:
URL: https://github.com/apache/flink/pull/12237#issuecomment-630556678


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 0c911fb0b8ae6fc7dabf929ae10b27b24e0b604d (Tue May 19 
03:27:28 UTC 2020)
   
✅no warnings
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer of PMC member is required Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-17290) Translate Streaming Analytics training lesson to Chinese

2020-05-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-17290:
---
Labels: pull-request-available  (was: )

> Translate Streaming Analytics training lesson to Chinese
> 
>
> Key: FLINK-17290
> URL: https://issues.apache.org/jira/browse/FLINK-17290
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation, Documentation / Training
>Reporter: David Anderson
>Assignee: Herman, Li
>Priority: Major
>  Labels: pull-request-available
>
> The file to be translated is docs/training/streaming-analytics.zh.md. The 
> content covers event time, watermarks, and windowing.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] twentyworld opened a new pull request #12237: [FLINK-17290] [chinese-translation, Documentation / Training] Transla…

2020-05-18 Thread GitBox


twentyworld opened a new pull request #12237:
URL: https://github.com/apache/flink/pull/12237


   This PR is used to translate Flink tutorials page to Chinese, as no test 
should be involved. 
   
   It cost a lot of time to weigh every sentence to prevent my translation from 
causing confusion to the reader. As i'm not a native English speaker, so there 
will inevitably have some problems, please correct me, I hope this document is 
polished well enough.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12220: [FLINK-14255][hive] Integrate mapred record writer to hive streaming sink

2020-05-18 Thread GitBox


flinkbot edited a comment on pull request #12220:
URL: https://github.com/apache/flink/pull/12220#issuecomment-630123571


   
   ## CI report:
   
   * 1f7420e803f6c1a3a4596457b13aa3e5bc125199 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1725)
 
   * 43791487ecb65eea3edac6956fcacb90c0da7e7f Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1782)
 
   * 639da607c4f4a3f23bd3ed6635cba211bf6cf8f2 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1789)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12217: [FLINK-17786][sql-client] Remove dialect from ExecutionEntry

2020-05-18 Thread GitBox


flinkbot edited a comment on pull request #12217:
URL: https://github.com/apache/flink/pull/12217#issuecomment-630047579


   
   ## CI report:
   
   * a9407fa236d3cb710f131c04ced0ae449bbc15a9 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1710)
 
   * 35ccaf955d2d6487f6c5199e337208e2281dc417 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12236: [FLINK-17527][flink-dist] Make kubernetes-session.sh use session log4j/logback configuration files

2020-05-18 Thread GitBox


flinkbot edited a comment on pull request #12236:
URL: https://github.com/apache/flink/pull/12236#issuecomment-630547436


   
   ## CI report:
   
   * c3caf6974bdf8298376d150b00fd7e7125089986 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1790)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12139: [FLINK-16076] Translate "Queryable State" page into Chinese

2020-05-18 Thread GitBox


flinkbot edited a comment on pull request #12139:
URL: https://github.com/apache/flink/pull/12139#issuecomment-628360660


   
   ## CI report:
   
   * 158aea29d67643ce7c7e140f32c32e4c8fc177be Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1702)
 
   * 1de151b291c4a24fe9cd32304d033f65d8f465f6 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1783)
 
   * 8ef0fd613e57d689c777b2d0473e402f4ea24d4f UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12196: [FLINK-17353][docs] Fix Broken links in Flink docs master

2020-05-18 Thread GitBox


flinkbot edited a comment on pull request #12196:
URL: https://github.com/apache/flink/pull/12196#issuecomment-629757606


   
   ## CI report:
   
   * 8a039dbefa98196466e3226609de0bdcb95bc19a Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1744)
 
   * 19f5ffd8eea4707770617a5349a3ef4275708a30 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Comment Edited] (FLINK-17745) PackagedProgram' extractedTempLibraries and jarfiles may be duplicate

2020-05-18 Thread Echo Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17110805#comment-17110805
 ] 

Echo Lee edited comment on FLINK-17745 at 5/19/20, 3:08 AM:


[~kkl0u] I don't think nested jar should be removed, because in standalone 
mode, we need it. It can extracted dependent jar and upload to the blob server 
and each task can be access it. If the nested jar is removed, I'm not sure 
whether it works.


was (Author: leeecho):
[~kkl0u] I don't think nested jar should be removed, because in standalone 
mode, we need it. It can extracted dependent jar and upload to the blob file 
and each task can be access it. If the nested jar is removed, I'm not sure 
whether it works.

> PackagedProgram' extractedTempLibraries and jarfiles may be duplicate
> -
>
> Key: FLINK-17745
> URL: https://issues.apache.org/jira/browse/FLINK-17745
> Project: Flink
>  Issue Type: Improvement
>  Components: Client / Job Submission
>Affects Versions: 1.11.0
>Reporter: lisen
>Assignee: Kostas Kloudas
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> When i submit a flink app with a fat jar, PackagedProgram will extracted temp 
> libraries by the fat jar, and add to pipeline.jars, and the pipeline.jars 
> contains  fat jar and temp libraries. I don't think we should add fat jar to 
> the pipeline.jars if extractedTempLibraries is not empty.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-17745) PackagedProgram' extractedTempLibraries and jarfiles may be duplicate

2020-05-18 Thread Echo Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17110805#comment-17110805
 ] 

Echo Lee commented on FLINK-17745:
--

[~kkl0u] I don't think nested jar should be removed, because in standalone 
mode, we need it. It can extracted dependent jar and upload to the blob file 
and each task can be access it. If the nested jar is removed, I'm not sure 
whether it works.

> PackagedProgram' extractedTempLibraries and jarfiles may be duplicate
> -
>
> Key: FLINK-17745
> URL: https://issues.apache.org/jira/browse/FLINK-17745
> Project: Flink
>  Issue Type: Improvement
>  Components: Client / Job Submission
>Affects Versions: 1.11.0
>Reporter: lisen
>Assignee: Kostas Kloudas
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> When i submit a flink app with a fat jar, PackagedProgram will extracted temp 
> libraries by the fat jar, and add to pipeline.jars, and the pipeline.jars 
> contains  fat jar and temp libraries. I don't think we should add fat jar to 
> the pipeline.jars if extractedTempLibraries is not empty.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] yangyichao-mango commented on pull request #12196: [FLINK-17353][docs] Fix Broken links in Flink docs master

2020-05-18 Thread GitBox


yangyichao-mango commented on pull request #12196:
URL: https://github.com/apache/flink/pull/12196#issuecomment-630550918


   That is the result of check_links.sh.
   找到 1 个死链接。
   
   http://localhost:4000/zh/ops/memory/mem_detail.html
   
   When I try to fix http://localhost:4000/zh/ops/memory/mem_detail.html, I'm 
blocked by three mds that is not be updated.
   
   **ops/memory/mem_setup_tm.md**
   **ops/memory/mem_trouble.md**
   **ops/memory/mem_migration.md**
   
   I think we should create a new jira issue to update those mds. When this 
issue is done, I can fix this broken link 
(http://localhost:4000/zh/ops/memory/mem_detail.html).
   
   cc @wuchong 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] klion26 commented on a change in pull request #12139: [FLINK-16076] Translate "Queryable State" page into Chinese

2020-05-18 Thread GitBox


klion26 commented on a change in pull request #12139:
URL: https://github.com/apache/flink/pull/12139#discussion_r426997228



##
File path: docs/dev/stream/state/queryable_state.zh.md
##
@@ -27,75 +27,54 @@ under the License.
 {:toc}
 
 
-  Note: The client APIs for queryable state are currently in 
an evolving state and
-  there are no guarantees made about stability of the 
provided interfaces. It is
-  likely that there will be breaking API changes on the client side in the 
upcoming Flink versions.
+  注意: 目前 querable state 的客户端 API 
还在不断演进,不保证现有接口的稳定性。在后续的 Flink 版本中有可能发生 API 变化。
 
 
-In a nutshell, this feature exposes Flink's managed keyed (partitioned) state
-(see [Working with State]({{ site.baseurl }}/dev/stream/state/state.html)) to 
the outside world and
-allows the user to query a job's state from outside Flink. For some scenarios, 
queryable state
-eliminates the need for distributed operations/transactions with external 
systems such as key-value
-stores which are often the bottleneck in practice. In addition, this feature 
may be particularly
-useful for debugging purposes.
+简而言之, 这个特性将 Flink 的 managed keyed (partitioned) state
+(参考 [Working with State]({{ site.baseurl }}/dev/stream/state/state.html)) 
暴露给外部,从而用户可以在 Flink 外部查询作业 state。
+在某些场景中,Queryable State 消除了对外部系统的分布式操作以及事务的需求,比如 KV 
存储系统,而这些外部系统往往会成为瓶颈。除此之外,这个特性对于调试作业非常有用。
 
 
-  Attention: When querying a state object, that object is 
accessed from a concurrent
-  thread without any synchronization or copying. This is a design choice, as 
any of the above would lead
-  to increased job latency, which we wanted to avoid. Since any state backend 
using Java heap space,
-  e.g. MemoryStateBackend or FsStateBackend, 
does not work
-  with copies when retrieving values but instead directly references the 
stored values, read-modify-write
-  patterns are unsafe and may cause the queryable state server to fail due to 
concurrent modifications.
-  The RocksDBStateBackend is safe from these issues.
+  注意: 进行查询时,state 会在并发线程中被访问,但 state 
不会进行同步和拷贝。这种设计是为了避免同步和拷贝带来的作业延时。对于使用 Java 堆内存的 state backend,
+  比如 MemoryStateBackend 或者 
FsStateBackend,它们获取状态时不会进行拷贝,而是直接引用状态对象,所以对状态的 read-modify-write 
是不安全的,并且
+ 可能会因为并发修改导致查询失败。但 RocksDBStateBackend 是安全的,不会遇到上述问题。
 
 
-## Architecture
+## 架构
 
-Before showing how to use the Queryable State, it is useful to briefly 
describe the entities that compose it.
-The Queryable State feature consists of three main entities:
+在展示如何使用 Queryable State 之前,先简单描述一下该特性的组成部分,主要包括以下三部分:
 
- 1. the `QueryableStateClient`, which (potentially) runs outside the Flink 
cluster and submits the user queries,
- 2. the `QueryableStateClientProxy`, which runs on each `TaskManager` (*i.e.* 
inside the Flink cluster) and is responsible
- for receiving the client's queries, fetching the requested state from the 
responsible Task Manager on his behalf, and
- returning it to the client, and
- 3. the `QueryableStateServer` which runs on each `TaskManager` and is 
responsible for serving the locally stored state.
+ 1. `QueryableStateClient`,默认运行在 Flink 集群外部,负责提交用户的查询请求;
+ 2. `QueryableStateClientProxy`,运行在每个 `TaskManager` 上(*即* Flink 
集群内部),负责接收客户端的查询请求,从所负责的 Task Manager 获取请求的 state,并返回给客户端;
+ 3. `QueryableStateServer`,运行在 `TaskManager` 上,负责服务本地存储的 state。
 
-The client connects to one of the proxies and sends a request for the state 
associated with a specific
-key, `k`. As stated in [Working with State]({{ site.baseurl 
}}/dev/stream/state/state.html), keyed state is organized in
-*Key Groups*, and each `TaskManager` is assigned a number of these key groups. 
To discover which `TaskManager` is
-responsible for the key group holding `k`, the proxy will ask the 
`JobManager`. Based on the answer, the proxy will
-then query the `QueryableStateServer` running on that `TaskManager` for the 
state associated with `k`, and forward the
-response back to the client.
+客户端连接到一个代理,并发送请求获取特定 `k` 对应的 state。如 [Working with State]({{ site.baseurl 
}}/dev/stream/state/state.html) 所述,keyed state 按照
+*Key Groups* 进行划分,每个 `TaskManager` 会分配其中的一些 key groups。代理会询问 `JobManager` 以找到 
`k` 所属 key group 的 `TaskManager`。根据返回的结果,代理
+将会向运行在 `TaskManager` 上的 `QueryableStateServer` 查询 `k` 对应的 state,并将结果返回给客户端。
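The routing step just described (key → key group → owning task) can be sketched roughly as follows. This is a simplified illustration, not Flink's code: Python's built-in `hash` stands in for the murmur hash Flink applies in `KeyGroupRangeAssignment`, and the function names are illustrative.

```python
# Simplified sketch of queryable-state request routing:
# a key hashes into one of max_parallelism key groups, and key groups are
# split into contiguous ranges, one range per parallel task instance.
# (Stand-in for Flink's KeyGroupRangeAssignment; hash() replaces murmur hash.)

def key_to_key_group(key, max_parallelism):
    """Map a key to its key group, as the proxy must do for each query."""
    return hash(key) % max_parallelism

def key_group_to_operator(key_group, max_parallelism, parallelism):
    """Map a key group to the index of the parallel task that owns it."""
    return key_group * parallelism // max_parallelism

def locate_task(key, max_parallelism, parallelism):
    """Which task instance (and hence which TaskManager) serves this key?"""
    return key_group_to_operator(
        key_to_key_group(key, max_parallelism), max_parallelism, parallelism)
```

For example, with `max_parallelism = 128` and `parallelism = 4`, key groups 0–31 map to task 0, groups 32–63 to task 1, and so on.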
 
-## Activating Queryable State
+## 激活 Queryable State
 
-To enable queryable state on your Flink cluster, you need to do the following:
+为了在 Flink 集群上使用 queryable state,需要进行以下操作:
 
- 1. copy the `flink-queryable-state-runtime{{ site.scala_version_suffix 
}}-{{site.version }}.jar`
-from the `opt/` folder of your [Flink 
distribution](https://flink.apache.org/downloads.html "Apache Flink: 
Downloads"),
-to the `lib/` folder.
- 2. set the property `queryable-state.enable` to `true`. See the 
[Configuration]({{ site.baseurl }}/ops/config.html#queryable-state) 
documentation for details and additional parameters.
+ 1. 将 `flink-queryable-state-runtime{{ site.scala_version_suffix 
}}-{{site.version }}.jar`
+从 [Flink 

[jira] [Closed] (FLINK-15813) Set default value of jobmanager.execution.failover-strategy to region

2020-05-18 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu closed FLINK-15813.
---
Resolution: Fixed

done via

master 
b425c05d57ace5cf27591dbd6798b6131211f6c1

release-1.1 
6fa0c92567a9d81639481063e422dde19a26e35a

> Set default value of jobmanager.execution.failover-strategy to region
> -
>
> Key: FLINK-15813
> URL: https://issues.apache.org/jira/browse/FLINK-15813
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.10.0
>Reporter: Till Rohrmann
>Assignee: Zhu Zhu
>Priority: Blocker
>  Labels: pull-request-available, usability
> Fix For: 1.11.0
>
>
> We should set the default value of {{jobmanager.execution.failover-strategy}} 
> to {{region}}. This might require adapting existing tests to make them pass.
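For reference, the changed default corresponds to this entry in flink-conf.yaml (only the key named in this issue; setting it explicitly remains valid):

```yaml
# flink-conf.yaml
# Default changed to "region" by FLINK-15813 (previously "full").
jobmanager.execution.failover-strategy: region
```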



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] godfreyhe commented on a change in pull request #12221: [FLINK-17797][FLINK-17798][hbase][jdbc] Align the behavior between the new and legacy HBase/JDBC table source

2020-05-18 Thread GitBox


godfreyhe commented on a change in pull request #12221:
URL: https://github.com/apache/flink/pull/12221#discussion_r426996670



##
File path: 
flink-connectors/flink-connector-hbase/src/test/java/org/apache/flink/connector/hbase/HBaseTablePlanTest.java
##
@@ -0,0 +1,51 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.hbase;
+
+import org.apache.flink.table.api.TableConfig;
+import org.apache.flink.table.planner.utils.StreamTableTestUtil;
+import org.apache.flink.table.planner.utils.TableTestBase;
+
+import org.junit.Test;
+
+/**
+ * Plan tests for HBase connector, for example, testing projection push down.
+ */
+public class HBaseTablePlanTest extends TableTestBase {
+
+   private final StreamTableTestUtil util = streamTestUtil(new 
TableConfig());
+
+   @Test
+   public void testProjectionPushDown() {

Review comment:
   also add some tests for `validatePrimaryKey` case ?

##
File path: 
flink-connectors/flink-connector-jdbc/src/test/java/org/apache/flink/connector/jdbc/table/JdbcLookupTableITCase.java
##
@@ -146,13 +150,17 @@ public void clearOutputTable() throws Exception {
public void test() throws Exception {

Review comment:
   give a more meaningful name here ?

##
File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/utils/TableSchemaUtils.java
##
@@ -61,6 +63,23 @@ public static TableSchema getPhysicalSchema(TableSchema 
tableSchema) {
return builder.build();
}
 
+   /**
+* Creates a new {@link TableSchema} with the projected fields from 
another {@link TableSchema}.
+*
+* @see 
org.apache.flink.table.connector.source.abilities.SupportsProjectionPushDown
+*/
+   public static TableSchema projectSchema(TableSchema tableSchema, 
int[][] projectedFields) {

Review comment:
   please add some test for this method
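As a starting point for such a test, the projection semantics implied by the `projectSchema(TableSchema, int[][])` signature can be modeled in miniature. This is an illustrative sketch, not the Flink API: each inner array in `projectedFields` is read as a path of field indices, with single-element paths for top-level projection.

```python
# Toy model of schema projection driven by int[][]-style index paths.
# A schema is a list of (name, type) pairs; a nested row type is itself
# such a list. Names here are hypothetical, not Flink classes.

def project_schema(fields, projected_fields):
    """Return the projected (name, type) pairs selected by the index paths."""
    projected = []
    for path in projected_fields:
        name, ftype = fields[path[0]]
        for idx in path[1:]:          # descend into nested row fields
            name, ftype = ftype[idx]
        projected.append((name, ftype))
    return projected
```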

##
File path: 
flink-connectors/flink-connector-jdbc/src/test/java/org/apache/flink/connector/jdbc/table/JdbcLookupTableITCase.java
##
@@ -46,24 +46,28 @@
 import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.Collection;
+import java.util.Collections;
+import java.util.Iterator;
 import java.util.List;
+import java.util.stream.Collectors;
 
 import static 
org.apache.flink.connector.jdbc.JdbcTestFixture.DERBY_EBOOKSHOP_DB;
 import static org.apache.flink.table.api.Expressions.$;
+import static org.junit.Assert.assertEquals;
 
 /**
- * IT case for {@link JdbcLookupFunction}.
+ * IT case for lookup source of JDBC connector.
  */
 @RunWith(Parameterized.class)
-public class JdbcLookupFunctionITCase extends AbstractTestBase {
+public class JdbcLookupTableITCase extends AbstractTestBase {

Review comment:
   add some cases for projection

##
File path: 
flink-connectors/flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/table/JdbcDynamicTableSource.java
##
@@ -126,9 +128,20 @@ public ChangelogMode getChangelogMode() {
return ChangelogMode.insertOnly();
}
 
+   @Override
+   public boolean supportsNestedProjection() {
+   // JDBC doesn't support nested projection
+   return false;
+   }
+
+   @Override
+   public void applyProjection(int[][] projectedFields) {
+   this.physicalSchema = 
TableSchemaUtils.projectSchema(physicalSchema, projectedFields);

Review comment:
   `selectFields` need to be updated ?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12235: [FLINK-14255][hive] Integrate mapred record writer to hive streaming sink

2020-05-18 Thread GitBox


flinkbot edited a comment on pull request #12235:
URL: https://github.com/apache/flink/pull/12235#issuecomment-630534111


   
   ## CI report:
   
   * 6514d8bd0422cc1b8b5b0769c93306d7f5b5e237 UNKNOWN
   * 0adf4c6ffa6d9ec479607b803b9bc62fdc2e771b Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1786)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot commented on pull request #12236: [FLINK-17527][flink-dist] Make kubernetes-session.sh use session log4j/logback configuration files

2020-05-18 Thread GitBox


flinkbot commented on pull request #12236:
URL: https://github.com/apache/flink/pull/12236#issuecomment-630547436


   
   ## CI report:
   
   * c3caf6974bdf8298376d150b00fd7e7125089986 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12224: [FLINK-17594][filesystem] Support Hadoop path-based part-file writer.

2020-05-18 Thread GitBox


flinkbot edited a comment on pull request #12224:
URL: https://github.com/apache/flink/pull/12224#issuecomment-630123879


   
   ## CI report:
   
   * 354620f64883486ddf2c02dddf0633b51d0444d9 UNKNOWN
   * c4f7289627bf6830b87866e75fb0cb270e064956 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1756)
 
   * 2712496dd23e43f1660d98dee5bf622a590f1357 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1781)
 
   * 64b01f03d9a850df4e21c014abfad92cfb845de8 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1784)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12100: [FLINK-17634][rest] Reject multiple registration for the same endpoint

2020-05-18 Thread GitBox


flinkbot edited a comment on pull request #12100:
URL: https://github.com/apache/flink/pull/12100#issuecomment-627366153


   
   ## CI report:
   
   * f968f3c2f9f088b5c80b70491d512295ea361004 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1755)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12199: [FLINK-17774] [table] supports all kinds of changes for select result

2020-05-18 Thread GitBox


flinkbot edited a comment on pull request #12199:
URL: https://github.com/apache/flink/pull/12199#issuecomment-629793563


   
   ## CI report:
   
   * d735316bc8301f79352fe1cc0d05d67d51b52766 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1722)
 
   * cff8424e532db0bfc3ca58b8b6da13d3d69b0559 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1787)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] zhuzhurk merged pull request #12223: [FLINK-15813][runtime] Set default value of config “jobmanager.execution.failover-strategy” to “region”

2020-05-18 Thread GitBox


zhuzhurk merged pull request #12223:
URL: https://github.com/apache/flink/pull/12223


   







[jira] [Commented] (FLINK-17745) PackagedProgram' extractedTempLibraries and jarfiles may be duplicate

2020-05-18 Thread Kevin Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17110793#comment-17110793
 ] 

Kevin Zhang commented on FLINK-17745:
-

[~fly_in_gis] IIUC, the problem that [~Echo Lee] addresses is that both the fat 
jar and the jars extracted from the fat jar are added to `pipeline.jars`. 
Obviously this is redundant, and I also agree with lisen that only the jars 
extracted from the fat jar should be added to `pipeline.jars`.

[~kkl0u]'s PR seems to only set the fat jar in `pipeline.jars`; I'm not sure 
whether it works.


> PackagedProgram' extractedTempLibraries and jarfiles may be duplicate
> -
>
> Key: FLINK-17745
> URL: https://issues.apache.org/jira/browse/FLINK-17745
> Project: Flink
>  Issue Type: Improvement
>  Components: Client / Job Submission
>Affects Versions: 1.11.0
>Reporter: lisen
>Assignee: Kostas Kloudas
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> When I submit a Flink app with a fat jar, PackagedProgram will extract temp 
> libraries from the fat jar and add them to pipeline.jars, so pipeline.jars 
> contains both the fat jar and the temp libraries. I don't think we should add 
> the fat jar to pipeline.jars if extractedTempLibraries is not empty.





[GitHub] [flink] flinkbot commented on pull request #12236: [FLINK-17527][flink-dist] Make kubernetes-session.sh use session log4j/logback configuration files

2020-05-18 Thread GitBox


flinkbot commented on pull request #12236:
URL: https://github.com/apache/flink/pull/12236#issuecomment-630541737


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit c3caf6974bdf8298376d150b00fd7e7125089986 (Tue May 19 
02:33:49 UTC 2020)
   
✅ no warnings
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required. Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[jira] [Commented] (FLINK-17565) Bump fabric8 version from 4.5.2 to 4.9.1

2020-05-18 Thread Canbin Zheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17110788#comment-17110788
 ] 

Canbin Zheng commented on FLINK-17565:
--

[~fly_in_gis] Previously we discussed it in 
https://issues.apache.org/jira/browse/FLINK-17566?focusedCommentId=17102531=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17102531.
 Though I can't tell exactly what problems 4.10.x may introduce, bumping to 
4.10.x seems somewhat aggressive; for now it would be safer to use 4.9.x.

> Bump fabric8 version from 4.5.2 to 4.9.1
> 
>
> Key: FLINK-17565
> URL: https://issues.apache.org/jira/browse/FLINK-17565
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / Kubernetes
>Reporter: Canbin Zheng
>Assignee: Canbin Zheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> Currently, we are using version 4.5.2; it's better that we upgrade it to 
> 4.9.1, for reasons including the following:
> # It removed the use of reapers manually doing cascade deletion of resources, 
> leaving it up to the Kubernetes APIServer, which solves the issue of FLINK-17566, 
> more info:  https://github.com/fabric8io/kubernetes-client/issues/1880
> # It fixed a regression in building Quantity values introduced in 4.7.0, release 
> note https://github.com/fabric8io/kubernetes-client/issues/1953.
> # It provided better support for K8s 1.17, release note: 
> https://github.com/fabric8io/kubernetes-client/releases/tag/v4.7.0.
> For more release notes, please refer to [fabric8 
> releases|https://github.com/fabric8io/kubernetes-client/releases].





[jira] [Commented] (FLINK-17015) Fix NPE from NullAwareMapIterator

2020-05-18 Thread Zhijiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17110787#comment-17110787
 ] 

Zhijiang commented on FLINK-17015:
--

If I understand correctly, FLINK-17610 can also be treated as a solution for 
this bug somehow, so +1 from my side to make it for 1.11.

> Fix NPE from NullAwareMapIterator
> -
>
> Key: FLINK-17015
> URL: https://issues.apache.org/jira/browse/FLINK-17015
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: Jark Wu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
> Attachments: 92164295_3052056384855585_3776552648744894464_o.jpg
>
>
> When using Heap statebackend, the underlying 
> {{org.apache.flink.runtime.state.heap.HeapMapState#iterator}} may return a 
> null iterator. As a result, the {{NullAwareMapIterator}} holds a null 
> iterator and throws an NPE on the subsequent {{NullAwareMapIterator#hasNext}} 
> invocation.
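The guard the issue calls for can be sketched as a wrapper that treats a missing inner iterator as empty. Illustrative Python, not the Flink class:

```python
# Sketch: normalize a possibly-absent (None) inner iterator to an empty one,
# so hasNext()-style iteration never dereferences None.

class NullAwareMapIterator:
    def __init__(self, inner):
        # HeapMapState#iterator may return None; treat that as "no entries".
        self._it = iter(inner) if inner is not None else iter(())

    def __iter__(self):
        return self._it
```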





[GitHub] [flink] wangyang0918 opened a new pull request #12236: [FLINK-17527][flink-dist] Make kubernetes-session.sh use session log4j/logback configuration files

2020-05-18 Thread GitBox


wangyang0918 opened a new pull request #12236:
URL: https://github.com/apache/flink/pull/12236


   
   
   ## What is the purpose of the change
   
   Currently, `kubernetes-session.sh` uses `log4j-console.properties` and 
`yarn-session.sh` uses `log4j-yarn-session.properties`. They should have the 
same logging configuration. This PR will rename `log4j-yarn-session.properties` 
to `log4j-session.properties`, `logback-yarn.xml` to `logback-session.xml`. 
This will also align the logback configuration with the log4j configuration.
   
   
   ## Brief change log
   
   * 21d91c1c0e4c75beebc9b032ca3106977f65b073 Rename yarn log4j/logback 
configuration files so that they could be reused
   * c3caf6974bdf8298376d150b00fd7e7125089986 Make `kubernetes-session.sh` use 
session log4j/logback configuration files
   
   
   ## Verifying this change
   
   * Manually tested on a Yarn/K8s cluster; the client console logging should be 
the same as before
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (**yes** / no / 
don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   







[jira] [Updated] (FLINK-17527) kubernetes-session.sh uses log4j-console.properties

2020-05-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-17527:
---
Labels: pull-request-available  (was: )

> kubernetes-session.sh uses log4j-console.properties
> ---
>
> Key: FLINK-17527
> URL: https://issues.apache.org/jira/browse/FLINK-17527
> Project: Flink
>  Issue Type: Sub-task
>  Components: Deployment / Kubernetes
>Affects Versions: 1.10.0
>Reporter: Till Rohrmann
>Assignee: Yang Wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0, 1.10.2
>
>
> It is a bit confusing that {{kubernetes-session.sh}} uses the 
> {{log4j-console.properties}}. At the moment, {{flink}} uses 
> {{log4j-cli.properties}}, {{yarn-session.sh}} uses 
> {{log4j-yarn-session.properties}} and {{kubernetes-session.sh}} uses 
> {{log4j-console.properties}}.
> I would suggest to let all scripts use the same logger configuration file 
> (e.g. {{logj4-cli.properties}}.





[jira] [Commented] (FLINK-17789) DelegatingConfiguration should remove prefix instead of add prefix in toMap

2020-05-18 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17110781#comment-17110781
 ] 

Jingsong Lee commented on FLINK-17789:
--

Thanks for your detailed analysis.

Now only {{map.get("prefix.prefix.k1")}} can get v1.

I get your point, but I don't think that is a good design. DelegatingConfiguration 
is a Configuration, so its toMap should be consistent with addAllToProperties in 
Configuration, and toMap should also be consistent with get. I mean, 
DelegatingConfiguration overrides these methods from the base class Configuration.
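The consistency being argued for — toMap agreeing with get by stripping the prefix — can be sketched with plain dicts. Illustrative names, not the Flink classes:

```python
# Sketch: a delegating view that prefixes keys on lookup must strip the
# prefix again in to_map(), so that to_map().get(k) agrees with get(k).

class DelegatingConfig:
    def __init__(self, backing, prefix):
        self._backing = backing
        self._prefix = prefix

    def get(self, key, default=None):
        # Reads go through the prefix.
        return self._backing.get(self._prefix + key, default)

    def to_map(self):
        # Keep only prefixed entries and strip the prefix from their keys.
        n = len(self._prefix)
        return {k[n:]: v for k, v in self._backing.items()
                if k.startswith(self._prefix)}
```

With the ticket's example (`k0=v0`, `prefix.k1=v1`, prefix `"prefix."`), `to_map()` yields `{"k1": "v1"}`, matching `get("k1")`.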

> DelegatingConfiguration should remove prefix instead of add prefix in toMap
> ---
>
> Key: FLINK-17789
> URL: https://issues.apache.org/jira/browse/FLINK-17789
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core
>Reporter: Jingsong Lee
>Priority: Major
> Fix For: 1.11.0
>
>
> {code:java}
> Configuration conf = new Configuration();
> conf.setString("k0", "v0");
> conf.setString("prefix.k1", "v1");
> DelegatingConfiguration dc = new DelegatingConfiguration(conf, "prefix.");
> System.out.println(dc.getString("k0", "empty")); // empty
> System.out.println(dc.getString("k1", "empty")); // v1
> System.out.println(dc.toMap().get("k1")); // should be v1, but null
> System.out.println(dc.toMap().get("prefix.prefix.k1")); // should be null, 
> but v1
> {code}





[GitHub] [flink] flinkbot edited a comment on pull request #12139: [FLINK-16076] Translate "Queryable State" page into Chinese

2020-05-18 Thread GitBox


flinkbot edited a comment on pull request #12139:
URL: https://github.com/apache/flink/pull/12139#issuecomment-628360660


   
   ## CI report:
   
   * 158aea29d67643ce7c7e140f32c32e4c8fc177be Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1702)
 
   * 1de151b291c4a24fe9cd32304d033f65d8f465f6 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1783)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12224: [FLINK-17594][filesystem] Support Hadoop path-based part-file writer.

2020-05-18 Thread GitBox


flinkbot edited a comment on pull request #12224:
URL: https://github.com/apache/flink/pull/12224#issuecomment-630123879


   
   ## CI report:
   
   * 354620f64883486ddf2c02dddf0633b51d0444d9 UNKNOWN
   * aa5d10422ec019419efbb3a0e9c1f0900b960529 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1734)
 
   * c4f7289627bf6830b87866e75fb0cb270e064956 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1756)
 
   * 2712496dd23e43f1660d98dee5bf622a590f1357 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1781)
 
   * 64b01f03d9a850df4e21c014abfad92cfb845de8 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12188: [FLINK-17728] [sql-client] sql client supports parser statements via sql parser

2020-05-18 Thread GitBox


flinkbot edited a comment on pull request #12188:
URL: https://github.com/apache/flink/pull/12188#issuecomment-629595773


   
   ## CI report:
   
   * aae4f45fd18320222d790dffcda117e02cb64e5e Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1752)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12220: [FLINK-14255][hive] Integrate mapred record writer to hive streaming sink

2020-05-18 Thread GitBox


flinkbot edited a comment on pull request #12220:
URL: https://github.com/apache/flink/pull/12220#issuecomment-630123571


   
   ## CI report:
   
   * 1f7420e803f6c1a3a4596457b13aa3e5bc125199 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1725)
 
   * 43791487ecb65eea3edac6956fcacb90c0da7e7f Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1782)
 
   * 639da607c4f4a3f23bd3ed6635cba211bf6cf8f2 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12235: [FLINK-14255][hive] Integrate mapred record writer to hive streaming sink

2020-05-18 Thread GitBox


flinkbot edited a comment on pull request #12235:
URL: https://github.com/apache/flink/pull/12235#issuecomment-630534111


   
   ## CI report:
   
   * 6514d8bd0422cc1b8b5b0769c93306d7f5b5e237 UNKNOWN
   * 0adf4c6ffa6d9ec479607b803b9bc62fdc2e771b UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12199: [FLINK-17774] [table] supports all kinds of changes for select result

2020-05-18 Thread GitBox


flinkbot edited a comment on pull request #12199:
URL: https://github.com/apache/flink/pull/12199#issuecomment-629793563


   
   ## CI report:
   
   * d735316bc8301f79352fe1cc0d05d67d51b52766 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1722)
 
   * cff8424e532db0bfc3ca58b8b6da13d3d69b0559 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12230: [FLINK-17504][docs] Update Chinese translation of Getting Started / O…

2020-05-18 Thread GitBox


flinkbot edited a comment on pull request #12230:
URL: https://github.com/apache/flink/pull/12230#issuecomment-630205457


   
   ## CI report:
   
   * f1811b6e75ed74d91f7a0a26ac4ab2a89ea088c3 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1747)
 
   * de4251b2c2ee31ababe1c82505c15822546a59b1 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1785)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12216: [FLINK-17788][scala-shell] scala shell in yarn mode is broken

2020-05-18 Thread GitBox


flinkbot edited a comment on pull request #12216:
URL: https://github.com/apache/flink/pull/12216#issuecomment-630047470


   
   ## CI report:
   
   * 4a770b66f5d878e6b1cfa661410febdbc02c44af Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1753)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-17565) Bump fabric8 version from 4.5.2 to 4.9.1

2020-05-18 Thread Yang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17110779#comment-17110779
 ] 

Yang Wang commented on FLINK-17565:
---

[~felixzheng] Are there any problems with bumping the fabric8 kubernetes-client 
version to 4.10.2? It seems that the author believes it is stable enough 
and suggests using that version.

> Bump fabric8 version from 4.5.2 to 4.9.1
> 
>
> Key: FLINK-17565
> URL: https://issues.apache.org/jira/browse/FLINK-17565
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / Kubernetes
>Reporter: Canbin Zheng
>Assignee: Canbin Zheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> Currently we are using version 4.5.2; it would be better to upgrade to 
> 4.9.1. Some of the reasons are as follows:
> # It removed the use of reapers that manually performed cascade deletion of resources, 
> leaving it up to the Kubernetes APIServer, which solves the issue of FLINK-17566; 
> more info:  https://github.com/fabric8io/kubernetes-client/issues/1880
> # It introduced a regression in building Quantity values in 4.7.0, release 
> note https://github.com/fabric8io/kubernetes-client/issues/1953.
> # It provided better support for K8s 1.17, release note: 
> https://github.com/fabric8io/kubernetes-client/releases/tag/v4.7.0.
> For more release notes, please refer to [fabric8 
> releases|https://github.com/fabric8io/kubernetes-client/releases].



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-17416) Flink-kubernetes doesn't work on java 8 8u252

2020-05-18 Thread Yang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Wang updated FLINK-17416:
--
Priority: Major  (was: Blocker)

> Flink-kubernetes doesn't work on java 8 8u252
> -
>
> Key: FLINK-17416
> URL: https://issues.apache.org/jira/browse/FLINK-17416
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Affects Versions: 1.10.0, 1.11.0
>Reporter: wangxiyuan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
> Attachments: log.k8s.session.8u252
>
>
> When using the java-8-8u252 version, the Flink container end-to-end test failed. The 
> test `Running 'Run kubernetes session test'` fails with a `Broken pipe` 
> error.
> See:
> [https://logs.openlabtesting.org/logs/periodic-20-flink-mail/github.com/apache/flink/master/flink-end-to-end-test-arm64-container/fcfdd47/job-output.txt.gz]
>  
> Flink's Azure CI doesn't hit this problem because it runs under jdk-8-8u242.
>  
> The reason is that the okhttp library which Flink uses doesn't work on 
> java-8-8u252:
> [https://github.com/square/okhttp/issues/5970]
>  
> The problem has been fixed with the PR:
> [https://github.com/square/okhttp/pull/5977]
>  
> Maybe we can wait for a new 3.12.x release and bump the okhttp version in 
> Flink later.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-17416) Flink-kubernetes doesn't work on java 8 8u252

2020-05-18 Thread Yang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17110778#comment-17110778
 ] 

Yang Wang commented on FLINK-17416:
---

For now, the reasonable solution is bumping the fabric8 kubernetes-client 
version to the latest (4.10.2), and then this issue will be fixed. I think it makes 
sense to downgrade the priority since, as [~trohrmann] said, we cannot do 
anything in Flink itself to avoid this issue.

 

BTW, let's move the discussion about bumping the version to FLINK-17565. I will 
also run more tests on the new kubernetes-client version.

> Flink-kubernetes doesn't work on java 8 8u252
> -
>
> Key: FLINK-17416
> URL: https://issues.apache.org/jira/browse/FLINK-17416
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Affects Versions: 1.10.0, 1.11.0
>Reporter: wangxiyuan
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.11.0
>
> Attachments: log.k8s.session.8u252
>
>
> When using the java-8-8u252 version, the Flink container end-to-end test failed. The 
> test `Running 'Run kubernetes session test'` fails with a `Broken pipe` 
> error.
> See:
> [https://logs.openlabtesting.org/logs/periodic-20-flink-mail/github.com/apache/flink/master/flink-end-to-end-test-arm64-container/fcfdd47/job-output.txt.gz]
>  
> Flink's Azure CI doesn't hit this problem because it runs under jdk-8-8u242.
>  
> The reason is that the okhttp library which Flink uses doesn't work on 
> java-8-8u252:
> [https://github.com/square/okhttp/issues/5970]
>  
> The problem has been fixed with the PR:
> [https://github.com/square/okhttp/pull/5977]
>  
> Maybe we can wait for a new 3.12.x release and bump the okhttp version in 
> Flink later.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #12230: [FLINK-17504][docs] Update Chinese translation of Getting Started / O…

2020-05-18 Thread GitBox


flinkbot edited a comment on pull request #12230:
URL: https://github.com/apache/flink/pull/12230#issuecomment-630205457


   
   ## CI report:
   
   * f1811b6e75ed74d91f7a0a26ac4ab2a89ea088c3 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1747)
 
   * de4251b2c2ee31ababe1c82505c15822546a59b1 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #12235: [FLINK-14255][hive] Integrate mapred record writer to hive streaming sink

2020-05-18 Thread GitBox


flinkbot commented on pull request #12235:
URL: https://github.com/apache/flink/pull/12235#issuecomment-630534111


   
   ## CI report:
   
   * 6514d8bd0422cc1b8b5b0769c93306d7f5b5e237 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12220: [FLINK-14255][hive] Integrate mapred record writer to hive streaming sink

2020-05-18 Thread GitBox


flinkbot edited a comment on pull request #12220:
URL: https://github.com/apache/flink/pull/12220#issuecomment-630123571


   
   ## CI report:
   
   * 1f7420e803f6c1a3a4596457b13aa3e5bc125199 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1725)
 
   * 43791487ecb65eea3edac6956fcacb90c0da7e7f Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1782)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12139: [FLINK-16076] Translate "Queryable State" page into Chinese

2020-05-18 Thread GitBox


flinkbot edited a comment on pull request #12139:
URL: https://github.com/apache/flink/pull/12139#issuecomment-628360660


   
   ## CI report:
   
   * 158aea29d67643ce7c7e140f32c32e4c8fc177be Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1702)
 
   * 1de151b291c4a24fe9cd32304d033f65d8f465f6 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-17623) Elasticsearch sink should support user resource cleanup

2020-05-18 Thread Yangze Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17110770#comment-17110770
 ] 

Yangze Guo commented on FLINK-17623:


Sorry for the belated response. In this case, I would choose exposing a close 
method, since there is already an "open" method in the interface. I'm not sure who 
owns the ES connector; I'll try pinging [~tzulitai] based on the commit history.

> Elasticsearch sink should support user resource cleanup
> ---
>
> Key: FLINK-17623
> URL: https://issues.apache.org/jira/browse/FLINK-17623
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / ElasticSearch
>Reporter: Yun Wang
>Priority: Major
>  Labels: usability
>
> There should be a way for an 
> [ElasticsearchSinkFunction|https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-elasticsearch-base/src/main/java/org/apache/flink/streaming/connectors/elasticsearch/ElasticsearchSinkFunction.java]
>  implementation to use resources with the same lifecycle as the Elasticsearch 
> sink, for example, a 
> [RestHighLevelClient|https://www.elastic.co/guide/en/elasticsearch/client/java-rest/master/java-rest-high.html].
> Currently there is no way to clean up such resources.
> This can be achieved by either of the below:
>  # Expose a `close()` method in the ElasticsearchSinkFunction interface, and 
> invoke the close method from 
> [ElasticsearchSinkBase.close|https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-elasticsearch-base/src/main/java/org/apache/flink/streaming/connectors/elasticsearch/ElasticsearchSinkBase.java#L331].
>  # Make the 
> [ElasticsearchSink|https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-elasticsearch6/src/main/java/org/apache/flink/streaming/connectors/elasticsearch6/ElasticsearchSink.java]
>  class extendable.
>  
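The first option above can be sketched as follows. This is a hypothetical illustration, not the actual Flink API: the names mirror Flink's `ElasticsearchSinkFunction` and `ElasticsearchSinkBase`, but the code below is a simplified stand-in showing how a default `close()` hook would let user code release resources when the sink base shuts down.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of option 1: a sink function interface with a
// default close() hook that the sink base invokes on teardown.
interface SinkFunctionWithClose<T> {
    void process(T element, List<String> requests);
    default void close() {} // resource cleanup hook, no-op by default
}

class SinkBase<T> {
    private final SinkFunctionWithClose<T> fn;
    final List<String> requests = new ArrayList<>();
    SinkBase(SinkFunctionWithClose<T> fn) { this.fn = fn; }
    void invoke(T element) { fn.process(element, requests); }
    void close() { fn.close(); } // propagate teardown to the user function
}

public class CloseHookDemo {
    public static void main(String[] args) {
        final boolean[] clientClosed = {false};
        SinkBase<String> sink = new SinkBase<>(new SinkFunctionWithClose<String>() {
            public void process(String e, List<String> reqs) { reqs.add("index:" + e); }
            public void close() { clientClosed[0] = true; } // e.g. close a REST client here
        });
        sink.invoke("doc1");
        sink.close();
        System.out.println(sink.requests + " clientClosed=" + clientClosed[0]);
    }
}
```

Because `close()` has a default no-op implementation, existing sink functions would keep compiling unchanged; only implementations that actually hold resources need to override it.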



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-17623) Elasticsearch sink should support user resource cleanup

2020-05-18 Thread Yangze Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yangze Guo updated FLINK-17623:
---
Fix Version/s: 1.12.0

> Elasticsearch sink should support user resource cleanup
> ---
>
> Key: FLINK-17623
> URL: https://issues.apache.org/jira/browse/FLINK-17623
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / ElasticSearch
>Reporter: Yun Wang
>Priority: Major
>  Labels: usability
> Fix For: 1.12.0
>
>
> There should be a way for an 
> [ElasticsearchSinkFunction|https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-elasticsearch-base/src/main/java/org/apache/flink/streaming/connectors/elasticsearch/ElasticsearchSinkFunction.java]
>  implementation to use resources with the same lifecycle as the Elasticsearch 
> sink, for example, a 
> [RestHighLevelClient|https://www.elastic.co/guide/en/elasticsearch/client/java-rest/master/java-rest-high.html].
> Currently there is no way to clean up such resources.
> This can be achieved by either of the below:
>  # Expose a `close()` method in the ElasticsearchSinkFunction interface, and 
> invoke the close method from 
> [ElasticsearchSinkBase.close|https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-elasticsearch-base/src/main/java/org/apache/flink/streaming/connectors/elasticsearch/ElasticsearchSinkBase.java#L331].
>  # Make the 
> [ElasticsearchSink|https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-elasticsearch6/src/main/java/org/apache/flink/streaming/connectors/elasticsearch6/ElasticsearchSink.java]
>  class extendable.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-17745) PackagedProgram' extractedTempLibraries and jarfiles may be duplicate

2020-05-18 Thread Yang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17110768#comment-17110768
 ] 

Yang Wang commented on FLINK-17745:
---

Do we have a case where the fat jar contains both other jars and some classes of 
its own? If so, then it should still be added to {{pipeline.jars}}.

> PackagedProgram' extractedTempLibraries and jarfiles may be duplicate
> -
>
> Key: FLINK-17745
> URL: https://issues.apache.org/jira/browse/FLINK-17745
> Project: Flink
>  Issue Type: Improvement
>  Components: Client / Job Submission
>Affects Versions: 1.11.0
>Reporter: lisen
>Assignee: Kostas Kloudas
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> When I submit a Flink app with a fat jar, PackagedProgram will extract temp 
> libraries from the fat jar and add them to pipeline.jars, so pipeline.jars 
> contains both the fat jar and the temp libraries. I don't think we should add 
> the fat jar to pipeline.jars if extractedTempLibraries is not empty.
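The proposed rule can be sketched as follows. This is a hypothetical illustration, not the actual PackagedProgram code; note Yang Wang's question above — a fat jar that contains its own classes in addition to nested jars would still need to be shipped, so a real fix may need to keep the fat jar in that case.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the proposed deduplication rule: ship the fat
// jar only when no nested libraries were extracted from it; otherwise
// ship the extracted libraries and skip the fat jar.
public class PipelineJarsDemo {
    static List<String> buildPipelineJars(String fatJar, List<String> extractedTempLibraries) {
        List<String> pipelineJars = new ArrayList<>();
        if (extractedTempLibraries.isEmpty()) {
            pipelineJars.add(fatJar);                    // plain jar: ship as-is
        } else {
            pipelineJars.addAll(extractedTempLibraries); // nested jars already extracted
        }
        return pipelineJars;
    }

    public static void main(String[] args) {
        System.out.println(buildPipelineJars("app-fat.jar", List.of("lib/a.jar", "lib/b.jar")));
        System.out.println(buildPipelineJars("app-plain.jar", List.of()));
    }
}
```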



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] banmoy commented on pull request #12139: [FLINK-16076] Translate "Queryable State" page into Chinese

2020-05-18 Thread GitBox


banmoy commented on pull request #12139:
URL: https://github.com/apache/flink/pull/12139#issuecomment-630530026


   @flinkbot run travis



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



