[GitHub] [flink] JingsongLi commented on issue #9139: [FLINK-13304][table-runtime-blink] Fix implementation of getString and getBinary method in NestedRow and add tests for complex data formats

2019-07-17 Thread GitBox
JingsongLi commented on issue #9139: [FLINK-13304][table-runtime-blink] Fix 
implementation of getString and getBinary method in NestedRow and add tests for 
complex data formats
URL: https://github.com/apache/flink/pull/9139#issuecomment-512675402
 
 
   LGTM +1


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong commented on issue #9142: [hotfix][connector] fix typo in filters of maven shade plguin

2019-07-17 Thread GitBox
wuchong commented on issue #9142: [hotfix][connector] fix typo in filters of 
maven shade plguin
URL: https://github.com/apache/flink/pull/9142#issuecomment-512674069
 
 
   @docete, not needed. I will cherry-pick this to 1.9.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9142: [hotfix][connector] fix typo in filters of maven shade plguin

2019-07-17 Thread GitBox
flinkbot edited a comment on issue #9142: [hotfix][connector] fix typo in 
filters of maven shade plguin
URL: https://github.com/apache/flink/pull/9142#issuecomment-512205921
 
 
   ## CI report:
   
   * 176b9fe0118f9f4d5714489fb793f7a2bed10c8f : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119454282)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] docete commented on issue #9142: [hotfix][connector] fix typo in filters of maven shade plguin

2019-07-17 Thread GitBox
docete commented on issue #9142: [hotfix][connector] fix typo in filters of 
maven shade plguin
URL: https://github.com/apache/flink/pull/9142#issuecomment-512673651
 
 
   > Thanks for the explanation @docete. Looks good to me.
   > Let's wait for the Travis build to pass.
   > 
   > Btw, do you intend to merge this to 1.9?
   
   Yes. Should I create another PR for 1.9?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] TsReaper commented on issue #9139: [FLINK-13304][table-runtime-blink] Fix implementation of getString and getBinary method in NestedRow and add tests for complex data formats

2019-07-17 Thread GitBox
TsReaper commented on issue #9139: [FLINK-13304][table-runtime-blink] Fix 
implementation of getString and getBinary method in NestedRow and add tests for 
complex data formats
URL: https://github.com/apache/flink/pull/9139#issuecomment-512673463
 
 
   Travis passed: https://travis-ci.com/TsReaper/flink/builds/119572124


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10672) Task stuck while writing output to flink

2019-07-17 Thread zhijiang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16887642#comment-16887642
 ] 

zhijiang commented on FLINK-10672:
--

[~ibzib] I think this is a case of back pressure. The producer is blocked in 
`requestMemorySegment` because the partition data it produces cannot be 
consumed by downstream tasks. To trace the root issue, the key point is to find 
the downstream task which triggers the back pressure.

E.g., for the topology A --> B --> C --> D, if we find vertex A is blocked in 
`requestMemorySegment` for a long time, we can trace the state of its 
downstream vertices in the topology. If vertex B is also blocked, we continue 
tracing downstream until we find a vertex which is not blocked any more, say 
vertex C in this case. Then we further check which specific parallel subtask in 
vertex C causes the block. Such a task is characterized by a very high input 
queue length and a low or even empty output queue, and the relevant metrics can 
help here (see the sketch below). Once such a task is found, we can check its 
stack to confirm where it is stuck.

Maybe you can find the root cause this way, or you can provide further findings 
for us.
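A minimal probe sketch (not part of Flink itself): it queries the aggregated 
per-subtask network metrics over Flink's REST API. It assumes the endpoint 
/jobs/:jobid/vertices/:vertexid/subtasks/metrics and the metric names 
buffers.inputQueueLength / buffers.outputQueueLength exposed by recent Flink 
versions; the REST address, job id, and vertex id are placeholders to fill in.

{code:java}
import java.io.InputStream;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class QueueMetricsProbe {
    public static void main(String[] args) throws Exception {
        String rest = "http://localhost:8081"; // JobManager REST address (placeholder)
        String job = "<job-id>";                // placeholder
        String vertex = "<vertex-id>";          // placeholder
        String url = rest + "/jobs/" + job + "/vertices/" + vertex
                + "/subtasks/metrics?get=buffers.inputQueueLength,buffers.outputQueueLength";
        try (InputStream in = new URL(url).openStream()) {
            // A suspect subtask shows a high input queue length combined with
            // a low or even empty output queue.
            System.out.println(new String(in.readAllBytes(), StandardCharsets.UTF_8));
        }
    }
}
{code}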

 

> Task stuck while writing output to flink
> 
>
> Key: FLINK-10672
> URL: https://issues.apache.org/jira/browse/FLINK-10672
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.5.4
> Environment: OS: Debian rodente 4.17
> Flink version: 1.5.4
> ||Key||Value||
> |jobmanager.heap.mb|1024|
> |jobmanager.rpc.address|localhost|
> |jobmanager.rpc.port|6123|
> |metrics.reporter.jmx.class|org.apache.flink.metrics.jmx.JMXReporter|
> |metrics.reporter.jmx.port|9250-9260|
> |metrics.reporters|jmx|
> |parallelism.default|1|
> |rest.port|8081|
> |taskmanager.heap.mb|1024|
> |taskmanager.numberOfTaskSlots|1|
> |web.tmpdir|/tmp/flink-web-bdb73d6c-5b9e-47b5-9ebf-eed0a7c82c26|
>  
> h1. Overview
> ||Data Port||All Slots||Free Slots||CPU Cores||Physical Memory||JVM Heap 
> Size||Flink Managed Memory||
> |43501|1|0|12|62.9 GB|922 MB|642 MB|
> h1. Memory
> h2. JVM (Heap/Non-Heap)
> ||Type||Committed||Used||Maximum||
> |Heap|922 MB|575 MB|922 MB|
> |Non-Heap|68.8 MB|64.3 MB|-1 B|
> |Total|991 MB|639 MB|922 MB|
> h2. Outside JVM
> ||Type||Count||Used||Capacity||
> |Direct|3,292|105 MB|105 MB|
> |Mapped|0|0 B|0 B|
> h1. Network
> h2. Memory Segments
> ||Type||Count||
> |Available|3,194|
> |Total|3,278|
> h1. Garbage Collection
> ||Collector||Count||Time||
> |G1_Young_Generation|13|336|
> |G1_Old_Generation|1|21|
>Reporter: Ankur Goenka
>Priority: Major
>  Labels: beam
> Attachments: 1uruvakHxBu.png, 3aDKQ24WvKk.png, Po89UGDn58V.png, 
> jmx_dump.json, jmx_dump_detailed.json, jstack_129827.log, jstack_163822.log, 
> jstack_66985.log
>
>
> I am running a fairly complex pipeline with 200+ tasks.
> The pipeline works fine with small data (order of 10 KB input) but gets stuck 
> with slightly larger data (300 KB input).
>  
> The task gets stuck while writing the output to Flink; more specifically, it 
> gets stuck while requesting a memory segment in the local buffer pool. The 
> TaskManager UI shows that it has enough memory and memory segments to work with.
> The relevant stack trace is:
> {quote}"grpc-default-executor-0" #138 daemon prio=5 os_prio=0 
> tid=0x7fedb0163800 nid=0x30b7f in Object.wait() [0x7fedb4f9]
>  java.lang.Thread.State: TIMED_WAITING (on object monitor)
>  at (C/C++) 0x7fef201c7dae (Unknown Source)
>  at (C/C++) 0x7fef1f2aea07 (Unknown Source)
>  at (C/C++) 0x7fef1f241cd3 (Unknown Source)
>  at java.lang.Object.wait(Native Method)
>  - waiting on <0xf6d56450> (a java.util.ArrayDeque)
>  at 
> org.apache.flink.runtime.io.network.buffer.LocalBufferPool.requestMemorySegment(LocalBufferPool.java:247)
>  - locked <0xf6d56450> (a java.util.ArrayDeque)
>  at 
> org.apache.flink.runtime.io.network.buffer.LocalBufferPool.requestBufferBuilderBlocking(LocalBufferPool.java:204)
>  at 
> org.apache.flink.runtime.io.network.api.writer.RecordWriter.requestNewBufferBuilder(RecordWriter.java:213)
>  at 
> org.apache.flink.runtime.io.network.api.writer.RecordWriter.sendToTarget(RecordWriter.java:144)
>  at 
> org.apache.flink.runtime.io.network.api.writer.RecordWriter.emit(RecordWriter.java:107)
>  at 
> org.apache.flink.runtime.operators.shipping.OutputCollector.collect(OutputCollector.java:65)
>  at 
> org.apache.flink.runtime.operators.util.metrics.CountingCollector.collect(CountingCollector.java:35)
>  at 
> org.apache.beam.runners.flink.translation.functions.FlinkExecutableStagePruningFunction.flatMap(FlinkExecutableStagePruningFunction.java:42)
>  at 
> 

[GitHub] [flink] flinkbot edited a comment on issue #9060: [FLINK-13145][tests] Run HA dataset E2E test with new RestartPipelinedRegionStrategy

2019-07-17 Thread GitBox
flinkbot edited a comment on issue #9060: [FLINK-13145][tests] Run HA dataset 
E2E test with new RestartPipelinedRegionStrategy
URL: https://github.com/apache/flink/pull/9060#issuecomment-512355001
 
 
   ## CI report:
   
   * 46be5eac1dbb7eb35fd3ef22574392f33305fe82 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/119505885)
   * e7aa9d7d39a3af3585c8420f4c9bc663197e35b3 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/119579528)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong commented on issue #9142: [hotfix][connector] fix typo in filters of maven shade plguin

2019-07-17 Thread GitBox
wuchong commented on issue #9142: [hotfix][connector] fix typo in filters of 
maven shade plguin
URL: https://github.com/apache/flink/pull/9142#issuecomment-512672153
 
 
   Thanks for the explanation @docete. Looks good to me.
   Let's wait for the Travis build to pass.
   
   Btw, do you intend to merge this to 1.9?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] lincoln-lil commented on issue #9146: [FLINK-13284] [table-planner-blink] Correct some builtin functions' r…

2019-07-17 Thread GitBox
lincoln-lil commented on issue #9146: [FLINK-13284] [table-planner-blink] 
Correct some builtin functions' r…
URL: https://github.com/apache/flink/pull/9146#issuecomment-512670571
 
 
   @godfreyhe @JingsongLi Agreed that we should offer deterministic semantics 
for such 'dirty data'. I think we can achieve this in two steps:
   1. Unify all the built-in functions' exception-handling behavior for the blink 
planner (since it differs from the flink planner). I found two functions that 
throw exceptions and will create another issue to fix them.
   2. Add a global configuration to support something like [MySQL's 
strict/non-strict sql 
mode](https://dev.mysql.com/doc/refman/5.6/en/sql-mode.html#sql-mode-strict) 
for exception handling, covering numeric out-of-range and overflow as well as 
illegal inputs from sources; a sketch of the idea follows below. We can start a 
new thread to discuss it, what do you think?
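   As a concrete illustration of step 2 (plain Java, not Flink API; all names 
here are made up), strict vs. non-strict handling of one kind of dirty data, 
an out-of-range numeric cast, could look like this:

```java
public class SqlModeSketch {
    enum SqlMode { STRICT, NON_STRICT }

    /** Cast a long to TINYINT under the given mode. */
    static Byte castToTinyInt(long value, SqlMode mode) {
        if (value < Byte.MIN_VALUE || value > Byte.MAX_VALUE) {
            if (mode == SqlMode.STRICT) {
                // Strict mode: dirty data fails the query, as in MySQL's strict mode.
                throw new ArithmeticException("TINYINT out of range: " + value);
            }
            return null; // Non-strict mode: dirty data degrades to NULL.
        }
        return (byte) value;
    }

    public static void main(String[] args) {
        System.out.println(castToTinyInt(300, SqlMode.NON_STRICT)); // prints null
        System.out.println(castToTinyInt(300, SqlMode.STRICT));     // throws
    }
}
```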


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-13132) Allow JobClusterEntrypoints use user main method to generate job graph

2019-07-17 Thread Zhenqiu Huang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhenqiu Huang updated FLINK-13132:
--
Summary: Allow JobClusterEntrypoints use user main method to generate job 
graph  (was: Allow ClusterEntrypoints use user main method to generate job 
graph)

> Allow JobClusterEntrypoints use user main method to generate job graph
> --
>
> Key: FLINK-13132
> URL: https://issues.apache.org/jira/browse/FLINK-13132
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / YARN
>Affects Versions: 1.8.0, 1.8.1
>Reporter: Zhenqiu Huang
>Assignee: Zhenqiu Huang
>Priority: Minor
>
> We are building a service that can transparently deploy a job to different 
> cluster management systems, such as YARN and another internal system. It is 
> very costly to download the jar and generate the JobGraph on the client side. 
> Thus, I want to propose an improvement that makes the YARN entrypoints 
> configurable to use either FileJobGraphRetriever or ClassPathJobGraphRetriever. 
> It is actually a long-standing TODO in AbstractYarnClusterDescriptor at line 834.
> https://github.com/apache/flink/blob/21468e0050dc5f97de5cfe39885e0d3fd648e399/flink-yarn/src/main/java/org/apache/flink/yarn/AbstractYarnClusterDescriptor.java#L834
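A purely illustrative sketch of the proposed switch (the configuration key and 
classes below are placeholders, not actual Flink API; the real retrievers are 
FileJobGraphRetriever and ClassPathJobGraphRetriever):

{code:java}
import java.util.Map;

public final class JobGraphRetrieverChoice {

    // Stand-ins for Flink's FileJobGraphRetriever / ClassPathJobGraphRetriever.
    interface JobGraphRetriever {}
    static final class FileBased implements JobGraphRetriever {}
    static final class ClassPathBased implements JobGraphRetriever {}

    /** Pick the retriever from cluster configuration instead of hard-coding it. */
    static JobGraphRetriever fromConfig(Map<String, String> conf) {
        // "yarn.job-graph.retriever" is a made-up key, used only for illustration.
        String mode = conf.getOrDefault("yarn.job-graph.retriever", "file");
        return "classpath".equals(mode) ? new ClassPathBased() : new FileBased();
    }
}
{code}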



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] flinkbot edited a comment on issue #9128: [FLINK-13256] Ensure periodical checkpoint still valid when region failover abort pending checkpoints

2019-07-17 Thread GitBox
flinkbot edited a comment on issue #9128: [FLINK-13256] Ensure periodical 
checkpoint still valid when region failover abort pending checkpoints
URL: https://github.com/apache/flink/pull/9128#issuecomment-511742081
 
 
   ## CI report:
   
   * 044dfa6e3fbaee8b4fec6ef4232ae539dbc94308 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119280573)
   * be848ebc2ea988242b639bf67c7b764574a83286 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119450131)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #9156: [FLINK-12894]add Chinese documentation of how to configure and use catalogs in SQL CLI

2019-07-17 Thread GitBox
flinkbot commented on issue #9156: [FLINK-12894]add Chinese documentation of 
how to configure and use catalogs in SQL CLI
URL: https://github.com/apache/flink/pull/9156#issuecomment-512668277
 
 
   ## CI report:
   
   * bc17aec9efac2a454b51e5a6f53196a7d78f0cb9 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/119578288)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #9157: [FLINK-13086]add Chinese documentation for catalogs

2019-07-17 Thread GitBox
flinkbot commented on issue #9157: [FLINK-13086]add Chinese documentation for 
catalogs
URL: https://github.com/apache/flink/pull/9157#issuecomment-512668282
 
 
   ## CI report:
   
   * bc17aec9efac2a454b51e5a6f53196a7d78f0cb9 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/119578288)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Comment Edited] (FLINK-13086) add Chinese documentation for catalogs

2019-07-17 Thread frank wang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16887589#comment-16887589
 ] 

frank wang edited comment on FLINK-13086 at 7/18/19 5:07 AM:
-

https://github.com/apache/flink/pull/9157

Please review it, thanks.


was (Author: frank wang):
[https://github.com/apache/flink/pull/9155|https://github.com/apache/flink/pull/9157]

Please review it, thanks.

> add Chinese documentation for catalogs
> --
>
> Key: FLINK-13086
> URL: https://issues.apache.org/jira/browse/FLINK-13086
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Connectors / Hive, Documentation, 
> Table SQL / API
>Reporter: Bowen Li
>Assignee: frank wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0, 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The ticket for the corresponding English documentation is FLINK-12277.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Comment Edited] (FLINK-12894) add Chinese documentation of how to configure and use catalogs in SQL CLI

2019-07-17 Thread frank wang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-12894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16887588#comment-16887588
 ] 

frank wang edited comment on FLINK-12894 at 7/18/19 5:07 AM:
-

https://github.com/apache/flink/pull/9156

Please review it, thanks.


was (Author: frank wang):
[https://github.com/apache/flink/pull/9155]

Please review it, thanks.

> add Chinese documentation of how to configure and use catalogs in SQL CLI
> -
>
> Key: FLINK-12894
> URL: https://issues.apache.org/jira/browse/FLINK-12894
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Connectors / Hive, Documentation, 
> Table SQL / Client
>Reporter: Bowen Li
>Assignee: frank wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0, 1.10.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The ticket for its corresponding English version is FLINK-12627.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] flinkbot commented on issue #9157: [FLINK-13086]add Chinese documentation for catalogs

2019-07-17 Thread GitBox
flinkbot commented on issue #9157: [FLINK-13086]add Chinese documentation for 
catalogs
URL: https://github.com/apache/flink/pull/9157#issuecomment-512666543
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #9156: [FLINK-12894]add Chinese documentation of how to configure and use catalogs in SQL CLI

2019-07-17 Thread GitBox
flinkbot commented on issue #9156: [FLINK-12894]add Chinese documentation of 
how to configure and use catalogs in SQL CLI
URL: https://github.com/apache/flink/pull/9156#issuecomment-512666549
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Comment Edited] (FLINK-13086) add Chinese documentation for catalogs

2019-07-17 Thread frank wang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16887589#comment-16887589
 ] 

frank wang edited comment on FLINK-13086 at 7/18/19 5:06 AM:
-

[https://github.com/apache/flink/pull/9155|https://github.com/apache/flink/pull/9157]

Please review it, thanks.


was (Author: frank wang):
[https://github.com/apache/flink/pull/9155]

Please review it, thanks.

> add Chinese documentation for catalogs
> --
>
> Key: FLINK-13086
> URL: https://issues.apache.org/jira/browse/FLINK-13086
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Connectors / Hive, Documentation, 
> Table SQL / API
>Reporter: Bowen Li
>Assignee: frank wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0, 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The ticket for the corresponding English documentation is FLINK-12277.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (FLINK-13086) add Chinese documentation for catalogs

2019-07-17 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-13086:
---
Labels: pull-request-available  (was: )

> add Chinese documentation for catalogs
> --
>
> Key: FLINK-13086
> URL: https://issues.apache.org/jira/browse/FLINK-13086
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Connectors / Hive, Documentation, 
> Table SQL / API
>Reporter: Bowen Li
>Assignee: frank wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0, 1.10.0
>
>
> The ticket for the corresponding English documentation is FLINK-12277.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] yiduwangkai opened a new pull request #9157: [FLINK-13086]add Chinese documentation for catalogs

2019-07-17 Thread GitBox
yiduwangkai opened a new pull request #9157: [FLINK-13086]add Chinese 
documentation for catalogs
URL: https://github.com/apache/flink/pull/9157
 
 
   
   
   ## What is the purpose of the change
   
   *(For example: This pull request makes task deployment go through the blob 
server, rather than through RPC. That way we avoid re-transferring them on each 
deployment (during recovery).)*
   
   
   ## Brief change log
   
   *(for example:)*
 - *The TaskInfo is stored in the blob store on job creation time as a 
persistent artifact*
 - *Deployments RPC transmits only the blob storage reference*
 - *TaskManagers retrieve the TaskInfo from the blob cache*
   
   
   ## Verifying this change
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
 - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
 - *Extended integration test for recovery after master (JobManager) 
failure*
 - *Added test that validates that TaskInfo is transferred only once across 
recoveries*
 - *Manually verified the change by running a 4-node cluster with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / no / don't know)
 - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] yiduwangkai opened a new pull request #9156: [FLINK-12894]add Chinese documentation of how to configure and use catalogs in SQL CLI

2019-07-17 Thread GitBox
yiduwangkai opened a new pull request #9156: [FLINK-12894]add Chinese 
documentation of how to configure and use catalogs in SQL CLI
URL: https://github.com/apache/flink/pull/9156
 
 
   
   
   ## What is the purpose of the change
   
   *(For example: This pull request makes task deployment go through the blob 
server, rather than through RPC. That way we avoid re-transferring them on each 
deployment (during recovery).)*
   
   
   ## Brief change log
   
   *(for example:)*
 - *The TaskInfo is stored in the blob store on job creation time as a 
persistent artifact*
 - *Deployments RPC transmits only the blob storage reference*
 - *TaskManagers retrieve the TaskInfo from the blob cache*
   
   
   ## Verifying this change
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
 - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
 - *Extended integration test for recovery after master (JobManager) 
failure*
 - *Added test that validates that TaskInfo is transferred only once across 
recoveries*
 - *Manually verified the change by running a 4-node cluster with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / no / don't know)
 - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #8966: [FLINK-13074][table-planner-blink] Add PartitionableTableSink bridge logic to flink …

2019-07-17 Thread GitBox
flinkbot edited a comment on issue #8966: [FLINK-13074][table-planner-blink] 
Add PartitionableTableSink bridge logic to flink …
URL: https://github.com/apache/flink/pull/8966#issuecomment-511712813
 
 
   ## CI report:
   
   * 7abe4618172b34dfc190c67b3d28422afe4dd0ae : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119414109)
   * 2fb2bdc43576207a69f67769c8ec3bd0b94420db : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119441272)
   * ed663900cc30bab9e1763cbaa56ee957f3e3a959 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119443404)
   * 62dd82e8148df1d915e96b23efc096c28ee1a70f : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119447553)
   * 05caace66f5b816cd230637f494c91c2d919d43b : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/119577039)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] yiduwangkai closed pull request #9155: [FLINK-12894]and[FLINK-13086]add Chinese documentation of how to configure and use catalogs in SQL CLI and add Chinese documentation for catalog

2019-07-17 Thread GitBox
yiduwangkai closed pull request #9155: [FLINK-12894]and[FLINK-13086]add Chinese 
documentation of how to configure and use catalogs in SQL CLI and add Chinese 
documentation for catalogs
URL: https://github.com/apache/flink/pull/9155
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #8303: [FLINK-12343] [flink-yarn] add file replication config for yarn configuration

2019-07-17 Thread GitBox
flinkbot edited a comment on issue #8303: [FLINK-12343] [flink-yarn] add file 
replication config for yarn configuration
URL: https://github.com/apache/flink/pull/8303#issuecomment-511684151
 
 
   ## CI report:
   
   * 6a7ca58b4a04f6dce250045e021702e67e82b893 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119421914)
   * 4d38a8df0d59734c4b2386689a2f17b9f2b44b12 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119441376)
   * 9c14836f8639e98d58cf7bb32e38b938b3843994 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/119577044)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #8966: [FLINK-13074][table-planner-blink] Add PartitionableTableSink bridge logic to flink …

2019-07-17 Thread GitBox
flinkbot edited a comment on issue #8966: [FLINK-13074][table-planner-blink] 
Add PartitionableTableSink bridge logic to flink …
URL: https://github.com/apache/flink/pull/8966#issuecomment-511712813
 
 
   ## CI report:
   
   * 7abe4618172b34dfc190c67b3d28422afe4dd0ae : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119414109)
   * 2fb2bdc43576207a69f67769c8ec3bd0b94420db : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119441272)
   * ed663900cc30bab9e1763cbaa56ee957f3e3a959 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119443404)
   * 62dd82e8148df1d915e96b23efc096c28ee1a70f : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/119447553)
   * 05caace66f5b816cd230637f494c91c2d919d43b : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/119577039)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] docete commented on issue #9142: [hotfix][connector] fix typo in filters of maven shade plguin

2019-07-17 Thread GitBox
docete commented on issue #9142: [hotfix][connector] fix typo in filters of 
maven shade plguin
URL: https://github.com/apache/flink/pull/9142#issuecomment-512663388
 
 
   That's why the exclude with the typo still works.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] docete commented on issue #9142: [hotfix][connector] fix typo in filters of maven shade plguin

2019-07-17 Thread GitBox
docete commented on issue #9142: [hotfix][connector] fix typo in filters of 
maven shade plguin
URL: https://github.com/apache/flink/pull/9142#issuecomment-512663324
 
 
   I checked the Maven plugin code and did some experiments. The conclusion is 
that Maven maps plugin configuration in XML onto a Mojo (via the @Parameter 
annotation), and when the Mojo has an array parameter (as in this case), Maven 
does not check the sub-tags of that parameter.
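   As a concrete illustration of this mapping, here is a minimal, hypothetical 
Mojo (not the shade plugin itself; the names below are made up). Maven binds 
the `<filters>` element of the plugin `<configuration>` onto the annotated 
array field, and for array parameters of a complex type a misspelled sub-tag 
simply leaves the corresponding field unset instead of failing the build. It 
compiles against maven-plugin-api and maven-plugin-annotations.

```java
import org.apache.maven.plugin.AbstractMojo;
import org.apache.maven.plugins.annotations.Mojo;
import org.apache.maven.plugins.annotations.Parameter;

@Mojo(name = "show-filters")
public class ShowFiltersMojo extends AbstractMojo {

    /** Element type of the array parameter; fields are matched to sub-tags by name. */
    public static class Filter {
        String artifact;   // populated from <artifact>...</artifact>
        String[] excludes; // populated from <excludes>; a misspelled tag is silently ignored
    }

    // Populated from <configuration><filters>...</filters></configuration> in the POM.
    @Parameter
    private Filter[] filters;

    @Override
    public void execute() {
        getLog().info("configured filters: " + (filters == null ? 0 : filters.length));
    }
}
```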


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] danny0405 commented on a change in pull request #8966: [FLINK-13074][table-planner-blink] Add PartitionableTableSink bridge logic to flink …

2019-07-17 Thread GitBox
danny0405 commented on a change in pull request #8966: 
[FLINK-13074][table-planner-blink] Add PartitionableTableSink bridge logic to 
flink …
URL: https://github.com/apache/flink/pull/8966#discussion_r304732700
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/runtime/batch/sql/PartitionableSinkITCase.scala
 ##
 @@ -0,0 +1,278 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.runtime.batch.sql
+
+import org.apache.flink.api.common.ExecutionConfig
+import org.apache.flink.api.common.typeinfo.TypeInformation
+import org.apache.flink.api.common.typeutils.TypeSerializer
+import org.apache.flink.api.java.typeutils.RowTypeInfo
+import org.apache.flink.configuration.Configuration
+import org.apache.flink.sql.parser.impl.FlinkSqlParserImpl
+import org.apache.flink.sql.parser.validate.FlinkSqlConformance
+import org.apache.flink.streaming.api.datastream.{DataStream, DataStreamSink}
+import org.apache.flink.streaming.api.functions.sink.RichSinkFunction
+import org.apache.flink.table.api.{ExecutionConfigOptions, TableConfig, TableException, TableSchema}
+import org.apache.flink.table.calcite.CalciteConfig
+import org.apache.flink.table.runtime.batch.sql.PartitionableSinkITCase._
+import org.apache.flink.table.runtime.utils.BatchTestBase
+import org.apache.flink.table.runtime.utils.BatchTestBase.row
+import org.apache.flink.table.runtime.utils.TestData._
+import org.apache.flink.table.sinks.{PartitionableTableSink, StreamTableSink, TableSink}
+import org.apache.flink.table.types.logical.{BigIntType, IntType, VarCharType}
+import org.apache.flink.types.Row
+
+import org.apache.calcite.config.Lex
+import org.apache.calcite.sql.parser.SqlParser
+import org.junit.Assert._
+import org.junit.{Before, Test}
+
+import java.util.concurrent.LinkedBlockingQueue
+import java.util.{LinkedList => JLinkedList, List => JList, Map => JMap}
+
+import scala.collection.JavaConversions._
+import scala.collection.Seq
+
+/**
+  * Test cases for [[org.apache.flink.table.sinks.PartitionableTableSink]].
+  */
+class PartitionableSinkITCase extends BatchTestBase {
 
 Review comment:
   Added


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] danny0405 commented on a change in pull request #8966: [FLINK-13074][table-planner-blink] Add PartitionableTableSink bridge logic to flink …

2019-07-17 Thread GitBox
danny0405 commented on a change in pull request #8966: 
[FLINK-13074][table-planner-blink] Add PartitionableTableSink bridge logic to 
flink …
URL: https://github.com/apache/flink/pull/8966#discussion_r304732748
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/runtime/batch/sql/PartitionableSinkITCase.scala
 ##
 @@ -0,0 +1,278 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.runtime.batch.sql
+
+import org.apache.flink.api.common.ExecutionConfig
+import org.apache.flink.api.common.typeinfo.TypeInformation
+import org.apache.flink.api.common.typeutils.TypeSerializer
+import org.apache.flink.api.java.typeutils.RowTypeInfo
+import org.apache.flink.configuration.Configuration
+import org.apache.flink.sql.parser.impl.FlinkSqlParserImpl
+import org.apache.flink.sql.parser.validate.FlinkSqlConformance
+import org.apache.flink.streaming.api.datastream.{DataStream, DataStreamSink}
+import org.apache.flink.streaming.api.functions.sink.RichSinkFunction
+import org.apache.flink.table.api.{ExecutionConfigOptions, TableConfig, TableException, TableSchema}
+import org.apache.flink.table.calcite.CalciteConfig
+import org.apache.flink.table.runtime.batch.sql.PartitionableSinkITCase._
+import org.apache.flink.table.runtime.utils.BatchTestBase
+import org.apache.flink.table.runtime.utils.BatchTestBase.row
+import org.apache.flink.table.runtime.utils.TestData._
+import org.apache.flink.table.sinks.{PartitionableTableSink, StreamTableSink, TableSink}
+import org.apache.flink.table.types.logical.{BigIntType, IntType, VarCharType}
+import org.apache.flink.types.Row
+
+import org.apache.calcite.config.Lex
+import org.apache.calcite.sql.parser.SqlParser
+import org.junit.Assert._
+import org.junit.{Before, Test}
+
+import java.util.concurrent.LinkedBlockingQueue
+import java.util.{LinkedList => JLinkedList, List => JList, Map => JMap}
+
+import scala.collection.JavaConversions._
+import scala.collection.Seq
+
+/**
+  * Test cases for [[org.apache.flink.table.sinks.PartitionableTableSink]].
+  */
+class PartitionableSinkITCase extends BatchTestBase {
+
+  @Before
+  override def before(): Unit = {
+    super.before()
+    env.setParallelism(3)
+    tEnv.getConfig
+      .getConfiguration
+      .setInteger(ExecutionConfigOptions.SQL_RESOURCE_DEFAULT_PARALLELISM, 3)
+    registerCollection("nonSortTable", testData, type3, "a, b, c", dataNullables)
+    registerCollection("sortTable", testData1, type3, "a, b, c", dataNullables)
+    PartitionableSinkITCase.init()
+  }
+
+  override def getTableConfig: TableConfig = {
+    val parserConfig = SqlParser.configBuilder
+      .setParserFactory(FlinkSqlParserImpl.FACTORY)
+      .setConformance(FlinkSqlConformance.HIVE) // set up hive dialect
+      .setLex(Lex.JAVA)
+      .setIdentifierMaxLength(256).build
+    val plannerConfig = CalciteConfig.createBuilder(CalciteConfig.DEFAULT)
+      .replaceSqlParserConfig(parserConfig)
+    val tableConfig = new TableConfig
+    tableConfig.setPlannerConfig(plannerConfig.build())
+    tableConfig
+  }
+
+  @Test
+  def testInsertWithOutPartitionGrouping(): Unit = {
+    registerTableSink(grouping = false)
+    tEnv.sqlUpdate("insert into sinkTable select a, max(b), c"
+      + " from nonSortTable group by a, c")
+    tEnv.execute("testJob")
+    val resultSet = List(RESULT1, RESULT2, RESULT3)
+    assert(resultSet.exists(l => l.size() == 3))
+    resultSet.filter(l => l.size() == 3).foreach { list =>
+      assert(list.forall(r => r.getField(0).toString == "1"))
+    }
+  }
+
+  @Test
+  def testInsertWithPartitionGrouping(): Unit = {
+    registerTableSink(grouping = true)
+    tEnv.sqlUpdate("insert into sinkTable select a, b, c from sortTable")
+    tEnv.execute("testJob")
+    val resultSet = List(RESULT1, RESULT2, RESULT3)
+    resultSet.foreach(l => assertSortedByFirstNField(l, 1))
+    assertEquals(resultSet.map(l => collectDistinctGroupCount(l, 2)).sum, 4)
+  }
+
+  @Test
+  def testInsertWithStaticPartitions(): Unit = {
+    val testSink = registerTableSink(grouping = true)
+    tEnv.sqlUpdate("insert into sinkTable partition(a=1) select b, c from sortTable")
+

[GitHub] [flink] flinkbot edited a comment on issue #8966: [FLINK-13074][table-planner-blink] Add PartitionableTableSink bridge logic to flink …

2019-07-17 Thread GitBox
flinkbot edited a comment on issue #8966: [FLINK-13074][table-planner-blink] 
Add PartitionableTableSink bridge logic to flink …
URL: https://github.com/apache/flink/pull/8966#issuecomment-511712813
 
 
   ## CI report:
   
   * 7abe4618172b34dfc190c67b3d28422afe4dd0ae : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119414109)
   * 2fb2bdc43576207a69f67769c8ec3bd0b94420db : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119441272)
   * ed663900cc30bab9e1763cbaa56ee957f3e3a959 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119443404)
   * 62dd82e8148df1d915e96b23efc096c28ee1a70f : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/119447553)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] danny0405 commented on a change in pull request #8966: [FLINK-13074][table-planner-blink] Add PartitionableTableSink bridge logic to flink …

2019-07-17 Thread GitBox
danny0405 commented on a change in pull request #8966: 
[FLINK-13074][table-planner-blink] Add PartitionableTableSink bridge logic to 
flink …
URL: https://github.com/apache/flink/pull/8966#discussion_r304310089
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/runtime/batch/sql/PartitionableSinkITCase.scala
 ##
 @@ -0,0 +1,278 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.runtime.batch.sql
+
+import org.apache.flink.api.common.ExecutionConfig
+import org.apache.flink.api.common.typeinfo.TypeInformation
+import org.apache.flink.api.common.typeutils.TypeSerializer
+import org.apache.flink.api.java.typeutils.RowTypeInfo
+import org.apache.flink.configuration.Configuration
+import org.apache.flink.sql.parser.impl.FlinkSqlParserImpl
+import org.apache.flink.sql.parser.validate.FlinkSqlConformance
+import org.apache.flink.streaming.api.datastream.{DataStream, DataStreamSink}
+import org.apache.flink.streaming.api.functions.sink.RichSinkFunction
+import org.apache.flink.table.api.{ExecutionConfigOptions, TableConfig, TableException, TableSchema}
+import org.apache.flink.table.calcite.CalciteConfig
+import org.apache.flink.table.runtime.batch.sql.PartitionableSinkITCase._
+import org.apache.flink.table.runtime.utils.BatchTestBase
+import org.apache.flink.table.runtime.utils.BatchTestBase.row
+import org.apache.flink.table.runtime.utils.TestData._
+import org.apache.flink.table.sinks.{PartitionableTableSink, StreamTableSink, TableSink}
+import org.apache.flink.table.types.logical.{BigIntType, IntType, VarCharType}
+import org.apache.flink.types.Row
+
+import org.apache.calcite.config.Lex
+import org.apache.calcite.sql.parser.SqlParser
+import org.junit.Assert._
+import org.junit.{Before, Test}
+
+import java.util.concurrent.LinkedBlockingQueue
+import java.util.{LinkedList => JLinkedList, List => JList, Map => JMap}
+
+import scala.collection.JavaConversions._
+import scala.collection.Seq
+
+/**
+  * Test cases for [[org.apache.flink.table.sinks.PartitionableTableSink]].
+  */
+class PartitionableSinkITCase extends BatchTestBase {
+
+  @Before
+  override def before(): Unit = {
+    super.before()
+    env.setParallelism(3)
+    tEnv.getConfig
+      .getConfiguration
+      .setInteger(ExecutionConfigOptions.SQL_RESOURCE_DEFAULT_PARALLELISM, 3)
+    registerCollection("nonSortTable", testData, type3, "a, b, c", dataNullables)
+    registerCollection("sortTable", testData1, type3, "a, b, c", dataNullables)
+    PartitionableSinkITCase.init()
+  }
+
+  override def getTableConfig: TableConfig = {
+    val parserConfig = SqlParser.configBuilder
+      .setParserFactory(FlinkSqlParserImpl.FACTORY)
+      .setConformance(FlinkSqlConformance.HIVE) // set up hive dialect
+      .setLex(Lex.JAVA)
+      .setIdentifierMaxLength(256).build
+    val plannerConfig = CalciteConfig.createBuilder(CalciteConfig.DEFAULT)
+      .replaceSqlParserConfig(parserConfig)
+    val tableConfig = new TableConfig
+    tableConfig.setPlannerConfig(plannerConfig.build())
+    tableConfig
+  }
+
+  @Test
+  def testInsertWithOutPartitionGrouping(): Unit = {
+    registerTableSink(grouping = false)
+    tEnv.sqlUpdate("insert into sinkTable select a, max(b), c"
+      + " from nonSortTable group by a, c")
+    tEnv.execute("testJob")
+    val resultSet = List(RESULT1, RESULT2, RESULT3)
+    assert(resultSet.exists(l => l.size() == 3))
+    resultSet.filter(l => l.size() == 3).foreach { list =>
+      assert(list.forall(r => r.getField(0).toString == "1"))
+    }
+  }
+
+  @Test
+  def testInsertWithPartitionGrouping(): Unit = {
+    registerTableSink(grouping = true)
+    tEnv.sqlUpdate("insert into sinkTable select a, b, c from sortTable")
+    tEnv.execute("testJob")
+    val resultSet = List(RESULT1, RESULT2, RESULT3)
+    resultSet.foreach(l => assertSortedByFirstNField(l, 1))
+    assertEquals(resultSet.map(l => collectDistinctGroupCount(l, 2)).sum, 4)
+  }
+
+  @Test
+  def testInsertWithStaticPartitions(): Unit = {
+    val testSink = registerTableSink(grouping = true)
+    tEnv.sqlUpdate("insert into sinkTable partition(a=1) select b, c from sortTable")
+

[GitHub] [flink] danny0405 commented on a change in pull request #8966: [FLINK-13074][table-planner-blink] Add PartitionableTableSink bridge logic to flink …

2019-07-17 Thread GitBox
danny0405 commented on a change in pull request #8966: 
[FLINK-13074][table-planner-blink] Add PartitionableTableSink bridge logic to 
flink …
URL: https://github.com/apache/flink/pull/8966#discussion_r304253035
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/runtime/batch/sql/PartitionableSinkITCase.scala
 ##
 @@ -0,0 +1,278 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.runtime.batch.sql
+
+import org.apache.flink.api.common.ExecutionConfig
+import org.apache.flink.api.common.typeinfo.TypeInformation
+import org.apache.flink.api.common.typeutils.TypeSerializer
+import org.apache.flink.api.java.typeutils.RowTypeInfo
+import org.apache.flink.configuration.Configuration
+import org.apache.flink.sql.parser.impl.FlinkSqlParserImpl
+import org.apache.flink.sql.parser.validate.FlinkSqlConformance
+import org.apache.flink.streaming.api.datastream.{DataStream, DataStreamSink}
+import org.apache.flink.streaming.api.functions.sink.RichSinkFunction
+import org.apache.flink.table.api.{ExecutionConfigOptions, TableConfig, 
TableException, TableSchema}
+import org.apache.flink.table.calcite.CalciteConfig
+import org.apache.flink.table.runtime.batch.sql.PartitionableSinkITCase._
+import org.apache.flink.table.runtime.utils.BatchTestBase
+import org.apache.flink.table.runtime.utils.BatchTestBase.row
+import org.apache.flink.table.runtime.utils.TestData._
+import org.apache.flink.table.sinks.{PartitionableTableSink, StreamTableSink, 
TableSink}
+import org.apache.flink.table.types.logical.{BigIntType, IntType, VarCharType}
+import org.apache.flink.types.Row
+
+import org.apache.calcite.config.Lex
+import org.apache.calcite.sql.parser.SqlParser
+import org.junit.Assert._
+import org.junit.{Before, Test}
+
+import java.util.concurrent.LinkedBlockingQueue
+import java.util.{LinkedList => JLinkedList, List => JList, Map => JMap}
+
+import scala.collection.JavaConversions._
+import scala.collection.Seq
+
+/**
+  * Test cases for [[org.apache.flink.table.sinks.PartitionableTableSink]].
+  */
+class PartitionableSinkITCase extends BatchTestBase {
 
 Review comment:
   Because we have not really implemented it in flink-planner yet; it will just 
throw an exception.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9150: [FLINK-13227][docs-zh] Translate "asyncio" page into Chinese

2019-07-17 Thread GitBox
flinkbot edited a comment on issue #9150: [FLINK-13227][docs-zh] Translate 
"asyncio" page into Chinese
URL: https://github.com/apache/flink/pull/9150#issuecomment-512293368
 
 
   ## CI report:
   
   * e5f1200fa10103c4997c7c4fc915cb4b071c70f1 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/119490342)
   * 656d5ca5ad1525f431e2a94b2895f0d52caed7b8 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/119572202)
   * 90912be25c20897655262da2433bc49d9ecfcbf6 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/119576078)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Assigned] (FLINK-13315) Port wmstrategies to api-java-bridge

2019-07-17 Thread Jark Wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-13315:
---

Assignee: Jingsong Lee

> Port wmstrategies to api-java-bridge
> 
>
> Key: FLINK-13315
> URL: https://issues.apache.org/jira/browse/FLINK-13315
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0, 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] flinkbot edited a comment on issue #9141: [FLINK-12249][table] Fix type equivalence check problems for Window Aggregates

2019-07-17 Thread GitBox
flinkbot edited a comment on issue #9141: [FLINK-12249][table] Fix type 
equivalence check problems for Window Aggregates
URL: https://github.com/apache/flink/pull/9141#issuecomment-512181311
 
 
   ## CI report:
   
   * 1af43966364d445984524fd3bf51a0f3b8e75cbe : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119443400)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Resolved] (FLINK-13161) numBuckets calculate wrong in BinaryHashBucketArea

2019-07-17 Thread Jark Wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu resolved FLINK-13161.
-
   Resolution: Fixed
Fix Version/s: 1.10.0

Fixed in 1.10.0: 18ace00455aeba00ac459623c46f762900981aec
Fixed in 1.9.0: ca6016c7878245715301e5dfd1743b18106fd2bb

> numBuckets calculate wrong in BinaryHashBucketArea
> --
>
> Key: FLINK-13161
> URL: https://issues.apache.org/jira/browse/FLINK-13161
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.9.0
>Reporter: Louis Xu
>Assignee: Louis Xu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0, 1.10.0
>
> Attachments: result.log
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The original code is:
>  
> {code:java}
> int minNumBuckets = (int) Math.ceil((estimatedRowCount / loadFactor / NUM_ENTRIES_PER_BUCKET));
> int bucketNumSegs = Math.max(1, Math.min(maxSegs, (minNumBuckets >>> table.bucketsPerSegmentBits) +
>   ((minNumBuckets & table.bucketsPerSegmentMask) == 0 ? 0 : 1)));
> int numBuckets = MathUtils.roundDownToPowerOf2(bucketNumSegs << table.bucketsPerSegmentBits);
> {code}
> Default values: loadFactor = 0.75, NUM_ENTRIES_PER_BUCKET = 15, maxSegs = 33
> (an assumed value; it only needs to be bigger than the number calculated from
> minNumBuckets).
> Suppose table.bucketsPerSegmentBits = 3 and table.bucketsPerSegmentMask = 0b111,
> meaning each segment holds 8 buckets.
> Looping estimatedRowCount from 1 to 1000 produces the results in the attached file.
> One example:
> {code:java}
> estimatedRowCount: 200, minNumBuckets: 18, bucketNumSegs: 3, numBuckets: 16
> {code}
> We can see it requests 3 segments, but only 2 segments are needed (16 / 8),
> leaving one segment wasted.
> And since the segments are preallocated, this means some segments will never
> be used.
>  
>  
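For illustration, a minimal self-contained Java sketch that reproduces the arithmetic
described above (constants are taken from the description; MathUtils.roundDownToPowerOf2
is assumed to behave like Integer.highestOneBit):

{code:java}
public class BucketWasteDemo {
    public static void main(String[] args) {
        double loadFactor = 0.75;
        int numEntriesPerBucket = 15;
        int maxSegs = 33;
        int bucketsPerSegmentBits = 3;      // 8 buckets per segment
        int bucketsPerSegmentMask = 0b111;

        int estimatedRowCount = 200;
        int minNumBuckets = (int) Math.ceil(estimatedRowCount / loadFactor / numEntriesPerBucket);
        int bucketNumSegs = Math.max(1, Math.min(maxSegs,
                (minNumBuckets >>> bucketsPerSegmentBits)
                        + ((minNumBuckets & bucketsPerSegmentMask) == 0 ? 0 : 1)));
        // Round down to the nearest power of two, as the quoted code does.
        int numBuckets = Integer.highestOneBit(bucketNumSegs << bucketsPerSegmentBits);

        // Prints: minNumBuckets=18, bucketNumSegs=3, numBuckets=16
        // 16 buckets fit into 2 segments, yet 3 segments were requested: one is wasted.
        System.out.printf("minNumBuckets=%d, bucketNumSegs=%d, numBuckets=%d%n",
                minNumBuckets, bucketNumSegs, numBuckets);
    }
}
{code}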



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] wuchong commented on issue #9027: [FLINK-13161][table-blink-runtime]numBuckets calculate wrong in BinaryHashBucketArea

2019-07-17 Thread GitBox
wuchong commented on issue #9027: [FLINK-13161][table-blink-runtime]numBuckets 
calculate wrong in BinaryHashBucketArea
URL: https://github.com/apache/flink/pull/9027#issuecomment-512659569
 
 
   Hi @zentol , I think this should go into 1.9 too, because this is an 
important fix. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] asfgit closed pull request #9027: [FLINK-13161][table-blink-runtime]numBuckets calculate wrong in BinaryHashBucketArea

2019-07-17 Thread GitBox
asfgit closed pull request #9027: [FLINK-13161][table-blink-runtime]numBuckets 
calculate wrong in BinaryHashBucketArea
URL: https://github.com/apache/flink/pull/9027
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] YngwieWang commented on a change in pull request #9150: [FLINK-13227][docs-zh] Translate "asyncio" page into Chinese

2019-07-17 Thread GitBox
YngwieWang commented on a change in pull request #9150: [FLINK-13227][docs-zh] 
Translate "asyncio" page into Chinese
URL: https://github.com/apache/flink/pull/9150#discussion_r304728193
 
 

 ##
 File path: docs/dev/stream/operators/asyncio.zh.md
 ##
 @@ -140,130 +124,114 @@ DataStream<Tuple2<String, String>> resultStream =
 
 {% highlight scala %}
 /**
- * An implementation of the 'AsyncFunction' that sends requests and sets the 
callback.
+ * 实现 'AsyncFunction' 用于发送请求和设置回调。
  */
 class AsyncDatabaseRequest extends AsyncFunction[String, (String, String)] {
 
-/** The database specific client that can issue concurrent requests with 
callbacks */
+/** 使用回调函数来并发发送请求的数据库客户端 */
 lazy val client: DatabaseClient = new DatabaseClient(host, post, 
credentials)
 
-/** The context used for the future callbacks */
+/** 用于 future 回调的上下文环境 */
 implicit lazy val executor: ExecutionContext = 
ExecutionContext.fromExecutor(Executors.directExecutor())
 
 
 override def asyncInvoke(str: String, resultFuture: ResultFuture[(String, 
String)]): Unit = {
 
-// issue the asynchronous request, receive a future for the result
+// 发送异步请求,接收 future 结果
 val resultFutureRequested: Future[String] = client.query(str)
 
-// set the callback to be executed once the request by the client is 
complete
-// the callback simply forwards the result to the result future
+// 设置客户端完成请求后要执行的回调函数
+// 回调函数只是简单地把结果发给 future
 resultFutureRequested.onSuccess {
 case result: String => resultFuture.complete(Iterable((str, 
result)))
 }
 }
 }
 
-// create the original stream
+// 创建初始 DataStream
 val stream: DataStream[String] = ...
 
-// apply the async I/O transformation
+// 应用异步 I/O 转换操作
 val resultStream: DataStream[(String, String)] =
 AsyncDataStream.unorderedWait(stream, new AsyncDatabaseRequest(), 1000, 
TimeUnit.MILLISECONDS, 100)
 
 {% endhighlight %}
 
 
 
-**Important note**: The `ResultFuture` is completed with the first call of 
`ResultFuture.complete`.
-All subsequent `complete` calls will be ignored.
+**重要提示**: 第一次调用 `ResultFuture.complete` 后 `ResultFuture` 就完成了。
+后续的 `complete` 调用都将被忽略。
 
-The following two parameters control the asynchronous operations:
+下面两个参数控制异步操作:
 
-  - **Timeout**: The timeout defines how long an asynchronous request may take 
before it is considered failed. This parameter
-guards against dead/failed requests.
+  - **Timeout**: 超时参数定义了异步请求发出多久后未得到响应即被认定为失败。 它可以防止一直等待得不到响应的请求。
 
-  - **Capacity**: This parameter defines how many asynchronous requests may be 
in progress at the same time.
-Even though the async I/O approach leads typically to much better 
throughput, the operator can still be the bottleneck in
-the streaming application. Limiting the number of concurrent requests 
ensures that the operator will not
-accumulate an ever-growing backlog of pending requests, but that it will 
trigger backpressure once the capacity
-is exhausted.
+  - **Capacity**: 容量参数定义了可以同时进行的异步请求数。
+即使异步 I/O 通常带来更高的吞吐量, 执行异步 I/O  操作的算子仍然可能成为流处理的瓶颈。 
限制并发请求的数量可以确保算子不会持续累积待处理的请求进而造成积压,而是在容量耗尽时触发反压。
 
 Review comment:
   Continuously accumulating pending requests will not trigger backpressure; exhausting the capacity will.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
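
To make the capacity discussion above concrete, here is a minimal Java sketch of
Flink's async I/O API; MyAsyncClient and its query method are hypothetical
stand-ins for a real asynchronous database client:

```java
import java.util.Collections;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

import org.apache.flink.streaming.api.datastream.AsyncDataStream;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.functions.async.AsyncFunction;
import org.apache.flink.streaming.api.functions.async.ResultFuture;

public class AsyncDatabaseRequest implements AsyncFunction<String, String> {

    /** Hypothetical asynchronous client interface, for illustration only. */
    interface MyAsyncClient {
        CompletableFuture<String> query(String key);
    }

    private transient MyAsyncClient client;

    @Override
    public void asyncInvoke(String key, ResultFuture<String> resultFuture) {
        // Issue the request and complete the ResultFuture when the response arrives.
        client.query(key)
              .thenAccept(value -> resultFuture.complete(Collections.singleton(value)));
    }

    public static DataStream<String> apply(DataStream<String> in) {
        // Timeout 1000 ms, capacity 100: once 100 requests are in flight the
        // operator stops pulling records from upstream -- which is exactly the
        // backpressure behavior the review comment above refers to.
        return AsyncDataStream.unorderedWait(
                in, new AsyncDatabaseRequest(), 1000, TimeUnit.MILLISECONDS, 100);
    }
}
```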


[GitHub] [flink] flinkbot edited a comment on issue #9140: [FLINK-13299][travis][python] fix flink-python failed on Travis because of incompatible virtualenv

2019-07-17 Thread GitBox
flinkbot edited a comment on issue #9140: [FLINK-13299][travis][python] fix 
flink-python failed on Travis because of incompatible virtualenv
URL: https://github.com/apache/flink/pull/9140#issuecomment-512178951
 
 
   ## CI report:
   
   * e6d7ab356d163e51b3d71d07ec877ac4deeb0a53 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/119442369)
   * ed26ccb9922a9592b5a9ed75de4646dd0509b8ae : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/119463137)
   * 785bf404f25a5bc270691a2170b51131058e68a3 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/119470407)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] YngwieWang commented on a change in pull request #9150: [FLINK-13227][docs-zh] Translate "asyncio" page into Chinese

2019-07-17 Thread GitBox
YngwieWang commented on a change in pull request #9150: [FLINK-13227][docs-zh] 
Translate "asyncio" page into Chinese
URL: https://github.com/apache/flink/pull/9150#discussion_r304711786
 
 

 ##
 File path: docs/dev/stream/operators/asyncio.zh.md
 ##
 @@ -26,71 +26,55 @@ under the License.
 * ToC
 {:toc}
 
-This page explains the use of Flink's API for asynchronous I/O with external 
data stores.
-For users not familiar with asynchronous or event-driven programming, an 
article about Futures and
-event-driven programming may be useful preparation.
+本文讲解 Flink 用于访问外部数据存储的异步 I/O API。
+对于不熟悉异步或者事件驱动编程的用户,建议先储备一些关于 Future 和事件驱动编程的知识。
 
-Note: Details about the design and implementation of the asynchronous I/O 
utility can be found in the proposal and design document
-[FLIP-12: Asynchronous I/O Design and 
Implementation](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65870673).
+提示:这篇文档 [FLIP-12: 异步 I/O  
的设计和实现](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65870673)
 介绍了关于设计和实现异步 I/O 功能的细节。
 
+## 对于异步 I/O 操作的需求
 
-## The need for Asynchronous I/O Operations
+在与外部系统交互(用数据库中的数据扩充流数据)的时候,需要考虑与外部系统的通信延迟对整个流处理应用的影响。
 
-When interacting with external systems (for example when enriching stream 
events with data stored in a database), one needs to take care
-that communication delay with the external system does not dominate the 
streaming application's total work.
+简单地访问外部数据库的数据,比如使用 `MapFunction`,通常意味着**同步**交互:
+`MapFunction` 向数据库发送一个请求然后一直等待,直到收到响应。在许多情况下,等待占据了函数运行的大部分时间。
 
-Naively accessing data in the external database, for example in a 
`MapFunction`, typically means **synchronous** interaction:
-A request is sent to the database and the `MapFunction` waits until the 
response has been received. In many cases, this waiting
-makes up the vast majority of the function's time.
-
-Asynchronous interaction with the database means that a single parallel 
function instance can handle many requests concurrently and
-receive the responses concurrently. That way, the waiting time can be 
overlayed with sending other requests and
-receiving responses. At the very least, the waiting time is amortized over 
multiple requests. This leads in most cased to much higher
-streaming throughput.
+与数据库异步交互是指一个并行函数实例可以并发地处理多个请求和接收多个响应。这样,函数在等待的时间可以发送其他请求和接收其他响应。至少等待的时间可以被多个请求摊分。大多数情况下,异步交互可以大幅度提高流处理的吞吐量。
 
 
 
-*Note:* Improving throughput by just scaling the `MapFunction` to a very high 
parallelism is in some cases possible as well, but usually
-comes at a very high resource cost: Having many more parallel MapFunction 
instances means more tasks, threads, Flink-internal network
-connections, network connections to the database, buffers, and general 
internal bookkeeping overhead.
+*注意:*仅仅提高 `MapFunction` 
的并行度(parallelism)在有些情况下也可以提升吞吐量,但是这样做通常会导致非常高的资源消耗:更多的并行 `MapFunction` 实例意味着更多的 
Task、更多的线程、更多的 Flink 内部网络连接、 更多的与数据库的网络连接、更多的缓冲和更多程序内部协调的开销。
 
 
-## Prerequisites
+## 先决条件
 
-As illustrated in the section above, implementing proper asynchronous I/O to a 
database (or key/value store) requires a client
-to that database that supports asynchronous requests. Many popular databases 
offer such a client.
+如上节所述,正确地实现数据库(或键/值存储)的异步 I/O 交互需要支持异步请求的数据库客户端。许多主流数据库都提供了这样的客户端。
 
-In the absence of such a client, one can try and turn a synchronous client 
into a limited concurrent client by creating
-multiple clients and handling the synchronous calls with a thread pool. 
However, this approach is usually less
-efficient than a proper asynchronous client.
+如果没有这样的客户端,可以通过创建多个客户端并使用线程池处理同步调用的方法,将同步客户端转换为有限并发的客户端。然而,这种方法通常比正规的异步客户端效率低。
 
 
-## Async I/O API
+## 异步 I/O API
 
-Flink's Async I/O API allows users to use asynchronous request clients with 
data streams. The API handles the integration with
-data streams, well as handling order, event time, fault tolerance, etc.
+Flink 的异步 I/O API 允许用户在流处理中使用异步请求客户端。API 处理与数据流的集成、顺序、事件时间和容错等。
 
 Review comment:
   My understanding is that `order`, `event time`, and `fault tolerance` here all refer to work that the API handles and guarantees; "well as" means "also" (i.e., "as well as").


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
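
As a companion to the quoted API discussion, a short Java sketch of timeout handling
(the lookup logic is a made-up placeholder): overriding AsyncFunction#timeout turns
a timed-out request into a fallback record instead of a job restart:

```java
import java.util.Collections;
import java.util.concurrent.CompletableFuture;

import org.apache.flink.streaming.api.functions.async.AsyncFunction;
import org.apache.flink.streaming.api.functions.async.ResultFuture;

public class LookupWithFallback implements AsyncFunction<String, String> {

    @Override
    public void asyncInvoke(String key, ResultFuture<String> resultFuture) {
        // Placeholder lookup; a real implementation would call an async client.
        CompletableFuture
                .supplyAsync(() -> "value-for-" + key)
                .thenAccept(v -> resultFuture.complete(Collections.singleton(v)));
    }

    // By default a timed-out request fails the job; overriding timeout()
    // lets the operator emit a fallback record and keep running.
    @Override
    public void timeout(String key, ResultFuture<String> resultFuture) {
        resultFuture.complete(Collections.singleton("TIMEOUT:" + key));
    }
}
```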


[GitHub] [flink] YngwieWang commented on a change in pull request #9150: [FLINK-13227][docs-zh] Translate "asyncio" page into Chinese

2019-07-17 Thread GitBox
YngwieWang commented on a change in pull request #9150: [FLINK-13227][docs-zh] 
Translate "asyncio" page into Chinese
URL: https://github.com/apache/flink/pull/9150#discussion_r304727169
 
 

 ##
 File path: docs/dev/stream/operators/asyncio.zh.md
 ##
 @@ -140,130 +124,114 @@ DataStream<Tuple2<String, String>> resultStream =
 
 {% highlight scala %}
 /**
- * An implementation of the 'AsyncFunction' that sends requests and sets the 
callback.
+ * 实现 'AsyncFunction' 用于发送请求和设置回调。
  */
 class AsyncDatabaseRequest extends AsyncFunction[String, (String, String)] {
 
-/** The database specific client that can issue concurrent requests with 
callbacks */
+/** 使用回调函数来并发发送请求的数据库客户端 */
 lazy val client: DatabaseClient = new DatabaseClient(host, post, 
credentials)
 
-/** The context used for the future callbacks */
+/** 用于 future 回调的上下文环境 */
 implicit lazy val executor: ExecutionContext = 
ExecutionContext.fromExecutor(Executors.directExecutor())
 
 
 override def asyncInvoke(str: String, resultFuture: ResultFuture[(String, 
String)]): Unit = {
 
-// issue the asynchronous request, receive a future for the result
+// 发送异步请求,接收 future 结果
 val resultFutureRequested: Future[String] = client.query(str)
 
-// set the callback to be executed once the request by the client is 
complete
-// the callback simply forwards the result to the result future
+// 设置客户端完成请求后要执行的回调函数
+// 回调函数只是简单地把结果发给 future
 resultFutureRequested.onSuccess {
 case result: String => resultFuture.complete(Iterable((str, 
result)))
 }
 }
 }
 
-// create the original stream
+// 创建初始 DataStream
 val stream: DataStream[String] = ...
 
-// apply the async I/O transformation
+// 应用异步 I/O 转换操作
 val resultStream: DataStream[(String, String)] =
 AsyncDataStream.unorderedWait(stream, new AsyncDatabaseRequest(), 1000, 
TimeUnit.MILLISECONDS, 100)
 
 {% endhighlight %}
 
 
 
-**Important note**: The `ResultFuture` is completed with the first call of 
`ResultFuture.complete`.
-All subsequent `complete` calls will be ignored.
+**重要提示**: 第一次调用 `ResultFuture.complete` 后 `ResultFuture` 就完成了。
+后续的 `complete` 调用都将被忽略。
 
-The following two parameters control the asynchronous operations:
+下面两个参数控制异步操作:
 
-  - **Timeout**: The timeout defines how long an asynchronous request may take 
before it is considered failed. This parameter
-guards against dead/failed requests.
+  - **Timeout**: 超时参数定义了异步请求发出多久后未得到响应即被认定为失败。 它可以防止一直等待得不到响应的请求。
 
-  - **Capacity**: This parameter defines how many asynchronous requests may be 
in progress at the same time.
-Even though the async I/O approach leads typically to much better 
throughput, the operator can still be the bottleneck in
-the streaming application. Limiting the number of concurrent requests 
ensures that the operator will not
-accumulate an ever-growing backlog of pending requests, but that it will 
trigger backpressure once the capacity
-is exhausted.
+  - **Capacity**: 容量参数定义了可以同时进行的异步请求数。
+即使异步 I/O 通常带来更高的吞吐量, 执行异步 I/O  操作的算子仍然可能成为流处理的瓶颈。 
限制并发请求的数量可以确保算子不会持续累积待处理的请求进而造成积压,而是在容量耗尽时触发反压。
 
 
-### Timeout Handling
+### 超时处理
 
-When an async I/O request times out, by default an exception is thrown and job 
is restarted.
-If you want to handle timeouts, you can override the `AsyncFunction#timeout` 
method.
+当异步 I/O 请求超时的时候,默认会抛出异常并重启作业。
+如果你想处理超时,可以覆写 `AsyncFunction#timeout` 方法。
 
+### 结果的顺序
 
-### Order of Results
+`AsyncFunction` 发出的并发请求经常以不确定的顺序完成,这取决于请求得到响应的顺序。
+Flink 提供两种模式控制结果记录以何种顺序发出。
 
-The concurrent requests issued by the `AsyncFunction` frequently complete in 
some undefined order, based on which request finished first.
-To control in which order the resulting records are emitted, Flink offers two 
modes:
+  - **无序模式**: 异步请求一结束就立刻发出结果记录。
+流中记录的顺序在经过异步 I/O 算子之后发生了改变。
+当使用 *处理时间* 作为基本时间特征时,这个模式具有最低的延迟和最少的开销。
+此模式使用 `AsyncDataStream.unorderedWait(...)` 方法。
 
-  - **Unordered**: Result records are emitted as soon as the asynchronous 
request finishes.
-The order of the records in the stream is different after the async I/O 
operator than before.
-This mode has the lowest latency and lowest overhead, when used with 
*processing time* as the basic time characteristic.
-Use `AsyncDataStream.unorderedWait(...)` for this mode.
+  - **有序模式**: 
这种模式保持了流的顺序。发出结果记录的顺序与触发异步请求的顺序(记录输入算子的顺序)相同。为了实现这一点,算子将缓冲一个结果记录直到这条记录前面的所有记录都发出(或超时)。由于记录或者结果要在
 checkpoint  的状态中保存更长的时间,所以与无序模式相比,有序模式通常会带来一些额外的延迟和 checkpoint  开销。此模式使用 
`AsyncDataStream.orderedWait(...)` 方法。
 
-  - **Ordered**: In that case, the stream order is preserved. Result records 
are emitted in the same order as the asynchronous
-requests are triggered (the order of the operators input records). To 
achieve that, the operator buffers a result record
-until all its preceding records are emitted (or timed out).
-This usually introduces some amount of extra latency and some overhead in 
checkpointing, because 

[jira] [Commented] (FLINK-13086) add Chinese documentation for catalogs

2019-07-17 Thread Bowen Li (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16887591#comment-16887591
 ] 

Bowen Li commented on FLINK-13086:
--

Hi [~ysn2233], I assigned FLINK-13298 to you since you've shown your interest 
in contributing to Flink.

> add Chinese documentation for catalogs
> --
>
> Key: FLINK-13086
> URL: https://issues.apache.org/jira/browse/FLINK-13086
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Connectors / Hive, Documentation, 
> Table SQL / API
>Reporter: Bowen Li
>Assignee: frank wang
>Priority: Major
> Fix For: 1.9.0, 1.10.0
>
>
> the ticket for corresponding English documentation is FLINK-12277



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] YngwieWang commented on a change in pull request #9150: [FLINK-13227][docs-zh] Translate "asyncio" page into Chinese

2019-07-17 Thread GitBox
YngwieWang commented on a change in pull request #9150: [FLINK-13227][docs-zh] 
Translate "asyncio" page into Chinese
URL: https://github.com/apache/flink/pull/9150#discussion_r304726583
 
 

 ##
 File path: docs/dev/stream/operators/asyncio.zh.md
 ##

[jira] [Assigned] (FLINK-13298) write Chinese documentation and quickstart for Flink-Hive compatibility

2019-07-17 Thread Bowen Li (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li reassigned FLINK-13298:


Assignee: Shengnan YU

> write Chinese documentation and quickstart for Flink-Hive compatibility
> ---
>
> Key: FLINK-13298
> URL: https://issues.apache.org/jira/browse/FLINK-13298
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Connectors / Hive, Documentation
>Reporter: Bowen Li
>Assignee: Shengnan YU
>Priority: Major
> Fix For: 1.9.0, 1.10.0
>
>
> its corresponding English one is FLINK-13276



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (FLINK-12894) add Chinese documentation of how to configure and use catalogs in SQL CLI

2019-07-17 Thread Bowen Li (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-12894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-12894:
-
Fix Version/s: 1.10.0

> add Chinese documentation of how to configure and use catalogs in SQL CLI
> -
>
> Key: FLINK-12894
> URL: https://issues.apache.org/jira/browse/FLINK-12894
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Connectors / Hive, Documentation, 
> Table SQL / Client
>Reporter: Bowen Li
>Assignee: frank wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0, 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Ticket of its corresponding English version is FLINK-12627.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (FLINK-13086) add Chinese documentation for catalogs

2019-07-17 Thread Bowen Li (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-13086:
-
Fix Version/s: 1.10.0

> add Chinese documentation for catalogs
> --
>
> Key: FLINK-13086
> URL: https://issues.apache.org/jira/browse/FLINK-13086
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Connectors / Hive, Documentation, 
> Table SQL / API
>Reporter: Bowen Li
>Assignee: frank wang
>Priority: Major
> Fix For: 1.9.0, 1.10.0
>
>
> the ticket for corresponding English documentation is FLINK-12277



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] YngwieWang commented on a change in pull request #9150: [FLINK-13227][docs-zh] Translate "asyncio" page into Chinese

2019-07-17 Thread GitBox
YngwieWang commented on a change in pull request #9150: [FLINK-13227][docs-zh] 
Translate "asyncio" page into Chinese
URL: https://github.com/apache/flink/pull/9150#discussion_r304725526
 
 

 ##
 File path: docs/dev/stream/operators/asyncio.zh.md
 ##

[jira] [Commented] (FLINK-13086) add Chinese documentation for catalogs

2019-07-17 Thread frank wang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16887589#comment-16887589
 ] 

frank wang commented on FLINK-13086:


[https://github.com/apache/flink/pull/9155]

Please review it, thanks.

> add Chinese documentation for catalogs
> --
>
> Key: FLINK-13086
> URL: https://issues.apache.org/jira/browse/FLINK-13086
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Connectors / Hive, Documentation, 
> Table SQL / API
>Reporter: Bowen Li
>Assignee: frank wang
>Priority: Major
> Fix For: 1.9.0
>
>
> the ticket for corresponding English documentation is FLINK-12277



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (FLINK-12894) add Chinese documentation of how to configure and use catalogs in SQL CLI

2019-07-17 Thread frank wang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-12894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16887588#comment-16887588
 ] 

frank wang commented on FLINK-12894:


[https://github.com/apache/flink/pull/9155]

Please review it, thanks.

> add Chinese documentation of how to configure and use catalogs in SQL CLI
> -
>
> Key: FLINK-12894
> URL: https://issues.apache.org/jira/browse/FLINK-12894
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Connectors / Hive, Documentation, 
> Table SQL / Client
>Reporter: Bowen Li
>Assignee: frank wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Ticket of its corresponding English version is FLINK-12627.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (FLINK-12894) add Chinese documentation of how to configure and use catalogs in SQL CLI

2019-07-17 Thread frank wang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-12894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16887587#comment-16887587
 ] 

frank wang commented on FLINK-12894:


I have submitted

> add Chinese documentation of how to configure and use catalogs in SQL CLI
> -
>
> Key: FLINK-12894
> URL: https://issues.apache.org/jira/browse/FLINK-12894
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Connectors / Hive, Documentation, 
> Table SQL / Client
>Reporter: Bowen Li
>Assignee: frank wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Ticket of its corresponding English version is FLINK-12627.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (FLINK-13086) add Chinese documentation for catalogs

2019-07-17 Thread frank wang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16887586#comment-16887586
 ] 

frank wang commented on FLINK-13086:


I have submitted

> add Chinese documentation for catalogs
> --
>
> Key: FLINK-13086
> URL: https://issues.apache.org/jira/browse/FLINK-13086
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Connectors / Hive, Documentation, 
> Table SQL / API
>Reporter: Bowen Li
>Assignee: frank wang
>Priority: Major
> Fix For: 1.9.0
>
>
> the ticket for corresponding English documentation is FLINK-12277



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] flinkbot commented on issue #9155: [FLINK-12894]add Chinese documentation of how to configure and use catalogs in SQL CLI

2019-07-17 Thread GitBox
flinkbot commented on issue #9155: [FLINK-12894]add Chinese documentation of 
how to configure and use catalogs in SQL CLI
URL: https://github.com/apache/flink/pull/9155#issuecomment-512653964
 
 
   ## CI report:
   
   * 60292c020c35dd1d35f806c169ea1f2afe14a337 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/119573805)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] YngwieWang commented on a change in pull request #9150: [FLINK-13227][docs-zh] Translate "asyncio" page into Chinese

2019-07-17 Thread GitBox
YngwieWang commented on a change in pull request #9150: [FLINK-13227][docs-zh] 
Translate "asyncio" page into Chinese
URL: https://github.com/apache/flink/pull/9150#discussion_r304724423
 
 

 ##
 File path: docs/dev/stream/operators/asyncio.zh.md
 ##

[GitHub] [flink] flinkbot commented on issue #9155: [FLINK-12894]add Chinese documentation of how to configure and use catalogs in SQL CLI

2019-07-17 Thread GitBox
flinkbot commented on issue #9155: [FLINK-12894]add Chinese documentation of 
how to configure and use catalogs in SQL CLI
URL: https://github.com/apache/flink/pull/9155#issuecomment-512653102
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] JingsongLi edited a comment on issue #9146: [FLINK-13284] [table-planner-blink] Correct some builtin functions' r…

2019-07-17 Thread GitBox
JingsongLi edited a comment on issue #9146: [FLINK-13284] [table-planner-blink] 
Correct some builtin functions' r…
URL: https://github.com/apache/flink/pull/9146#issuecomment-512653019
 
 
   > Thanks for this PR @lincoln-lil . for stream job, I think "return null" is 
the general case (maybe for some special cases, users also want "throw 
exception" when processing illegal data). while for batch job, "throw 
exception" is more general ?! for the long term, maybe we should add a config 
to let users choose the mode. for this PR, it looks good to me.
   
   For batch jobs, "return null" is also the more popular behavior in Hadoop SQL engines.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] JingsongLi commented on issue #9146: [FLINK-13284] [table-planner-blink] Correct some builtin functions' r…

2019-07-17 Thread GitBox
JingsongLi commented on issue #9146: [FLINK-13284] [table-planner-blink] 
Correct some builtin functions' r…
URL: https://github.com/apache/flink/pull/9146#issuecomment-512653019
 
 
   > Thanks for this PR @lincoln-lil . for stream job, I think "return null" is 
the general case (maybe for some special cases, users also want "throw 
exception" when processing illegal data). while for batch job, "throw 
exception" is more general ?! for the long term, maybe we should add a config 
to let users choose the mode. for this PR, it looks good to me.
   
   For batch jobs, "return null" is also the more popular behavior in Hadoop SQL engines.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #8303: [FLINK-12343] [flink-yarn] add file replication config for yarn configuration

2019-07-17 Thread GitBox
flinkbot edited a comment on issue #8303: [FLINK-12343] [flink-yarn] add file 
replication config for yarn configuration
URL: https://github.com/apache/flink/pull/8303#issuecomment-511684151
 
 
   ## CI report:
   
   * 6a7ca58b4a04f6dce250045e021702e67e82b893 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119421914)
   * 4d38a8df0d59734c4b2386689a2f17b9f2b44b12 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119441376)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9130: [FLINK-13274]Refactor HiveTableSourceTest using HiveRunner

2019-07-17 Thread GitBox
flinkbot edited a comment on issue #9130: [FLINK-13274]Refactor 
HiveTableSourceTest using HiveRunner
URL: https://github.com/apache/flink/pull/9130#issuecomment-511810910
 
 
   ## CI report:
   
   * 979a2e7f99fc3c479c65f58dcd06df2943b6a7e3 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119311556)
   * 07f36fd9d97dbab3a1a84a566ebdc4fb782eaaab : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/119573450)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-12894) add Chinese documentation of how to configure and use catalogs in SQL CLI

2019-07-17 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-12894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-12894:
---
Labels: pull-request-available  (was: )

> add Chinese documentation of how to configure and use catalogs in SQL CLI
> -
>
> Key: FLINK-12894
> URL: https://issues.apache.org/jira/browse/FLINK-12894
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Connectors / Hive, Documentation, 
> Table SQL / Client
>Reporter: Bowen Li
>Assignee: frank wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>
> Ticket of its corresponding English version is FLINK-12627.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] yiduwangkai opened a new pull request #9155: [FLINK-12894]add Chinese documentation of how to configure and use catalogs in SQL CLI

2019-07-17 Thread GitBox
yiduwangkai opened a new pull request #9155: [FLINK-12894]add Chinese 
documentation of how to configure and use catalogs in SQL CLI
URL: https://github.com/apache/flink/pull/9155
 
 
   
   
   ## What is the purpose of the change
   
   *(For example: This pull request makes task deployment go through the blob 
server, rather than through RPC. That way we avoid re-transferring them on each 
deployment (during recovery).)*
   
   
   ## Brief change log
   
   *(for example:)*
 - *The TaskInfo is stored in the blob store on job creation time as a 
persistent artifact*
 - *Deployments RPC transmits only the blob storage reference*
 - *TaskManagers retrieve the TaskInfo from the blob cache*
   
   
   ## Verifying this change
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
 - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
 - *Extended integration test for recovery after master (JobManager) 
failure*
 - *Added test that validates that TaskInfo is transferred only once across 
recoveries*
 - *Manually verified the change by running a 4 node cluser with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / no / don't know)
 - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-13315) Port wmstrategies to api-java-bridge

2019-07-17 Thread Jingsong Lee (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16887582#comment-16887582
 ] 

Jingsong Lee commented on FLINK-13315:
--

[~jark] Could you assign this ticket to me?

> Port wmstrategies to api-java-bridge
> 
>
> Key: FLINK-13315
> URL: https://issues.apache.org/jira/browse/FLINK-13315
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0, 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] godfreyhe edited a comment on issue #9146: [FLINK-13284] [table-planner-blink] Correct some builtin functions' r…

2019-07-17 Thread GitBox
godfreyhe edited a comment on issue #9146: [FLINK-13284] [table-planner-blink] 
Correct some builtin functions' r…
URL: https://github.com/apache/flink/pull/9146#issuecomment-512651957
 
 
   Thanks for this PR @lincoln-lil. For streaming jobs, I think "return null" is 
the general case (though for some special cases, users may also want "throw 
exception" when processing illegal data), while for batch jobs "throw exception" 
may be more common. In the long term, maybe we should add a config option to let 
users choose the mode. For this PR, it looks good to me.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] godfreyhe commented on issue #9146: [FLINK-13284] [table-planner-blink] Correct some builtin functions' r…

2019-07-17 Thread GitBox
godfreyhe commented on issue #9146: [FLINK-13284] [table-planner-blink] Correct 
some builtin functions' r…
URL: https://github.com/apache/flink/pull/9146#issuecomment-512651957
 
 
   Thanks for this PR @lincoln-lil. For streaming jobs, I think "return null" is 
the general case (though in some cases users may also want "throw exception" when 
processing illegal data), while for batch jobs "throw exception" may be more 
common. In the long term, maybe we should add a config option to let users choose 
the mode. For this PR, it looks good to me.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] JingsongLi commented on a change in pull request #9152: [FLINK-13314][table-planner-blink] Correct resultType of some PlannerExpression when operands contains DecimalTypeInfo or BigDe

2019-07-17 Thread GitBox
JingsongLi commented on a change in pull request #9152: 
[FLINK-13314][table-planner-blink] Correct resultType of some PlannerExpression 
when operands contains DecimalTypeInfo or BigDecimalTypeInfo in Blink planner
URL: https://github.com/apache/flink/pull/9152#discussion_r304722257
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/expressions/ReturnTypeInference.scala
 ##
 @@ -0,0 +1,216 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.flink.table.expressions
+
+import org.apache.flink.api.common.typeinfo.{BasicTypeInfo, TypeInformation}
+import org.apache.flink.table.api.TableException
+import org.apache.flink.table.calcite.{FlinkTypeFactory, FlinkTypeSystem}
+import 
org.apache.flink.table.types.TypeInfoLogicalTypeConverter.{fromLogicalTypeToTypeInfo,
 fromTypeInfoToLogicalType}
+import org.apache.flink.table.types.logical.{DecimalType, LogicalType}
+import org.apache.flink.table.typeutils.{BigDecimalTypeInfo, DecimalTypeInfo, 
TypeCoercion}
+
+import org.apache.calcite.rel.`type`.RelDataType
+import org.apache.calcite.sql.`type`.SqlTypeUtil
+
+import scala.collection.JavaConverters._
+
+object ReturnTypeInference {
+
+  private lazy val typeSystem = new FlinkTypeSystem
+  private lazy val typeFactory = new FlinkTypeFactory(typeSystem)
+
+  /**
+* Infer resultType of [[Minus]] expression.
+* The decimal type inference keeps consistent with Calcite
+* [[org.apache.calcite.sql.type.ReturnTypes]].NULLABLE_SUM
+*
+* @param minus minus Expression
+* @return result type
+*/
+  def infer(minus: Minus): TypeInformation[_] = inferSum(minus)
+
+  /**
+* Infer resultType of [[Plus]] expression.
+* The decimal type inference keeps consistent with Calcite
+* [[org.apache.calcite.sql.type.ReturnTypes]].NULLABLE_SUM
+*
+* @param plus plus Expression
+* @return result type
+*/
+  def infer(plus: Plus): TypeInformation[_] = inferSum(plus)
+
+  private def inferSum(op: BinaryArithmetic): TypeInformation[_] = {
+val decimalTypeInference = (
+  leftType: RelDataType,
+  rightType: RelDataType,
+  wideResultType: LogicalType) => {
+  if (SqlTypeUtil.isExactNumeric(leftType) &&
+SqlTypeUtil.isExactNumeric(rightType) &&
+(SqlTypeUtil.isDecimal(leftType) || SqlTypeUtil.isDecimal(rightType))) 
{
+val lp = leftType.getPrecision
+val ls = leftType.getScale
+val rp = rightType.getPrecision
+val rs = rightType.getScale
+val scale = Math.max(ls, rs)
+assert(scale <= typeSystem.getMaxNumericScale)
+var precision = Math.max(lp - ls, rp - rs) + scale + 1
+precision = Math.min(precision, typeSystem.getMaxNumericPrecision)
+assert(precision > 0)
+fromLogicalTypeToTypeInfo(wideResultType) match {
+  case _: DecimalTypeInfo => DecimalTypeInfo.of(precision, scale)
+  case _: BigDecimalTypeInfo => BigDecimalTypeInfo.of(precision, scale)
+}
+  } else {
+val resultType = typeFactory.leastRestrictive(
+  List(leftType, rightType).asJava)
+fromLogicalTypeToTypeInfo(FlinkTypeFactory.toLogicalType(resultType))
+  }
+}
+infer(op, decimalTypeInference)
+  }
+
+  /**
+* Infer resultType of [[Mul]] expression.
+* The decimal type inference keeps consistent with Calcite
+* [[org.apache.calcite.sql.type.ReturnTypes]].PRODUCT_NULLABLE
+*
+* @param mul mul Expression
+* @return result type
+*/
+  def infer(mul: Mul): TypeInformation[_] = {
+val decimalTypeInference = (
+  leftType: RelDataType,
+  rightType: RelDataType) => typeFactory.createDecimalProduct(leftType, 
rightType)
+inferDivOrMul(mul, decimalTypeInference)
+  }
+
+  /**
+* Infer resultType of [[Div]] expression.
+* The decimal type inference keeps consistent with
+* 
[[org.apache.flink.table.calcite.type.FlinkReturnTypes]].FLINK_QUOTIENT_NULLABLE
+*
+* @param div div Expression
+* @return result type
+*/
+  def infer(div: Div): TypeInformation[_] = {
+val decimalTypeInference = (
+  leftType: RelDataType,
+  rightType: RelDataType) 

[GitHub] [flink] JingsongLi commented on a change in pull request #9152: [FLINK-13314][table-planner-blink] Correct resultType of some PlannerExpression when operands contains DecimalTypeInfo or BigDe

2019-07-17 Thread GitBox
JingsongLi commented on a change in pull request #9152: 
[FLINK-13314][table-planner-blink] Correct resultType of some PlannerExpression 
when operands contains DecimalTypeInfo or BigDecimalTypeInfo in Blink planner
URL: https://github.com/apache/flink/pull/9152#discussion_r304722135
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/expressions/ReturnTypeInference.scala
 ##
 @@ -0,0 +1,216 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.flink.table.expressions
+
+import org.apache.flink.api.common.typeinfo.{BasicTypeInfo, TypeInformation}
+import org.apache.flink.table.api.TableException
+import org.apache.flink.table.calcite.{FlinkTypeFactory, FlinkTypeSystem}
+import org.apache.flink.table.types.TypeInfoLogicalTypeConverter.{fromLogicalTypeToTypeInfo, fromTypeInfoToLogicalType}
+import org.apache.flink.table.types.logical.{DecimalType, LogicalType}
+import org.apache.flink.table.typeutils.{BigDecimalTypeInfo, DecimalTypeInfo, TypeCoercion}
+
+import org.apache.calcite.rel.`type`.RelDataType
+import org.apache.calcite.sql.`type`.SqlTypeUtil
+
+import scala.collection.JavaConverters._
+
+object ReturnTypeInference {
+
+  private lazy val typeSystem = new FlinkTypeSystem
+  private lazy val typeFactory = new FlinkTypeFactory(typeSystem)
+
+  /**
+    * Infer resultType of [[Minus]] expression.
+    * The decimal type inference keeps consistent with Calcite
+    * [[org.apache.calcite.sql.type.ReturnTypes]].NULLABLE_SUM
+    *
+    * @param minus minus Expression
+    * @return result type
+    */
+  def infer(minus: Minus): TypeInformation[_] = inferSum(minus)
 
 Review comment:
   Can you modify `infer` to `inferMinus`, etc.?
   Because these classes are inherited, this can easily lead to overloading problems.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #9154: [FLINK-13304][table-runtime-blink] Fix implementation of getString and getBinary method in NestedRow and add tests for complex data formats

2019-07-17 Thread GitBox
flinkbot commented on issue #9154: [FLINK-13304][table-runtime-blink] Fix 
implementation of getString and getBinary method in NestedRow and add tests for 
complex data formats
URL: https://github.com/apache/flink/pull/9154#issuecomment-512650847
 
 
   ## CI report:
   
   * cb3568112c5f74d14b192fa49b78029937e94623 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/119572579)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zjuwangg commented on issue #9118: [FLINK-13206][sql client]replace `use database xxx` with `use xxx` in sql client parser

2019-07-17 Thread GitBox
zjuwangg commented on issue #9118: [FLINK-13206][sql client]replace `use 
database xxx` with `use xxx` in sql client parser
URL: https://github.com/apache/flink/pull/9118#issuecomment-512650255
 
 
   > can you rerun the tests. seems to have some failures
   
   It's caused by the python module; I just rebased onto master.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9150: [FLINK-13227][docs-zh] Translate "asyncio" page into Chinese

2019-07-17 Thread GitBox
flinkbot edited a comment on issue #9150: [FLINK-13227][docs-zh] Translate 
"asyncio" page into Chinese
URL: https://github.com/apache/flink/pull/9150#issuecomment-512293368
 
 
   ## CI report:
   
   * e5f1200fa10103c4997c7c4fc915cb4b071c70f1 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/119490342)
   * 656d5ca5ad1525f431e2a94b2895f0d52caed7b8 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/119572202)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9139: [FLINK-13304][table-runtime-blink] Fix implementation of getString and getBinary method in NestedRow and add tests for complex data formats

2019-07-17 Thread GitBox
flinkbot edited a comment on issue #9139: [FLINK-13304][table-runtime-blink] 
Fix implementation of getString and getBinary method in NestedRow and add tests 
for complex data formats
URL: https://github.com/apache/flink/pull/9139#issuecomment-512166499
 
 
   ## CI report:
   
   * 22609f1b6271176affec70926b89b4730451568a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119436577)
   * 7dbec40e5321c5ffcaae89c8716851a8a1248519 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/119572205)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #8471: [FLINK-12529][runtime] Release record-deserializer buffers timely to improve the efficiency of heap usage on taskmanager

2019-07-17 Thread GitBox
flinkbot edited a comment on issue #8471: [FLINK-12529][runtime] Release 
record-deserializer buffers timely to improve the efficiency of heap usage on 
taskmanager
URL: https://github.com/apache/flink/pull/8471#issuecomment-510870797
 
 
   ## CI report:
   
   * 3a6874ec1f1e40441b068868a16570e0b96f083c : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/119054587)
   * 9b083e9be3176b039595d1a2479bf2e14f6f83fd : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119409287)
   * 6e68c2bbdaf04f9ba8f622690d029eadfc772a1a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119441363)
   * 273affb19a0b757622ebd838fc9d9dffb97ccb4b : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/119571886)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #9154: [FLINK-13304][table-runtime-blink] Fix implementation of getString and getBinary method in NestedRow and add tests for complex data formats

2019-07-17 Thread GitBox
flinkbot commented on issue #9154: [FLINK-13304][table-runtime-blink] Fix 
implementation of getString and getBinary method in NestedRow and add tests for 
complex data formats
URL: https://github.com/apache/flink/pull/9154#issuecomment-512649707
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] TsReaper opened a new pull request #9154: [FLINK-13304][table-runtime-blink] Fix implementation of getString and getBinary method in NestedRow and add tests for complex data formats

2019-07-17 Thread GitBox
TsReaper opened a new pull request #9154: [FLINK-13304][table-runtime-blink] 
Fix implementation of getString and getBinary method in NestedRow and add tests 
for complex data formats
URL: https://github.com/apache/flink/pull/9154
 
 
   ## What is the purpose of the change
   
   The `getString` and `getBinary` methods are not implemented correctly in 
`NestedRow`. This PR fixes the implementation and adds test cases for complex 
data formats.
   
   ## Brief change log
   
- Fix implementation of `getString` and `getBinary` in `NestedRow`.
- Add tests for complex data formats.
   
   ## Verifying this change
   
   This change added tests and can be verified as follows: run the newly added 
`ComplexTest`.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? not applicable


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #8471: [FLINK-12529][runtime] Release record-deserializer buffers timely to improve the efficiency of heap usage on taskmanager

2019-07-17 Thread GitBox
flinkbot edited a comment on issue #8471: [FLINK-12529][runtime] Release 
record-deserializer buffers timely to improve the efficiency of heap usage on 
taskmanager
URL: https://github.com/apache/flink/pull/8471#issuecomment-510870797
 
 
   ## CI report:
   
   * 3a6874ec1f1e40441b068868a16570e0b96f083c : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/119054587)
   * 9b083e9be3176b039595d1a2479bf2e14f6f83fd : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119409287)
   * 6e68c2bbdaf04f9ba8f622690d029eadfc772a1a : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/119441363)
   * 273affb19a0b757622ebd838fc9d9dffb97ccb4b : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/119571886)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-13242) StandaloneResourceManagerTest fails on travis

2019-07-17 Thread Haibo Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887573#comment-16887573
 ] 

Haibo Sun commented on FLINK-13242:
---

Another instance: [https://api.travis-ci.com/v3/job/216807085/log.txt]

> StandaloneResourceManagerTest fails on travis
> -
>
> Key: FLINK-13242
> URL: https://issues.apache.org/jira/browse/FLINK-13242
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.9.0
>Reporter: Chesnay Schepler
>Assignee: Andrey Zagrebin
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.9.0, 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> https://travis-ci.org/apache/flink/jobs/557696989
> {code}
> 08:28:06.475 [ERROR] 
> testStartupPeriod(org.apache.flink.runtime.resourcemanager.StandaloneResourceManagerTest)
>   Time elapsed: 10.276 s  <<< FAILURE!
> java.lang.AssertionError: condition was not fulfilled before the deadline
>   at 
> org.apache.flink.runtime.resourcemanager.StandaloneResourceManagerTest.assertHappensUntil(StandaloneResourceManagerTest.java:114)
>   at 
> org.apache.flink.runtime.resourcemanager.StandaloneResourceManagerTest.testStartupPeriod(StandaloneResourceManagerTest.java:60)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] JingsongLi commented on a change in pull request #9139: [FLINK-13304][table-runtime-blink] Fix implementation of getString and getBinary method in NestedRow and add tests for complex

2019-07-17 Thread GitBox
JingsongLi commented on a change in pull request #9139: 
[FLINK-13304][table-runtime-blink] Fix implementation of getString and 
getBinary method in NestedRow and add tests for complex data formats
URL: https://github.com/apache/flink/pull/9139#discussion_r304717224
 
 

 ##
 File path: 
flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/dataformat/NestedRow.java
 ##
 @@ -25,8 +25,12 @@
 import static org.apache.flink.util.Preconditions.checkArgument;
 
 /**
- * Its memory storage structure and {@link BinaryRow} exactly the same, the 
only different is it supports
- * all bytes in variable MemorySegments.
+ * Its memory storage structure is exactly the same with {@link BinaryRow}.
+ * The only difference is that, as {@link NestedRow} is used
+ * to store row value in the variable-length part of {@link BinaryRow},
+ * every field (including both fixed-length part and variable-length part) of 
{@link NestedRow}
+ * has a possibility to cross the boundary of a segment, while the 
fixed-length part of {@link BinaryRow}
+ * must fir into its first memory segment.
 
 Review comment:
   fir -> fit?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
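
For context on why the fix under review is needed: a NestedRow lives in the
variable-length region of its parent BinaryRow, so any of its fields, including the
fixed-length part, may start near the end of one MemorySegment and finish in the
next; getString and getBinary must therefore read across segment boundaries. A
hedged sketch of such a multi-segment copy over plain byte arrays (copyAcrossSegments
is illustrative only, not Flink API):

object SegmentCopySketch {
  // Copy `len` bytes starting at global `offset` out of fixed-size segments.
  def copyAcrossSegments(segments: Array[Array[Byte]], segSize: Int,
                         offset: Int, len: Int): Array[Byte] = {
    val out = new Array[Byte](len)
    var copied = 0
    while (copied < len) {
      val pos = offset + copied
      val segIndex = pos / segSize       // which segment holds the next byte
      val segOffset = pos % segSize      // position inside that segment
      val chunk = math.min(len - copied, segSize - segOffset)
      System.arraycopy(segments(segIndex), segOffset, out, copied, chunk)
      copied += chunk
    }
    out
  }
}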


[GitHub] [flink] flinkbot edited a comment on issue #8621: [FLINK-12682][connectors] StringWriter support custom row delimiter

2019-07-17 Thread GitBox
flinkbot edited a comment on issue #8621: [FLINK-12682][connectors] 
StringWriter support custom row delimiter
URL: https://github.com/apache/flink/pull/8621#issuecomment-511183092
 
 
   ## CI report:
   
   * e21d12fa2c9b6305d90502ae05a9d574ce712fd1 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119433787)
   * 7c87f4e61e2b6afce95f2c883b901e8dd9cfa0aa : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119441352)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #9153: [FLINK-13315][api-java] Port wmstrategies to api-java-bridge

2019-07-17 Thread GitBox
flinkbot commented on issue #9153: [FLINK-13315][api-java] Port wmstrategies to 
api-java-bridge
URL: https://github.com/apache/flink/pull/9153#issuecomment-512644370
 
 
   ## CI report:
   
   * a520fe727f09dbf4f2d0a9f8cf91506ef90e4e3d : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/119570552)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-13284) Correct some builtin functions' return type inference in Blink planner

2019-07-17 Thread Jingsong Lee (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887560#comment-16887560
 ] 

Jingsong Lee commented on FLINK-13284:
--

Maybe the Jira title should be "Correct some builtin time functions' return type 
inference in Blink planner"?

> Correct some builtin functions' return type inference in Blink planner
> --
>
> Key: FLINK-13284
> URL: https://issues.apache.org/jira/browse/FLINK-13284
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.9.0
>Reporter: lincoln lee
>Assignee: lincoln lee
>Priority: Major
> Fix For: 1.9.0
>
>
> Several builtin functions in Blink planner such as 
> {code:java}
> DATE_FORMAT{code}
> declares 
> {code:java}
> ReturnTypes.cascade(ReturnTypes.explicit(SqlTypeName.VARCHAR), 
> SqlTypeTransforms.TO_NULLABLE){code}
> which should be 
> {code:java}
> ReturnTypes.cascade(ReturnTypes.explicit(SqlTypeName.VARCHAR), 
> SqlTypeTransforms.FORCE_NULLABLE){code}
> instead.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)
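
The distinction behind this issue is worth spelling out: TO_NULLABLE marks the
result nullable only when some operand is nullable, whereas DATE_FORMAT can return
NULL at runtime even for non-null arguments (for example, an unparsable pattern),
which is what FORCE_NULLABLE declares. A small sketch of the two declarations via
Calcite's API, written in Scala for illustration (the surrounding function
definition is omitted):

import org.apache.calcite.sql.`type`.{ReturnTypes, SqlTypeName, SqlTypeTransforms}

object NullabilitySketch {
  // Nullable only if an operand is nullable -- too weak for DATE_FORMAT:
  val toNullable = ReturnTypes.cascade(
    ReturnTypes.explicit(SqlTypeName.VARCHAR), SqlTypeTransforms.TO_NULLABLE)

  // Always nullable, regardless of operand nullability -- the proposed fix:
  val forceNullable = ReturnTypes.cascade(
    ReturnTypes.explicit(SqlTypeName.VARCHAR), SqlTypeTransforms.FORCE_NULLABLE)
}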


[GitHub] [flink] flinkbot edited a comment on issue #8756: [FLINK-12406] [Runtime] Report BLOCKING_PERSISTENT result partition meta back to client

2019-07-17 Thread GitBox
flinkbot edited a comment on issue #8756: [FLINK-12406] [Runtime] Report 
BLOCKING_PERSISTENT result partition meta back to client
URL: https://github.com/apache/flink/pull/8756#issuecomment-511260167
 
 
   ## CI report:
   
   * e290f6b65758fc7d199277e4345a75335de981b2 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119409271)
   * b524e182e50281ce878f2b855b64a012337b39e1 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119441331)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9129: [FLINK-13287][table-api] Port ExistingField to api-java and use new Expression in FieldComputer

2019-07-17 Thread GitBox
flinkbot edited a comment on issue #9129: [FLINK-13287][table-api] Port 
ExistingField to api-java and use new Expression in FieldComputer
URL: https://github.com/apache/flink/pull/9129#issuecomment-511810897
 
 
   ## CI report:
   
   * c3a620e6d2abbe5faac85b2ebcdb5340d06f8e8c : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119414043)
   * 56811a683c5905dba453933022e39cf61773f1b2 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119441086)
   * b83051e20c73fb600a548478c5ceb794cb73aafc : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/119466658)
   * e6eaad2f1a5fa1d04a192e20570c17519fe0f05c : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/119570139)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #9153: [FLINK-13315][api-java] Port wmstrategies to api-java-bridge

2019-07-17 Thread GitBox
flinkbot commented on issue #9153: [FLINK-13315][api-java] Port wmstrategies to 
api-java-bridge
URL: https://github.com/apache/flink/pull/9153#issuecomment-512642559
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-13315) Port wmstrategies to api-java-bridge

2019-07-17 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-13315:
---
Labels: pull-request-available  (was: )

> Port wmstrategies to api-java-bridge
> 
>
> Key: FLINK-13315
> URL: https://issues.apache.org/jira/browse/FLINK-13315
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0, 1.10.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] JingsongLi opened a new pull request #9153: [FLINK-13315][api-java] Port wmstrategies to api-java-bridge

2019-07-17 Thread GitBox
JingsongLi opened a new pull request #9153: [FLINK-13315][api-java] Port 
wmstrategies to api-java-bridge
URL: https://github.com/apache/flink/pull/9153
 
 
   
   ## What is the purpose of the change
   
   Port wmstrategies to api-java-bridge to keep connectors dependency-free and to 
avoid class conflicts between flink-planner and blink-planner.
   
   ## Verifying this change
   
   This change is already covered by existing tests.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] YngwieWang commented on a change in pull request #9150: [FLINK-13227][docs-zh] Translate "asyncio" page into Chinese

2019-07-17 Thread GitBox
YngwieWang commented on a change in pull request #9150: [FLINK-13227][docs-zh] 
Translate "asyncio" page into Chinese
URL: https://github.com/apache/flink/pull/9150#discussion_r304714405
 
 

 ##
 File path: docs/dev/stream/operators/asyncio.zh.md
 ##
 @@ -140,130 +124,114 @@ DataStream<Tuple2<String, String>> resultStream =
 
 {% highlight scala %}
 /**
- * An implementation of the 'AsyncFunction' that sends requests and sets the 
callback.
+ * 实现 'AsyncFunction' 用于发送请求和设置回调。
  */
 class AsyncDatabaseRequest extends AsyncFunction[String, (String, String)] {
 
-/** The database specific client that can issue concurrent requests with 
callbacks */
+/** 使用回调函数来并发发送请求的数据库客户端 */
 lazy val client: DatabaseClient = new DatabaseClient(host, post, 
credentials)
 
-/** The context used for the future callbacks */
+/** 用于 future 回调的上下文环境 */
 implicit lazy val executor: ExecutionContext = 
ExecutionContext.fromExecutor(Executors.directExecutor())
 
 
 override def asyncInvoke(str: String, resultFuture: ResultFuture[(String, 
String)]): Unit = {
 
-// issue the asynchronous request, receive a future for the result
+// 发送异步请求,接收 future 结果
 val resultFutureRequested: Future[String] = client.query(str)
 
-// set the callback to be executed once the request by the client is 
complete
-// the callback simply forwards the result to the result future
+// 设置客户端完成请求后要执行的回调函数
+// 回调函数只是简单地把结果发给 future
 resultFutureRequested.onSuccess {
 case result: String => resultFuture.complete(Iterable((str, 
result)))
 }
 }
 }
 
-// create the original stream
+// 创建初始 DataStream
 val stream: DataStream[String] = ...
 
-// apply the async I/O transformation
+// 应用异步 I/O 转换操作
 val resultStream: DataStream[(String, String)] =
 AsyncDataStream.unorderedWait(stream, new AsyncDatabaseRequest(), 1000, 
TimeUnit.MILLISECONDS, 100)
 
 {% endhighlight %}
 
 
 
-**Important note**: The `ResultFuture` is completed with the first call of 
`ResultFuture.complete`.
-All subsequent `complete` calls will be ignored.
+**重要提示**: 第一次调用 `ResultFuture.complete` 后 `ResultFuture` 就完成了。
+后续的 `complete` 调用都将被忽略。
 
-The following two parameters control the asynchronous operations:
+下面两个参数控制异步操作:
 
-  - **Timeout**: The timeout defines how long an asynchronous request may take 
before it is considered failed. This parameter
-guards against dead/failed requests.
+  - **Timeout**: 超时参数定义了异步请求发出多久后未得到响应即被认定为失败。 它可以防止一直等待得不到响应的请求。
 
-  - **Capacity**: This parameter defines how many asynchronous requests may be 
in progress at the same time.
-Even though the async I/O approach leads typically to much better 
throughput, the operator can still be the bottleneck in
-the streaming application. Limiting the number of concurrent requests 
ensures that the operator will not
-accumulate an ever-growing backlog of pending requests, but that it will 
trigger backpressure once the capacity
-is exhausted.
+  - **Capacity**: 容量参数定义了可以同时进行的异步请求数。
+即使异步 I/O 通常带来更高的吞吐量, 执行异步 I/O  操作的算子仍然可能成为流处理的瓶颈。 
限制并发请求的数量可以确保算子不会持续累积待处理的请求进而造成积压,而是在容量耗尽时触发反压。
 
 
-### Timeout Handling
+### 超时处理
 
-When an async I/O request times out, by default an exception is thrown and job 
is restarted.
-If you want to handle timeouts, you can override the `AsyncFunction#timeout` 
method.
+当异步 I/O 请求超时的时候,默认会抛出异常并重启作业。
+如果你想处理超时,可以覆写 `AsyncFunction#timeout` 方法。
 
 Review comment:
   I checked the definitions: in the source text, "override" is 重写 while "overload" is 重载, so I'll change this to 重写 here.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
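
To make the timeout-handling passage in the diff above concrete: overriding
AsyncFunction#timeout lets a job drop or default a record instead of restarting. A
minimal sketch using Flink's streaming Scala API (the request logic itself is
elided; TolerantRequest is a made-up name):

import org.apache.flink.streaming.api.scala.async.{AsyncFunction, ResultFuture}

class TolerantRequest extends AsyncFunction[String, (String, String)] {

  override def asyncInvoke(input: String,
                           resultFuture: ResultFuture[(String, String)]): Unit = {
    // ... issue the asynchronous request as in the documented example ...
  }

  override def timeout(input: String,
                       resultFuture: ResultFuture[(String, String)]): Unit = {
    // Swallow the timeout: emit nothing for this record instead of failing the job.
    resultFuture.complete(Iterable.empty)
  }
}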


[GitHub] [flink] lirui-apache commented on a change in pull request #9149: [FLINK-13303][hive] Add e2e test for hive connector

2019-07-17 Thread GitBox
lirui-apache commented on a change in pull request #9149: [FLINK-13303][hive] 
Add e2e test for hive connector
URL: https://github.com/apache/flink/pull/9149#discussion_r304714039
 
 

 ##
 File path: flink-connectors/flink-connector-hive/pom.xml
 ##
 @@ -241,7 +241,6 @@ under the License.
 		<dependency>
 			<groupId>org.apache.hive</groupId>
 			<artifactId>hive-exec</artifactId>
 			<version>${hive.version}</version>
-			<scope>provided</scope>
 		</dependency>
 
 Review comment:
   It makes hive-exec a transitive dependency. It won't be packaged into the hive 
connector because we no longer build a shaded jar.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] lirui-apache commented on a change in pull request #9149: [FLINK-13303][hive] Add e2e test for hive connector

2019-07-17 Thread GitBox
lirui-apache commented on a change in pull request #9149: [FLINK-13303][hive] 
Add e2e test for hive connector
URL: https://github.com/apache/flink/pull/9149#discussion_r304713622
 
 

 ##
 File path: 
flink-table/flink-sql-client/src/test/java/org/apache/flink/table/client/gateway/utils/LocalExecutorTestUtil.java
 ##
 @@ -0,0 +1,58 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.client.gateway.utils;
+
+import org.apache.flink.table.client.gateway.Executor;
+import org.apache.flink.table.client.gateway.SessionContext;
+import org.apache.flink.table.client.gateway.TypedResult;
+import org.apache.flink.types.Row;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.stream.IntStream;
+
+/**
+ * Test util for LocalExecutor.
+ */
+public class LocalExecutorTestUtil {
+
+	public static List<Row> retrieveTableResult(
+			Executor executor,
+			SessionContext session,
+			String resultID) throws InterruptedException {
+
+		final List<Row> actualResults = new ArrayList<>();
+		while (true) {
+			Thread.sleep(50); // slow the processing down
+			final TypedResult<Integer> result = executor.snapshotResult(session, resultID, 2);
 
 Review comment:
   It's copied from LocalExecutorITCase. I think any small number will do.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
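
For readers unfamiliar with the snapshot/page protocol this thread refers to: the
executor is polled until it reports a payload (a number of retrievable pages) or
end-of-stream, and each payload snapshot replaces the previously fetched pages. A
hedged Scala sketch of that loop with the executor abstracted behind two function
parameters (snapshot and fetchPage are assumptions, not the real SQL client API):

import org.apache.flink.types.Row
import scala.collection.mutable.ListBuffer

object PollingSketch {
  def pollResult(snapshot: () => (String, Int),   // ("PAYLOAD" | "EOS" | "EMPTY", pages)
                 fetchPage: Int => Seq[Row]): Seq[Row] = {
    val rows = ListBuffer.empty[Row]
    var done = false
    while (!done) {
      Thread.sleep(50)                            // slow the processing down
      snapshot() match {
        case ("PAYLOAD", pages) =>
          rows.clear()                            // a snapshot replaces earlier pages
          (1 to pages).foreach(p => rows ++= fetchPage(p))
        case ("EOS", _) => done = true            // end of stream
        case _          =>                        // EMPTY: keep polling
      }
    }
    rows.toList
  }
}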


[GitHub] [flink] lirui-apache commented on a change in pull request #9149: [FLINK-13303][hive] Add e2e test for hive connector

2019-07-17 Thread GitBox
lirui-apache commented on a change in pull request #9149: [FLINK-13303][hive] 
Add e2e test for hive connector
URL: https://github.com/apache/flink/pull/9149#discussion_r304713189
 
 

 ##
 File path: 
flink-table/flink-sql-client/src/test/java/org/apache/flink/table/client/gateway/utils/LocalExecutorTestUtil.java
 ##
 @@ -0,0 +1,58 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.client.gateway.utils;
+
+import org.apache.flink.table.client.gateway.Executor;
+import org.apache.flink.table.client.gateway.SessionContext;
+import org.apache.flink.table.client.gateway.TypedResult;
+import org.apache.flink.types.Row;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.stream.IntStream;
+
+/**
+ * Test util for LocalExecutor.
+ */
+public class LocalExecutorTestUtil {
 
 Review comment:
   Indeed. Will remove it


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] lirui-apache commented on a change in pull request #9149: [FLINK-13303][hive] Add e2e test for hive connector

2019-07-17 Thread GitBox
lirui-apache commented on a change in pull request #9149: [FLINK-13303][hive] 
Add e2e test for hive connector
URL: https://github.com/apache/flink/pull/9149#discussion_r304712989
 
 

 ##
 File path: flink-end-to-end-tests/flink-connector-hive-test/pom.xml
 ##
 @@ -0,0 +1,231 @@
+
+
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+		xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+		xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
+
+	<modelVersion>4.0.0</modelVersion>
+
+	<parent>
+		<groupId>org.apache.flink</groupId>
+		<artifactId>flink-end-to-end-tests</artifactId>
+		<version>1.10-SNAPSHOT</version>
+		<relativePath>..</relativePath>
+	</parent>
+
+	<artifactId>flink-connector-hive-test</artifactId>
+	<name>flink-connector-hive-test</name>
+	<packaging>jar</packaging>
+
+	<dependencies>
+
+		<dependency>
+			<groupId>org.apache.flink</groupId>
+			<artifactId>flink-connector-hive_${scala.binary.version}</artifactId>
+			<version>${project.version}</version>
+		</dependency>
+
+		<dependency>
+			<groupId>org.apache.flink</groupId>
+			<artifactId>flink-sql-client_${scala.binary.version}</artifactId>
+			<version>${project.version}</version>
+		</dependency>
+
+		<dependency>
+			<groupId>org.apache.flink</groupId>
+			<artifactId>flink-connector-hive_${scala.binary.version}</artifactId>
+			<version>${project.version}</version>
+			<type>test-jar</type>
+			<scope>test</scope>
+		</dependency>
+
+		<dependency>
+			<groupId>org.apache.hive</groupId>
+			<artifactId>hive-contrib</artifactId>
+			<version>${hive.version}</version>
+			<scope>test</scope>
+		</dependency>
+
+		<dependency>
+			<groupId>org.apache.flink</groupId>
+			<artifactId>flink-sql-client_${scala.binary.version}</artifactId>
+			<version>${project.version}</version>
+			<type>test-jar</type>
+			<scope>test</scope>
+		</dependency>
+
+		<dependency>
+			<groupId>org.apache.hive.hcatalog</groupId>
+			<artifactId>hive-hcatalog-core</artifactId>
+			<version>${hive.version}</version>
+			<scope>test</scope>
+			<exclusions>
+				<exclusion>
+					<groupId>com.google.guava</groupId>
+					<artifactId>guava</artifactId>
+				</exclusion>
+			</exclusions>
+		</dependency>
+
+		<dependency>
+			<groupId>org.apache.hive.hcatalog</groupId>
+			<artifactId>hive-hcatalog-core</artifactId>
 
 Review comment:
   No, this is the test-jar of hive-hcatalog-core


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #9152: [FLINK-13314][table-planner-blink] Correct resultType of some PlannerExpression when operands contains DecimalTypeInfo or BigDecimalTypeInfo in Blin

2019-07-17 Thread GitBox
flinkbot commented on issue #9152: [FLINK-13314][table-planner-blink] Correct 
resultType of some PlannerExpression when operands contains DecimalTypeInfo or 
BigDecimalTypeInfo in Blink planner
URL: https://github.com/apache/flink/pull/9152#issuecomment-512640394
 
 
   ## CI report:
   
   * 7187e4872abf0c514523a00cc9525d82f80bec6e : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/119569472)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-13315) Port wmstrategies to api-java-bridge

2019-07-17 Thread Jingsong Lee (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee updated FLINK-13315:
-
Summary: Port wmstrategies to api-java-bridge  (was: Port wmstrategies to 
api-java-java-bridge)

> Port wmstrategies to api-java-bridge
> 
>
> Key: FLINK-13315
> URL: https://issues.apache.org/jira/browse/FLINK-13315
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Jingsong Lee
>Priority: Major
> Fix For: 1.9.0, 1.10.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (FLINK-13315) Port wmstrategies to api-java-java-bridge

2019-07-17 Thread Jingsong Lee (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee updated FLINK-13315:
-
Summary: Port wmstrategies to api-java-java-bridge  (was: Port wmstrategies 
to api-java)

> Port wmstrategies to api-java-java-bridge
> -
>
> Key: FLINK-13315
> URL: https://issues.apache.org/jira/browse/FLINK-13315
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Jingsong Lee
>Priority: Major
> Fix For: 1.9.0, 1.10.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] YngwieWang commented on a change in pull request #9150: [FLINK-13227][docs-zh] Translate "asyncio" page into Chinese

2019-07-17 Thread GitBox
YngwieWang commented on a change in pull request #9150: [FLINK-13227][docs-zh] 
Translate "asyncio" page into Chinese
URL: https://github.com/apache/flink/pull/9150#discussion_r304712136
 
 

 ##
 File path: docs/dev/stream/operators/asyncio.zh.md
 ##
 @@ -26,71 +26,55 @@ under the License.
 * ToC
 {:toc}
 
-This page explains the use of Flink's API for asynchronous I/O with external 
data stores.
-For users not familiar with asynchronous or event-driven programming, an 
article about Futures and
-event-driven programming may be useful preparation.
+本文讲解 Flink 用于访问外部数据存储的异步 I/O API。
+对于不熟悉异步或者事件驱动编程的用户,建议先储备一些关于 Future 和事件驱动编程的知识。
 
-Note: Details about the design and implementation of the asynchronous I/O 
utility can be found in the proposal and design document
-[FLIP-12: Asynchronous I/O Design and 
Implementation](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65870673).
+提示:这篇文档 [FLIP-12: 异步 I/O  
的设计和实现](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65870673)
 介绍了关于设计和实现异步 I/O 功能的细节。
 
+## 对于异步 I/O 操作的需求
 
-## The need for Asynchronous I/O Operations
+在与外部系统交互(用数据库中的数据扩充流数据)的时候,需要考虑与外部系统的通信延迟对整个流处理应用的影响。
 
-When interacting with external systems (for example when enriching stream 
events with data stored in a database), one needs to take care
-that communication delay with the external system does not dominate the 
streaming application's total work.
+简单地访问外部数据库的数据,比如使用 `MapFunction`,通常意味着**同步**交互:
+`MapFunction` 向数据库发送一个请求然后一直等待,直到收到响应。在许多情况下,等待占据了函数运行的大部分时间。
 
-Naively accessing data in the external database, for example in a 
`MapFunction`, typically means **synchronous** interaction:
-A request is sent to the database and the `MapFunction` waits until the 
response has been received. In many cases, this waiting
-makes up the vast majority of the function's time.
-
-Asynchronous interaction with the database means that a single parallel 
function instance can handle many requests concurrently and
-receive the responses concurrently. That way, the waiting time can be 
overlayed with sending other requests and
-receiving responses. At the very least, the waiting time is amortized over 
multiple requests. This leads in most cased to much higher
-streaming throughput.
+与数据库异步交互是指一个并行函数实例可以并发地处理多个请求和接收多个响应。这样,函数在等待的时间可以发送其他请求和接收其他响应。至少等待的时间可以被多个请求摊分。大多数情况下,异步交互可以大幅度提高流处理的吞吐量。
 
 
 
-*Note:* Improving throughput by just scaling the `MapFunction` to a very high 
parallelism is in some cases possible as well, but usually
-comes at a very high resource cost: Having many more parallel MapFunction 
instances means more tasks, threads, Flink-internal network
-connections, network connections to the database, buffers, and general 
internal bookkeeping overhead.
+*注意:*仅仅提高 `MapFunction` 
的并行度(parallelism)在有些情况下也可以提升吞吐量,但是这样做通常会导致非常高的资源消耗:更多的并行 `MapFunction` 实例意味着更多的 
Task、更多的线程、更多的 Flink 内部网络连接、 更多的与数据库的网络连接、更多的缓冲和更多程序内部协调的开销。
 
 
-## Prerequisites
+## 先决条件
 
-As illustrated in the section above, implementing proper asynchronous I/O to a 
database (or key/value store) requires a client
-to that database that supports asynchronous requests. Many popular databases 
offer such a client.
+如上节所述,正确地实现数据库(或键/值存储)的异步 I/O 交互需要支持异步请求的数据库客户端。许多主流数据库都提供了这样的客户端。
 
-In the absence of such a client, one can try and turn a synchronous client 
into a limited concurrent client by creating
-multiple clients and handling the synchronous calls with a thread pool. 
However, this approach is usually less
-efficient than a proper asynchronous client.
+如果没有这样的客户端,可以通过创建多个客户端并使用线程池处理同步调用的方法,将同步客户端转换为有限并发的客户端。然而,这种方法通常比正规的异步客户端效率低。
 
 
-## Async I/O API
+## 异步 I/O API
 
-Flink's Async I/O API allows users to use asynchronous request clients with 
data streams. The API handles the integration with
-data streams, well as handling order, event time, fault tolerance, etc.
+Flink 的异步 I/O API 允许用户在流处理中使用异步请求客户端。API 处理与数据流的集成、顺序、事件时间和容错等。
 
-Assuming one has an asynchronous client for the target database, three parts 
are needed to implement a stream transformation
-with asynchronous I/O against the database:
+在具备异步数据库客户端的基础上,实现数据流转换操作与数据库的异步 I/O 交互需要以下三部分:
 
-  - An implementation of `AsyncFunction` that dispatches the requests
-  - A *callback* that takes the result of the operation and hands it to the 
`ResultFuture`
-  - Applying the async I/O operation on a DataStream as a transformation
+- 实现分发请求的 `AsyncFunction`
+- 获取数据库交互的结果并发送给 `ResultFuture` 的 *回调* 函数
+- 将异步 I/O 操作应用于 `DataStream` 作为 `DataStream` 的一次转换操作。
 
-The following code example illustrates the basic pattern:
+下面是基本的代码模板:
 
 
 
 {% highlight java %}
-// This example implements the asynchronous request and callback with Futures 
that have the
-// interface of Java 8's futures (which is the same one followed by Flink's 
Future)
+// 这个例子使用 Java 8 的 Future 接口(与 Flink 的 Future 相同)实现了异步请求和回调。
 
 /**
- * An implementation of the 'AsyncFunction' that sends requests and sets the 
callback.
+ * 实现 'AsyncFunction' 用于发送请求和设置回调。
  */
 

[GitHub] [flink] flinkbot commented on issue #9152: [FLINK-13314][table-planner-blink] Correct resultType of some PlannerExpression when operands contains DecimalTypeInfo or BigDecimalTypeInfo in Blin

2019-07-17 Thread GitBox
flinkbot commented on issue #9152: [FLINK-13314][table-planner-blink] Correct 
resultType of some PlannerExpression when operands contains DecimalTypeInfo or 
BigDecimalTypeInfo in Blink planner
URL: https://github.com/apache/flink/pull/9152#issuecomment-512638804
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] beyond1920 commented on issue #9152: [FLINK-13314][table-planner-blink] Correct resultType of some PlannerExpression when operands contains DecimalTypeInfo or BigDecimalTypeInfo in Bl

2019-07-17 Thread GitBox
beyond1920 commented on issue #9152: [FLINK-13314][table-planner-blink] Correct 
resultType of some PlannerExpression when operands contains DecimalTypeInfo or 
BigDecimalTypeInfo in Blink planner
URL: https://github.com/apache/flink/pull/9152#issuecomment-512638693
 
 
   @JingsongLi @wuchong Please take a look at the PR when you have time. Thanks a lot.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-13314) Correct resultType of some PlannerExpression when operands contains DecimalTypeInfo or BigDecimalTypeInfo in Blink planner

2019-07-17 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-13314:
---
Labels: pull-request-available  (was: )

> Correct resultType of some PlannerExpression when operands contains 
> DecimalTypeInfo or BigDecimalTypeInfo in Blink planner
> --
>
> Key: FLINK-13314
> URL: https://issues.apache.org/jira/browse/FLINK-13314
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: Jing Zhang
>Assignee: Jing Zhang
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.10.0, 1.9
>
>
> Correct the resultType of the following PlannerExpressions when the operands contain 
> DecimalTypeInfo or BigDecimalTypeInfo in the Blink planner:
> Minus/Plus/Div/Mul/Ceil/Floor/Round
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] YngwieWang commented on a change in pull request #9150: [FLINK-13227][docs-zh] Translate "asyncio" page into Chinese

2019-07-17 Thread GitBox
YngwieWang commented on a change in pull request #9150: [FLINK-13227][docs-zh] 
Translate "asyncio" page into Chinese
URL: https://github.com/apache/flink/pull/9150#discussion_r304711786
 
 

 ##
 File path: docs/dev/stream/operators/asyncio.zh.md
 ##
 @@ -26,71 +26,55 @@ under the License.
 * ToC
 {:toc}
 
-This page explains the use of Flink's API for asynchronous I/O with external 
data stores.
-For users not familiar with asynchronous or event-driven programming, an 
article about Futures and
-event-driven programming may be useful preparation.
+本文讲解 Flink 用于访问外部数据存储的异步 I/O API。
+对于不熟悉异步或者事件驱动编程的用户,建议先储备一些关于 Future 和事件驱动编程的知识。
 
-Note: Details about the design and implementation of the asynchronous I/O 
utility can be found in the proposal and design document
-[FLIP-12: Asynchronous I/O Design and 
Implementation](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65870673).
+提示:这篇文档 [FLIP-12: 异步 I/O  
的设计和实现](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65870673)
 介绍了关于设计和实现异步 I/O 功能的细节。
 
+## 对于异步 I/O 操作的需求
 
-## The need for Asynchronous I/O Operations
+在与外部系统交互(用数据库中的数据扩充流数据)的时候,需要考虑与外部系统的通信延迟对整个流处理应用的影响。
 
-When interacting with external systems (for example when enriching stream 
events with data stored in a database), one needs to take care
-that communication delay with the external system does not dominate the 
streaming application's total work.
+简单地访问外部数据库的数据,比如使用 `MapFunction`,通常意味着**同步**交互:
+`MapFunction` 向数据库发送一个请求然后一直等待,直到收到响应。在许多情况下,等待占据了函数运行的大部分时间。
 
-Naively accessing data in the external database, for example in a 
`MapFunction`, typically means **synchronous** interaction:
-A request is sent to the database and the `MapFunction` waits until the 
response has been received. In many cases, this waiting
-makes up the vast majority of the function's time.
-
-Asynchronous interaction with the database means that a single parallel 
function instance can handle many requests concurrently and
-receive the responses concurrently. That way, the waiting time can be 
overlayed with sending other requests and
-receiving responses. At the very least, the waiting time is amortized over 
multiple requests. This leads in most cased to much higher
-streaming throughput.
+与数据库异步交互是指一个并行函数实例可以并发地处理多个请求和接收多个响应。这样,函数在等待的时间可以发送其他请求和接收其他响应。至少等待的时间可以被多个请求摊分。大多数情况下,异步交互可以大幅度提高流处理的吞吐量。
 
 
 
-*Note:* Improving throughput by just scaling the `MapFunction` to a very high 
parallelism is in some cases possible as well, but usually
-comes at a very high resource cost: Having many more parallel MapFunction 
instances means more tasks, threads, Flink-internal network
-connections, network connections to the database, buffers, and general 
internal bookkeeping overhead.
+*注意:*仅仅提高 `MapFunction` 
的并行度(parallelism)在有些情况下也可以提升吞吐量,但是这样做通常会导致非常高的资源消耗:更多的并行 `MapFunction` 实例意味着更多的 
Task、更多的线程、更多的 Flink 内部网络连接、 更多的与数据库的网络连接、更多的缓冲和更多程序内部协调的开销。
 
 
-## Prerequisites
+## 先决条件
 
-As illustrated in the section above, implementing proper asynchronous I/O to a 
database (or key/value store) requires a client
-to that database that supports asynchronous requests. Many popular databases 
offer such a client.
+如上节所述,正确地实现数据库(或键/值存储)的异步 I/O 交互需要支持异步请求的数据库客户端。许多主流数据库都提供了这样的客户端。
 
-In the absence of such a client, one can try and turn a synchronous client 
into a limited concurrent client by creating
-multiple clients and handling the synchronous calls with a thread pool. 
However, this approach is usually less
-efficient than a proper asynchronous client.
+如果没有这样的客户端,可以通过创建多个客户端并使用线程池处理同步调用的方法,将同步客户端转换为有限并发的客户端。然而,这种方法通常比正规的异步客户端效率低。
 
 
-## Async I/O API
+## 异步 I/O API
 
-Flink's Async I/O API allows users to use asynchronous request clients with 
data streams. The API handles the integration with
-data streams, well as handling order, event time, fault tolerance, etc.
+Flink 的异步 I/O API 允许用户在流处理中使用异步请求客户端。API 处理与数据流的集成、顺序、事件时间和容错等。
 
 Review comment:
   My understanding is that `order`, `event time` and `fault tolerance` here mean work the API takes care of and guarantees; would it be appropriate to change this to "API 处理与数据流的集成、顺序、事件时间和容错等"?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] beyond1920 opened a new pull request #9152: [FLINK-13314][table-planner-blink] Correct resultType of some PlannerExpression when operands contains DecimalTypeInfo or BigDecimalTypeInf

2019-07-17 Thread GitBox
beyond1920 opened a new pull request #9152: [FLINK-13314][table-planner-blink] 
Correct resultType of some PlannerExpression when operands contains 
DecimalTypeInfo or BigDecimalTypeInfo in Blink planner
URL: https://github.com/apache/flink/pull/9152
 
 
   ## What is the purpose of the change
   
   Correct the resultType of some PlannerExpressions 
(`Minus`/`Plus`/`Div`/`Mul`/`Ceil`/`Floor`/`Round`, ...) when the operands contain 
DecimalTypeInfo or BigDecimalTypeInfo in the Blink planner.
   
   ## Brief change log
   
 - Fix a minor bug in RexNodeConverter when converting `between` and `notBetween` to RexNode.
 - Fix a minor bug in PlannerExpressionConverter when converting DataType to TypeInformation.
 - Correct the resultType of some PlannerExpressions 
(`Minus`/`Plus`/`Div`/`Mul`/`Ceil`/`Floor`/`Round`, ...) in the Blink planner when 
the operands contain DecimalTypeInfo or BigDecimalTypeInfo.
   
   
   ## Verifying this change
   
   ITCase
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
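
On the first change-log item above: converting `between`/`notBetween` to RexNode
amounts to desugaring the ternary call into a conjunction or disjunction of
comparisons before Calcite sees it. A toy sketch of that rewrite over a hypothetical
expression ADT (none of these types are Flink's):

object BetweenSketch {
  sealed trait Expr
  case class Field(name: String) extends Expr
  case class Lit(v: Any) extends Expr
  case class Cmp(op: String, l: Expr, r: Expr) extends Expr
  case class And(l: Expr, r: Expr) extends Expr
  case class Or(l: Expr, r: Expr) extends Expr

  // between(x, lo, hi)    ->  x >= lo AND x <= hi
  // notBetween(x, lo, hi) ->  x < lo  OR  x > hi
  def rewriteBetween(x: Expr, lo: Expr, hi: Expr, negated: Boolean): Expr =
    if (!negated) And(Cmp(">=", x, lo), Cmp("<=", x, hi))
    else Or(Cmp("<", x, lo), Cmp(">", x, hi))
}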


[GitHub] [flink] flinkbot edited a comment on issue #9105: [FLINK-13241][Yarn/Mesos] Fix Yarn/MesosResourceManager setting managed memory size into wrong configuration instance.

2019-07-17 Thread GitBox
flinkbot edited a comment on issue #9105: [FLINK-13241][Yarn/Mesos] Fix 
Yarn/MesosResourceManager setting managed memory size into wrong configuration 
instance.
URL: https://github.com/apache/flink/pull/9105#issuecomment-511499675
 
 
   ## CI report:
   
   * 7b6a81ed94056dd217f7feba9b155158fcfbfc4a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119404317)
   * 1e289e4de143a390eac564e38c6d5439c650f17a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119441215)
   * dceeac0b80e6695d534e55fbbe3e83aa4262dbc9 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/119568870)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Assigned] (FLINK-13314) Correct resultType of some PlannerExpression when operands contains DecimalTypeInfo or BigDecimalTypeInfo in Blink planner

2019-07-17 Thread Jark Wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-13314:
---

Assignee: Jing Zhang

> Correct resultType of some PlannerExpression when operands contains 
> DecimalTypeInfo or BigDecimalTypeInfo in Blink planner
> --
>
> Key: FLINK-13314
> URL: https://issues.apache.org/jira/browse/FLINK-13314
> Project: Flink
>  Issue Type: Task
>  Components: Table SQL / Planner
>Reporter: Jing Zhang
>Assignee: Jing Zhang
>Priority: Major
> Fix For: 1.10.0, 1.9
>
>
> Correct the resultType of the following PlannerExpressions when the operands contain 
> DecimalTypeInfo or BigDecimalTypeInfo in the Blink planner:
> Minus/Plus/Div/Mul/Ceil/Floor/Round
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

