[GitHub] [flink] dianfu commented on issue #8840: [FLINK-12931][python] Fix lint-python.sh cannot find flake8

2019-06-22 Thread GitBox
dianfu commented on issue #8840: [FLINK-12931][python] Fix lint-python.sh 
cannot find flake8
URL: https://github.com/apache/flink/pull/8840#issuecomment-504719457
 
 
   @bowenli86 Thanks a lot for reporting this issue. Could you check if this PR 
can solve the problems you encountered?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #8840: [FLINK-12931][python] Fix lint-python.sh cannot find flake8

2019-06-22 Thread GitBox
flinkbot commented on issue #8840: [FLINK-12931][python] Fix lint-python.sh 
cannot find flake8
URL: https://github.com/apache/flink/pull/8840#issuecomment-504719384
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review Guide](https://flink.apache.org/reviewing-prs.html) for a full explanation of the review process.
   The bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.
   
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[jira] [Updated] (FLINK-12931) lint-python.sh cannot find flake8

2019-06-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-12931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-12931:
---
Labels: pull-request-available  (was: )

> lint-python.sh cannot find flake8
> -
>
> Key: FLINK-12931
> URL: https://issues.apache.org/jira/browse/FLINK-12931
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.9.0
>Reporter: Bowen Li
>Assignee: Dian Fu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>
> Hi guys,
> I tried to run tests for flink-python with {{./dev/lint-python.sh}} by following the README, but it reported that it couldn't find flake8, with the following error:
> {code:java}
> ./dev/lint-python.sh: line 490: /.../flink/flink-python/dev/.conda/bin/flake8: No such file or directory
> {code}
> I've also tried {{./dev/lint-python.sh -f}}; it didn't work either.
> I suspect the reason may be that I already have anaconda3 installed and it somehow conflicts with the miniconda installed by flink-python. I'm not entirely sure about that.
> If that's the reason, I think we need to resolve the conflict, because anaconda is a pretty common package that developers install and use. We shouldn't require devs to uninstall their existing conda environment in order to develop flink-python and run its tests. It's better if flink-python can have a well-isolated environment on machines.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [flink] dianfu opened a new pull request #8840: [FLINK-12931][python] Fix lint-python.sh cannot find flake8

2019-06-22 Thread GitBox
dianfu opened a new pull request #8840: [FLINK-12931][python] Fix 
lint-python.sh cannot find flake8
URL: https://github.com/apache/flink/pull/8840
 
 
   ## What is the purpose of the change
   
   *lint-python.sh fails while installing flake8 when anaconda2/anaconda3 is already installed. We need to solve this issue because anaconda is a pretty common package that developers install and use.
   The reason for this issue is that `conda install flake8` tries to install flake8 into the conda home, which may be the conda installed by the user, not the miniconda installed by lint-python.sh.*
   
   
   ## Brief change log
   
 - *Updates lint-python.sh to specify the conda home explicitly when 
executing `conda remove` and `conda install`*
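
   The change above can be sketched roughly as follows. This is a hedged illustration, not the actual lint-python.sh code: the variable names and the `dev/.conda` path are assumptions based on the error message in the JIRA report. The key point is passing `--prefix` so the conda commands target the miniconda environment installed by the script rather than the ambient conda home:

```shell
# Sketch (assumed names/paths): pin conda operations to the script's own
# miniconda rather than whatever conda home is on the user's PATH.
CONDA_HOME="dev/.conda"   # miniconda installed by lint-python.sh (assumed path)

# Without --prefix, `conda install flake8` may land in a user's anaconda2/3.
install_cmd="conda install --prefix $CONDA_HOME --yes flake8"
remove_cmd="conda remove --prefix $CONDA_HOME --yes flake8"
echo "$install_cmd"
```

   Either way the user's existing anaconda installation is left untouched, which is what the JIRA report asks for.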
   
   ## Verifying this change
   
   This change can be verified as follows:
   
 - *Execute ./dev/lint-python.sh and verify that it succeeds.*
 - *Install anaconda2/anaconda3, start a new shell, and make sure the newly installed anaconda2/anaconda3 is used.*
 - *Execute ./dev/lint-python.sh -f; without the changes of this PR, it fails.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   




[jira] [Assigned] (FLINK-12931) lint-python.sh cannot find flake8

2019-06-22 Thread Dian Fu (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-12931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu reassigned FLINK-12931:
---

Assignee: Dian Fu  (was: sunjincheng)

> lint-python.sh cannot find flake8
> -
>
> Key: FLINK-12931
> URL: https://issues.apache.org/jira/browse/FLINK-12931
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.9.0
>Reporter: Bowen Li
>Assignee: Dian Fu
>Priority: Major
> Fix For: 1.9.0
>
>
> Hi guys,
> I tried to run tests for flink-python with {{./dev/lint-python.sh}} by following the README, but it reported that it couldn't find flake8, with the following error:
> {code:java}
> ./dev/lint-python.sh: line 490: /.../flink/flink-python/dev/.conda/bin/flake8: No such file or directory
> {code}
> I've also tried {{./dev/lint-python.sh -f}}; it didn't work either.
> I suspect the reason may be that I already have anaconda3 installed and it somehow conflicts with the miniconda installed by flink-python. I'm not entirely sure about that.
> If that's the reason, I think we need to resolve the conflict, because anaconda is a pretty common package that developers install and use. We shouldn't require devs to uninstall their existing conda environment in order to develop flink-python and run its tests. It's better if flink-python can have a well-isolated environment on machines.





[GitHub] [flink] bowenli86 closed pull request #8814: [FLINK-12917][hive] support complex type of array, map, struct for Hive functions

2019-06-22 Thread GitBox
bowenli86 closed pull request #8814: [FLINK-12917][hive] support complex type 
of array, map, struct for Hive functions
URL: https://github.com/apache/flink/pull/8814
 
 
   




[GitHub] [flink] bowenli86 commented on issue #8814: [FLINK-12917][hive] support complex type of array, map, struct for Hive functions

2019-06-22 Thread GitBox
bowenli86 commented on issue #8814: [FLINK-12917][hive] support complex type of 
array, map, struct for Hive functions
URL: https://github.com/apache/flink/pull/8814#issuecomment-504706102
 
 
   merged in 1.9.0: 9bdf6b64202d177f890e7fee1dec16ddd02864f7




[GitHub] [flink] bowenli86 commented on issue #8815: [FLINK-12918][table][hive] unify GenericCatalogTable, HiveCatalogTable and AbstractCatalogTable into CatalogTableImpl

2019-06-22 Thread GitBox
bowenli86 commented on issue #8815: [FLINK-12918][table][hive] unify 
GenericCatalogTable, HiveCatalogTable and AbstractCatalogTable into 
CatalogTableImpl
URL: https://github.com/apache/flink/pull/8815#issuecomment-504705358
 
 
   @xuefuz thanks for your review! I updated the PR




[GitHub] [flink] bowenli86 edited a comment on issue #8830: [FLINK-12933][sql client] support 'use catalog' and 'use database' in SQL CLI

2019-06-22 Thread GitBox
bowenli86 edited a comment on issue #8830: [FLINK-12933][sql client] support 
'use catalog' and 'use database' in SQL CLI
URL: https://github.com/apache/flink/pull/8830#issuecomment-504704488
 
 
   @KurtYoung Extremely simple commands like this can be on sql cli itself IMO. 
DDL/DML is better to go thru sql parser.
   
   To be clear, I'm not against migrating these simple commands to sql parser 
*later* if we want, though I don't think it's that necessary given the 
complexity. Right now it's about making e2e user experience complete for 
catalogs in sql cli 1.9, and sql parser is too uncertain at this point for me 
to rely on.




[GitHub] [flink] bowenli86 edited a comment on issue #8830: [FLINK-12933][sql client] support 'use catalog' and 'use database' in SQL CLI

2019-06-22 Thread GitBox
bowenli86 edited a comment on issue #8830: [FLINK-12933][sql client] support 
'use catalog' and 'use database' in SQL CLI
URL: https://github.com/apache/flink/pull/8830#issuecomment-504704488
 
 
   @KurtYoung Extremely simple commands like this can be on sql cli itself IMO. 
DDL/DML is better to go thru sql parser.
   
   To be clear, I'm not against migrating these simple commands to sql parser 
*later* when it is ready if we want. Right now it's about making end-to-end 
user experience complete for catalogs in sql cli in 1.9.




[GitHub] [flink] bowenli86 edited a comment on issue #8830: [FLINK-12933][sql client] support 'use catalog' and 'use database' in SQL CLI

2019-06-22 Thread GitBox
bowenli86 edited a comment on issue #8830: [FLINK-12933][sql client] support 
'use catalog' and 'use database' in SQL CLI
URL: https://github.com/apache/flink/pull/8830#issuecomment-504704488
 
 
   @KurtYoung Extremely simple commands like this can be on sql cli itself IMO. 
DDL/DML is better to go thru sql parser.
   
   To be clear, I'm not against migrating these simple commands to sql parser 
*later* when it is ready if we want. Right now it's about making end-to-end 
user experience complete for catalogs in sql cli in 1.9, and sql parser is too 
uncertain at this point for me to rely on.




[GitHub] [flink] bowenli86 edited a comment on issue #8830: [FLINK-12933][sql client] support 'use catalog' and 'use database' in SQL CLI

2019-06-22 Thread GitBox
bowenli86 edited a comment on issue #8830: [FLINK-12933][sql client] support 
'use catalog' and 'use database' in SQL CLI
URL: https://github.com/apache/flink/pull/8830#issuecomment-504704488
 
 
   @KurtYoung Extremely simple commands like this can be on sql cli itself IMO. 
DDL/DML is better to go thru sql parser.
   
   To be clear, I'm not against migrating these simple commands to sql parser 
*later* when it is ready if we want. Right now it's about making end-to-end 
user experience complete for catalogs in sql cli.




[jira] [Closed] (FLINK-12917) support complex type of array, map, struct for Hive functions

2019-06-22 Thread Bowen Li (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-12917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li closed FLINK-12917.

Resolution: Fixed

merged in 1.9.0: 9bdf6b64202d177f890e7fee1dec16ddd02864f7

> support complex type of array, map, struct for Hive functions
> -
>
> Key: FLINK-12917
> URL: https://issues.apache.org/jira/browse/FLINK-12917
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>






[GitHub] [flink] bowenli86 commented on a change in pull request #8814: [FLINK-12917][hive] support complex type of array, map, struct for Hive functions

2019-06-22 Thread GitBox
bowenli86 commented on a change in pull request #8814: [FLINK-12917][hive] 
support complex type of array, map, struct for Hive functions
URL: https://github.com/apache/flink/pull/8814#discussion_r296459451
 
 

 ##
 File path: flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/functions/hive/conversion/HiveInspectors.java
 ##
 @@ -181,15 +191,72 @@ public static HiveObjectConversion getConversion(ObjectInspector inspector, Data
 				inspector instanceof BinaryObjectInspector) {
 				return IdentityConversion.INSTANCE;
 			} else if (inspector instanceof HiveCharObjectInspector) {
-				return o -> new HiveChar((String) o, ((CharType) dataType.getLogicalType()).getLength());
+				return o -> new HiveChar((String) o, ((CharType) dataType).getLength());
 			} else if (inspector instanceof HiveVarcharObjectInspector) {
-				return o -> new HiveVarchar((String) o, ((VarCharType) dataType.getLogicalType()).getLength());
+				return o -> new HiveVarchar((String) o, ((VarCharType) dataType).getLength());
 			}
 
 			// TODO: handle decimal type
 		}
 
 		if (inspector instanceof ListObjectInspector) {
 			HiveObjectConversion eleConvert = getConversion(
 				((ListObjectInspector) inspector).getListElementObjectInspector(),
 				((ArrayType) dataType).getElementType());
 			return o -> {
 				Object[] array = (Object[]) o;
 				List result = new ArrayList<>();
 
 Review comment:
   Unfortunately we don't know the size, because array data can be of variable size, and ArrayType also doesn't allow setting a size.




[GitHub] [flink] bowenli86 commented on a change in pull request #8814: [FLINK-12917][hive] support complex type of array, map, struct for Hive functions

2019-06-22 Thread GitBox
bowenli86 commented on a change in pull request #8814: [FLINK-12917][hive] 
support complex type of array, map, struct for Hive functions
URL: https://github.com/apache/flink/pull/8814#discussion_r296459451
 
 

 ##
 File path: flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/functions/hive/conversion/HiveInspectors.java
 ##
 @@ -181,15 +191,72 @@ public static HiveObjectConversion getConversion(ObjectInspector inspector, Data
 				inspector instanceof BinaryObjectInspector) {
 				return IdentityConversion.INSTANCE;
 			} else if (inspector instanceof HiveCharObjectInspector) {
-				return o -> new HiveChar((String) o, ((CharType) dataType.getLogicalType()).getLength());
+				return o -> new HiveChar((String) o, ((CharType) dataType).getLength());
 			} else if (inspector instanceof HiveVarcharObjectInspector) {
-				return o -> new HiveVarchar((String) o, ((VarCharType) dataType.getLogicalType()).getLength());
+				return o -> new HiveVarchar((String) o, ((VarCharType) dataType).getLength());
 			}
 
 			// TODO: handle decimal type
 		}
 
 		if (inspector instanceof ListObjectInspector) {
 			HiveObjectConversion eleConvert = getConversion(
 				((ListObjectInspector) inspector).getListElementObjectInspector(),
 				((ArrayType) dataType).getElementType());
 			return o -> {
 				Object[] array = (Object[]) o;
 				List result = new ArrayList<>();
 
 Review comment:
   Unfortunately we don't know the size, because array data can be of variable size, and ArrayType also doesn't allow setting a size.




[GitHub] [flink] bowenli86 commented on issue #8830: [FLINK-12933][sql client] support 'use catalog' and 'use database' in SQL CLI

2019-06-22 Thread GitBox
bowenli86 commented on issue #8830: [FLINK-12933][sql client] support 'use 
catalog' and 'use database' in SQL CLI
URL: https://github.com/apache/flink/pull/8830#issuecomment-504704488
 
 
   @KurtYoung Extremely simple commands like this can be on sql cli itself IMO. 
DDL/DML is better to go thru sql parser.
   
   I'm also not against migrating these simple commands to the sql parser *later* when it is ready. Right now it's about making the end-to-end user experience complete for catalogs in sql cli.




[GitHub] [flink] bowenli86 commented on issue #8814: [FLINK-12917][hive] support complex type of array, map, struct for Hive functions

2019-06-22 Thread GitBox
bowenli86 commented on issue #8814: [FLINK-12917][hive] support complex type of 
array, map, struct for Hive functions
URL: https://github.com/apache/flink/pull/8814#issuecomment-504704060
 
 
   @xuefuz thanks for your review! 
   
   Merging




[GitHub] [flink] bowenli86 commented on a change in pull request #8814: [FLINK-12917][hive] support complex type of array, map, struct for Hive functions

2019-06-22 Thread GitBox
bowenli86 commented on a change in pull request #8814: [FLINK-12917][hive] 
support complex type of array, map, struct for Hive functions
URL: https://github.com/apache/flink/pull/8814#discussion_r296459451
 
 

 ##
 File path: flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/functions/hive/conversion/HiveInspectors.java
 ##
 @@ -181,15 +191,72 @@ public static HiveObjectConversion getConversion(ObjectInspector inspector, Data
 				inspector instanceof BinaryObjectInspector) {
 				return IdentityConversion.INSTANCE;
 			} else if (inspector instanceof HiveCharObjectInspector) {
-				return o -> new HiveChar((String) o, ((CharType) dataType.getLogicalType()).getLength());
+				return o -> new HiveChar((String) o, ((CharType) dataType).getLength());
 			} else if (inspector instanceof HiveVarcharObjectInspector) {
-				return o -> new HiveVarchar((String) o, ((VarCharType) dataType.getLogicalType()).getLength());
+				return o -> new HiveVarchar((String) o, ((VarCharType) dataType).getLength());
 			}
 
 			// TODO: handle decimal type
 		}
 
 		if (inspector instanceof ListObjectInspector) {
 			HiveObjectConversion eleConvert = getConversion(
 				((ListObjectInspector) inspector).getListElementObjectInspector(),
 				((ArrayType) dataType).getElementType());
 			return o -> {
 				Object[] array = (Object[]) o;
 				List result = new ArrayList<>();
 
 Review comment:
   Unfortunately we don't know the size, because array data can be of variable size, and ArrayType also doesn't allow setting a size.




[GitHub] [flink] bowenli86 commented on a change in pull request #8814: [FLINK-12917][hive] support complex type of array, map, struct for Hive functions

2019-06-22 Thread GitBox
bowenli86 commented on a change in pull request #8814: [FLINK-12917][hive] 
support complex type of array, map, struct for Hive functions
URL: https://github.com/apache/flink/pull/8814#discussion_r296459451
 
 

 ##
 File path: flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/functions/hive/conversion/HiveInspectors.java
 ##
 @@ -181,15 +191,72 @@ public static HiveObjectConversion getConversion(ObjectInspector inspector, Data
 				inspector instanceof BinaryObjectInspector) {
 				return IdentityConversion.INSTANCE;
 			} else if (inspector instanceof HiveCharObjectInspector) {
-				return o -> new HiveChar((String) o, ((CharType) dataType.getLogicalType()).getLength());
+				return o -> new HiveChar((String) o, ((CharType) dataType).getLength());
 			} else if (inspector instanceof HiveVarcharObjectInspector) {
-				return o -> new HiveVarchar((String) o, ((VarCharType) dataType.getLogicalType()).getLength());
+				return o -> new HiveVarchar((String) o, ((VarCharType) dataType).getLength());
 			}
 
 			// TODO: handle decimal type
 		}
 
 		if (inspector instanceof ListObjectInspector) {
 			HiveObjectConversion eleConvert = getConversion(
 				((ListObjectInspector) inspector).getListElementObjectInspector(),
 				((ArrayType) dataType).getElementType());
 			return o -> {
 				Object[] array = (Object[]) o;
 				List result = new ArrayList<>();
 
 Review comment:
   Unfortunately we don't know the size, because array data can be of variable size.




[jira] [Commented] (FLINK-10206) Add hbase sink connector

2019-06-22 Thread Chance Li (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16870332#comment-16870332
 ] 

Chance Li commented on FLINK-10206:
---

[~dangdangdang] How is this going so far? This Jira has been open for a long time. Would you like to share your plan for it?

> Add hbase sink connector
> 
>
> Key: FLINK-10206
> URL: https://issues.apache.org/jira/browse/FLINK-10206
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / HBase
>Affects Versions: 1.6.0
>Reporter: Igloo
>Assignee: Shimin Yang
>Priority: Major
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> There is currently an HBase source connector for batch operations.
>  
> In some cases, we need to save streaming/batch results into HBase, just like the Cassandra streaming/batch sink implementations.
>  
> Design documentation: 
> [https://docs.google.com/document/d/1of0cYd73CtKGPt-UL3WVFTTBsVEre-TNRzoAt5u2PdQ/edit?usp=sharing]
>  





[GitHub] [flink] flinkbot commented on issue #8839: [FLINK-12952][docs] Minor doc correction to incremental window functions

2019-06-22 Thread GitBox
flinkbot commented on issue #8839: [FLINK-12952][docs] Minor doc correction to 
incremental window functions
URL: https://github.com/apache/flink/pull/8839#issuecomment-504689924
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review Guide](https://flink.apache.org/reviewing-prs.html) for a full explanation of the review process.
   The bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.
   
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[jira] [Updated] (FLINK-12952) Minor doc correction regarding incremental window functions

2019-06-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-12952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-12952:
---
Labels: doc pull-request-available windows  (was: doc windows)

> Minor doc correction regarding incremental window functions
> ---
>
> Key: FLINK-12952
> URL: https://issues.apache.org/jira/browse/FLINK-12952
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Affects Versions: 1.8.0
>Reporter: Mans Singh
>Assignee: Mans Singh
>Priority: Trivial
>  Labels: doc, pull-request-available, windows
>
> The Flink documentation [Window 
> Function|https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/stream/operators/windows.html#window-functions],
>  mentions that 
> bq. The window function can be one of ReduceFunction, AggregateFunction, 
> FoldFunction or ProcessWindowFunction. The first two can be executed more 
> efficiently
> It should perhaps state (since FoldFunction, though deprecated, is also 
> incremental):
> bq.  The first *three* can be executed more efficiently





[GitHub] [flink] mans2singh opened a new pull request #8839: [FLINK-12952][docs] Minor doc correction to incremental window functions

2019-06-22 Thread GitBox
mans2singh opened a new pull request #8839: [FLINK-12952][docs] Minor doc 
correction to incremental window functions
URL: https://github.com/apache/flink/pull/8839
 
 
   
   ## What is the purpose of the change
   
   Minor correction to documentation regarding incremental window functions
   
   ## Brief change log
   
   The Flink documentation [Window 
Functions](https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/stream/operators/windows.html#window-functions)
 mentions that
   
   > The window function can be one of ReduceFunction, AggregateFunction, 
FoldFunction or ProcessWindowFunction. The first two can be executed more 
efficiently
   
   
   It should perhaps state (since FoldFunction, though deprecated, is also 
incremental):
   
   > The first three can be executed more efficiently




[jira] [Created] (FLINK-12952) Minor doc correction regarding incremental window functions

2019-06-22 Thread Mans Singh (JIRA)
Mans Singh created FLINK-12952:
--

 Summary: Minor doc correction regarding incremental window 
functions
 Key: FLINK-12952
 URL: https://issues.apache.org/jira/browse/FLINK-12952
 Project: Flink
  Issue Type: Improvement
  Components: Documentation
Affects Versions: 1.8.0
Reporter: Mans Singh
Assignee: Mans Singh


The Flink documentation [Window 
Function|https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/stream/operators/windows.html#window-functions],
 mentions that 

bq. The window function can be one of ReduceFunction, AggregateFunction, 
FoldFunction or ProcessWindowFunction. The first two can be executed more 
efficiently

It should perhaps state (since FoldFunction, though deprecated, is also 
incremental):
bq.  The first *three* can be executed more efficiently







[GitHub] [flink] bowenli86 commented on issue #8831: [FLINK-12935][hive] package flink-connector-hive into flink distribution

2019-06-22 Thread GitBox
bowenli86 commented on issue #8831: [FLINK-12935][hive] package 
flink-connector-hive into flink distribution
URL: https://github.com/apache/flink/pull/8831#issuecomment-504685718
 
 
   > we don't have _any_ other connector in the distribution, what's the 
justification for doing so now?
   
   Hi @zentol, its purpose is mainly to include `HiveCatalog` for users to 
manage their metadata, rather than including the Hive data connector 
(source/sink/format)




[GitHub] [flink] flinkbot commented on issue #8838: [FLINK-12946][docs]Translate Apache NiFi Connector page into Chinese

2019-06-22 Thread GitBox
flinkbot commented on issue #8838: [FLINK-12946][docs]Translate Apache NiFi 
Connector page into Chinese
URL: https://github.com/apache/flink/pull/8838#issuecomment-504682018
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) for a full explanation of 
the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[jira] [Updated] (FLINK-12946) Translate "Apache NiFi Connector" page into Chinese

2019-06-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-12946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-12946:
---
Labels: pull-request-available  (was: )

> Translate "Apache NiFi Connector" page into Chinese
> ---
>
> Key: FLINK-12946
> URL: https://issues.apache.org/jira/browse/FLINK-12946
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Assignee: aloyszhang
>Priority: Minor
>  Labels: pull-request-available
>
> Translate the internal page 
> "https://ci.apache.org/projects/flink/flink-docs-master/dev/connectors/nifi.htmll;
>  into Chinese.
> The doc located in "flink/docs/dev/connectors/nifi.zh.md"





[GitHub] [flink] aloyszhang opened a new pull request #8838: [FLINK-12946][docs]Translate Apache NiFi Connector page into Chinese

2019-06-22 Thread GitBox
aloyszhang opened a new pull request #8838: [FLINK-12946][docs]Translate Apache 
NiFi Connector page into Chinese
URL: https://github.com/apache/flink/pull/8838
 
 
   
   ## What is the purpose of the change
   
   Translate Apache NiFi Connector page into Chinese
   
   ## Brief change log
   
   Translate Apache NiFi Connector page into Chinese
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): ( no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: ( no)
 - The serializers: ( no)
 - The runtime per-record code paths (performance sensitive): (no )
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no )
 - The S3 file system connector: (no )
   
   ## Documentation
   
 - Does this pull request introduce a new feature? ( no)




[GitHub] [flink] flinkbot commented on issue #8837: [FLINK-12938][docs]Translate "Streaming Connectors" page into Chinese

2019-06-22 Thread GitBox
flinkbot commented on issue #8837: [FLINK-12938][docs]Translate "Streaming 
Connectors" page into Chinese
URL: https://github.com/apache/flink/pull/8837#issuecomment-504677260
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) for a full explanation of 
the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[GitHub] [flink] aloyszhang opened a new pull request #8837: [FLINK-12938][docs]Translate "Streaming Connectors" page into Chinese

2019-06-22 Thread GitBox
aloyszhang opened a new pull request #8837: [FLINK-12938][docs]Translate 
"Streaming Connectors" page into Chinese
URL: https://github.com/apache/flink/pull/8837
 
 
   ## What is the purpose of the change
   
   Translate "Streaming Connectors" page into Chinese
   
   ## Brief change log
   
   Translate "Streaming Connectors" page into Chinese
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: ( no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no )
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: ( no )
 - The S3 file system connector: ( no )
   
   ## Documentation
   
 - Does this pull request introduce a new feature? ( no)
   




[jira] [Updated] (FLINK-12938) Translate "Streaming Connectors" page into Chinese

2019-06-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-12938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-12938:
---
Labels: pull-request-available  (was: )

> Translate "Streaming Connectors" page into Chinese
> --
>
> Key: FLINK-12938
> URL: https://issues.apache.org/jira/browse/FLINK-12938
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Assignee: aloyszhang
>Priority: Major
>  Labels: pull-request-available
>
> Translate the page 
> "https://ci.apache.org/projects/flink/flink-docs-master/dev/connectors/; into 
> Chinese.
> The doc located in "flink/docs/dev/connectors/index.zh.md"





[GitHub] [flink] sjwiesman closed pull request #8827: [FLINK-12928][docs] Remove old Flink ML docs

2019-06-22 Thread GitBox
sjwiesman closed pull request #8827: [FLINK-12928][docs] Remove old Flink ML 
docs
URL: https://github.com/apache/flink/pull/8827
 
 
   




[GitHub] [flink] sjwiesman commented on issue #8827: [FLINK-12928][docs] Remove old Flink ML docs

2019-06-22 Thread GitBox
sjwiesman commented on issue #8827: [FLINK-12928][docs] Remove old Flink ML docs
URL: https://github.com/apache/flink/pull/8827#issuecomment-504675396
 
 
   Well this is embarrassing, closing. 




[GitHub] [flink] flinkbot commented on issue #8836: Update README.md

2019-06-22 Thread GitBox
flinkbot commented on issue #8836: Update README.md
URL: https://github.com/apache/flink/pull/8836#issuecomment-504665818
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) for a full explanation of 
the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[GitHub] [flink] cool321 commented on issue #8836: Update README.md

2019-06-22 Thread GitBox
cool321 commented on issue #8836: Update README.md
URL: https://github.com/apache/flink/pull/8836#issuecomment-504665763
 
 
   this is just a test




[GitHub] [flink] cool321 opened a new pull request #8836: Update README.md

2019-06-22 Thread GitBox
cool321 opened a new pull request #8836: Update README.md
URL: https://github.com/apache/flink/pull/8836
 
 
   
   
   ## What is the purpose of the change
   
   *(For example: This pull request makes task deployment go through the blob 
server, rather than through RPC. That way we avoid re-transferring them on each 
deployment (during recovery).)*
   
   
   ## Brief change log
   
   *(for example:)*
 - *The TaskInfo is stored in the blob store on job creation time as a 
persistent artifact*
 - *Deployments RPC transmits only the blob storage reference*
 - *TaskManagers retrieve the TaskInfo from the blob cache*
   
   
   ## Verifying this change
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
 - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
 - *Extended integration test for recovery after master (JobManager) 
failure*
 - *Added test that validates that TaskInfo is transferred only once across 
recoveries*
 - *Manually verified the change by running a 4 node cluser with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / no / don't know)
 - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   




[jira] [Created] (FLINK-12951) Add logic to bridge DDL to table source(sink)

2019-06-22 Thread Danny Chan (JIRA)
Danny Chan created FLINK-12951:
--

 Summary: Add logic to bridge DDL to table source(sink)
 Key: FLINK-12951
 URL: https://issues.apache.org/jira/browse/FLINK-12951
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / API
Affects Versions: 1.9.0
Reporter: Danny Chan
Assignee: Danny Chan
 Fix For: 1.9.0








[jira] [Assigned] (FLINK-12942) Translate "Elasticsearch Connector" page into Chinese

2019-06-22 Thread Forward Xu (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Forward Xu reassigned FLINK-12942:
--

Assignee: Forward Xu

> Translate "Elasticsearch Connector" page into Chinese
> -
>
> Key: FLINK-12942
> URL: https://issues.apache.org/jira/browse/FLINK-12942
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Assignee: Forward Xu
>Priority: Minor
>
> Translate the internal page 
> "https://ci.apache.org/projects/flink/flink-docs-master/dev/connectors/elasticsearch.html;
>  into Chinese.
>  
> The doc located in "flink/docs/dev/connectors/elasticsearch.zh.md"





[jira] [Assigned] (FLINK-12940) Translate "Apache Cassandra Connector" page into Chinese

2019-06-22 Thread Forward Xu (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-12940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Forward Xu reassigned FLINK-12940:
--

Assignee: Forward Xu

> Translate "Apache Cassandra Connector" page into Chinese
> 
>
> Key: FLINK-12940
> URL: https://issues.apache.org/jira/browse/FLINK-12940
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Assignee: Forward Xu
>Priority: Minor
>
> Translate the internal page 
> "https://ci.apache.org/projects/flink/flink-docs-master/dev/connectors/cassandra.html;
>  into Chinese.
>  
> The doc located in "flink/docs/dev/connectors/cassandra.zh.md"





[GitHub] [flink] JingsongLi commented on issue #8792: [FLINK-12743][table-runtime-blink] Introduce unbounded streaming anti/semi join operator

2019-06-22 Thread GitBox
JingsongLi commented on issue #8792: [FLINK-12743][table-runtime-blink] 
Introduce unbounded streaming anti/semi join operator
URL: https://github.com/apache/flink/pull/8792#issuecomment-504654580
 
 
   LGTM +1




[GitHub] [flink] wuchong commented on a change in pull request #8792: [FLINK-12743][table-runtime-blink] Introduce unbounded streaming anti/semi join operator

2019-06-22 Thread GitBox
wuchong commented on a change in pull request #8792: 
[FLINK-12743][table-runtime-blink] Introduce unbounded streaming anti/semi join 
operator
URL: https://github.com/apache/flink/pull/8792#discussion_r296441333
 
 

 ##
 File path: 
flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/runtime/join/stream/StreamingSemiAntiJoinOperator.java
 ##
 @@ -0,0 +1,196 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.runtime.join.stream;
+
+import org.apache.flink.streaming.runtime.streamrecord.StreamRecord;
+import org.apache.flink.table.dataformat.BaseRow;
+import org.apache.flink.table.dataformat.util.BaseRowUtil;
+import org.apache.flink.table.generated.GeneratedJoinCondition;
+import org.apache.flink.table.runtime.join.stream.state.JoinInputSideSpec;
+import org.apache.flink.table.runtime.join.stream.state.JoinRecordStateView;
+import org.apache.flink.table.runtime.join.stream.state.JoinRecordStateViews;
+import 
org.apache.flink.table.runtime.join.stream.state.OuterJoinRecordStateView;
+import 
org.apache.flink.table.runtime.join.stream.state.OuterJoinRecordStateViews;
+import org.apache.flink.table.typeutils.BaseRowTypeInfo;
+
+/**
+ * Streaming unbounded Join operator which supports SEMI/ANTI JOIN.
+ */
+public class StreamingSemiAntiJoinOperator extends 
AbstractStreamingJoinOperator {
+
+   private static final long serialVersionUID = -3135772379944924519L;
+
+   // true if it is anti join, otherwise it is semi join
+   private final boolean isAntiJoin;
+
+   // left join state
+   private transient OuterJoinRecordStateView leftRecordStateView;
+   // right join state
+   private transient JoinRecordStateView rightRecordStateView;
+
+   public StreamingSemiAntiJoinOperator(
+   boolean isAntiJoin,
+   BaseRowTypeInfo leftType,
+   BaseRowTypeInfo rightType,
+   GeneratedJoinCondition generatedJoinCondition,
+   JoinInputSideSpec leftInputSideSpec,
+   JoinInputSideSpec rightInputSideSpec,
+   boolean[] filterNullKeys,
+   long minRetentionTime) {
+   super(leftType, rightType, generatedJoinCondition, 
leftInputSideSpec, rightInputSideSpec, filterNullKeys, minRetentionTime);
+   this.isAntiJoin = isAntiJoin;
+   }
+
+   @Override
+   public void open() throws Exception {
+   super.open();
+
+   this.leftRecordStateView = OuterJoinRecordStateViews.create(
+   getRuntimeContext(),
+   LEFT_RECORDS_STATE_NAME,
+   leftInputSideSpec,
+   leftType,
+   minRetentionTime,
+   stateCleaningEnabled);
+
+   this.rightRecordStateView = JoinRecordStateViews.create(
+   getRuntimeContext(),
+   RIGHT_RECORDS_STATE_NAME,
+   rightInputSideSpec,
+   rightType,
+   minRetentionTime,
+   stateCleaningEnabled);
+   }
+
+   /**
+* Process an input element and output incremental joined records, 
retraction messages will
+* be sent in some scenarios.
+*
+* Following is the pseudo code to describe the core logic of this 
method.
+*
+* 
+* if there is no matched rows on the other side
+*   if anti join, send input record
+* if there are matched rows on the other side
+*   if semi join, send input record
+* if the input record is accumulate, state.add(record, matched size)
+* if the input record is retract, state.retract(record)
+* 
+*/
+   @Override
+   public void processElement1(StreamRecord<BaseRow> element) throws 
Exception {
+   BaseRow input = element.getValue();
+   AssociatedRecords associatedRecords = 
AssociatedRecords.of(input, true, rightRecordStateView, joinCondition);
+   if (associatedRecords.isEmpty()) {
+   if 

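The pseudo code in the review above reduces to a single emission decision per incoming record. A minimal sketch of that rule, with hypothetical names rather than the operator's actual code:

```java
// Hypothetical sketch of the SEMI/ANTI emission rule described in the
// operator's javadoc pseudo code; not the actual StreamingSemiAntiJoinOperator.
public class SemiAntiDecision {

    // Decide whether the incoming left-side record should be emitted,
    // given how many rows on the other side currently match it.
    static boolean shouldEmit(boolean isAntiJoin, int matchedRows) {
        if (matchedRows == 0) {
            // no matched rows on the other side: only an ANTI join emits
            return isAntiJoin;
        }
        // at least one matched row: only a SEMI join emits
        return !isAntiJoin;
    }

    public static void main(String[] args) {
        System.out.println(shouldEmit(false, 2)); // semi join with matches: true
        System.out.println(shouldEmit(true, 0));  // anti join, no matches: true
    }
}
```

State handling (adding on accumulate, retracting on retract) then follows the last two lines of the pseudo code and is omitted here.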
[GitHub] [flink] flinkbot commented on issue #8834: [FlinkFLINK-12948][connector] remove legacy FlinkKafkaConsumer081

2019-06-22 Thread GitBox
flinkbot commented on issue #8834: [FlinkFLINK-12948][connector] remove legacy 
FlinkKafkaConsumer081
URL: https://github.com/apache/flink/pull/8834#issuecomment-504649807
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) for a full explanation of 
the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[GitHub] [flink] flinkbot commented on issue #8835: [FLINK-12950][connector] remove legacy FlinkKafkaProducer

2019-06-22 Thread GitBox
flinkbot commented on issue #8835: [FLINK-12950][connector] remove legacy 
FlinkKafkaProducer
URL: https://github.com/apache/flink/pull/8835#issuecomment-504649806
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) for a full explanation of 
the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[GitHub] [flink] flinkbot commented on issue #8833: [FLINK-12949][connector] remove legacy FlinkKafkaConsumer082

2019-06-22 Thread GitBox
flinkbot commented on issue #8833: [FLINK-12949][connector] remove legacy 
FlinkKafkaConsumer082
URL: https://github.com/apache/flink/pull/8833#issuecomment-504649723
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) for a full explanation of 
the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[jira] [Updated] (FLINK-12950) remove legacy FlinkKafkaProducer

2019-06-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-12950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-12950:
---
Labels: pull-request-available  (was: )

> remove legacy FlinkKafkaProducer
> 
>
> Key: FLINK-12950
> URL: https://issues.apache.org/jira/browse/FLINK-12950
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Affects Versions: 1.9.0
>Reporter: aloyszhang
>Assignee: aloyszhang
>Priority: Major
>  Labels: pull-request-available
>






[GitHub] [flink] aloyszhang opened a new pull request #8835: [FLINK-12950][connector] remove legacy FlinkKafkaProducer

2019-06-22 Thread GitBox
aloyszhang opened a new pull request #8835: [FLINK-12950][connector] remove 
legacy FlinkKafkaProducer
URL: https://github.com/apache/flink/pull/8835
 
 
   ## What is the purpose of the change
   
   remove legacy FlinkKafkaProducer
   
   ## Brief change log
   
   remove legacy FlinkKafkaProducer
   
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: ( no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no )
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no )
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
   




[GitHub] [flink] aloyszhang opened a new pull request #8834: [FlinkFLINK-12948][connector] remove legacy FlinkKafkaConsumer081

2019-06-22 Thread GitBox
aloyszhang opened a new pull request #8834: [FlinkFLINK-12948][connector] 
remove legacy FlinkKafkaConsumer081
URL: https://github.com/apache/flink/pull/8834
 
 
   ## What is the purpose of the change
   
   remove legacy FlinkKafkaConsumer081
   
   ## Brief change log
   
   remove legacy FlinkKafkaConsumer081
   
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: ( no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no )
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no )
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
   




[GitHub] [flink] aloyszhang opened a new pull request #8833: [FLINK-12949][connector] remove legacy FlinkKafkaConsumer082

2019-06-22 Thread GitBox
aloyszhang opened a new pull request #8833: [FLINK-12949][connector] remove 
legacy FlinkKafkaConsumer082
URL: https://github.com/apache/flink/pull/8833
 
 
   ## What is the purpose of the change
   
   remove legacy FlinkKafkaConsumer082
   
   ## Brief change log
   
   remove legacy FlinkKafkaConsumer082
   
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
   




[jira] [Updated] (FLINK-12949) remove legacy FlinkKafkaConsumer082

2019-06-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-12949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-12949:
---
Labels: pull-request-available  (was: )

> remove legacy FlinkKafkaConsumer082
> ---
>
> Key: FLINK-12949
> URL: https://issues.apache.org/jira/browse/FLINK-12949
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Affects Versions: 1.9.0
>Reporter: aloyszhang
>Assignee: aloyszhang
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-12950) remove legacy FlinkKafkaProducer

2019-06-22 Thread aloyszhang (JIRA)
aloyszhang created FLINK-12950:
--

 Summary: remove legacy FlinkKafkaProducer
 Key: FLINK-12950
 URL: https://issues.apache.org/jira/browse/FLINK-12950
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / Kafka
Affects Versions: 1.9.0
Reporter: aloyszhang
Assignee: aloyszhang








[jira] [Created] (FLINK-12949) remove legacy FlinkKafkaConsumer082

2019-06-22 Thread aloyszhang (JIRA)
aloyszhang created FLINK-12949:
--

 Summary: remove legacy FlinkKafkaConsumer082
 Key: FLINK-12949
 URL: https://issues.apache.org/jira/browse/FLINK-12949
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / Kafka
Affects Versions: 1.9.0
Reporter: aloyszhang
Assignee: aloyszhang








[jira] [Created] (FLINK-12948) remove legacy FlinkKafkaConsumer081

2019-06-22 Thread aloyszhang (JIRA)
aloyszhang created FLINK-12948:
--

 Summary: remove legacy FlinkKafkaConsumer081
 Key: FLINK-12948
 URL: https://issues.apache.org/jira/browse/FLINK-12948
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / Kafka
Reporter: aloyszhang
Assignee: aloyszhang








[jira] [Assigned] (FLINK-12944) Translate "Streaming File Sink" page into Chinese

2019-06-22 Thread aloyszhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-12944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

aloyszhang reassigned FLINK-12944:
--

Assignee: aloyszhang

> Translate "Streaming File Sink" page into Chinese
> -
>
> Key: FLINK-12944
> URL: https://issues.apache.org/jira/browse/FLINK-12944
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Assignee: aloyszhang
>Priority: Minor
>
> Translate the internal page 
> "https://ci.apache.org/projects/flink/flink-docs-master/dev/connectors/streamfile_sink.html"
> into Chinese.
> The doc is located in "flink/docs/dev/connectors/streamfile_sink.zh.md"





[jira] [Assigned] (FLINK-12938) Translate "Streaming Connectors" page into Chinese

2019-06-22 Thread aloyszhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-12938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

aloyszhang reassigned FLINK-12938:
--

Assignee: aloyszhang

> Translate "Streaming Connectors" page into Chinese
> --
>
> Key: FLINK-12938
> URL: https://issues.apache.org/jira/browse/FLINK-12938
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Assignee: aloyszhang
>Priority: Major
>
> Translate the page 
> "https://ci.apache.org/projects/flink/flink-docs-master/dev/connectors/" into 
> Chinese.
> The doc is located in "flink/docs/dev/connectors/index.zh.md"





[jira] [Assigned] (FLINK-12947) Translate "Twitter Connector" page into Chinese

2019-06-22 Thread aloyszhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-12947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

aloyszhang reassigned FLINK-12947:
--

Assignee: aloyszhang

> Translate "Twitter Connector" page into Chinese
> ---
>
> Key: FLINK-12947
> URL: https://issues.apache.org/jira/browse/FLINK-12947
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Assignee: aloyszhang
>Priority: Minor
>
> Translate the internal page 
> "https://ci.apache.org/projects/flink/flink-docs-master/dev/connectors/twitter.html"
> into Chinese.
> The doc is located in "flink/docs/dev/connectors/twitter.zh.md"





[jira] [Assigned] (FLINK-12946) Translate "Apache NiFi Connector" page into Chinese

2019-06-22 Thread aloyszhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-12946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

aloyszhang reassigned FLINK-12946:
--

Assignee: aloyszhang

> Translate "Apache NiFi Connector" page into Chinese
> ---
>
> Key: FLINK-12946
> URL: https://issues.apache.org/jira/browse/FLINK-12946
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Assignee: aloyszhang
>Priority: Minor
>
> Translate the internal page 
> "https://ci.apache.org/projects/flink/flink-docs-master/dev/connectors/nifi.html"
> into Chinese.
> The doc is located in "flink/docs/dev/connectors/nifi.zh.md"





[jira] [Assigned] (FLINK-12945) Translate "RabbitMQ Connector" page into Chinese

2019-06-22 Thread aloyszhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-12945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

aloyszhang reassigned FLINK-12945:
--

Assignee: aloyszhang

> Translate "RabbitMQ Connector" page into Chinese
> 
>
> Key: FLINK-12945
> URL: https://issues.apache.org/jira/browse/FLINK-12945
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Assignee: aloyszhang
>Priority: Minor
>
> Translate the internal page 
> "https://ci.apache.org/projects/flink/flink-docs-master/dev/connectors/rabbitmq.html"
> into Chinese.
> The doc is located in "flink/docs/dev/connectors/rabbitmq.zh.md"





[jira] [Assigned] (FLINK-12943) Translate "HDFS Connector" page into Chinese

2019-06-22 Thread aloyszhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-12943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

aloyszhang reassigned FLINK-12943:
--

Assignee: aloyszhang

> Translate "HDFS Connector" page into Chinese
> 
>
> Key: FLINK-12943
> URL: https://issues.apache.org/jira/browse/FLINK-12943
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Assignee: aloyszhang
>Priority: Minor
>
> Translate the internal page 
> "https://ci.apache.org/projects/flink/flink-docs-master/dev/connectors/filesystem_sink.html"
> into Chinese.
> The doc is located in "flink/docs/dev/connectors/filesystem_sink.zh.md"





[jira] [Assigned] (FLINK-12939) Translate "Apache Kafka Connector" page into Chinese

2019-06-22 Thread aloyszhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-12939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

aloyszhang reassigned FLINK-12939:
--

Assignee: aloyszhang

> Translate "Apache Kafka Connector" page into Chinese
> 
>
> Key: FLINK-12939
> URL: https://issues.apache.org/jira/browse/FLINK-12939
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Assignee: aloyszhang
>Priority: Minor
>
> Translate the page 
> "https://ci.apache.org/projects/flink/flink-docs-master/dev/connectors/kafka.html"
> into Chinese.
> The doc is located in "flink/docs/dev/connectors/kafka.zh.md"





[GitHub] [flink] zentol commented on issue #8831: [FLINK-12935][hive] package flink-connector-hive into flink distribution

2019-06-22 Thread GitBox
zentol commented on issue #8831: [FLINK-12935][hive] package 
flink-connector-hive into flink distribution
URL: https://github.com/apache/flink/pull/8831#issuecomment-504643987
 
 
   we don't have _any_ other connector in the distribution, what's the 
justification for doing so now?




[GitHub] [flink] zentol commented on issue #8827: [FLINK-12928][docs] Remove old Flink ML docs

2019-06-22 Thread GitBox
zentol commented on issue #8827: [FLINK-12928][docs] Remove old Flink ML docs
URL: https://github.com/apache/flink/pull/8827#issuecomment-504638222
 
 
   The old flink-ml library totally still exists: 
https://github.com/apache/flink/tree/master/flink-libraries/flink-ml, #8526 has 
not been merged yet.




[jira] [Closed] (FLINK-12903) Remove legacy flink-python APIs

2019-06-22 Thread Chesnay Schepler (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-12903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-12903.

  Resolution: Fixed
Release Note: 
The older Python APIs for Batch & Streaming have been removed.

A new API is being developed based on the Table API as part of FLINK-12308.

Existing users may continue to use these APIs with future versions of Flink by 
copying both the flink-(streaming)-python jars into the /lib directory of the 
distribution and the corresponding start-scripts (pyflink(-stream).sh) into the 
/bin directory of the distribution.

master: 8ab2f9320c6ba10c09c97a264fc3e0ce45efebc6

> Remove legacy flink-python APIs
> ---
>
> Key: FLINK-12903
> URL: https://issues.apache.org/jira/browse/FLINK-12903
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python, Build System
>Affects Versions: 1.9.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> As discussed on the [mailing 
> list|http://mail-archives.apache.org/mod_mbox/flink-user/201906.mbox/%3cCANC1h_uSoBi0nG1wL-4EATBSU_h2t46g=b9i8teusmxrmxr...@mail.gmail.com%3e],
>  remove the older batch python API.
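
The carry-over described in the release note above can be sketched as shell commands. Everything below is illustrative: the jar names, version numbers, and the `opt/` source directory are assumptions, and the sketch builds a throwaway directory layout under `mktemp` so it runs as-is rather than touching a real installation.

```shell
set -eu
OLD_DIST=$(mktemp -d)   # stands in for the old (pre-removal) Flink distribution
NEW_DIST=$(mktemp -d)   # stands in for the new distribution
mkdir -p "$OLD_DIST/opt" "$OLD_DIST/bin" "$NEW_DIST/lib" "$NEW_DIST/bin"

# Fake the artifacts the release note says to carry over (names are assumptions):
touch "$OLD_DIST/opt/flink-python_2.11-1.8.0.jar" \
      "$OLD_DIST/opt/flink-streaming-python_2.11-1.8.0.jar" \
      "$OLD_DIST/bin/pyflink.sh" \
      "$OLD_DIST/bin/pyflink-stream.sh"

# The actual carry-over per the note: jars into /lib, start scripts into /bin.
cp "$OLD_DIST"/opt/flink-*python*.jar "$NEW_DIST/lib/"
cp "$OLD_DIST"/bin/pyflink*.sh        "$NEW_DIST/bin/"

ls "$NEW_DIST/lib" "$NEW_DIST/bin"
```

On a real installation the two `cp` lines are the whole procedure; the rest of the sketch only fabricates something for them to operate on.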





[GitHub] [flink] zentol merged pull request #8803: [FLINK-12903][py] Remove old Python APIs

2019-06-22 Thread GitBox
zentol merged pull request #8803: [FLINK-12903][py] Remove old Python APIs
URL: https://github.com/apache/flink/pull/8803
 
 
   




[GitHub] [flink] JingsongLi commented on a change in pull request #8792: [FLINK-12743][table-runtime-blink] Introduce unbounded streaming anti/semi join operator

2019-06-22 Thread GitBox
JingsongLi commented on a change in pull request #8792: 
[FLINK-12743][table-runtime-blink] Introduce unbounded streaming anti/semi join 
operator
URL: https://github.com/apache/flink/pull/8792#discussion_r296435653
 
 

 ##
 File path: 
flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/runtime/join/stream/StreamingSemiAntiJoinOperator.java
 ##
 @@ -0,0 +1,196 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.runtime.join.stream;
+
+import org.apache.flink.streaming.runtime.streamrecord.StreamRecord;
+import org.apache.flink.table.dataformat.BaseRow;
+import org.apache.flink.table.dataformat.util.BaseRowUtil;
+import org.apache.flink.table.generated.GeneratedJoinCondition;
+import org.apache.flink.table.runtime.join.stream.state.JoinInputSideSpec;
+import org.apache.flink.table.runtime.join.stream.state.JoinRecordStateView;
+import org.apache.flink.table.runtime.join.stream.state.JoinRecordStateViews;
+import 
org.apache.flink.table.runtime.join.stream.state.OuterJoinRecordStateView;
+import 
org.apache.flink.table.runtime.join.stream.state.OuterJoinRecordStateViews;
+import org.apache.flink.table.typeutils.BaseRowTypeInfo;
+
+/**
+ * Streaming unbounded Join operator which supports SEMI/ANTI JOIN.
+ */
+public class StreamingSemiAntiJoinOperator extends 
AbstractStreamingJoinOperator {
+
+   private static final long serialVersionUID = -3135772379944924519L;
+
+   // true if it is an anti join, otherwise a semi join
+   private final boolean isAntiJoin;
+
+   // left join state
+   private transient OuterJoinRecordStateView leftRecordStateView;
+   // right join state
+   private transient JoinRecordStateView rightRecordStateView;
+
+   public StreamingSemiAntiJoinOperator(
+   boolean isAntiJoin,
+   BaseRowTypeInfo leftType,
+   BaseRowTypeInfo rightType,
+   GeneratedJoinCondition generatedJoinCondition,
+   JoinInputSideSpec leftInputSideSpec,
+   JoinInputSideSpec rightInputSideSpec,
+   boolean[] filterNullKeys,
+   long minRetentionTime) {
+   super(leftType, rightType, generatedJoinCondition, 
leftInputSideSpec, rightInputSideSpec, filterNullKeys, minRetentionTime);
+   this.isAntiJoin = isAntiJoin;
+   }
+
+   @Override
+   public void open() throws Exception {
+   super.open();
+
+   this.leftRecordStateView = OuterJoinRecordStateViews.create(
+   getRuntimeContext(),
+   LEFT_RECORDS_STATE_NAME,
+   leftInputSideSpec,
+   leftType,
+   minRetentionTime,
+   stateCleaningEnabled);
+
+   this.rightRecordStateView = JoinRecordStateViews.create(
+   getRuntimeContext(),
+   RIGHT_RECORDS_STATE_NAME,
+   rightInputSideSpec,
+   rightType,
+   minRetentionTime,
+   stateCleaningEnabled);
+   }
+
+   /**
+* Process an input element and output incremental joined records, 
retraction messages will
+* be sent in some scenarios.
+*
+* Following is the pseudo code to describe the core logic of this 
method.
+*
+* 
+* if there is no matched rows on the other side
+*   if anti join, send input record
+* if there are matched rows on the other side
+*   if semi join, send input record
+* if the input record is accumulate, state.add(record, matched size)
+* if the input record is retract, state.retract(record)
+* 
+*/
+   @Override
+   public void processElement1(StreamRecord<BaseRow> element) throws 
Exception {
+   BaseRow input = element.getValue();
+   AssociatedRecords associatedRecords = 
AssociatedRecords.of(input, true, rightRecordStateView, joinCondition);
+   if (associatedRecords.isEmpty()) {
+   if 

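The pseudo code in the Javadoc above boils down to a single emission predicate: a SEMI join forwards an input record iff the other side has matching rows, an ANTI join forwards it iff there are none. A minimal standalone sketch of that rule (plain Java, not the operator's actual API; class and method names are illustrative):

```java
// Sketch of the semi/anti join emission rule from the pseudo code above.
public class SemiAntiJoinRule {

    /** Decides whether the input record is sent downstream. */
    static boolean shouldEmit(boolean isAntiJoin, int matchedRowsOnOtherSide) {
        boolean hasMatches = matchedRowsOnOtherSide > 0;
        // anti join: emit when no match exists; semi join: emit when one does
        return isAntiJoin ? !hasMatches : hasMatches;
    }

    public static void main(String[] args) {
        System.out.println(shouldEmit(false, 3)); // semi join, matched   -> true
        System.out.println(shouldEmit(false, 0)); // semi join, unmatched -> false
        System.out.println(shouldEmit(true, 0));  // anti join, unmatched -> true
        System.out.println(shouldEmit(true, 2));  // anti join, matched   -> false
    }
}
```

The state bookkeeping in the real operator (`state.add` / `state.retract`, tracking the matched-row count per record) exists so this predicate can be re-evaluated correctly when later retractions change the match count.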
[GitHub] [flink] JingsongLi commented on a change in pull request #8792: [FLINK-12743][table-runtime-blink] Introduce unbounded streaming anti/semi join operator

2019-06-22 Thread GitBox
JingsongLi commented on a change in pull request #8792: 
[FLINK-12743][table-runtime-blink] Introduce unbounded streaming anti/semi join 
operator
URL: https://github.com/apache/flink/pull/8792#discussion_r296435562
 
 

 ##
 File path: 
flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/runtime/join/stream/StreamingSemiAntiJoinOperator.java
 ##
 @@ -0,0 +1,196 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.runtime.join.stream;
+
+import org.apache.flink.streaming.runtime.streamrecord.StreamRecord;
+import org.apache.flink.table.dataformat.BaseRow;
+import org.apache.flink.table.dataformat.util.BaseRowUtil;
+import org.apache.flink.table.generated.GeneratedJoinCondition;
+import org.apache.flink.table.runtime.join.stream.state.JoinInputSideSpec;
+import org.apache.flink.table.runtime.join.stream.state.JoinRecordStateView;
+import org.apache.flink.table.runtime.join.stream.state.JoinRecordStateViews;
+import 
org.apache.flink.table.runtime.join.stream.state.OuterJoinRecordStateView;
+import 
org.apache.flink.table.runtime.join.stream.state.OuterJoinRecordStateViews;
+import org.apache.flink.table.typeutils.BaseRowTypeInfo;
+
+/**
+ * Streaming unbounded Join operator which supports SEMI/ANTI JOIN.
+ */
+public class StreamingSemiAntiJoinOperator extends 
AbstractStreamingJoinOperator {
+
+   private static final long serialVersionUID = -3135772379944924519L;
+
+   // true if it is an anti join, otherwise a semi join
+   private final boolean isAntiJoin;
+
+   // left join state
+   private transient OuterJoinRecordStateView leftRecordStateView;
+   // right join state
+   private transient JoinRecordStateView rightRecordStateView;
+
+   public StreamingSemiAntiJoinOperator(
+   boolean isAntiJoin,
+   BaseRowTypeInfo leftType,
+   BaseRowTypeInfo rightType,
+   GeneratedJoinCondition generatedJoinCondition,
+   JoinInputSideSpec leftInputSideSpec,
+   JoinInputSideSpec rightInputSideSpec,
+   boolean[] filterNullKeys,
+   long minRetentionTime) {
+   super(leftType, rightType, generatedJoinCondition, 
leftInputSideSpec, rightInputSideSpec, filterNullKeys, minRetentionTime);
+   this.isAntiJoin = isAntiJoin;
+   }
+
+   @Override
+   public void open() throws Exception {
+   super.open();
+
+   this.leftRecordStateView = OuterJoinRecordStateViews.create(
+   getRuntimeContext(),
+   LEFT_RECORDS_STATE_NAME,
+   leftInputSideSpec,
+   leftType,
+   minRetentionTime,
+   stateCleaningEnabled);
+
+   this.rightRecordStateView = JoinRecordStateViews.create(
+   getRuntimeContext(),
+   RIGHT_RECORDS_STATE_NAME,
+   rightInputSideSpec,
+   rightType,
+   minRetentionTime,
+   stateCleaningEnabled);
+   }
+
+   /**
+* Process an input element and output incremental joined records, 
retraction messages will
+* be sent in some scenarios.
+*
+* Following is the pseudo code to describe the core logic of this 
method.
+*
+* 
+* if there is no matched rows on the other side
+*   if anti join, send input record
+* if there are matched rows on the other side
+*   if semi join, send input record
+* if the input record is accumulate, state.add(record, matched size)
+* if the input record is retract, state.retract(record)
+* 
+*/
+   @Override
+   public void processElement1(StreamRecord<BaseRow> element) throws 
Exception {
+   BaseRow input = element.getValue();
+   AssociatedRecords associatedRecords = 
AssociatedRecords.of(input, true, rightRecordStateView, joinCondition);
+   if (associatedRecords.isEmpty()) {
+   if