[jira] [Commented] (FLINK-20879) Use MemorySize type instead of String type for memory ConfigOption in ExecutionConfigOptions

2021-01-11 Thread jiawen xiao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17263134#comment-17263134
 ] 

jiawen xiao commented on FLINK-20879:
-

If possible, I would like to try to fix this. Please assign it to me! [~godfreyhe]

> Use MemorySize type instead of String type for memory ConfigOption in 
> ExecutionConfigOptions
> 
>
> Key: FLINK-20879
> URL: https://issues.apache.org/jira/browse/FLINK-20879
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: godfrey he
>Priority: Major
> Fix For: 1.13.0
>
>
> Currently, there are memory ConfigOptions in ExecutionConfigOptions, such as 
> {{table.exec.resource.external-buffer-memory}} and 
> {{table.exec.resource.hash-agg.memory}}. They are all of {{String}} type now. 
> When we need to get the memory size value, the String value must be converted 
> to {{MemorySize}} type before getting the bytes value. The code looks like:
> {code:java}
> val memoryBytes = MemorySize.parse(config.getConfiguration.getString(
>   ExecutionConfigOptions.TABLE_EXEC_RESOURCE_HASH_AGG_MEMORY)).getBytes
> {code}
> The above code can be simplified if we change the {{ConfigOption}} type from 
> {{String}} to {{MemorySize}}. Many runtime {{ConfigOption}}s also use 
> {{MemorySize}} to define memory config. So I suggest we use {{MemorySize}} 
> instead of {{String}} for memory {{ConfigOption}}s in 
> {{ExecutionConfigOptions}}.
> Note: this is an incompatible change.
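For illustration only, here is a minimal, self-contained sketch (plain Java; `MemorySizeSketch` and `parseToBytes` are hypothetical names, not Flink's real `MemorySize` API, which supports more units and formats) of the parse step that every caller of a String-typed memory option has to repeat, and that a typed option would eliminate:

```java
// Hypothetical mini-version of a memory-size parser, for illustration only.
public class MemorySizeSketch {

    // Parse strings like "128 kb" or "64 mb" into a byte count.
    static long parseToBytes(String value) {
        String[] parts = value.trim().toLowerCase().split("\\s+");
        long number = Long.parseLong(parts[0]);
        switch (parts[1]) {
            case "kb": return number * 1024L;
            case "mb": return number * 1024L * 1024L;
            case "gb": return number * 1024L * 1024L * 1024L;
            default: throw new IllegalArgumentException("unknown unit: " + parts[1]);
        }
    }

    public static void main(String[] args) {
        // With a String-typed option, every read site must repeat this parse;
        // a MemorySize-typed option would hand back the value directly.
        long bytes = parseToBytes("128 mb");
        System.out.println(bytes);
    }
}
```

The design point is that the conversion and its error handling live in one place (the option's type) instead of being duplicated at every read site.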



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20777) Default value of property "partition.discovery.interval.ms" is not as documented in new Kafka Source

2020-12-30 Thread jiawen xiao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17256395#comment-17256395
 ] 

jiawen xiao commented on FLINK-20777:
-

OK, I will create a PR for it.

> Default value of property "partition.discovery.interval.ms" is not as 
> documented in new Kafka Source
> 
>
> Key: FLINK-20777
> URL: https://issues.apache.org/jira/browse/FLINK-20777
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Reporter: Qingsheng Ren
>Priority: Major
> Fix For: 1.12.1
>
>
> The default value of property "partition.discovery.interval.ms" is documented 
> as 30 seconds in {{KafkaSourceOptions}}, but it will be set to -1 in 
> {{KafkaSourceBuilder}} if the user doesn't pass this property explicitly. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20026) Jdbc connector support regular expression

2020-12-29 Thread jiawen xiao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17256337#comment-17256337
 ] 

jiawen xiao commented on FLINK-20026:
-

SGTM. Please assign it to me; I would like to try.

> Jdbc connector support regular expression
> -
>
> Key: FLINK-20026
> URL: https://issues.apache.org/jira/browse/FLINK-20026
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / JDBC, Table SQL / Ecosystem
>Affects Versions: 1.11.2
>Reporter: Peihui He
>Priority: Major
> Fix For: 1.13.0
>
>
> When there is a large amount of data, we divide the tables by month.
> [https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/connectors/jdbc.html#table-name]
> So it would be nice to support regular expressions for table-name.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20777) Default value of property "partition.discovery.interval.ms" is not as documented in new Kafka Source

2020-12-29 Thread jiawen xiao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17256334#comment-17256334
 ] 

jiawen xiao commented on FLINK-20777:
-

Hi [~renqs], maybe we need more people's opinions.

Hi [~jark], WDYT?

I'm not sure whether constantly checking Kafka metadata will have a performance 
impact.

> Default value of property "partition.discovery.interval.ms" is not as 
> documented in new Kafka Source
> 
>
> Key: FLINK-20777
> URL: https://issues.apache.org/jira/browse/FLINK-20777
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Reporter: Qingsheng Ren
>Priority: Major
> Fix For: 1.12.1
>
>
> The default value of property "partition.discovery.interval.ms" is documented 
> as 30 seconds in {{KafkaSourceOptions}}, but it will be set to -1 in 
> {{KafkaSourceBuilder}} if the user doesn't pass this property explicitly. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-20810) abstract JdbcSchema structure and derive it from JdbcDialect.

2020-12-29 Thread jiawen xiao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jiawen xiao updated FLINK-20810:

Description: related 
https://issues.apache.org/jira/browse/FLINK-20386?filter=12349963

>  abstract JdbcSchema structure and derive it from JdbcDialect.
> --
>
> Key: FLINK-20810
> URL: https://issues.apache.org/jira/browse/FLINK-20810
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / JDBC
>Affects Versions: 1.11.1, 1.11.2, 1.11.3
> Environment: related 
> https://issues.apache.org/jira/browse/FLINK-20386?filter=12349963
>Reporter: jiawen xiao
>Priority: Major
> Fix For: 1.13.0
>
>
> related https://issues.apache.org/jira/browse/FLINK-20386?filter=12349963



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-20810) abstract JdbcSchema structure and derive it from JdbcDialect.

2020-12-29 Thread jiawen xiao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jiawen xiao updated FLINK-20810:

Environment: (was: related 
https://issues.apache.org/jira/browse/FLINK-20386?filter=12349963)

>  abstract JdbcSchema structure and derive it from JdbcDialect.
> --
>
> Key: FLINK-20810
> URL: https://issues.apache.org/jira/browse/FLINK-20810
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / JDBC
>Affects Versions: 1.11.1, 1.11.2, 1.11.3
>Reporter: jiawen xiao
>Priority: Major
> Fix For: 1.13.0
>
>
> related https://issues.apache.org/jira/browse/FLINK-20386?filter=12349963



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-20810) abstract JdbcSchema structure and derive it from JdbcDialect.

2020-12-29 Thread jiawen xiao (Jira)
jiawen xiao created FLINK-20810:
---

 Summary:  abstract JdbcSchema structure and derive it from 
JdbcDialect.
 Key: FLINK-20810
 URL: https://issues.apache.org/jira/browse/FLINK-20810
 Project: Flink
  Issue Type: Sub-task
  Components: Connectors / JDBC
Affects Versions: 1.11.3, 1.11.2, 1.11.1
 Environment: related 
https://issues.apache.org/jira/browse/FLINK-20386?filter=12349963
Reporter: jiawen xiao
 Fix For: 1.13.0






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20026) Jdbc connector support regular expression

2020-12-29 Thread jiawen xiao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17256243#comment-17256243
 ] 

jiawen xiao commented on FLINK-20026:
-

cc [~jark]. WDYT?

> Jdbc connector support regular expression
> -
>
> Key: FLINK-20026
> URL: https://issues.apache.org/jira/browse/FLINK-20026
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / JDBC
>Affects Versions: 1.11.2
>Reporter: Peihui He
>Priority: Major
> Fix For: 1.13.0
>
>
> When there is a large amount of data, we divide the tables by month.
> [https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/connectors/jdbc.html#table-name]
> So it would be nice to support regular expressions for table-name.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-18654) Correct misleading documentation in "Partitioned Scan" section of JDBC connector

2020-12-29 Thread jiawen xiao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17256241#comment-17256241
 ] 

jiawen xiao commented on FLINK-18654:
-

Hi [~jark], I am happy to solve this problem. I will fix it.

> Correct misleading documentation in "Partitioned Scan" section of JDBC 
> connector
> -
>
> Key: FLINK-18654
> URL: https://issues.apache.org/jira/browse/FLINK-18654
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / JDBC, Documentation, Table SQL / Ecosystem
>Affects Versions: 1.11.0
>Reporter: Jark Wu
>Priority: Major
> Fix For: 1.13.0, 1.11.4
>
>
> In 
> https://ci.apache.org/projects/flink/flink-docs-master/dev/table/connectors/jdbc.html#partitioned-scan
> > Notice that scan.partition.lower-bound and scan.partition.upper-bound are 
> > just used to decide the partition stride, not for filtering the rows in 
> > table. So all rows in the table will be partitioned and returned.
> The "not for filtering the rows in table" is not correct, actually, if 
> partition bounds is defined, it only scans rows in the bound range. 
> Besides, maybe it would be better to add some practice suggestion, for 
> example, 
> "If it is a batch job, I think it also doable to get the max and min value 
> first before submitting the flink job."
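To make the stride semantics concrete, here is a small sketch (plain Java; `splitRange` is a hypothetical helper, not Flink's actual partitioned-scan code) of how a [lowerBound, upperBound] range might be divided into contiguous sub-ranges, one per partition:

```java
public class PartitionedScanSketch {

    // Illustrative only: split [lowerBound, upperBound] into numPartitions
    // contiguous ranges, similar in spirit to how the scan.partition.* options
    // drive parallel JDBC reads. Not Flink's actual implementation.
    static long[][] splitRange(long lowerBound, long upperBound, int numPartitions) {
        long span = upperBound - lowerBound + 1;
        long stride = span / numPartitions;      // base size of each partition
        long remainder = span % numPartitions;   // spread leftover rows over the first partitions
        long[][] ranges = new long[numPartitions][2];
        long start = lowerBound;
        for (int i = 0; i < numPartitions; i++) {
            long size = stride + (i < remainder ? 1 : 0);
            ranges[i][0] = start;           // inclusive start of this partition's range
            ranges[i][1] = start + size - 1; // inclusive end
            start += size;
        }
        return ranges;
    }

    public static void main(String[] args) {
        // Four partitions over ids 0..99 -> 0..24, 25..49, 50..74, 75..99.
        for (long[] r : splitRange(0, 99, 4)) {
            System.out.println(r[0] + " .. " + r[1]);
        }
    }
}
```

Under this reading, rows outside [lowerBound, upperBound] fall outside every sub-range, which is why the bounds also act as a filter.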



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-15352) develop MySQLCatalog to connect Flink with MySQL tables and ecosystem

2020-12-29 Thread jiawen xiao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17256234#comment-17256234
 ] 

jiawen xiao commented on FLINK-15352:
-

Hi [~phoenixjiangnan], I think this is a clearly scoped task and I am very 
interested. Please assign it to me.

> develop MySQLCatalog  to connect Flink with MySQL tables and ecosystem
> --
>
> Key: FLINK-15352
> URL: https://issues.apache.org/jira/browse/FLINK-15352
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / JDBC
>Reporter: Bowen Li
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20777) Default value of property "partition.discovery.interval.ms" is not as documented in new Kafka Source

2020-12-29 Thread jiawen xiao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17255887#comment-17255887
 ] 

jiawen xiao commented on FLINK-20777:
-

Actually, these scenarios depend on your production environment; the feature is 
optional and enabled only when you need it.

You could first experiment to measure the performance impact when it is turned 
on by default.

We need to judge whether this change is necessary.

> Default value of property "partition.discovery.interval.ms" is not as 
> documented in new Kafka Source
> 
>
> Key: FLINK-20777
> URL: https://issues.apache.org/jira/browse/FLINK-20777
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Reporter: Qingsheng Ren
>Priority: Major
> Fix For: 1.12.1
>
>
> The default value of property "partition.discovery.interval.ms" is documented 
> as 30 seconds in {{KafkaSourceOptions}}, but it will be set to -1 in 
> {{KafkaSourceBuilder}} if the user doesn't pass this property explicitly. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20777) Default value of property "partition.discovery.interval.ms" is not as documented in new Kafka Source

2020-12-29 Thread jiawen xiao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17255865#comment-17255865
 ] 

jiawen xiao commented on FLINK-20777:
-

Yes, I understand what you said.

In my opinion, turning off dynamic partition discovery by default is the 
intended design. Under this premise, the default value of the property 
"partition.discovery.interval.ms" is only for reference.

WDYT?

> Default value of property "partition.discovery.interval.ms" is not as 
> documented in new Kafka Source
> 
>
> Key: FLINK-20777
> URL: https://issues.apache.org/jira/browse/FLINK-20777
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Reporter: Qingsheng Ren
>Priority: Major
> Fix For: 1.12.1
>
>
> The default value of property "partition.discovery.interval.ms" is documented 
> as 30 seconds in {{KafkaSourceOptions}}, but it will be set to -1 in 
> {{KafkaSourceBuilder}} if the user doesn't pass this property explicitly. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-20777) Default value of property "partition.discovery.interval.ms" is not as documented in new Kafka Source

2020-12-28 Thread jiawen xiao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17255847#comment-17255847
 ] 

jiawen xiao edited comment on FLINK-20777 at 12/29/20, 7:12 AM:


Sorry [~renqs], I think this is not a problem. You can read the code in 
{{KafkaSourceBuilder}}, where you will find the reason: "If the source is 
bounded, do not run periodic partition discovery."

So it is a check for bounded streams that prevents users from enabling dynamic 
partition discovery in the case of bounded sources.


was (Author: 873925...@qq.com):
sorry [~renqs],i think this is not a problem. your can read code in 
{{KafkaSourceBuilder}} .

The reason you will find "If the source is bounded, do not run periodic 
partition discovery."

so it's a check for bounded streaming which can prevent users from enabling 
dynamic partition discovery in the case of bounded sources.

> Default value of property "partition.discovery.interval.ms" is not as 
> documented in new Kafka Source
> 
>
> Key: FLINK-20777
> URL: https://issues.apache.org/jira/browse/FLINK-20777
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Reporter: Qingsheng Ren
>Priority: Major
> Fix For: 1.12.1
>
>
> The default value of property "partition.discovery.interval.ms" is documented 
> as 30 seconds in {{KafkaSourceOptions}}, but it will be set to -1 in 
> {{KafkaSourceBuilder}} if the user doesn't pass this property explicitly. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20777) Default value of property "partition.discovery.interval.ms" is not as documented in new Kafka Source

2020-12-28 Thread jiawen xiao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17255847#comment-17255847
 ] 

jiawen xiao commented on FLINK-20777:
-

Sorry [~renqs], I think this is not a problem. You can read the code in 
{{KafkaSourceBuilder}}, where you will find the reason: "If the source is 
bounded, do not run periodic partition discovery."

So it is a check for bounded streams that prevents users from enabling dynamic 
partition discovery in the case of bounded sources.
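The behavior under discussion can be sketched roughly as follows (illustrative Java with hypothetical names, not the actual {{KafkaSourceBuilder}} code): when the source is bounded, or the property is left unset, the effective interval falls back to -1, meaning periodic discovery is disabled:

```java
import java.util.Properties;

public class DiscoveryDefaultSketch {
    static final String KEY = "partition.discovery.interval.ms";

    // Illustrative only: mirrors the reported behavior, where an unset
    // property falls back to -1 (discovery disabled) rather than the
    // documented 30 seconds.
    static long effectiveDiscoveryInterval(Properties props, boolean bounded) {
        if (bounded) {
            return -1L; // bounded source: never run periodic partition discovery
        }
        String value = props.getProperty(KEY);
        return value == null ? -1L : Long.parseLong(value);
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        System.out.println(effectiveDiscoveryInterval(props, false)); // unset: disabled
        props.setProperty(KEY, "30000");
        System.out.println(effectiveDiscoveryInterval(props, false)); // explicit: 30000
    }
}
```

This sketch makes the mismatch visible: the documented 30-second default never applies unless the user sets the property themselves.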

> Default value of property "partition.discovery.interval.ms" is not as 
> documented in new Kafka Source
> 
>
> Key: FLINK-20777
> URL: https://issues.apache.org/jira/browse/FLINK-20777
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Reporter: Qingsheng Ren
>Priority: Major
> Fix For: 1.12.1
>
>
> The default value of property "partition.discovery.interval.ms" is documented 
> as 30 seconds in {{KafkaSourceOptions}}, but it will be set to -1 in 
> {{KafkaSourceBuilder}} if the user doesn't pass this property explicitly. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20778) the comment for kafka split offset type is wrong

2020-12-27 Thread jiawen xiao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17255360#comment-17255360
 ] 

jiawen xiao commented on FLINK-20778:
-

[~meijies], please wait; Jark can assign it to you. I think you can fix it directly.

> the comment for kafka split offset type is wrong
> 
>
> Key: FLINK-20778
> URL: https://issues.apache.org/jira/browse/FLINK-20778
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Reporter: Jeremy Mei
>Priority: Major
> Attachments: Screen Capture_select-area_20201228011944.png
>
>
> the current code:
> {code:java}
>   // Indicating the split should consume from the earliest.
>   public static final long LATEST_OFFSET = -1;
>   // Indicating the split should consume from the latest.
>   public static final long EARLIEST_OFFSET = -2;
> {code}
> should be adjusted as below:
> {code:java}
>   // Indicating the split should consume from the latest.
>   public static final long LATEST_OFFSET = -1;
>   // Indicating the split should consume from the earliest.
>   public static final long EARLIEST_OFFSET = -2;
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20778) the comment for kafka split offset type is wrong

2020-12-27 Thread jiawen xiao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17255358#comment-17255358
 ] 

jiawen xiao commented on FLINK-20778:
-

cc [~jark]

> the comment for kafka split offset type is wrong
> 
>
> Key: FLINK-20778
> URL: https://issues.apache.org/jira/browse/FLINK-20778
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Reporter: Jeremy Mei
>Priority: Major
> Attachments: Screen Capture_select-area_20201228011944.png
>
>
> the current code:
> {code:java}
>   // Indicating the split should consume from the earliest.
>   public static final long LATEST_OFFSET = -1;
>   // Indicating the split should consume from the latest.
>   public static final long EARLIEST_OFFSET = -2;
> {code}
> should be adjusted as below:
> {code:java}
>   // Indicating the split should consume from the latest.
>   public static final long LATEST_OFFSET = -1;
>   // Indicating the split should consume from the earliest.
>   public static final long EARLIEST_OFFSET = -2;
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20777) Default value of property "partition.discovery.interval.ms" is not as documented in new Kafka Source

2020-12-27 Thread jiawen xiao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17255350#comment-17255350
 ] 

jiawen xiao commented on FLINK-20777:
-

Hi [~renqs], thanks for raising this question. In which version did you find it?

> Default value of property "partition.discovery.interval.ms" is not as 
> documented in new Kafka Source
> 
>
> Key: FLINK-20777
> URL: https://issues.apache.org/jira/browse/FLINK-20777
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Reporter: Qingsheng Ren
>Priority: Major
> Fix For: 1.12.1
>
>
> The default value of property "partition.discovery.interval.ms" is documented 
> as 30 seconds in {{KafkaSourceOptions}}, but it will be set to -1 in 
> {{KafkaSourceBuilder}} if the user doesn't pass this property explicitly. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20778) the comment for kafka split offset type is wrong

2020-12-27 Thread jiawen xiao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17255349#comment-17255349
 ] 

jiawen xiao commented on FLINK-20778:
-

[~meijies], you are right. Could you create a PR to fix it?

> the comment for kafka split offset type is wrong
> 
>
> Key: FLINK-20778
> URL: https://issues.apache.org/jira/browse/FLINK-20778
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Reporter: Jeremy Mei
>Priority: Major
> Attachments: Screen Capture_select-area_20201228011944.png
>
>
> the current code:
> {code:java}
>   // Indicating the split should consume from the earliest.
>   public static final long LATEST_OFFSET = -1;
>   // Indicating the split should consume from the latest.
>   public static final long EARLIEST_OFFSET = -2;
> {code}
> should be adjusted as below:
> {code:java}
>   // Indicating the split should consume from the latest.
>   public static final long LATEST_OFFSET = -1;
>   // Indicating the split should consume from the earliest.
>   public static final long EARLIEST_OFFSET = -2;
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20739) Ban `if` from HiveModule

2020-12-22 Thread jiawen xiao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17253917#comment-17253917
 ] 

jiawen xiao commented on FLINK-20739:
-

Do other modules have this problem? [~hackergin]

> Ban  `if`  from HiveModule
> --
>
> Key: FLINK-20739
> URL: https://issues.apache.org/jira/browse/FLINK-20739
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Hive
>Affects Versions: 1.10.0
>Reporter: jinfeng
>Priority: Major
>
> When using hiveModule, the if function is treated as a normal function.
>  If I have a SQL like this: 
>   
> {code:java}
>  insert into Sink select  if(size(split(`test`, '-')) > 1, split(`test`, 
> '-')[2], 'error') from Source   {code}
>  
>  It will throw an ArrayIndexOutOfBoundsException in Flink 1.10, because 
> size(split(`test`, '-')) > 1, split(`test`, '-')[2], and 'error' will all be 
> evaluated first, and only then the if function is evaluated.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Issue Comment Deleted] (FLINK-20739) Ban `if` from HiveModule

2020-12-22 Thread jiawen xiao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jiawen xiao updated FLINK-20739:

Comment: was deleted

(was: in my opinion,[2] stand for size is not less than 3.

can you try ? 
 size(split(`test`, '-')) > 2)

> Ban  `if`  from HiveModule
> --
>
> Key: FLINK-20739
> URL: https://issues.apache.org/jira/browse/FLINK-20739
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Hive
>Affects Versions: 1.10.0
>Reporter: jinfeng
>Priority: Major
>
> When using hiveModule, the if function is treated as a normal function.
>  If I have a SQL like this: 
>   
> {code:java}
>  insert into Sink select  if(size(split(`test`, '-')) > 1, split(`test`, 
> '-')[2], 'error') from Source   {code}
>  
>  It will throw an ArrayIndexOutOfBoundsException in Flink 1.10, because 
> size(split(`test`, '-')) > 1, split(`test`, '-')[2], and 'error' will all be 
> evaluated first, and only then the if function is evaluated.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20739) Ban `if` from HiveModule

2020-12-22 Thread jiawen xiao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17253916#comment-17253916
 ] 

jiawen xiao commented on FLINK-20739:
-

In my opinion, [2] means the array size must be at least 3.

Can you try size(split(`test`, '-')) > 2?
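The underlying failure mode, where every argument of a plain function is evaluated before the branch is chosen, can be shown in plain Java (hypothetical names; this is not Hive or Flink code, just an analogue of treating `if` as an ordinary function):

```java
public class EagerIfSketch {

    // A plain function receives already-evaluated arguments, like Hive's `if`
    // when it is treated as an ordinary UDF instead of a short-circuit branch.
    static String plainIf(boolean cond, String thenValue, String elseValue) {
        return cond ? thenValue : elseValue;
    }

    static String eagerExtract(String input) {
        String[] parts = input.split("-");
        // parts[2] is evaluated BEFORE plainIf runs, so this throws
        // ArrayIndexOutOfBoundsException whenever input has fewer than 3 parts.
        return plainIf(parts.length > 1, parts[2], "error");
    }

    static String guardedExtract(String input) {
        String[] parts = input.split("-");
        // The ternary operator short-circuits: parts[2] is only touched when
        // it exists, and the guard checks length > 2, not > 1.
        return parts.length > 2 ? parts[2] : "error";
    }

    public static void main(String[] args) {
        System.out.println(guardedExtract("a-b-c")); // c
        System.out.println(guardedExtract("abc"));   // error
        try {
            eagerExtract("abc");
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("eager argument evaluation threw, as in the report");
        }
    }
}
```

Both the stronger guard and true short-circuit evaluation avoid the exception; banning the eagerly evaluated `if` forces callers onto the short-circuit path.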

> Ban  `if`  from HiveModule
> --
>
> Key: FLINK-20739
> URL: https://issues.apache.org/jira/browse/FLINK-20739
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Hive
>Affects Versions: 1.10.0
>Reporter: jinfeng
>Priority: Major
>
> When using hiveModule, the if function is treated as a normal function.
>  If I have a SQL like this: 
>   
> {code:java}
>  insert into Sink select  if(size(split(`test`, '-')) > 1, split(`test`, 
> '-')[2], 'error') from Source   {code}
>  
>  It will throw an ArrayIndexOutOfBoundsException in Flink 1.10, because 
> size(split(`test`, '-')) > 1, split(`test`, '-')[2], and 'error' will all be 
> evaluated first, and only then the if function is evaluated.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-20739) Ban `if` from HiveModule

2020-12-22 Thread jiawen xiao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17253916#comment-17253916
 ] 

jiawen xiao edited comment on FLINK-20739 at 12/23/20, 6:49 AM:


In my opinion, [2] means the array size must be at least 3.

Can you try size(split(`test`, '-')) > 2?


was (Author: 873925...@qq.com):
in my opinion,[2] stand for size is not less than 3.

can you try 
size(split(`test`, '-')) > 2
?

> Ban  `if`  from HiveModule
> --
>
> Key: FLINK-20739
> URL: https://issues.apache.org/jira/browse/FLINK-20739
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Hive
>Affects Versions: 1.10.0
>Reporter: jinfeng
>Priority: Major
>
> When using hiveModule, the if function is treated as a normal function.
>  If I have a SQL like this: 
>   
> {code:java}
>  insert into Sink select  if(size(split(`test`, '-')) > 1, split(`test`, 
> '-')[2], 'error') from Source   {code}
>  
>  It will throw an ArrayIndexOutOfBoundsException in Flink 1.10, because 
> size(split(`test`, '-')) > 1, split(`test`, '-')[2], and 'error' will all be 
> evaluated first, and only then the if function is evaluated.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20697) Correct the Type and Default value of "lookup.cache.ttl" in jdbc.md/jdbc.zh.md

2020-12-21 Thread jiawen xiao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17252839#comment-17252839
 ] 

jiawen xiao commented on FLINK-20697:
-

Fine, I will fix it.

> Correct  the Type and Default value of "lookup.cache.ttl" in 
> jdbc.md/jdbc.zh.md
> ---
>
> Key: FLINK-20697
> URL: https://issues.apache.org/jira/browse/FLINK-20697
> Project: Flink
>  Issue Type: Wish
>  Components: Connectors / JDBC, Table SQL / Ecosystem
>Affects Versions: 1.11.1
>Reporter: jiawen xiao
>Priority: Major
> Fix For: 1.11.1
>
>
> In the 
> [https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/connectors/jdbc.html#lookup-cache-ttl]
>  doc, we can see that the type and default value of 
> "[lookup-cache-ttl|https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/connectors/jdbc.html#lookup-cache-ttl]";
>  are wrong.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20697) Correct the Type and Default value of "lookup.cache.ttl" in jdbc.md/jdbc.zh.md

2020-12-21 Thread jiawen xiao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17252780#comment-17252780
 ] 

jiawen xiao commented on FLINK-20697:
-

cc [~jark]

> Correct  the Type and Default value of "lookup.cache.ttl" in 
> jdbc.md/jdbc.zh.md
> ---
>
> Key: FLINK-20697
> URL: https://issues.apache.org/jira/browse/FLINK-20697
> Project: Flink
>  Issue Type: Wish
>  Components: Connectors / JDBC
>Affects Versions: 1.11.1
>Reporter: jiawen xiao
>Priority: Major
> Fix For: 1.11.1
>
>
> In the 
> [https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/connectors/jdbc.html#lookup-cache-ttl]
>  doc, we can see that the type and default value of 
> "[lookup-cache-ttl|https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/connectors/jdbc.html#lookup-cache-ttl]";
>  are wrong.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-20697) Correct the Type and Default value of "lookup.cache.ttl" in jdbc.md/jdbc.zh.md

2020-12-21 Thread jiawen xiao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jiawen xiao updated FLINK-20697:

Summary: Correct  the Type and Default value of "lookup.cache.ttl" in 
jdbc.md/jdbc.zh.md  (was: Correct  "lookup.cache.ttl"  of Type and Default in 
jdbc.md/jdbc.zh.md)

> Correct  the Type and Default value of "lookup.cache.ttl" in 
> jdbc.md/jdbc.zh.md
> ---
>
> Key: FLINK-20697
> URL: https://issues.apache.org/jira/browse/FLINK-20697
> Project: Flink
>  Issue Type: Wish
>  Components: Connectors / JDBC
>Affects Versions: 1.11.1
>Reporter: jiawen xiao
>Priority: Major
> Fix For: 1.11.1
>
>
> In the 
> [https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/connectors/jdbc.html#lookup-cache-ttl]
>  doc, we can see that the type and default value of 
> "[lookup-cache-ttl|https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/connectors/jdbc.html#lookup-cache-ttl]";
>  are wrong.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-20697) Correct "lookup.cache.ttl" of Type and Default in jdbc.md/jdbc.zh.md

2020-12-21 Thread jiawen xiao (Jira)
jiawen xiao created FLINK-20697:
---

 Summary: Correct  "lookup.cache.ttl"  of Type and Default in 
jdbc.md/jdbc.zh.md
 Key: FLINK-20697
 URL: https://issues.apache.org/jira/browse/FLINK-20697
 Project: Flink
  Issue Type: Wish
  Components: Connectors / JDBC
Affects Versions: 1.11.1
Reporter: jiawen xiao
 Fix For: 1.11.1


In the 
[https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/connectors/jdbc.html#lookup-cache-ttl]
 doc, we can see that the type and default value of 
"[lookup-cache-ttl|https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/connectors/jdbc.html#lookup-cache-ttl]";
 are wrong.

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-16325) A connection check is required, and it needs to be reopened when the JDBC connection is interrupted

2020-12-15 Thread jiawen xiao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17250102#comment-17250102
 ] 

jiawen xiao commented on FLINK-16325:
-

cc [~jark]

>  A connection check is required, and it needs to be reopened when the JDBC 
> connection is interrupted
> 
>
> Key: FLINK-16325
> URL: https://issues.apache.org/jira/browse/FLINK-16325
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / JDBC
>Affects Versions: 1.10.0
>Reporter: renjianxu
>Priority: Minor
>
> JDBCOutputFormat#writeRecord.
> When writing data, if the JDBC connection has been disconnected, the data 
> will be lost. Therefore, a connectivity check is required in the 
> writeRecord method.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-19691) Expose `CONNECTION_CHECK_TIMEOUT_SECONDS` as a configurable option in Jdbc connector

2020-12-15 Thread jiawen xiao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17249569#comment-17249569
 ] 

jiawen xiao commented on FLINK-19691:
-

[~hailong wang], please take a look at my code. Thanks!

> Expose `CONNECTION_CHECK_TIMEOUT_SECONDS` as a configurable option in Jdbc 
> connector
> 
>
> Key: FLINK-19691
> URL: https://issues.apache.org/jira/browse/FLINK-19691
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / JDBC
>Affects Versions: 1.11.0
>Reporter: hailong wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0
>
>
> The Jdbc connector can check whether the connection is valid, and the timeout 
> is 60 seconds [FLINK-16681].
> But the fixed 60-second timeout is sometimes too long to wait. In some 
> scenarios, I just want to wait 5 seconds, fail fast, and restart if the 
> connection is still invalid.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-19691) Expose `CONNECTION_CHECK_TIMEOUT_SECONDS` as a configurable option in Jdbc connector

2020-12-15 Thread jiawen xiao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17249549#comment-17249549
 ] 

jiawen xiao commented on FLINK-19691:
-

That is a good idea, I will try it.
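A minimal sketch of what exposing the timeout could look like, assuming a hypothetical option key (`connection.check-timeout`) rather than the actual Flink option name; `Connection.isValid(int)` is the standard JDBC validity check the connector relies on:

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.util.Properties;

public class ConnectionCheck {
    // 60 seconds is the fixed timeout introduced in FLINK-16681.
    static final int DEFAULT_TIMEOUT_SECONDS = 60;

    // "connection.check-timeout" is a hypothetical key used only for this
    // sketch; the real option name in Flink may differ.
    static int readTimeoutSeconds(Properties options) {
        return Integer.parseInt(options.getProperty(
                "connection.check-timeout", String.valueOf(DEFAULT_TIMEOUT_SECONDS)));
    }

    // Connection.isValid(int) pings the server and gives up after the
    // supplied timeout in seconds.
    static boolean isConnectionValid(Connection connection, int timeoutSeconds)
            throws SQLException {
        return connection != null && connection.isValid(timeoutSeconds);
    }
}
```

With a user-supplied value of 5, a dead connection would be detected after 5 seconds instead of the hard-coded 60, enabling the fast-fail-and-restart behavior the issue asks for.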

> Expose `CONNECTION_CHECK_TIMEOUT_SECONDS` as a configurable option in Jdbc 
> connector
> 
>
> Key: FLINK-19691
> URL: https://issues.apache.org/jira/browse/FLINK-19691
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / JDBC
>Affects Versions: 1.11.0
>Reporter: hailong wang
>Priority: Major
> Fix For: 1.13.0
>
>
> Jdbc connector can check whether the connection is valid, and the timeout is 
> 60 second [FLINK-16681].
> But the fixed timeout of 60 second sometimes too long to wait.  In some 
> scenes,  I just want to wait for 5 second and fast failed and restarting if 
> it is still invalid.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-19905) The Jdbc-connector's 'lookup.max-retries' option initial value is 1 in JdbcLookupFunction

2020-12-13 Thread jiawen xiao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17248785#comment-17248785
 ] 

jiawen xiao commented on FLINK-19905:
-

Hi [~jark], I will contribute a bug fix. This is a valuable experience for me to understand the PR process; I will create a PR now.
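The off-by-one described in the issue can be illustrated with a counting sketch (this is not the actual JdbcLookupFunction code): with a loop condition of `retry <= maxRetries`, starting the counter at 1 instead of 0 silently drops one attempt.

```java
public class RetryCount {
    // Counts how many lookup attempts a loop of the form
    // `for (int retry = start; retry <= maxRetries; retry++)` performs,
    // assuming every attempt fails and triggers another retry.
    static int totalAttempts(int start, int maxRetries) {
        int attempts = 0;
        for (int retry = start; retry <= maxRetries; retry++) {
            attempts++;
        }
        return attempts;
    }
}
```

Starting at 0 yields the initial attempt plus `maxRetries` retries; starting at 1 loses one of them, which is the inconsistency FLINK-19684 already corrected in JdbcRowDataLookupFunction and JdbcBatchingOutputFormat.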

> The Jdbc-connector's 'lookup.max-retries' option initial value is 1 in 
> JdbcLookupFunction
> -
>
> Key: FLINK-19905
> URL: https://issues.apache.org/jira/browse/FLINK-19905
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: 1.11.2, 1.11.3
>Reporter: dalongliu
>Priority: Minor
> Fix For: 1.13.0
>
>
> As describe in FLINK-19684, we should init the begin value to 0 in 
> JdbcLookupFunction. The PR only correct the retry value to 0 in 
> JdbcRowDataLookupFunction and JdbcBatchingOutputFormat class, maybe the 
> contributor forgets to correct it in JdbcLookupFunction class.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20386) ClassCastException when lookup join a JDBC table on INT UNSIGNED column

2020-12-13 Thread jiawen xiao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17248758#comment-17248758
 ] 

jiawen xiao commented on FLINK-20386:
-

[~jark], when will the subtasks for this question be created?

> ClassCastException when lookup join a JDBC table on INT UNSIGNED column
> ---
>
> Key: FLINK-20386
> URL: https://issues.apache.org/jira/browse/FLINK-20386
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC, Table SQL / Ecosystem
>Affects Versions: 1.12.0, 1.11.2
>Reporter: Jark Wu
>Assignee: jiawen xiao
>Priority: Major
>  Time Spent: 336h
>  Remaining Estimate: 0h
>
> The primary key of the MySQL is an INT UNSIGNED column, but declared INT in 
> Flink. 
> I know the 
> [docs|https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/connectors/jdbc.html#data-type-mapping]
>  say it should be decalred BIGINT in Flink, however, would be better not fail 
> the job. 
> At least, the exception is hard to understand for users. We can also check 
> the schema before start the job. 
> {code}
> java.lang.ClassCastException: java.lang.Long cannot be cast to 
> java.lang.Integer
>   at 
> org.apache.flink.table.data.GenericRowData.getInt(GenericRowData.java:149) 
> ~[flink-table-blink_2.11-1.11-vvr-2.1.1-SNAPSHOT.jar:1.11-vvr-2.1.1-SNAPSHOT]
>   at JoinTableFuncCollector$6460.collect(Unknown Source) ~[?:?]
>   at 
> org.apache.flink.table.functions.TableFunction.collect(TableFunction.java:203)
>  ~[flink-table-blink_2.11-1.11-vvr-2.1.1-SNAPSHOT.jar:1.11-vvr-2.1.1-SNAPSHOT]
>   at 
> org.apache.flink.connector.jdbc.table.JdbcRowDataLookupFunction.eval(JdbcRowDataLookupFunction.java:162)
>  ~[?:?]
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20386) ClassCastException when lookup join a JDBC table on INT UNSIGNED column

2020-12-11 Thread jiawen xiao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17247800#comment-17247800
 ] 

jiawen xiao commented on FLINK-20386:
-

This is work whose necessity needs to be weighed; waiting for the community to decide.

> ClassCastException when lookup join a JDBC table on INT UNSIGNED column
> ---
>
> Key: FLINK-20386
> URL: https://issues.apache.org/jira/browse/FLINK-20386
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC, Table SQL / Ecosystem
>Affects Versions: 1.12.0, 1.11.2
>Reporter: Jark Wu
>Assignee: jiawen xiao
>Priority: Major
>  Time Spent: 336h
>  Remaining Estimate: 0h
>
> The primary key of the MySQL is an INT UNSIGNED column, but declared INT in 
> Flink. 
> I know the 
> [docs|https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/connectors/jdbc.html#data-type-mapping]
>  say it should be decalred BIGINT in Flink, however, would be better not fail 
> the job. 
> At least, the exception is hard to understand for users. We can also check 
> the schema before start the job. 
> {code}
> java.lang.ClassCastException: java.lang.Long cannot be cast to 
> java.lang.Integer
>   at 
> org.apache.flink.table.data.GenericRowData.getInt(GenericRowData.java:149) 
> ~[flink-table-blink_2.11-1.11-vvr-2.1.1-SNAPSHOT.jar:1.11-vvr-2.1.1-SNAPSHOT]
>   at JoinTableFuncCollector$6460.collect(Unknown Source) ~[?:?]
>   at 
> org.apache.flink.table.functions.TableFunction.collect(TableFunction.java:203)
>  ~[flink-table-blink_2.11-1.11-vvr-2.1.1-SNAPSHOT.jar:1.11-vvr-2.1.1-SNAPSHOT]
>   at 
> org.apache.flink.connector.jdbc.table.JdbcRowDataLookupFunction.eval(JdbcRowDataLookupFunction.java:162)
>  ~[?:?]
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-20386) ClassCastException when lookup join a JDBC table on INT UNSIGNED column

2020-12-11 Thread jiawen xiao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17247790#comment-17247790
 ] 

jiawen xiao edited comment on FLINK-20386 at 12/11/20, 9:40 AM:


[~jark]

_Sharing my approach here, which involves a large amount of work:_

_We can fetch the JDBC/MySQL schema via DatabaseMetaData and map it to Flink data types at the same time, which can then be validated against the source or sink table schema. It would throw ValidationException("Field types of mysql-schema result and registered TableSource do not match")._

_This idea comes from TableSinkUtils.validateSchemaAndApplyImplicitCast (release 1.11)._

_Jark, what do you think?_

 


was (Author: 873925...@qq.com):
[~jark]

_share my method here which is a large_ _amount_ __ _of_ __ _work:_

_we can get jdbc/mysql scheme by DatabaseMetaData and  map to  flink datatype 
at same time which can be validated with  source table schema or sink table 
schema. it will throw ValidationException("Field types of mysql-schema result 
and registered TableSource do not match")  ._

_This idea comes from TableSinkUtils.validateSchemaAndApplyImplicitCast 
(release 1.11)_

_What do you think,jark_

 

> ClassCastException when lookup join a JDBC table on INT UNSIGNED column
> ---
>
> Key: FLINK-20386
> URL: https://issues.apache.org/jira/browse/FLINK-20386
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC, Table SQL / Ecosystem
>Affects Versions: 1.12.0, 1.11.2
>Reporter: Jark Wu
>Assignee: jiawen xiao
>Priority: Major
>  Time Spent: 336h
>  Remaining Estimate: 0h
>
> The primary key of the MySQL is an INT UNSIGNED column, but declared INT in 
> Flink. 
> I know the 
> [docs|https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/connectors/jdbc.html#data-type-mapping]
>  say it should be decalred BIGINT in Flink, however, would be better not fail 
> the job. 
> At least, the exception is hard to understand for users. We can also check 
> the schema before start the job. 
> {code}
> java.lang.ClassCastException: java.lang.Long cannot be cast to 
> java.lang.Integer
>   at 
> org.apache.flink.table.data.GenericRowData.getInt(GenericRowData.java:149) 
> ~[flink-table-blink_2.11-1.11-vvr-2.1.1-SNAPSHOT.jar:1.11-vvr-2.1.1-SNAPSHOT]
>   at JoinTableFuncCollector$6460.collect(Unknown Source) ~[?:?]
>   at 
> org.apache.flink.table.functions.TableFunction.collect(TableFunction.java:203)
>  ~[flink-table-blink_2.11-1.11-vvr-2.1.1-SNAPSHOT.jar:1.11-vvr-2.1.1-SNAPSHOT]
>   at 
> org.apache.flink.connector.jdbc.table.JdbcRowDataLookupFunction.eval(JdbcRowDataLookupFunction.java:162)
>  ~[?:?]
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20386) ClassCastException when lookup join a JDBC table on INT UNSIGNED column

2020-12-11 Thread jiawen xiao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17247790#comment-17247790
 ] 

jiawen xiao commented on FLINK-20386:
-

[~jark]

_Sharing my approach here, which involves a large amount of work:_

_We can fetch the JDBC/MySQL schema via DatabaseMetaData and map it to Flink data types at the same time, which can then be validated against the source or sink table schema. It would throw ValidationException("Field types of mysql-schema result and registered TableSource do not match")._

_This idea comes from TableSinkUtils.validateSchemaAndApplyImplicitCast (release 1.11)._

_What do you think, Jark?_
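The validation idea above can be sketched as a plain type-mapping check. This is a sketch under stated assumptions: the mapping table below is a hypothetical subset of the connector docs' MySQL-to-Flink mapping, and the real implementation would read column types via DatabaseMetaData.getColumns rather than take them as strings.

```java
import java.util.Map;

public class MysqlSchemaValidator {
    // Hypothetical subset of the MySQL-to-Flink type mapping; the real
    // mapping in the JDBC connector docs covers many more types.
    static final Map<String, String> MYSQL_TO_FLINK = Map.of(
            "INT", "INT",
            "INT UNSIGNED", "BIGINT",
            "BIGINT", "BIGINT",
            "BIGINT UNSIGNED", "DECIMAL(20, 0)");

    // Fails at validation time when the declared Flink type disagrees with
    // what the MySQL column type requires, instead of failing later at
    // runtime with a ClassCastException.
    static void validate(String mysqlType, String declaredFlinkType) {
        String expected = MYSQL_TO_FLINK.get(mysqlType);
        if (expected != null && !expected.equals(declaredFlinkType)) {
            throw new IllegalStateException(
                    "Field types of mysql-schema result and registered TableSource"
                            + " do not match: expected " + expected
                            + " but was " + declaredFlinkType);
        }
    }
}
```

For example, `validate("INT UNSIGNED", "INT")` would fail before the job starts, while `validate("INT UNSIGNED", "BIGINT")` passes.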

 

> ClassCastException when lookup join a JDBC table on INT UNSIGNED column
> ---
>
> Key: FLINK-20386
> URL: https://issues.apache.org/jira/browse/FLINK-20386
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC, Table SQL / Ecosystem
>Affects Versions: 1.12.0, 1.11.2
>Reporter: Jark Wu
>Assignee: jiawen xiao
>Priority: Major
>  Time Spent: 336h
>  Remaining Estimate: 0h
>
> The primary key of the MySQL is an INT UNSIGNED column, but declared INT in 
> Flink. 
> I know the 
> [docs|https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/connectors/jdbc.html#data-type-mapping]
>  say it should be decalred BIGINT in Flink, however, would be better not fail 
> the job. 
> At least, the exception is hard to understand for users. We can also check 
> the schema before start the job. 
> {code}
> java.lang.ClassCastException: java.lang.Long cannot be cast to 
> java.lang.Integer
>   at 
> org.apache.flink.table.data.GenericRowData.getInt(GenericRowData.java:149) 
> ~[flink-table-blink_2.11-1.11-vvr-2.1.1-SNAPSHOT.jar:1.11-vvr-2.1.1-SNAPSHOT]
>   at JoinTableFuncCollector$6460.collect(Unknown Source) ~[?:?]
>   at 
> org.apache.flink.table.functions.TableFunction.collect(TableFunction.java:203)
>  ~[flink-table-blink_2.11-1.11-vvr-2.1.1-SNAPSHOT.jar:1.11-vvr-2.1.1-SNAPSHOT]
>   at 
> org.apache.flink.connector.jdbc.table.JdbcRowDataLookupFunction.eval(JdbcRowDataLookupFunction.java:162)
>  ~[?:?]
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20386) ClassCastException when lookup join a JDBC table on INT UNSIGNED column

2020-12-10 Thread jiawen xiao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17247596#comment-17247596
 ] 

jiawen xiao commented on FLINK-20386:
-

I will create a PR to fix this.

> ClassCastException when lookup join a JDBC table on INT UNSIGNED column
> ---
>
> Key: FLINK-20386
> URL: https://issues.apache.org/jira/browse/FLINK-20386
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC, Table SQL / Ecosystem
>Affects Versions: 1.12.0, 1.11.2
>Reporter: Jark Wu
>Assignee: jiawen xiao
>Priority: Major
>
> The primary key of the MySQL is an INT UNSIGNED column, but declared INT in 
> Flink. 
> I know the 
> [docs|https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/connectors/jdbc.html#data-type-mapping]
>  say it should be decalred BIGINT in Flink, however, would be better not fail 
> the job. 
> At least, the exception is hard to understand for users. We can also check 
> the schema before start the job. 
> {code}
> java.lang.ClassCastException: java.lang.Long cannot be cast to 
> java.lang.Integer
>   at 
> org.apache.flink.table.data.GenericRowData.getInt(GenericRowData.java:149) 
> ~[flink-table-blink_2.11-1.11-vvr-2.1.1-SNAPSHOT.jar:1.11-vvr-2.1.1-SNAPSHOT]
>   at JoinTableFuncCollector$6460.collect(Unknown Source) ~[?:?]
>   at 
> org.apache.flink.table.functions.TableFunction.collect(TableFunction.java:203)
>  ~[flink-table-blink_2.11-1.11-vvr-2.1.1-SNAPSHOT.jar:1.11-vvr-2.1.1-SNAPSHOT]
>   at 
> org.apache.flink.connector.jdbc.table.JdbcRowDataLookupFunction.eval(JdbcRowDataLookupFunction.java:162)
>  ~[?:?]
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20386) ClassCastException when lookup join a JDBC table on INT UNSIGNED column

2020-12-10 Thread jiawen xiao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17247149#comment-17247149
 ] 

jiawen xiao commented on FLINK-20386:
-

Also, when sinking, the DDL schema is checked, so the problem is reported as a schema error instead of a cast exception. I think the INT UNSIGNED problem should likewise be reported as a schema field error; we can also check the schema before starting the job.
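The cast failure itself is easy to reproduce outside Flink. MySQL's INT UNSIGNED range (0 to 4294967295) exceeds `Integer.MAX_VALUE`, so the JDBC driver hands the value back as a `java.lang.Long`, and reading it through an INT-typed accessor fails exactly as in the stack trace above. A minimal sketch, not Flink code:

```java
public class UnsignedIntCast {
    // Simulates a JDBC driver returning a MySQL INT UNSIGNED value:
    // 4294967295 does not fit in a Java int, so it arrives boxed as a Long.
    static Object fromJdbc() {
        return 4294967295L;
    }

    // Mirrors an INT-declared field being unboxed as Integer
    // (the GenericRowData.getInt line in the stack trace).
    static int readAsInt(Object field) {
        return (Integer) field; // ClassCastException: Long cannot be cast to Integer
    }

    // Mirrors the documented fix: declare the column BIGINT in Flink.
    static long readAsBigint(Object field) {
        return (Long) field;
    }
}
```

Declaring the column BIGINT makes the runtime read it as a Long and the failure disappears, which is why the docs recommend that mapping.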

> ClassCastException when lookup join a JDBC table on INT UNSIGNED column
> ---
>
> Key: FLINK-20386
> URL: https://issues.apache.org/jira/browse/FLINK-20386
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC, Table SQL / Ecosystem
>Affects Versions: 1.12.0, 1.11.2
>Reporter: Jark Wu
>Assignee: jiawen xiao
>Priority: Major
>
> The primary key of the MySQL is an INT UNSIGNED column, but declared INT in 
> Flink. 
> I know the 
> [docs|https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/connectors/jdbc.html#data-type-mapping]
>  say it should be decalred BIGINT in Flink, however, would be better not fail 
> the job. 
> At least, the exception is hard to understand for users. We can also check 
> the schema before start the job. 
> {code}
> java.lang.ClassCastException: java.lang.Long cannot be cast to 
> java.lang.Integer
>   at 
> org.apache.flink.table.data.GenericRowData.getInt(GenericRowData.java:149) 
> ~[flink-table-blink_2.11-1.11-vvr-2.1.1-SNAPSHOT.jar:1.11-vvr-2.1.1-SNAPSHOT]
>   at JoinTableFuncCollector$6460.collect(Unknown Source) ~[?:?]
>   at 
> org.apache.flink.table.functions.TableFunction.collect(TableFunction.java:203)
>  ~[flink-table-blink_2.11-1.11-vvr-2.1.1-SNAPSHOT.jar:1.11-vvr-2.1.1-SNAPSHOT]
>   at 
> org.apache.flink.connector.jdbc.table.JdbcRowDataLookupFunction.eval(JdbcRowDataLookupFunction.java:162)
>  ~[?:?]
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20386) ClassCastException when lookup join a JDBC table on INT UNSIGNED column

2020-12-10 Thread jiawen xiao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17247146#comment-17247146
 ] 

jiawen xiao commented on FLINK-20386:
-

[~jark] I ran UnsignedTypeConversionITCase.testUnsignedType with part of the source schema changed in release 1.11, and I found that a cast exception is thrown only when the type's value range exceeds the declared Flink type.

e.g. JDBC TINYINT UNSIGNED with Flink TINYINT will not throw a cast exception.

> ClassCastException when lookup join a JDBC table on INT UNSIGNED column
> ---
>
> Key: FLINK-20386
> URL: https://issues.apache.org/jira/browse/FLINK-20386
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC, Table SQL / Ecosystem
>Affects Versions: 1.12.0, 1.11.2
>Reporter: Jark Wu
>Assignee: jiawen xiao
>Priority: Major
>
> The primary key of the MySQL is an INT UNSIGNED column, but declared INT in 
> Flink. 
> I know the 
> [docs|https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/connectors/jdbc.html#data-type-mapping]
>  say it should be decalred BIGINT in Flink, however, would be better not fail 
> the job. 
> At least, the exception is hard to understand for users. We can also check 
> the schema before start the job. 
> {code}
> java.lang.ClassCastException: java.lang.Long cannot be cast to 
> java.lang.Integer
>   at 
> org.apache.flink.table.data.GenericRowData.getInt(GenericRowData.java:149) 
> ~[flink-table-blink_2.11-1.11-vvr-2.1.1-SNAPSHOT.jar:1.11-vvr-2.1.1-SNAPSHOT]
>   at JoinTableFuncCollector$6460.collect(Unknown Source) ~[?:?]
>   at 
> org.apache.flink.table.functions.TableFunction.collect(TableFunction.java:203)
>  ~[flink-table-blink_2.11-1.11-vvr-2.1.1-SNAPSHOT.jar:1.11-vvr-2.1.1-SNAPSHOT]
>   at 
> org.apache.flink.connector.jdbc.table.JdbcRowDataLookupFunction.eval(JdbcRowDataLookupFunction.java:162)
>  ~[?:?]
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20386) ClassCastException when lookup join a JDBC table on INT UNSIGNED column

2020-12-09 Thread jiawen xiao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17247069#comment-17247069
 ] 

jiawen xiao commented on FLINK-20386:
-

OK, I will run a test for the other unsigned types. Looking forward to the latest progress.

> ClassCastException when lookup join a JDBC table on INT UNSIGNED column
> ---
>
> Key: FLINK-20386
> URL: https://issues.apache.org/jira/browse/FLINK-20386
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC, Table SQL / Ecosystem
>Affects Versions: 1.12.0, 1.11.2
>Reporter: Jark Wu
>Assignee: jiawen xiao
>Priority: Major
>
> The primary key of the MySQL is an INT UNSIGNED column, but declared INT in 
> Flink. 
> I know the 
> [docs|https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/connectors/jdbc.html#data-type-mapping]
>  say it should be decalred BIGINT in Flink, however, would be better not fail 
> the job. 
> At least, the exception is hard to understand for users. We can also check 
> the schema before start the job. 
> {code}
> java.lang.ClassCastException: java.lang.Long cannot be cast to 
> java.lang.Integer
>   at 
> org.apache.flink.table.data.GenericRowData.getInt(GenericRowData.java:149) 
> ~[flink-table-blink_2.11-1.11-vvr-2.1.1-SNAPSHOT.jar:1.11-vvr-2.1.1-SNAPSHOT]
>   at JoinTableFuncCollector$6460.collect(Unknown Source) ~[?:?]
>   at 
> org.apache.flink.table.functions.TableFunction.collect(TableFunction.java:203)
>  ~[flink-table-blink_2.11-1.11-vvr-2.1.1-SNAPSHOT.jar:1.11-vvr-2.1.1-SNAPSHOT]
>   at 
> org.apache.flink.connector.jdbc.table.JdbcRowDataLookupFunction.eval(JdbcRowDataLookupFunction.java:162)
>  ~[?:?]
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20481) java.lang.ClassCastException: java.lang.Long cannot be cast to java.lang.Integer

2020-12-04 Thread jiawen xiao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17243874#comment-17243874
 ] 

jiawen xiao commented on FLINK-20481:
-

This issue is the same as FLINK-20386. I do not know whether there is already a good solution.

> java.lang.ClassCastException: java.lang.Long cannot be cast to 
> java.lang.Integer
> 
>
> Key: FLINK-20481
> URL: https://issues.apache.org/jira/browse/FLINK-20481
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC, Table SQL / Ecosystem
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Priority: Major
> Attachments: image-2020-12-04-15-24-08-732.png, 
> image-2020-12-04-15-26-49-307.png
>
>
> MySQL table sql :
> {code:java}
> DROP TABLE IF EXISTS `yarn_app_logs_count`;
> CREATE TABLE `yarn_app_logs_count` (  `id` int(11) unsigned NOT NULL 
> AUTO_INCREMENT,  `app_id` varchar(50) DEFAULT NULL,  `count` bigint(11) 
> DEFAULT NULL,  PRIMARY KEY (`id`)) ENGINE=InnoDB DEFAULT CHARSET=utf8;
> INSERT INTO `yarn_app_logs_count` (`id`, `app_id`, `count`)VALUES  
> (1,'application_1575453055442_3188',2);
> {code}
> Flink SQL DDL and SQL:
> {code:java}
> CREATE TABLE yarn_app_logs_count (
>   id INT,
>   app_id STRING,
>   `count` BIGINT
> ) WITH (
>'connector' = 'jdbc',
>'url' = 'jdbc:mysql://localhost:3306/zhisheng',
>'table-name' = 'yarn_app_logs_count',
>'username' = 'root',
>'password' = '123456'
> );
> select * from yarn_app_logs_count;
> {code}
>  
> if id type is INT, it has an exception:
> {code:java}
> 2020-12-04 15:15:23org.apache.flink.runtime.JobException: Recovery is 
> suppressed by NoRestartBackoffTimeStrategyat 
> org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:116)
> at 
> org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:78)
> at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:224)
> at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.maybeHandleTaskFailure(DefaultScheduler.java:217)
> at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.updateTaskExecutionStateInternal(DefaultScheduler.java:208)
> at 
> org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:533)
> at 
> org.apache.flink.runtime.scheduler.SchedulerNG.updateTaskExecutionState(SchedulerNG.java:89)
> at 
> org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:419)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
>    at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:286)
> at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:201)
> at 
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
> at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:154)
> at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26)at 
> akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21)at 
> scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)at 
> akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21)at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170)at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)at 
> akka.actor.Actor$class.aroundReceive(Actor.scala:517)at 
> akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225)at 
> akka.actor.ActorCell.receiveMessage(ActorCell.scala:592)at 
> akka.actor.ActorCell.invoke(ActorCell.scala:561)at 
> akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)at 
> akka.dispatch.Mailbox.run(Mailbox.scala:225)at 
> akka.dispatch.Mailbox.exec(Mailbox.scala:235)at 
> akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)at 
> akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339) 
>    at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)   
>  at 
> akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)Caused
>  by: java.lang.ClassCastException: java.lang.Long cannot be cast to 
> java.lang.Integerat 
> org.apache.flink.table.data.GenericRowData.ge

[jira] [Commented] (FLINK-20386) ClassCastException when lookup join a JDBC table on INT UNSIGNED column

2020-12-04 Thread jiawen xiao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17243872#comment-17243872
 ] 

jiawen xiao commented on FLINK-20386:
-

Hi [~jark], this problem is the same as FLINK-20481. Does this issue need to be discussed before optimization? I am interested in this question.

> ClassCastException when lookup join a JDBC table on INT UNSIGNED column
> ---
>
> Key: FLINK-20386
> URL: https://issues.apache.org/jira/browse/FLINK-20386
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC, Table SQL / Ecosystem
>Affects Versions: 1.12.0, 1.11.2
>Reporter: Jark Wu
>Priority: Major
>
> The primary key of the MySQL is an INT UNSIGNED column, but declared INT in 
> Flink. 
> I know the 
> [docs|https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/connectors/jdbc.html#data-type-mapping]
>  say it should be decalred BIGINT in Flink, however, would be better not fail 
> the job. 
> At least, the exception is hard to understand for users. We can also check 
> the schema before start the job. 
> {code}
> java.lang.ClassCastException: java.lang.Long cannot be cast to 
> java.lang.Integer
>   at 
> org.apache.flink.table.data.GenericRowData.getInt(GenericRowData.java:149) 
> ~[flink-table-blink_2.11-1.11-vvr-2.1.1-SNAPSHOT.jar:1.11-vvr-2.1.1-SNAPSHOT]
>   at JoinTableFuncCollector$6460.collect(Unknown Source) ~[?:?]
>   at 
> org.apache.flink.table.functions.TableFunction.collect(TableFunction.java:203)
>  ~[flink-table-blink_2.11-1.11-vvr-2.1.1-SNAPSHOT.jar:1.11-vvr-2.1.1-SNAPSHOT]
>   at 
> org.apache.flink.connector.jdbc.table.JdbcRowDataLookupFunction.eval(JdbcRowDataLookupFunction.java:162)
>  ~[?:?]
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20481) java.lang.ClassCastException: java.lang.Long cannot be cast to java.lang.Integer

2020-12-04 Thread jiawen xiao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17243865#comment-17243865
 ] 

jiawen xiao commented on FLINK-20481:
-

Hi [~zhisheng], does this only happen in 1.12.0?

> java.lang.ClassCastException: java.lang.Long cannot be cast to 
> java.lang.Integer
> 
>
> Key: FLINK-20481
> URL: https://issues.apache.org/jira/browse/FLINK-20481
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC, Table SQL / Ecosystem
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Priority: Major
> Attachments: image-2020-12-04-15-24-08-732.png, 
> image-2020-12-04-15-26-49-307.png
>
>
> MySQL table sql :
> {code:java}
> DROP TABLE IF EXISTS `yarn_app_logs_count`;
> CREATE TABLE `yarn_app_logs_count` (  `id` int(11) unsigned NOT NULL 
> AUTO_INCREMENT,  `app_id` varchar(50) DEFAULT NULL,  `count` bigint(11) 
> DEFAULT NULL,  PRIMARY KEY (`id`)) ENGINE=InnoDB DEFAULT CHARSET=utf8;
> INSERT INTO `yarn_app_logs_count` (`id`, `app_id`, `count`)VALUES  
> (1,'application_1575453055442_3188',2);
> {code}
> Flink SQL DDL and SQL:
> {code:java}
> CREATE TABLE yarn_app_logs_count (
>   id INT,
>   app_id STRING,
>   `count` BIGINT
> ) WITH (
>'connector' = 'jdbc',
>'url' = 'jdbc:mysql://localhost:3306/zhisheng',
>'table-name' = 'yarn_app_logs_count',
>'username' = 'root',
>'password' = '123456'
> );
> select * from yarn_app_logs_count;
> {code}
>  
> if id type is INT, it has an exception:
> {code:java}
> 2020-12-04 15:15:23org.apache.flink.runtime.JobException: Recovery is 
> suppressed by NoRestartBackoffTimeStrategyat 
> org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:116)
> at 
> org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:78)
> at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:224)
> at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.maybeHandleTaskFailure(DefaultScheduler.java:217)
> at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.updateTaskExecutionStateInternal(DefaultScheduler.java:208)
> at 
> org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:533)
> at 
> org.apache.flink.runtime.scheduler.SchedulerNG.updateTaskExecutionState(SchedulerNG.java:89)
> at 
> org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:419)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
>    at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:286)
> at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:201)
> at 
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
> at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:154)
> at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26)at 
> akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21)at 
> scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)at 
> akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21)at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170)at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)at 
> akka.actor.Actor$class.aroundReceive(Actor.scala:517)at 
> akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225)at 
> akka.actor.ActorCell.receiveMessage(ActorCell.scala:592)at 
> akka.actor.ActorCell.invoke(ActorCell.scala:561)at 
> akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)at 
> akka.dispatch.Mailbox.run(Mailbox.scala:225)at 
> akka.dispatch.Mailbox.exec(Mailbox.scala:235)at 
> akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)at 
> akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339) 
>    at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)   
>  at 
> akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)Caused
>  by: java.lang.ClassCastException: java.lang.Long cannot be cast to 
> java.lang.Integerat 
> org.apache.flink.table.data.GenericRowData.getInt(GenericRowData.java:149)
> at 
> or

[jira] [Commented] (FLINK-18402) NullPointerException in org.apache.flink.api.common.typeutils.base.GenericArraySerializer.copy(GenericArraySerializer.java:96)

2020-11-27 Thread jiawen xiao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17239566#comment-17239566
 ] 

jiawen xiao commented on FLINK-18402:
-

Hi [~zhoupeijie], could you give me a simple piece of code that reproduces this problem? It would be great if there were sample data as well.
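While waiting for a reproducer, the shape of the failure can be sketched generically: a per-element deep copy that dereferences each element throws a NullPointerException as soon as the array contains a null, which matches the GenericArraySerializer.copy line in the trace. This is an illustration under that assumption, not Flink's actual serializer code:

```java
public class ArrayCopySketch {
    // Deep copy that assumes every element is non-null; a null element
    // produces an NPE like the one in the report.
    static String[] copyAssumingNonNull(String[] from) {
        String[] to = new String[from.length];
        for (int i = 0; i < from.length; i++) {
            to[i] = new String(from[i]); // NPE when from[i] == null
        }
        return to;
    }

    // Null-safe variant: skip the element copy for null entries.
    static String[] copyNullSafe(String[] from) {
        String[] to = new String[from.length];
        for (int i = 0; i < from.length; i++) {
            to[i] = from[i] == null ? null : new String(from[i]);
        }
        return to;
    }
}
```

That would also explain why switching the element type changed the behavior for the reporter: a different type can pick a different serializer with different null handling.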

> NullPointerException in 
> org.apache.flink.api.common.typeutils.base.GenericArraySerializer.copy(GenericArraySerializer.java:96)
> --
>
> Key: FLINK-18402
> URL: https://issues.apache.org/jira/browse/FLINK-18402
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.10.0
> Environment: flink 1.10
>Reporter: zhoupeijie
>Priority: Minor
>
> I use array as follows:
> {code:java}
> DataStream<Tuple2<Boolean, CompanyBrandEntity>> ds = 
> tEnv.toRetractStream(companyBrandSource, CompanyBrandEntity.class);
> SingleOutputStreamOperator middleData = ds.map(new 
> RuleMapFunction(ruleList))
>  .filter(Objects::nonNull);
> {code}
> and I get this error:
> {code:java}
> java.lang.NullPointerException
>  at 
> org.apache.flink.api.common.typeutils.base.GenericArraySerializer.copy(GenericArraySerializer.java:96)
>  at 
> org.apache.flink.api.common.typeutils.base.GenericArraySerializer.copy(GenericArraySerializer.java:37)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:639)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:616)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:596)
>  at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:730)
>  at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:708)
>  at 
> org.apache.flink.streaming.api.operators.StreamMap.processElement(StreamMap.java:41)
> {code}
> {color:#111f2c}then I change the SingleOutputStreamOperator to 
> SingleOutputStreamOperator,it begins to run correctly.{color}
>  
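The NPE in the report comes from copying an array whose elements can be null. A minimal sketch (hypothetical names, not Flink's actual GenericArraySerializer) of a per-element copy loop and the null guard that avoids the failure:

```java
import java.util.Arrays;

// Sketch of a GenericArraySerializer-style deep copy of an object array.
public class ArrayCopySketch {
    // stand-in for a per-element TypeSerializer#copy; calling it with
    // a null element is what triggers the NullPointerException
    static String copyElement(String element) {
        return new String(element);
    }

    static String[] copyArray(String[] from) {
        String[] target = new String[from.length];
        for (int i = 0; i < from.length; i++) {
            String val = from[i];
            // guard nulls instead of passing them to the element copy
            target[i] = (val == null) ? null : copyElement(val);
        }
        return target;
    }

    public static void main(String[] args) {
        String[] in = {"a", null, "c"};
        System.out.println(Arrays.toString(copyArray(in)));
    }
}
```

Without the null check, the first null element would raise exactly the kind of NullPointerException shown in the stack trace above.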



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-19775) SystemProcessingTimeServiceTest.testImmediateShutdown is instable

2020-11-22 Thread jiawen xiao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17237057#comment-17237057
 ] 

jiawen xiao commented on FLINK-19775:
-

Okay, that sounds good

> SystemProcessingTimeServiceTest.testImmediateShutdown is instable
> -
>
> Key: FLINK-19775
> URL: https://issues.apache.org/jira/browse/FLINK-19775
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.11.0
>Reporter: Dian Fu
>Assignee: Till Rohrmann
>Priority: Major
>  Labels: pull-request-available, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8131&view=logs&j=d89de3df-4600-5585-dadc-9bbc9a5e661c&t=66b5c59a-0094-561d-0e44-b149dfdd586d
> {code}
> 2020-10-22T21:12:54.9462382Z [ERROR] 
> testImmediateShutdown(org.apache.flink.streaming.runtime.tasks.SystemProcessingTimeServiceTest)
>   Time elapsed: 0.009 s  <<< ERROR!
> 2020-10-22T21:12:54.9463024Z java.lang.InterruptedException
> 2020-10-22T21:12:54.9463331Z  at java.lang.Object.wait(Native Method)
> 2020-10-22T21:12:54.9463766Z  at java.lang.Object.wait(Object.java:502)
> 2020-10-22T21:12:54.9464140Z  at 
> org.apache.flink.core.testutils.OneShotLatch.await(OneShotLatch.java:63)
> 2020-10-22T21:12:54.9466014Z  at 
> org.apache.flink.streaming.runtime.tasks.SystemProcessingTimeServiceTest.testImmediateShutdown(SystemProcessingTimeServiceTest.java:154)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-19775) SystemProcessingTimeServiceTest.testImmediateShutdown is instable

2020-11-20 Thread jiawen xiao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17236026#comment-17236026
 ] 

jiawen xiao edited comment on FLINK-19775 at 11/20/20, 9:41 AM:


[@tillrohrmann|https://github.com/tillrohrmann] [~dian.fu], maybe I have found 
the reason for the instability. It depends on when thread A, the one calling 
await(), re-acquires the lock object's monitor. The lock.wait() call inside 
await() puts thread A into a blocked state. At that point trigger() does its 
work on thread B. Suppose thread B has just executed lock.notifyAll(): thread A 
is awakened, but it may not be granted the monitor immediately and stays 
blocked for a while. If thread A's interrupt flag is set during that window, 
wait() throws InterruptedException. I can think of two solutions. First: 
thread B clears thread A's interrupt flag before calling lock.notifyAll(). 
This guarantees that thread A does not throw InterruptedException after being 
woken, but how to restore the interrupt flag correctly for different threads 
still needs consideration. Second: catch the exception in thread A and restore 
the thread's interrupt flag there; this compiles and works, but it does not 
address where the interrupt comes from.
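The thread A / thread B interaction described above can be sketched in plain Java (a simplified stand-in, not Flink's actual OneShotLatch): thread A blocks in wait() inside a triggered-flag loop, and thread B sets the flag and notifies under the same monitor.

```java
// Minimal one-shot latch sketch illustrating the wait/notifyAll handoff.
public class OneShotLatchSketch {
    private final Object lock = new Object();
    private volatile boolean triggered = false;

    // runs on thread B
    public void trigger() {
        synchronized (lock) {
            triggered = true;
            lock.notifyAll(); // wakes thread A; A must still re-acquire the monitor
        }
    }

    // runs on thread A
    public void await() throws InterruptedException {
        synchronized (lock) {
            while (!triggered) {
                // if A's interrupt flag is set while it is parked here,
                // wait() throws InterruptedException even after notifyAll()
                lock.wait();
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        OneShotLatchSketch latch = new OneShotLatchSketch();
        new Thread(latch::trigger).start();
        latch.await(); // returns once trigger() has run
        System.out.println("latch released");
    }
}
```

The race is visible in the gap between lock.notifyAll() returning on thread B and thread A actually re-entering the monitor: an interrupt delivered in that gap surfaces as InterruptedException from wait().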


was (Author: 873925...@qq.com):
[@tillrohrmann,|https://github.com/tillrohrmann] [~dian.fu] [ 
|https://github.com/tillrohrmann] ,Maybe I found the reason for the 
instability. It depends on when the thread A calling the await() method 
acquires the lock of the lock object in the jvm lock pool. According to the 
lock.wait() code in the await() method, we will know that it will cause the 
current thread A to enter the blocking state. At this point, trigger() will 
perform its work in the thread B to which it belongs. Suppose that when thread 
B executes to lock.notifyAll(), thread A will be awakened, but the lock of the 
lock object is not allocated in time and is still blocked. As long as the 
interrupt flag of thread A is true, InterruptedException will be thrown. So I 
thought of two solutions. The first point: Thread B changes the interrupt flag 
of thread A to false before executing lock.notifyAll(). This method can ensure 
that the A thread will not throw InterruptedException after being awakened, but 
how to restore the thread interruption flag for different threads still needs 
to be considered. The second point: Use catch code to catch the exception in 
the A thread, it will automatically restore the thread interruption flag, and 
make the compilation pass. But the second method cannot solve the problem of 
exception throwing.

> SystemProcessingTimeServiceTest.testImmediateShutdown is instable
> -
>
> Key: FLINK-19775
> URL: https://issues.apache.org/jira/browse/FLINK-19775
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.11.0
>Reporter: Dian Fu
>Assignee: jiawen xiao
>Priority: Major
>  Labels: pull-request-available, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8131&view=logs&j=d89de3df-4600-5585-dadc-9bbc9a5e661c&t=66b5c59a-0094-561d-0e44-b149dfdd586d
> {code}
> 2020-10-22T21:12:54.9462382Z [ERROR] 
> testImmediateShutdown(org.apache.flink.streaming.runtime.tasks.SystemProcessingTimeServiceTest)
>   Time elapsed: 0.009 s  <<< ERROR!
> 2020-10-22T21:12:54.9463024Z java.lang.InterruptedException
> 2020-10-22T21:12:54.9463331Z  at java.lang.Object.wait(Native Method)
> 2020-10-22T21:12:54.9463766Z  at java.lang.Object.wait(Object.java:502)
> 2020-10-22T21:12:54.9464140Z  at 
> org.apache.flink.core.testutils.OneShotLatch.await(OneShotLatch.java:63)
> 2020-10-22T21:12:54.9466014Z  at 
> org.apache.flink.streaming.runtime.tasks.SystemProcessingTimeServiceTest.testImmediateShutdown(SystemProcessingTimeServiceTest.java:154)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-19775) SystemProcessingTimeServiceTest.testImmediateShutdown is instable

2020-11-20 Thread jiawen xiao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17236026#comment-17236026
 ] 

jiawen xiao commented on FLINK-19775:
-

[@tillrohrmann,|https://github.com/tillrohrmann] [~dian.fu] [ 
|https://github.com/tillrohrmann] ,Maybe I found the reason for the 
instability. It depends on when the thread A calling the await() method 
acquires the lock of the lock object in the jvm lock pool. According to the 
lock.wait() code in the await() method, we will know that it will cause the 
current thread A to enter the blocking state. At this point, trigger() will 
perform its work in the thread B to which it belongs. Suppose that when thread 
B executes to lock.notifyAll(), thread A will be awakened, but the lock of the 
lock object is not allocated in time and is still blocked. As long as the 
interrupt flag of thread A is true, InterruptedException will be thrown. So I 
thought of two solutions. The first point: Thread B changes the interrupt flag 
of thread A to false before executing lock.notifyAll(). This method can ensure 
that the A thread will not throw InterruptedException after being awakened, but 
how to restore the thread interruption flag for different threads still needs 
to be considered. The second point: Use catch code to catch the exception in 
the A thread, it will automatically restore the thread interruption flag, and 
make the compilation pass. But the second method cannot solve the problem of 
exception throwing.

> SystemProcessingTimeServiceTest.testImmediateShutdown is instable
> -
>
> Key: FLINK-19775
> URL: https://issues.apache.org/jira/browse/FLINK-19775
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.11.0
>Reporter: Dian Fu
>Assignee: jiawen xiao
>Priority: Major
>  Labels: pull-request-available, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8131&view=logs&j=d89de3df-4600-5585-dadc-9bbc9a5e661c&t=66b5c59a-0094-561d-0e44-b149dfdd586d
> {code}
> 2020-10-22T21:12:54.9462382Z [ERROR] 
> testImmediateShutdown(org.apache.flink.streaming.runtime.tasks.SystemProcessingTimeServiceTest)
>   Time elapsed: 0.009 s  <<< ERROR!
> 2020-10-22T21:12:54.9463024Z java.lang.InterruptedException
> 2020-10-22T21:12:54.9463331Z  at java.lang.Object.wait(Native Method)
> 2020-10-22T21:12:54.9463766Z  at java.lang.Object.wait(Object.java:502)
> 2020-10-22T21:12:54.9464140Z  at 
> org.apache.flink.core.testutils.OneShotLatch.await(OneShotLatch.java:63)
> 2020-10-22T21:12:54.9466014Z  at 
> org.apache.flink.streaming.runtime.tasks.SystemProcessingTimeServiceTest.testImmediateShutdown(SystemProcessingTimeServiceTest.java:154)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Issue Comment Deleted] (FLINK-19775) SystemProcessingTimeServiceTest.testImmediateShutdown is instable

2020-11-20 Thread jiawen xiao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jiawen xiao updated FLINK-19775:

Comment: was deleted

(was: hi, [~dian.fu],[~trohrmann],After my recent research,this question is 
similar to https://issues.apache.org/jira/browse/FLINK-6571

I summarized everyone's description and my own thoughts. First of all, this is 
a type of problem, and there will be instability problems. Perhaps it is time 
to find the reason why the main thread is interrupted by other threads. 
Secondly, make an assumption that when the main thread is lock.wait(), the 
child thread where latch.trigger() is located is not scheduled immediately by 
the cpu, so triggered=false is unchanged. If the main thread is interrupted by 
other threads, it will cause a hot loop problem in the await() method and 
continuously throw InterruptedException exceptions. Finally, I think that 
simple catch exceptions cannot solve the instability problem. We should change 
the loop flag to false while catching the exception. Do you have any 
suggestions?

 )

> SystemProcessingTimeServiceTest.testImmediateShutdown is instable
> -
>
> Key: FLINK-19775
> URL: https://issues.apache.org/jira/browse/FLINK-19775
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.11.0
>Reporter: Dian Fu
>Assignee: jiawen xiao
>Priority: Major
>  Labels: pull-request-available, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8131&view=logs&j=d89de3df-4600-5585-dadc-9bbc9a5e661c&t=66b5c59a-0094-561d-0e44-b149dfdd586d
> {code}
> 2020-10-22T21:12:54.9462382Z [ERROR] 
> testImmediateShutdown(org.apache.flink.streaming.runtime.tasks.SystemProcessingTimeServiceTest)
>   Time elapsed: 0.009 s  <<< ERROR!
> 2020-10-22T21:12:54.9463024Z java.lang.InterruptedException
> 2020-10-22T21:12:54.9463331Z  at java.lang.Object.wait(Native Method)
> 2020-10-22T21:12:54.9463766Z  at java.lang.Object.wait(Object.java:502)
> 2020-10-22T21:12:54.9464140Z  at 
> org.apache.flink.core.testutils.OneShotLatch.await(OneShotLatch.java:63)
> 2020-10-22T21:12:54.9466014Z  at 
> org.apache.flink.streaming.runtime.tasks.SystemProcessingTimeServiceTest.testImmediateShutdown(SystemProcessingTimeServiceTest.java:154)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-19775) SystemProcessingTimeServiceTest.testImmediateShutdown is instable

2020-11-19 Thread jiawen xiao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17235861#comment-17235861
 ] 

jiawen xiao commented on FLINK-19775:
-

hi [~dian.fu], [~trohrmann], after my recent research, this question looks 
similar to https://issues.apache.org/jira/browse/FLINK-6571

I summarized everyone's descriptions and my own thoughts. First, this is a 
timing-dependent problem, which is why the test is unstable; we should find 
out why the main thread is interrupted by other threads. Second, assume that 
while the main thread is in lock.wait(), the child thread running 
latch.trigger() is not scheduled immediately by the CPU, so triggered stays 
false. If the main thread is then interrupted by another thread, the await() 
method can enter a hot loop and keep throwing InterruptedException. Finally, I 
think simply catching the exception cannot fix the instability; we should also 
set the loop flag to false when catching the exception. Do you have any 
suggestions?
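The "catch the exception, restore the interrupt flag, and leave the loop" approach discussed above can be sketched as follows (an illustrative stand-in, not the actual Flink patch): breaking out of the while loop on interrupt is what prevents the hot loop.

```java
// Sketch of an await that restores the interrupt flag instead of
// propagating InterruptedException, and exits the loop on interrupt.
public class InterruptHandlingSketch {
    private final Object lock = new Object();
    private boolean triggered = false;

    public void trigger() {
        synchronized (lock) {
            triggered = true;
            lock.notifyAll();
        }
    }

    /** Returns true if triggered, false if the wait was interrupted. */
    public boolean awaitQuietly() {
        synchronized (lock) {
            while (!triggered) {
                try {
                    lock.wait();
                } catch (InterruptedException e) {
                    // restore the flag for callers, then stop looping;
                    // returning here is what avoids the hot loop
                    Thread.currentThread().interrupt();
                    return false;
                }
            }
            return true;
        }
    }
}
```

Note that Object.wait() throws InterruptedException immediately if the thread's interrupt flag is already set on entry, so without the early return the loop would re-enter wait() and throw again on every iteration.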

 

> SystemProcessingTimeServiceTest.testImmediateShutdown is instable
> -
>
> Key: FLINK-19775
> URL: https://issues.apache.org/jira/browse/FLINK-19775
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.11.0
>Reporter: Dian Fu
>Assignee: jiawen xiao
>Priority: Major
>  Labels: pull-request-available, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8131&view=logs&j=d89de3df-4600-5585-dadc-9bbc9a5e661c&t=66b5c59a-0094-561d-0e44-b149dfdd586d
> {code}
> 2020-10-22T21:12:54.9462382Z [ERROR] 
> testImmediateShutdown(org.apache.flink.streaming.runtime.tasks.SystemProcessingTimeServiceTest)
>   Time elapsed: 0.009 s  <<< ERROR!
> 2020-10-22T21:12:54.9463024Z java.lang.InterruptedException
> 2020-10-22T21:12:54.9463331Z  at java.lang.Object.wait(Native Method)
> 2020-10-22T21:12:54.9463766Z  at java.lang.Object.wait(Object.java:502)
> 2020-10-22T21:12:54.9464140Z  at 
> org.apache.flink.core.testutils.OneShotLatch.await(OneShotLatch.java:63)
> 2020-10-22T21:12:54.9466014Z  at 
> org.apache.flink.streaming.runtime.tasks.SystemProcessingTimeServiceTest.testImmediateShutdown(SystemProcessingTimeServiceTest.java:154)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-19775) SystemProcessingTimeServiceTest.testImmediateShutdown is instable

2020-11-19 Thread jiawen xiao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17235438#comment-17235438
 ] 

jiawen xiao commented on FLINK-19775:
-

[~dian.fu], is there a way to reproduce it locally?

> SystemProcessingTimeServiceTest.testImmediateShutdown is instable
> -
>
> Key: FLINK-19775
> URL: https://issues.apache.org/jira/browse/FLINK-19775
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.11.0
>Reporter: Dian Fu
>Assignee: jiawen xiao
>Priority: Major
>  Labels: pull-request-available, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8131&view=logs&j=d89de3df-4600-5585-dadc-9bbc9a5e661c&t=66b5c59a-0094-561d-0e44-b149dfdd586d
> {code}
> 2020-10-22T21:12:54.9462382Z [ERROR] 
> testImmediateShutdown(org.apache.flink.streaming.runtime.tasks.SystemProcessingTimeServiceTest)
>   Time elapsed: 0.009 s  <<< ERROR!
> 2020-10-22T21:12:54.9463024Z java.lang.InterruptedException
> 2020-10-22T21:12:54.9463331Z  at java.lang.Object.wait(Native Method)
> 2020-10-22T21:12:54.9463766Z  at java.lang.Object.wait(Object.java:502)
> 2020-10-22T21:12:54.9464140Z  at 
> org.apache.flink.core.testutils.OneShotLatch.await(OneShotLatch.java:63)
> 2020-10-22T21:12:54.9466014Z  at 
> org.apache.flink.streaming.runtime.tasks.SystemProcessingTimeServiceTest.testImmediateShutdown(SystemProcessingTimeServiceTest.java:154)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-19775) SystemProcessingTimeServiceTest.testImmediateShutdown is instable

2020-11-15 Thread jiawen xiao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17232536#comment-17232536
 ] 

jiawen xiao commented on FLINK-19775:
-

Hi [~dian.fu], looking forward to your follow-up on this PR: 
https://github.com/apache/flink/pull/14076

> SystemProcessingTimeServiceTest.testImmediateShutdown is instable
> -
>
> Key: FLINK-19775
> URL: https://issues.apache.org/jira/browse/FLINK-19775
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.11.0
>Reporter: Dian Fu
>Assignee: jiawen xiao
>Priority: Major
>  Labels: pull-request-available, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8131&view=logs&j=d89de3df-4600-5585-dadc-9bbc9a5e661c&t=66b5c59a-0094-561d-0e44-b149dfdd586d
> {code}
> 2020-10-22T21:12:54.9462382Z [ERROR] 
> testImmediateShutdown(org.apache.flink.streaming.runtime.tasks.SystemProcessingTimeServiceTest)
>   Time elapsed: 0.009 s  <<< ERROR!
> 2020-10-22T21:12:54.9463024Z java.lang.InterruptedException
> 2020-10-22T21:12:54.9463331Z  at java.lang.Object.wait(Native Method)
> 2020-10-22T21:12:54.9463766Z  at java.lang.Object.wait(Object.java:502)
> 2020-10-22T21:12:54.9464140Z  at 
> org.apache.flink.core.testutils.OneShotLatch.await(OneShotLatch.java:63)
> 2020-10-22T21:12:54.9466014Z  at 
> org.apache.flink.streaming.runtime.tasks.SystemProcessingTimeServiceTest.testImmediateShutdown(SystemProcessingTimeServiceTest.java:154)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-19775) SystemProcessingTimeServiceTest.testImmediateShutdown is instable

2020-11-13 Thread jiawen xiao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17231340#comment-17231340
 ] 

jiawen xiao commented on FLINK-19775:
-

Hi [~dian.fu], I'm interested in this issue. Could you assign it to me so I 
can fix it?

> SystemProcessingTimeServiceTest.testImmediateShutdown is instable
> -
>
> Key: FLINK-19775
> URL: https://issues.apache.org/jira/browse/FLINK-19775
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.11.0
>Reporter: Dian Fu
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8131&view=logs&j=d89de3df-4600-5585-dadc-9bbc9a5e661c&t=66b5c59a-0094-561d-0e44-b149dfdd586d
> {code}
> 2020-10-22T21:12:54.9462382Z [ERROR] 
> testImmediateShutdown(org.apache.flink.streaming.runtime.tasks.SystemProcessingTimeServiceTest)
>   Time elapsed: 0.009 s  <<< ERROR!
> 2020-10-22T21:12:54.9463024Z java.lang.InterruptedException
> 2020-10-22T21:12:54.9463331Z  at java.lang.Object.wait(Native Method)
> 2020-10-22T21:12:54.9463766Z  at java.lang.Object.wait(Object.java:502)
> 2020-10-22T21:12:54.9464140Z  at 
> org.apache.flink.core.testutils.OneShotLatch.await(OneShotLatch.java:63)
> 2020-10-22T21:12:54.9466014Z  at 
> org.apache.flink.streaming.runtime.tasks.SystemProcessingTimeServiceTest.testImmediateShutdown(SystemProcessingTimeServiceTest.java:154)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-19213) Update the Chinese documentation

2020-10-27 Thread jiawen xiao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17221893#comment-17221893
 ] 

jiawen xiao commented on FLINK-19213:
-

Hi [~jark], I have finished my translation. Could you help review my code? It 
may take some of your time. Thank you.

> Update the Chinese documentation
> 
>
> Key: FLINK-19213
> URL: https://issues.apache.org/jira/browse/FLINK-19213
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Dawid Wysakowicz
>Assignee: jiawen xiao
>Priority: Trivial
>  Labels: pull-request-available
>  Time Spent: 168h
>
> We should update the Chinese documentation with the changes introduced in 
> FLINK-18802



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-19813) Translate "index SQL Connector" page into Chinese

2020-10-26 Thread jiawen xiao (Jira)
jiawen xiao created FLINK-19813:
---

 Summary: Translate "index SQL Connector" page into Chinese
 Key: FLINK-19813
 URL: https://issues.apache.org/jira/browse/FLINK-19813
 Project: Flink
  Issue Type: Sub-task
  Components: chinese-translation, Documentation, Table SQL / Ecosystem
Reporter: jiawen xiao


The page url is 
https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/table/connectors/

The markdown file is located in flink/docs/dev/table/connectors/index.zh.md



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-19813) Translate "index SQL Connector" page into Chinese

2020-10-26 Thread jiawen xiao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17220695#comment-17220695
 ] 

jiawen xiao commented on FLINK-19813:
-

Hi [~jark], I'm interested in this issue. Could you assign it to me so I can 
translate this page into Chinese?

> Translate "index SQL Connector" page into Chinese
> -
>
> Key: FLINK-19813
> URL: https://issues.apache.org/jira/browse/FLINK-19813
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation, Table SQL / Ecosystem
>Reporter: jiawen xiao
>Priority: Major
>
> The page url is 
> https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/table/connectors/
> The markdown file is located in flink/docs/dev/table/connectors/index.zh.md



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-19213) Update the Chinese documentation

2020-10-26 Thread jiawen xiao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17220585#comment-17220585
 ] 

jiawen xiao commented on FLINK-19213:
-

Hi [~dwysakowicz], I will translate the formats document avro-confluent.md 
into avro-confluent.zh.md.

> Update the Chinese documentation
> 
>
> Key: FLINK-19213
> URL: https://issues.apache.org/jira/browse/FLINK-19213
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Dawid Wysakowicz
>Assignee: jiawen xiao
>Priority: Trivial
>  Time Spent: 168h
>
> We should update the Chinese documentation with the changes introduced in 
> FLINK-18802



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-19808) Translate "Hive Read & Write" page of "Hive Integration" into Chinese

2020-10-26 Thread jiawen xiao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17220576#comment-17220576
 ] 

jiawen xiao commented on FLINK-19808:
-

Hi [~jark], I'm interested in this issue. Could you assign it to me so I can 
translate this page into Chinese?

(After studying your course, I want to try it. Thank you.)

> Translate "Hive Read & Write" page of "Hive Integration" into Chinese
> -
>
> Key: FLINK-19808
> URL: https://issues.apache.org/jira/browse/FLINK-19808
> Project: Flink
>  Issue Type: Task
>  Components: chinese-translation, Documentation
>Reporter: jiawen xiao
>Priority: Major
>
> The page url is 
> https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/table/hive/hive_read_write.html
> The markdown file is located in 
> {{flink/docs/dev/table/hive/hive_read_write.zh.md}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-19808) Translate "Hive Read & Write" page of "Hive Integration" into Chinese

2020-10-26 Thread jiawen xiao (Jira)
jiawen xiao created FLINK-19808:
---

 Summary: Translate "Hive Read & Write" page of "Hive Integration" 
into Chinese
 Key: FLINK-19808
 URL: https://issues.apache.org/jira/browse/FLINK-19808
 Project: Flink
  Issue Type: Task
  Components: chinese-translation, Documentation
Reporter: jiawen xiao


The page url is 
https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/table/hive/hive_read_write.html

The markdown file is located in 
{{flink/docs/dev/table/hive/hive_read_write.zh.md}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-19213) Update the Chinese documentation

2020-10-26 Thread jiawen xiao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17220557#comment-17220557
 ] 

jiawen xiao commented on FLINK-19213:
-

Hi [~dwysakowicz], I'm interested in this issue. Could you assign it to me so 
I can translate this page into Chinese?

> Update the Chinese documentation
> 
>
> Key: FLINK-19213
> URL: https://issues.apache.org/jira/browse/FLINK-19213
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Dawid Wysakowicz
>Priority: Trivial
>
> We should update the Chinese documentation with the changes introduced in 
> FLINK-18802



--
This message was sent by Atlassian Jira
(v8.3.4#803005)