[jira] [Commented] (FLINK-6757) Investigate Apache Atlas integration

2022-01-11 Thread HideOnBush (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-6757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17474316#comment-17474316
 ] 

HideOnBush commented on FLINK-6757:
---

I think this is very meaningful, especially when building real-time data 
warehouses: with hooks, many manual metadata-recording tasks can be automated, 
and we are already using this functionality.

> Investigate Apache Atlas integration
> 
>
> Key: FLINK-6757
> URL: https://issues.apache.org/jira/browse/FLINK-6757
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Common
>Reporter: Till Rohrmann
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned
>
> Users asked for an integration of Apache Flink with Apache Atlas. It might be 
> worthwhile to investigate what is necessary to achieve this task.
> References:
> http://atlas.incubator.apache.org/StormAtlasHook.html



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-24705) Flink data lineage

2021-10-29 Thread HideOnBush (Jira)
HideOnBush created FLINK-24705:
--

 Summary: Flink data lineage
 Key: FLINK-24705
 URL: https://issues.apache.org/jira/browse/FLINK-24705
 Project: Flink
  Issue Type: New Feature
  Components: API / DataStream
Reporter: HideOnBush


Please tell me how to capture data lineage (blood relationship) in the Flink 
DataStream API. For example, the source is a Kafka topic and the sink is 
Elasticsearch, and I want to capture this specific information: the connection 
details of the topic, the sink details of ES, and then the lineage between 
them. Any ideas are welcome.
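Since the DataStream API has no built-in lineage capture, the closest workable 
idea today is manual bookkeeping at job-assembly time (the same kind of manual 
recording mentioned in the FLINK-6757 comment above). Below is a minimal, 
hypothetical sketch: the LineageRegistry/LineageEdge names and the endpoint 
strings are invented for illustration, and the collected edges would still have 
to be pushed to Atlas or another metadata store.
{code:java}
import java.util.ArrayList;
import java.util.List;

/** Hypothetical helper: records "source -> sink" edges while the job is wired up. */
public class LineageRegistry {

    /** One lineage edge of a job: a source endpoint feeding a sink endpoint. */
    public static final class LineageEdge {
        final String jobName;
        final String source;
        final String sink;

        LineageEdge(String jobName, String source, String sink) {
            this.jobName = jobName;
            this.source = source;
            this.sink = sink;
        }

        @Override
        public String toString() {
            return jobName + ": " + source + " -> " + sink;
        }
    }

    private final List<LineageEdge> edges = new ArrayList<>();

    /** Call this where the Kafka source and the ES sink are configured. */
    public void record(String jobName, String source, String sink) {
        edges.add(new LineageEdge(jobName, source, sink));
    }

    public List<LineageEdge> edges() {
        return edges;
    }

    public static void main(String[] args) {
        LineageRegistry registry = new LineageRegistry();
        // Invented endpoints matching the scenario in this issue.
        registry.record("order-etl",
                "kafka:topic_orders@broker1:9092",
                "elasticsearch:orders_index@es-host:9200");
        registry.edges().forEach(System.out::println);
    }
}
{code}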



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-23377) Flink on Yarn could not set priority

2021-07-14 Thread HideOnBush (Jira)
HideOnBush created FLINK-23377:
--

 Summary: Flink on Yarn could not set priority
 Key: FLINK-23377
 URL: https://issues.apache.org/jira/browse/FLINK-23377
 Project: Flink
  Issue Type: Bug
  Components: API / DataStream
Affects Versions: 1.12.0
Reporter: HideOnBush


I have set yarn.application.priority: 5, but it does not take effect. My YARN 
cluster uses the Fair Scheduler.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-22283) GraphiteReporter: metric named xxx already exists

2021-04-14 Thread HideOnBush (Jira)
HideOnBush created FLINK-22283:
--

 Summary: GraphiteReporter: metric named xxx already exists
 Key: FLINK-22283
 URL: https://issues.apache.org/jira/browse/FLINK-22283
 Project: Flink
  Issue Type: Bug
  Components: API / DataStream
Affects Versions: 1.9.0
Reporter: HideOnBush


When I was using the GraphiteReporter with Flink 1.9, an error was reported, 
which prevented the relevant TaskManager metrics from being collected.
Error message: A metric named xxx already exists



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-21557) An error was reported when temporal joining Hive.

2021-03-02 Thread HideOnBush (Jira)
HideOnBush created FLINK-21557:
--

 Summary: An error was reported when temporal joining Hive.
 Key: FLINK-21557
 URL: https://issues.apache.org/jira/browse/FLINK-21557
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / API
Affects Versions: 1.12.0
Reporter: HideOnBush
 Attachments: image-2021-03-02-18-38-43-789.png

An error was reported when doing a temporal join against a Hive table.
Caused by: java.lang.NullPointerException
at 
org.apache.flink.table.data.DecimalDataUtils.doubleValue(DecimalDataUtils.java:48)
at 
org.apache.flink.table.data.DecimalDataUtils.castToDouble(DecimalDataUtils.java:193)
at StreamExecCalc$3741.processElement(Unknown Source)
at 
org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.pushToOperator(CopyingChainingOutput.java:71)

 !image-2021-03-02-18-38-43-789.png|thumbnail! 
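The NPE above is thrown while a DECIMAL value is cast to DOUBLE, which suggests 
a NULL decimal reaching the generated cast. A hedged workaround sketch follows; 
the column name, connector and table below are invented for illustration, not 
taken from this job, and guarding the cast with COALESCE may or may not sidestep 
the underlying bug.
{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class DecimalCastGuard {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());

        // Stand-in source with a nullable DECIMAL column (the real job reads Kafka).
        tEnv.executeSql(
                "CREATE TABLE src (" +
                "  amount_dec DECIMAL(10, 2)," +
                "  proctime AS PROCTIME()" +
                ") WITH ('connector' = 'datagen')");

        // Guard the DECIMAL -> DOUBLE conversion against NULLs before the temporal
        // join; this may avoid the NullPointerException seen in castToDouble.
        tEnv.executeSql(
                "CREATE VIEW src_guarded AS " +
                "SELECT CAST(COALESCE(amount_dec, 0) AS DOUBLE) AS amount_dbl, proctime " +
                "FROM src");
    }
}
{code}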



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-21424) Lookup Hive partitioned table throws exception

2021-02-21 Thread HideOnBush (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17288140#comment-17288140
 ] 

HideOnBush edited comment on FLINK-21424 at 2/22/21, 2:48 AM:
--

Hi [~jark], [~Leonard Xu]. I think this is a real problem. The error message I 
posted before was wrong; I am posting the error message again.

 !screenshot-2.png! 


was (Author: hideonbush):
Hi [~jark]. It's a problem. The error message I posted before was wrong; I am 
posting the error message again.

 !screenshot-2.png! 

> Lookup Hive partitioned table throws exception
> --
>
> Key: FLINK-21424
> URL: https://issues.apache.org/jira/browse/FLINK-21424
> Project: Flink
>  Issue Type: Wish
>  Components: Connectors / Hive
>Affects Versions: 1.12.0
>Reporter: HideOnBush
>Priority: Major
> Attachments: screenshot-1.png, screenshot-2.png, 
> 企业微信截图_35dfdc08-4f3f-4f34-962f-6db9afcf7cb0.png
>
>
> {code:java}
> Flink view cannot use hive temporal{code}
> The Kafka table can be used in a temporal join with the Hive table.
> CREATE TABLE kfk_fact_bill_master_pup (
>    payLst ROW(groupID int, shopID int, shopName String, isJoinReceived int),
>    proctime as PROCTIME()
> ) WITH (
>  'connector' = 'kafka',
>  'topic' = 'topic_name',
>  'format' = 'json'
> );
> But the next operation will report an error, e.g.:
> create view test_view as
>   select * from kfk_fact_bill_master_pup;
> Using test_view in a temporal join with the Hive table reports an error, even 
> though the view has a proctime field.
> This is the error message:
>   !screenshot-1.png! 
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-21424) Lookup Hive partitioned table throws exception

2021-02-21 Thread HideOnBush (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17288140#comment-17288140
 ] 

HideOnBush commented on FLINK-21424:


Hi [~jark]. It's a problem. The error message I posted before was wrong; I am 
posting the error message again.

 !screenshot-2.png! 

> Lookup Hive partitioned table throws exception
> --
>
> Key: FLINK-21424
> URL: https://issues.apache.org/jira/browse/FLINK-21424
> Project: Flink
>  Issue Type: Wish
>  Components: Connectors / Hive
>Affects Versions: 1.12.0
>Reporter: HideOnBush
>Priority: Major
> Attachments: screenshot-1.png, screenshot-2.png, 
> 企业微信截图_35dfdc08-4f3f-4f34-962f-6db9afcf7cb0.png
>
>
> {code:java}
> Flink view cannot use hive temporal{code}
> The Kafka table can be used in a temporal join with the Hive table.
> CREATE TABLE kfk_fact_bill_master_pup (
>    payLst ROW(groupID int, shopID int, shopName String, isJoinReceived int),
>    proctime as PROCTIME()
> ) WITH (
>  'connector' = 'kafka',
>  'topic' = 'topic_name',
>  'format' = 'json'
> );
> But the next operation will report an error, e.g.:
> create view test_view as
>   select * from kfk_fact_bill_master_pup;
> Using test_view in a temporal join with the Hive table reports an error, even 
> though the view has a proctime field.
> This is the error message:
>   !screenshot-1.png! 
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-21424) Lookup Hive partitioned table throws exception

2021-02-21 Thread HideOnBush (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

HideOnBush updated FLINK-21424:
---
Description: 
{code:java}
Flink view cannot use hive temporal{code}
The Kafka table can be used in a temporal join with the Hive table.

CREATE TABLE kfk_fact_bill_master_pup (
   payLst ROW(groupID int, shopID int, shopName String, isJoinReceived int),
   proctime as PROCTIME()
) WITH (
 'connector' = 'kafka',
 'topic' = 'topic_name',
 'format' = 'json'
);

But the next operation will report an error, e.g.:

create view test_view as
  select * from kfk_fact_bill_master_pup;

Using test_view in a temporal join with the Hive table reports an error, even 
though the view has a proctime field.

This is the error message:

 !screenshot-1.png! 

 

  was:
{code:java}
Flink view cannot use hive temporal{code}
The Kafka table can be used in a temporal join with the Hive table.

CREATE TABLE kfk_fact_bill_master_pup (
   payLst ROW(groupID int, shopID int, shopName String, isJoinReceived int),
   proctime as PROCTIME()
) WITH (
 'connector' = 'kafka',
 'topic' = 'topic_name',
 'format' = 'json'
);

But the next operation will report an error, e.g.:

create view test_view as
  select * from kfk_fact_bill_master_pup;

Using test_view in a temporal join with the Hive table reports an error, even 
though the view has a proctime field.

This is the error message:

!企业微信截图_35dfdc08-4f3f-4f34-962f-6db9afcf7cb0.png!

 


> Lookup Hive partitioned table throws exception
> --
>
> Key: FLINK-21424
> URL: https://issues.apache.org/jira/browse/FLINK-21424
> Project: Flink
>  Issue Type: Wish
>  Components: Connectors / Hive
>Affects Versions: 1.12.0
>Reporter: HideOnBush
>Priority: Major
> Attachments: screenshot-1.png, screenshot-2.png, 
> 企业微信截图_35dfdc08-4f3f-4f34-962f-6db9afcf7cb0.png
>
>
> {code:java}
> Flink view cannot use hive temporal{code}
> The Kafka table can be used in a temporal join with the Hive table.
> CREATE TABLE kfk_fact_bill_master_pup (
>    payLst ROW(groupID int, shopID int, shopName String, isJoinReceived int),
>    proctime as PROCTIME()
> ) WITH (
>  'connector' = 'kafka',
>  'topic' = 'topic_name',
>  'format' = 'json'
> );
> But the next operation will report an error, e.g.:
> create view test_view as
>   select * from kfk_fact_bill_master_pup;
> Using test_view in a temporal join with the Hive table reports an error, even 
> though the view has a proctime field.
> This is the error message:
>   !screenshot-1.png! 
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-21424) Lookup Hive partitioned table throws exception

2021-02-21 Thread HideOnBush (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

HideOnBush updated FLINK-21424:
---
Attachment: screenshot-1.png

> Lookup Hive partitioned table throws exception
> --
>
> Key: FLINK-21424
> URL: https://issues.apache.org/jira/browse/FLINK-21424
> Project: Flink
>  Issue Type: Wish
>  Components: Connectors / Hive
>Affects Versions: 1.12.0
>Reporter: HideOnBush
>Priority: Major
> Attachments: screenshot-1.png, screenshot-2.png, 
> 企业微信截图_35dfdc08-4f3f-4f34-962f-6db9afcf7cb0.png
>
>
> {code:java}
> Flink view cannot use hive temporal{code}
> The Kafka table can be used in a temporal join with the Hive table.
> CREATE TABLE kfk_fact_bill_master_pup (
>    payLst ROW(groupID int, shopID int, shopName String, isJoinReceived int),
>    proctime as PROCTIME()
> ) WITH (
>  'connector' = 'kafka',
>  'topic' = 'topic_name',
>  'format' = 'json'
> );
> But the next operation will report an error, e.g.:
> create view test_view as
>   select * from kfk_fact_bill_master_pup;
> Using test_view in a temporal join with the Hive table reports an error, even 
> though the view has a proctime field.
> This is the error message:
>  
> !企业微信截图_35dfdc08-4f3f-4f34-962f-6db9afcf7cb0.png!
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-21424) Lookup Hive partitioned table throws exception

2021-02-21 Thread HideOnBush (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

HideOnBush updated FLINK-21424:
---
Attachment: screenshot-2.png

> Lookup Hive partitioned table throws exception
> --
>
> Key: FLINK-21424
> URL: https://issues.apache.org/jira/browse/FLINK-21424
> Project: Flink
>  Issue Type: Wish
>  Components: Connectors / Hive
>Affects Versions: 1.12.0
>Reporter: HideOnBush
>Priority: Major
> Attachments: screenshot-1.png, screenshot-2.png, 
> 企业微信截图_35dfdc08-4f3f-4f34-962f-6db9afcf7cb0.png
>
>
> {code:java}
> Flink view cannot use hive temporal{code}
> The Kafka table can be used in a temporal join with the Hive table.
> CREATE TABLE kfk_fact_bill_master_pup (
>    payLst ROW(groupID int, shopID int, shopName String, isJoinReceived int),
>    proctime as PROCTIME()
> ) WITH (
>  'connector' = 'kafka',
>  'topic' = 'topic_name',
>  'format' = 'json'
> );
> But the next operation will report an error, e.g.:
> create view test_view as
>   select * from kfk_fact_bill_master_pup;
> Using test_view in a temporal join with the Hive table reports an error, even 
> though the view has a proctime field.
> This is the error message:
>   !screenshot-1.png! 
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-21424) Flink view cannot use hive temporal

2021-02-20 Thread HideOnBush (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

HideOnBush updated FLINK-21424:
---
Description: 
{code:java}
Flink view cannot use hive temporal{code}
The Kafka table can be used in a temporal join with the Hive table.

CREATE TABLE kfk_fact_bill_master_pup (
   payLst ROW(groupID int, shopID int, shopName String, isJoinReceived int),
   proctime as PROCTIME()
) WITH (
 'connector' = 'kafka',
 'topic' = 'topic_name',
 'format' = 'json'
);

But the next operation will report an error, e.g.:

create view test_view as
  select * from kfk_fact_bill_master_pup;

Using test_view in a temporal join with the Hive table reports an error, even 
though the view has a proctime field.

This is the error message:

!企业微信截图_35dfdc08-4f3f-4f34-962f-6db9afcf7cb0.png!

 

  was:
{code:java}
Flink view cannot use hive temporal{code}
The Kafka table can be used in a temporal join with the Hive table.

CREATE TABLE kfk_fact_bill_master_pup (
   payLst ROW(groupID int, shopID int, shopName String, isJoinReceived int),
   proctime as PROCTIME()
) WITH (
 'connector' = 'kafka',
 'topic' = 'topic_name',
 'format' = 'json'
);

But the next operation will report an error, e.g.:

create view test_view as
  select * from kfk_fact_bill_master_pup;

Using test_view in a temporal join with the Hive table reports an error, even 
though the view has a proctime field.

!企业微信截图_86542a7e-9bab-4078-82c5-583b79f01670.png!

!企业微信截图_35dfdc08-4f3f-4f34-962f-6db9afcf7cb0.png!

 


> Flink view cannot use hive temporal
> ---
>
> Key: FLINK-21424
> URL: https://issues.apache.org/jira/browse/FLINK-21424
> Project: Flink
>  Issue Type: Wish
>  Components: Table SQL / API
>Affects Versions: 1.12.0
>Reporter: HideOnBush
>Priority: Major
> Attachments: 企业微信截图_35dfdc08-4f3f-4f34-962f-6db9afcf7cb0.png
>
>
> {code:java}
> Flink view cannot use hive temporal{code}
> The Kafka table can be used in a temporal join with the Hive table.
> CREATE TABLE kfk_fact_bill_master_pup (
>    payLst ROW(groupID int, shopID int, shopName String, isJoinReceived int),
>    proctime as PROCTIME()
> ) WITH (
>  'connector' = 'kafka',
>  'topic' = 'topic_name',
>  'format' = 'json'
> );
> But the next operation will report an error, e.g.:
> create view test_view as
>   select * from kfk_fact_bill_master_pup;
> Using test_view in a temporal join with the Hive table reports an error, even 
> though the view has a proctime field.
> This is the error message:
>  
> !企业微信截图_35dfdc08-4f3f-4f34-962f-6db9afcf7cb0.png!
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-21424) Flink view cannot use hive temporal

2021-02-20 Thread HideOnBush (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

HideOnBush updated FLINK-21424:
---
Description: 
{code:java}
Flink view cannot use hive temporal{code}
The Kafka table can be used in a temporal join with the Hive table.

CREATE TABLE kfk_fact_bill_master_pup (
   payLst ROW(groupID int, shopID int, shopName String, isJoinReceived int),
   proctime as PROCTIME()
) WITH (
 'connector' = 'kafka',
 'topic' = 'topic_name',
 'format' = 'json'
);

But the next operation will report an error, e.g.:

create view test_view as
  select * from kfk_fact_bill_master_pup;

Using test_view in a temporal join with the Hive table reports an error, even 
though the view has a proctime field.

!企业微信截图_86542a7e-9bab-4078-82c5-583b79f01670.png!

!企业微信截图_35dfdc08-4f3f-4f34-962f-6db9afcf7cb0.png!

 

  was:
{code:java}
Flink view cannot use hive temporal{code}
The Kafka table can be used in a temporal join with the Hive table.

CREATE TABLE kfk_fact_bill_master_pup (
   payLst ROW(groupID int, shopID int, shopName String, isJoinReceived int),
   proctime as PROCTIME()
) WITH (
 'connector' = 'kafka',
 'topic' = 'topic_name',
 'format' = 'json'
);

But the next operation will report an error, e.g.:

create view test_view as
  select * from kfk_fact_bill_master_pup;

Using test_view in a temporal join with the Hive table reports an error, even 
though the view has a proctime field.

 

 


> Flink view cannot use hive temporal
> ---
>
> Key: FLINK-21424
> URL: https://issues.apache.org/jira/browse/FLINK-21424
> Project: Flink
>  Issue Type: Wish
>  Components: Table SQL / API
>Affects Versions: 1.12.0
>Reporter: HideOnBush
>Priority: Major
> Attachments: 企业微信截图_35dfdc08-4f3f-4f34-962f-6db9afcf7cb0.png
>
>
> {code:java}
> Flink view cannot use hive temporal{code}
> The Kafka table can be used in a temporal join with the Hive table.
> CREATE TABLE kfk_fact_bill_master_pup (
>    payLst ROW(groupID int, shopID int, shopName String, isJoinReceived int),
>    proctime as PROCTIME()
> ) WITH (
>  'connector' = 'kafka',
>  'topic' = 'topic_name',
>  'format' = 'json'
> );
> But the next operation will report an error, e.g.:
> create view test_view as
>   select * from kfk_fact_bill_master_pup;
> Using test_view in a temporal join with the Hive table reports an error, even 
> though the view has a proctime field.
>  
> !企业微信截图_86542a7e-9bab-4078-82c5-583b79f01670.png!
>  
> !企业微信截图_35dfdc08-4f3f-4f34-962f-6db9afcf7cb0.png!
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-21424) Flink view cannot use hive temporal

2021-02-20 Thread HideOnBush (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

HideOnBush updated FLINK-21424:
---
Attachment: 企业微信截图_35dfdc08-4f3f-4f34-962f-6db9afcf7cb0.png

> Flink view cannot use hive temporal
> ---
>
> Key: FLINK-21424
> URL: https://issues.apache.org/jira/browse/FLINK-21424
> Project: Flink
>  Issue Type: Wish
>  Components: Table SQL / API
>Affects Versions: 1.12.0
>Reporter: HideOnBush
>Priority: Major
> Attachments: 企业微信截图_35dfdc08-4f3f-4f34-962f-6db9afcf7cb0.png
>
>
> {code:java}
> Flink view cannot use hive temporal{code}
> The Kafka table can be used in a temporal join with the Hive table.
> CREATE TABLE kfk_fact_bill_master_pup (
>    payLst ROW(groupID int, shopID int, shopName String, isJoinReceived int),
>    proctime as PROCTIME()
> ) WITH (
>  'connector' = 'kafka',
>  'topic' = 'topic_name',
>  'format' = 'json'
> );
> But the next operation will report an error, e.g.:
> create view test_view as
>   select * from kfk_fact_bill_master_pup;
> Using test_view in a temporal join with the Hive table reports an error, even 
> though the view has a proctime field.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-21424) Flink view cannot use hive temporal

2021-02-20 Thread HideOnBush (Jira)
HideOnBush created FLINK-21424:
--

 Summary: Flink view cannot use hive temporal
 Key: FLINK-21424
 URL: https://issues.apache.org/jira/browse/FLINK-21424
 Project: Flink
  Issue Type: Wish
  Components: Table SQL / API
Affects Versions: 1.12.0
Reporter: HideOnBush


{code:java}
Flink view cannot use hive temporal{code}
The Kafka table can be used in a temporal join with the Hive table.

CREATE TABLE kfk_fact_bill_master_pup (
   payLst ROW(groupID int, shopID int, shopName String, isJoinReceived int),
   proctime as PROCTIME()
) WITH (
 'connector' = 'kafka',
 'topic' = 'topic_name',
 'format' = 'json'
);

But the next operation will report an error, e.g.:

create view test_view as
  select * from kfk_fact_bill_master_pup;

Using test_view in a temporal join with the Hive table reports an error, even 
though the view has a proctime field.
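For reference, a minimal sketch of the failing step, written against the Table 
API. It assumes the Kafka table and test_view from the description are already 
registered, and dim_shop stands in for a Hive dimension table reachable through 
a Hive catalog (these names are invented); the hint options are the ones used 
in FLINK-20576 below. Joining the Kafka table directly works, while swapping in 
test_view is what triggers the error.
{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class ViewTemporalJoinRepro {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());

        // kfk_fact_bill_master_pup and test_view are assumed to be registered as in
        // the description; dim_shop is a hypothetical Hive dimension table.
        tEnv.executeSql(
                "SELECT * " +
                "FROM test_view AS v " +
                "JOIN dim_shop /*+ OPTIONS(" +
                "  'streaming-source.enable' = 'true', " +
                "  'streaming-source.partition.include' = 'latest') */ " +
                "FOR SYSTEM_TIME AS OF v.proctime AS dim " +
                "ON v.payLst.groupID = dim.group_id");
    }
}
{code}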

 

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-21145) Flink Temporal Join Hive optimization

2021-01-27 Thread HideOnBush (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17272762#comment-17272762
 ] 

HideOnBush commented on FLINK-21145:


I tried this method, but it does not seem to work well; OOM still appears. 
There is another problem: if I turn a Kafka table into a view, the view has a 
PROCTIME() field, yet a temporal join between the view and the Hive dimension 
table is not possible, while the original Kafka table can be temporal joined 
with the Hive dimension table.

> Flink Temporal Join Hive optimization
> -
>
> Key: FLINK-21145
> URL: https://issues.apache.org/jira/browse/FLINK-21145
> Project: Flink
>  Issue Type: Wish
>  Components: Connectors / Hive
>Affects Versions: 1.12.0
>Reporter: HideOnBush
>Priority: Major
>
> When Flink temporal joins a Hive dimension table, the latest partition is 
> loaded into task memory in full, which leads to high memory overhead. In 
> fact, the full latest data is sometimes not required. Could future versions 
> add an option to filter the dimension table data?
> For example, select * from dim /* 'streaming-source.partition.include' 
> ='latest', condition='fild1=ab' */ would keep only the rows of the latest 
> partition where fild1=ab.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-21145) Flink Temporal Join Hive optimization

2021-01-25 Thread HideOnBush (Jira)
HideOnBush created FLINK-21145:
--

 Summary: Flink Temporal Join Hive optimization
 Key: FLINK-21145
 URL: https://issues.apache.org/jira/browse/FLINK-21145
 Project: Flink
  Issue Type: Wish
  Components: Connectors / Hive
Affects Versions: 1.12.0
Reporter: HideOnBush


When Flink temporal joins a Hive dimension table, the latest partition is 
loaded into task memory in full, which leads to high memory overhead. In fact, 
the full latest data is sometimes not required. Could future versions add an 
option to filter the dimension table data?
For example, select * from dim /* 'streaming-source.partition.include' 
='latest', condition='fild1=ab' */ would keep only the rows of the latest 
partition where fild1=ab.
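Until such an option exists, the condition can only be expressed on the join 
itself, which does not reduce what is loaded into the lookup cache. Below is a 
sketch of what is possible today; kafka_source and group_id are invented names, 
while dim, fild1 and the hint options come from this report and FLINK-20576.
{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class LatestPartitionFilter {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());

        // The WHERE predicate filters the joined output only; the whole latest
        // partition of dim is still cached. The proposal above is to push such a
        // condition into the hint so that less data is loaded in the first place.
        tEnv.executeSql(
                "SELECT * " +
                "FROM kafka_source AS kafk_tbl " +
                "JOIN dim /*+ OPTIONS(" +
                "  'streaming-source.enable' = 'true', " +
                "  'streaming-source.partition.include' = 'latest') */ " +
                "FOR SYSTEM_TIME AS OF kafk_tbl.proctime AS d " +
                "ON kafk_tbl.group_id = d.group_id " +
                "WHERE d.fild1 = 'ab'");
    }
}
{code}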



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-20889) How does flink 1.12 support modifying jobName

2021-01-07 Thread HideOnBush (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

HideOnBush updated FLINK-20889:
---
Description: 
How does Flink 1.12 support modifying the jobName? It looks like it is 
hard-coded in the source code.

!image-2021-01-07-20-30-32-515.png!

  was:
How does Flink 1.12 support modifying the jobName? It looks like it is 
hard-coded in the source code.

!image-2021-01-07-16-19-34-120.png!

 

!image-2021-01-07-16-19-49-750.png!


> How does flink 1.12 support modifying jobName
> -
>
> Key: FLINK-20889
> URL: https://issues.apache.org/jira/browse/FLINK-20889
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.12.0
>Reporter: HideOnBush
>Priority: Major
> Attachments: image-2021-01-07-20-30-32-515.png
>
>
> How does Flink 1.12 support modifying the jobName? It looks like it is 
> hard-coded in the source code.
> !image-2021-01-07-20-30-32-515.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-20889) How does flink 1.12 support modifying jobName

2021-01-07 Thread HideOnBush (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

HideOnBush updated FLINK-20889:
---
Attachment: image-2021-01-07-20-30-32-515.png

> How does flink 1.12 support modifying jobName
> -
>
> Key: FLINK-20889
> URL: https://issues.apache.org/jira/browse/FLINK-20889
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.12.0
>Reporter: HideOnBush
>Priority: Major
> Attachments: image-2021-01-07-20-30-32-515.png
>
>
> How does Flink 1.12 support modifying the jobName? It looks like it is 
> hard-coded in the source code.
> !image-2021-01-07-16-19-34-120.png!
>  
> !image-2021-01-07-16-19-49-750.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-20889) How does flink 1.12 support modifying jobName

2021-01-07 Thread HideOnBush (Jira)
HideOnBush created FLINK-20889:
--

 Summary: How does flink 1.12 support modifying jobName
 Key: FLINK-20889
 URL: https://issues.apache.org/jira/browse/FLINK-20889
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / API
Affects Versions: 1.12.0
Reporter: HideOnBush


How does Flink 1.12 support modifying the jobName? It looks like it is 
hard-coded in the source code.

!image-2021-01-07-16-19-34-120.png!

 

!image-2021-01-07-16-19-49-750.png!
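A hedged sketch of the usual way around the hard-coded name: set the standard 
pipeline.name option on the TableEnvironment configuration before submitting 
the statement. The sink_table/source_table names are invented, and whether the 
1.12 planner honours pipeline.name for every execution path is not guaranteed.
{code:java}
import org.apache.flink.configuration.PipelineOptions;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class JobNameSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());

        // pipeline.name is a standard Flink option; the intent is to replace the
        // generated "insert-into_..." style job name with a custom one.
        tEnv.getConfig().getConfiguration().set(PipelineOptions.NAME, "my-sql-job");

        // Any statement submitted afterwards should carry the configured name if
        // the planner in this version picks up pipeline.name:
        // tEnv.executeSql("INSERT INTO sink_table SELECT * FROM source_table");
    }
}
{code}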



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-20878) Flink 1.12 job name modification

2021-01-07 Thread HideOnBush (Jira)
HideOnBush created FLINK-20878:
--

 Summary: Flink 1.12 job name modification
 Key: FLINK-20878
 URL: https://issues.apache.org/jira/browse/FLINK-20878
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Client
Affects Versions: 1.12.0
Reporter: HideOnBush
 Attachments: image-2021-01-07-16-19-34-120.png, 
image-2021-01-07-16-19-49-750.png

Hi all, with Flink 1.12 SQL, how can I specify the jobName on the tblEnv? It 
looks like this part is hard-coded in the source code.

!image-2021-01-07-16-19-34-120.png!

 

!image-2021-01-07-16-19-49-750.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20638) Flink hive modules Function with the same name bug

2020-12-18 Thread HideOnBush (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17251659#comment-17251659
 ] 

HideOnBush commented on FLINK-20638:


Hi [~zoucao], see you on Summoner's Rift.

> Flink hive modules Function with the same name bug
> --
>
> Key: FLINK-20638
> URL: https://issues.apache.org/jira/browse/FLINK-20638
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.12.0
>Reporter: HideOnBush
>Priority: Major
> Attachments: image-2020-12-17-14-32-41-719.png
>
>
> I want to use Hive functions from Flink. After loading the Hive module, some 
> functions have the same name in both modules, and Flink's built-in functions 
> take priority over Hive's. For a name that exists in both, such as a 
> timestamp function, how do I distinguish between the Hive and the Flink 
> version when using it?
> As shown below: 
> !image-2020-12-17-14-32-41-719.png!
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20638) Flink hive modules Function with the same name bug

2020-12-18 Thread HideOnBush (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17251657#comment-17251657
 ] 

HideOnBush commented on FLINK-20638:


Hi [~zoucao], [~jark]. Thanks for the suggestions from both of you. I wrote a 
Flink UDF myself to handle this kind of scenario, and I also tried the module 
loading order mentioned by [~jark].
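For readers landing here, a minimal sketch of that module-reordering idea; the 
Hive version string is an assumption and must match the cluster, and the 
flink-connector-hive dependency is required.
{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.module.CoreModule;
import org.apache.flink.table.module.hive.HiveModule;

public class ModuleOrderSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());

        // Function resolution follows module order, so loading the Hive module
        // before re-loading the core module lets Hive win name clashes.
        tEnv.unloadModule("core");
        tEnv.loadModule("hive", new HiveModule("2.3.4")); // assumed Hive version
        tEnv.loadModule("core", CoreModule.INSTANCE);

        // Prints the effective order, e.g. "hive, core".
        System.out.println(String.join(", ", tEnv.listModules()));
    }
}
{code}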

> Flink hive modules Function with the same name bug
> --
>
> Key: FLINK-20638
> URL: https://issues.apache.org/jira/browse/FLINK-20638
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.12.0
>Reporter: HideOnBush
>Priority: Major
> Attachments: image-2020-12-17-14-32-41-719.png
>
>
> I want to use Hive functions from Flink. After loading the Hive module, some 
> functions have the same name in both modules, and Flink's built-in functions 
> take priority over Hive's. For a name that exists in both, such as a 
> timestamp function, how do I distinguish between the Hive and the Flink 
> version when using it?
> As shown below: 
> !image-2020-12-17-14-32-41-719.png!
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-20638) Flink hive modules Function with the same name bug

2020-12-18 Thread HideOnBush (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17251657#comment-17251657
 ] 

HideOnBush edited comment on FLINK-20638 at 12/18/20, 10:16 AM:


Hi [~zoucao], [~jark]. Thanks for the suggestions from both of you. I wrote a 
Flink UDF myself to handle this kind of scenario, and I also tried the module 
loading order mentioned by [~jark].


was (Author: hideonbush):
Hi [~zoucao], [~jark]. Thanks for the suggestions from both of you. I wrote a 
Flink UDF myself to handle this kind of scenario, and I also tried the module 
loading order mentioned by [~jark].

> Flink hive modules Function with the same name bug
> --
>
> Key: FLINK-20638
> URL: https://issues.apache.org/jira/browse/FLINK-20638
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.12.0
>Reporter: HideOnBush
>Priority: Major
> Attachments: image-2020-12-17-14-32-41-719.png
>
>
> I want to use Hive functions from Flink. After loading the Hive module, some 
> functions have the same name in both modules, and Flink's built-in functions 
> take priority over Hive's. For a name that exists in both, such as a 
> timestamp function, how do I distinguish between the Hive and the Flink 
> version when using it?
> As shown below: 
> !image-2020-12-17-14-32-41-719.png!
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20638) Flink hive modules Function with the same name bug

2020-12-16 Thread HideOnBush (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17250829#comment-17250829
 ] 

HideOnBush commented on FLINK-20638:


When there are two objects of the same name residing in two modules, Flink 
always resolves the reference to the one in the first loaded module. What if I 
want to use both?

> Flink hive modules Function with the same name bug
> --
>
> Key: FLINK-20638
> URL: https://issues.apache.org/jira/browse/FLINK-20638
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.12.0
>Reporter: HideOnBush
>Priority: Major
> Attachments: image-2020-12-17-14-32-41-719.png
>
>
> I want to use Hive functions from Flink. After loading the Hive module, some 
> functions have the same name in both modules, and Flink's built-in functions 
> take priority over Hive's. For a name that exists in both, such as a 
> timestamp function, how do I distinguish between the Hive and the Flink 
> version when using it?
> As shown below: 
> !image-2020-12-17-14-32-41-719.png!
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-20638) Flink hive modules Function with the same name bug

2020-12-16 Thread HideOnBush (Jira)
HideOnBush created FLINK-20638:
--

 Summary: Flink hive modules Function with the same name bug
 Key: FLINK-20638
 URL: https://issues.apache.org/jira/browse/FLINK-20638
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / API
Affects Versions: 1.12.0
Reporter: HideOnBush
 Attachments: image-2020-12-17-14-32-41-719.png

I want to use Hive functions from Flink. After loading the Hive module, some 
functions have the same name in both modules, and Flink's built-in functions 
take priority over Hive's. For a name that exists in both, such as a timestamp 
function, how do I distinguish between the Hive and the Flink version when 
using it?

As shown below: 

!image-2020-12-17-14-32-41-719.png!

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20576) Flink Temporal Join Hive Dim Error

2020-12-14 Thread HideOnBush (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17249440#comment-17249440
 ] 

HideOnBush commented on FLINK-20576:


The ORC format cannot be read, but the parquet and text formats can be read 
normally

> Flink Temporal Join Hive Dim Error
> --
>
> Key: FLINK-20576
> URL: https://issues.apache.org/jira/browse/FLINK-20576
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Table SQL / Ecosystem
>Affects Versions: 1.12.0
>Reporter: HideOnBush
>Priority: Major
> Fix For: 1.13.0
>
>
>  
> KAFKA DDL
> {code:java}
> CREATE TABLE hive_catalog.flink_db_test.kfk_master_test (
> master Row String, action int, orderStatus int, orderKey String, actionTime bigint, 
> areaName String, paidAmount double, foodAmount double, startTime String, 
> person double, orderSubType int, checkoutTime String>,
> proctime as PROCTIME()
> ) WITH (properties ..){code}
>  
> FLINK client query sql
> {noformat}
> SELECT * FROM hive_catalog.flink_db_test.kfk_master_test AS kafk_tbl 
>   JOIN hive_catalog.gauss.dim_extend_shop_info /*+ 
> OPTIONS('streaming-source.enable'='true', 
> 'streaming-source.partition.include' = 'latest', 
>'streaming-source.monitor-interval' = '12 
> h','streaming-source.partition-order' = 'partition-name') */ FOR SYSTEM_TIME 
> AS OF kafk_tbl.proctime AS dim 
>ON kafk_tbl.groupID = dim.group_id where kafk_tbl.groupID is not 
> null;{noformat}
> When I execute the above statement, the following stack traces are returned:
> Caused by: java.lang.NullPointerException: buffer at 
> org.apache.flink.core.memory.MemorySegment.<init>(MemorySegment.java:161) 
> ~[flink-dist_2.11-1.12.0.jar:1.12.0] at 
> org.apache.flink.core.memory.HybridMemorySegment.<init>(HybridMemorySegment.java:86)
>  ~[flink-dist_2.11-1.12.0.jar:1.12.0] at 
> org.apache.flink.core.memory.MemorySegmentFactory.wrap(MemorySegmentFactory.java:55)
>  ~[flink-dist_2.11-1.12.0.jar:1.12.0] at 
> org.apache.flink.table.data.binary.BinaryStringData.fromBytes(BinaryStringData.java:98)
>  ~[flink-table_2.11-1.12.0.jar:1.12.0]
>  
> Caused by: org.apache.flink.util.FlinkRuntimeException: Failed to load table 
> into cache after 3 retries at 
> org.apache.flink.table.filesystem.FileSystemLookupFunction.checkCacheReload(FileSystemLookupFunction.java:143)
>  ~[flink-table-blink_2.11-1.12.0.jar:1.12.0] at 
> org.apache.flink.table.filesystem.FileSystemLookupFunction.eval(FileSystemLookupFunction.java:103)
>  ~[flink-table-blink_2.11-1.12.0.jar:1.12.0] at 
> LookupFunction$1577.flatMap(Unknown Source) ~[?:?] at 
> org.apache.flink.table.runtime.operators.join.lookup.LookupJoinRunner.processElement(LookupJoinRunner.java:82)
>  ~[flink-table-blink_2.11-1.12.0.jar:1.12.0] at 
> org.apache.flink.table.runtime.operators.join.lookup.LookupJoinRunner.processElement(LookupJoinRunner.java:36)
>  ~[flink-table-blink_2.11-1.12.0.jar:1.12.0] at 
> org.apache.flink.streaming.api.operators.ProcessOperator.processElement(ProcessOperator.java:66)
>  ~[flink-dist_2.11-1.12.0.jar:1.12.0]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-20576) Flink Temporal Join Hive Dim Error

2020-12-14 Thread HideOnBush (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17249440#comment-17249440
 ] 

HideOnBush edited comment on FLINK-20576 at 12/15/20, 2:11 AM:
---

The ORC format cannot be read, but the parquet and text formats can be read 
normally  [~jark]


was (Author: hideonbush):
The ORC format cannot be read, but the parquet and text formats can be read 
normally

> Flink Temporal Join Hive Dim Error
> --
>
> Key: FLINK-20576
> URL: https://issues.apache.org/jira/browse/FLINK-20576
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Table SQL / Ecosystem
>Affects Versions: 1.12.0
>Reporter: HideOnBush
>Priority: Major
> Fix For: 1.13.0
>
>
>  
> KAFKA DDL
> {code:java}
> CREATE TABLE hive_catalog.flink_db_test.kfk_master_test (
> master Row String, action int, orderStatus int, orderKey String, actionTime bigint, 
> areaName String, paidAmount double, foodAmount double, startTime String, 
> person double, orderSubType int, checkoutTime String>,
> proctime as PROCTIME()
> ) WITH (properties ..){code}
>  
> FLINK client query sql
> {noformat}
> SELECT * FROM hive_catalog.flink_db_test.kfk_master_test AS kafk_tbl 
>   JOIN hive_catalog.gauss.dim_extend_shop_info /*+ 
> OPTIONS('streaming-source.enable'='true', 
> 'streaming-source.partition.include' = 'latest', 
>'streaming-source.monitor-interval' = '12 
> h','streaming-source.partition-order' = 'partition-name') */ FOR SYSTEM_TIME 
> AS OF kafk_tbl.proctime AS dim 
>ON kafk_tbl.groupID = dim.group_id where kafk_tbl.groupID is not 
> null;{noformat}
> When I execute the above statement, the following stack traces are returned:
> Caused by: java.lang.NullPointerException: buffer at 
> org.apache.flink.core.memory.MemorySegment.<init>(MemorySegment.java:161) 
> ~[flink-dist_2.11-1.12.0.jar:1.12.0] at 
> org.apache.flink.core.memory.HybridMemorySegment.<init>(HybridMemorySegment.java:86)
>  ~[flink-dist_2.11-1.12.0.jar:1.12.0] at 
> org.apache.flink.core.memory.MemorySegmentFactory.wrap(MemorySegmentFactory.java:55)
>  ~[flink-dist_2.11-1.12.0.jar:1.12.0] at 
> org.apache.flink.table.data.binary.BinaryStringData.fromBytes(BinaryStringData.java:98)
>  ~[flink-table_2.11-1.12.0.jar:1.12.0]
>  
> Caused by: org.apache.flink.util.FlinkRuntimeException: Failed to load table 
> into cache after 3 retries at 
> org.apache.flink.table.filesystem.FileSystemLookupFunction.checkCacheReload(FileSystemLookupFunction.java:143)
>  ~[flink-table-blink_2.11-1.12.0.jar:1.12.0] at 
> org.apache.flink.table.filesystem.FileSystemLookupFunction.eval(FileSystemLookupFunction.java:103)
>  ~[flink-table-blink_2.11-1.12.0.jar:1.12.0] at 
> LookupFunction$1577.flatMap(Unknown Source) ~[?:?] at 
> org.apache.flink.table.runtime.operators.join.lookup.LookupJoinRunner.processElement(LookupJoinRunner.java:82)
>  ~[flink-table-blink_2.11-1.12.0.jar:1.12.0] at 
> org.apache.flink.table.runtime.operators.join.lookup.LookupJoinRunner.processElement(LookupJoinRunner.java:36)
>  ~[flink-table-blink_2.11-1.12.0.jar:1.12.0] at 
> org.apache.flink.streaming.api.operators.ProcessOperator.processElement(ProcessOperator.java:66)
>  ~[flink-dist_2.11-1.12.0.jar:1.12.0]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-20576) Flink Temporal Join Hive Dim Error

2020-12-13 Thread HideOnBush (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

HideOnBush updated FLINK-20576:
---
Description: 
 

KAFKA DDL
{code:java}
CREATE TABLE hive_catalog.flink_db_test.kfk_master_test (
master Row,
proctime as PROCTIME()
) WITH (properties ..){code}
 

FLINK client query sql
{noformat}
SELECT * FROM hive_catalog.flink_db_test.kfk_master_test AS kafk_tbl 
  JOIN hive_catalog.gauss.dim_extend_shop_info /*+ 
OPTIONS('streaming-source.enable'='true', 'streaming-source.partition.include' 
= 'latest', 
   'streaming-source.monitor-interval' = '12 
h','streaming-source.partition-order' = 'partition-name') */ FOR SYSTEM_TIME AS 
OF kafk_tbl.proctime AS dim 
   ON kafk_tbl.groupID = dim.group_id where kafk_tbl.groupID is not 
null;{noformat}
When I execute the above statement, the following stack traces are returned:

Caused by: java.lang.NullPointerException: buffer at 
org.apache.flink.core.memory.MemorySegment.<init>(MemorySegment.java:161) 
~[flink-dist_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.core.memory.HybridMemorySegment.<init>(HybridMemorySegment.java:86)
 ~[flink-dist_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.core.memory.MemorySegmentFactory.wrap(MemorySegmentFactory.java:55)
 ~[flink-dist_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.table.data.binary.BinaryStringData.fromBytes(BinaryStringData.java:98)
 ~[flink-table_2.11-1.12.0.jar:1.12.0]

 

Caused by: org.apache.flink.util.FlinkRuntimeException: Failed to load table 
into cache after 3 retries at 
org.apache.flink.table.filesystem.FileSystemLookupFunction.checkCacheReload(FileSystemLookupFunction.java:143)
 ~[flink-table-blink_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.table.filesystem.FileSystemLookupFunction.eval(FileSystemLookupFunction.java:103)
 ~[flink-table-blink_2.11-1.12.0.jar:1.12.0] at 
LookupFunction$1577.flatMap(Unknown Source) ~[?:?] at 
org.apache.flink.table.runtime.operators.join.lookup.LookupJoinRunner.processElement(LookupJoinRunner.java:82)
 ~[flink-table-blink_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.table.runtime.operators.join.lookup.LookupJoinRunner.processElement(LookupJoinRunner.java:36)
 ~[flink-table-blink_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.streaming.api.operators.ProcessOperator.processElement(ProcessOperator.java:66)
 ~[flink-dist_2.11-1.12.0.jar:1.12.0]

  was:
 

KAFKA DDL
{code:java}
CREATE TABLE hive_catalog.flink_db_test.kfk_master_test (
master Row,
foodLst ARRAY,
proctime as PROCTIME()
) WITH (properties ..){code}
 

FLINK client query sql
{noformat}
SELECT * FROM hive_catalog.flink_db_test.kfk_master_test AS kafk_tbl 
  JOIN hive_catalog.gauss.dim_extend_shop_info /*+ 
OPTIONS('streaming-source.enable'='true', 'streaming-source.partition.include' 
= 'latest', 
   'streaming-source.monitor-interval' = '12 
h','streaming-source.partition-order' = 'partition-name') */ FOR SYSTEM_TIME AS 
OF kafk_tbl.proctime AS dim 
   ON kafk_tbl.groupID = dim.group_id where kafk_tbl.groupID is not 
null;{noformat}
When I execute the above statement, the following stack traces are returned:

Caused by: java.lang.NullPointerException: buffer at 
org.apache.flink.core.memory.MemorySegment.<init>(MemorySegment.java:161) 
~[flink-dist_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.core.memory.HybridMemorySegment.<init>(HybridMemorySegment.java:86)
 ~[flink-dist_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.core.memory.MemorySegmentFactory.wrap(MemorySegmentFactory.java:55)
 ~[flink-dist_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.table.data.binary.BinaryStringData.fromBytes(BinaryStringData.java:98)
 ~[flink-table_2.11-1.12.0.jar:1.12.0]

 

Caused by: org.apache.flink.util.FlinkRuntimeException: Failed to load table 
into cache after 3 retries at 
org.apache.flink.table.filesystem.FileSystemLookupFunction.checkCacheReload(FileSystemLookupFunction.java:143)
 ~[flink-table-blink_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.table.filesystem.FileSystemLookupFunction.eval(FileSystemLookupFunction.java:103)
 ~[flink-table-blink_2.11-1.12.0.jar:1.12.0] at 
LookupFunction$1577.flatMap(Unknown Source) ~[?:?] at 
org.apache.flink.table.runtime.operators.join.lookup.LookupJoinRunner.processElement(LookupJoinRunner.java:82)
 ~[flink-table-blink_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.table.runtime.operators.join.lookup.LookupJoinRunner.processElement(LookupJoinRunner.java:36)
 ~[flink-table-blink_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.streaming.api.operators.ProcessOperator.processElement(ProcessOperator.java:66)
 ~[flink-dist_2.11-1.12.0.jar:1.12.0]


> Flink Temporal Join Hive Dim Error
> 

[jira] [Updated] (FLINK-20576) Flink Temporal Join Hive Dim Error

2020-12-13 Thread HideOnBush (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

HideOnBush updated FLINK-20576:
---
Description: 
 

KAFKA DDL
{code:java}
CREATE TABLE hive_catalog.flink_db_test.kfk_master_test (
master Row,
foodLst ARRAY,
proctime as PROCTIME()
) WITH (properties ..){code}
 

FLINK client query sql
{noformat}
SELECT * FROM hive_catalog.flink_db_test.kfk_master_test AS kafk_tbl 
  JOIN hive_catalog.gauss.dim_extend_shop_info /*+ 
OPTIONS('streaming-source.enable'='true', 'streaming-source.partition.include' 
= 'latest', 
   'streaming-source.monitor-interval' = '12 
h','streaming-source.partition-order' = 'partition-name') */ FOR SYSTEM_TIME AS 
OF kafk_tbl.proctime AS dim 
   ON kafk_tbl.groupID = dim.group_id where kafk_tbl.groupID is not 
null;{noformat}
When I execute the above statement, the following stack traces are returned:

Caused by: java.lang.NullPointerException: buffer at 
org.apache.flink.core.memory.MemorySegment.<init>(MemorySegment.java:161) 
~[flink-dist_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.core.memory.HybridMemorySegment.<init>(HybridMemorySegment.java:86)
 ~[flink-dist_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.core.memory.MemorySegmentFactory.wrap(MemorySegmentFactory.java:55)
 ~[flink-dist_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.table.data.binary.BinaryStringData.fromBytes(BinaryStringData.java:98)
 ~[flink-table_2.11-1.12.0.jar:1.12.0]

 

Caused by: org.apache.flink.util.FlinkRuntimeException: Failed to load table 
into cache after 3 retries at 
org.apache.flink.table.filesystem.FileSystemLookupFunction.checkCacheReload(FileSystemLookupFunction.java:143)
 ~[flink-table-blink_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.table.filesystem.FileSystemLookupFunction.eval(FileSystemLookupFunction.java:103)
 ~[flink-table-blink_2.11-1.12.0.jar:1.12.0] at 
LookupFunction$1577.flatMap(Unknown Source) ~[?:?] at 
org.apache.flink.table.runtime.operators.join.lookup.LookupJoinRunner.processElement(LookupJoinRunner.java:82)
 ~[flink-table-blink_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.table.runtime.operators.join.lookup.LookupJoinRunner.processElement(LookupJoinRunner.java:36)
 ~[flink-table-blink_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.streaming.api.operators.ProcessOperator.processElement(ProcessOperator.java:66)
 ~[flink-dist_2.11-1.12.0.jar:1.12.0]

  was:
{noformat}
SELECT * FROM hive_catalog.flink_db_test.kfk_master_test AS kafk_tbl 
  JOIN hive_catalog.gauss.dim_extend_shop_info /*+ 
OPTIONS('streaming-source.enable'='true', 'streaming-source.partition.include' 
= 'latest', 
   'streaming-source.monitor-interval' = '12 
h','streaming-source.partition-order' = 'partition-name') */ FOR SYSTEM_TIME AS 
OF kafk_tbl.proctime AS dim 
   ON kafk_tbl.groupID = dim.group_id where kafk_tbl.groupID is not 
null;{noformat}
When I execute the above statement, the following stack traces are returned:

Caused by: java.lang.NullPointerException: buffer at 
org.apache.flink.core.memory.MemorySegment.<init>(MemorySegment.java:161) 
~[flink-dist_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.core.memory.HybridMemorySegment.<init>(HybridMemorySegment.java:86)
 ~[flink-dist_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.core.memory.MemorySegmentFactory.wrap(MemorySegmentFactory.java:55)
 ~[flink-dist_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.table.data.binary.BinaryStringData.fromBytes(BinaryStringData.java:98)
 ~[flink-table_2.11-1.12.0.jar:1.12.0]

 

Caused by: org.apache.flink.util.FlinkRuntimeException: Failed to load table 
into cache after 3 retries at 
org.apache.flink.table.filesystem.FileSystemLookupFunction.checkCacheReload(FileSystemLookupFunction.java:143)
 ~[flink-table-blink_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.table.filesystem.FileSystemLookupFunction.eval(FileSystemLookupFunction.java:103)
 ~[flink-table-blink_2.11-1.12.0.jar:1.12.0] at 
LookupFunction$1577.flatMap(Unknown Source) ~[?:?] at 
org.apache.flink.table.runtime.operators.join.lookup.LookupJoinRunner.processElement(LookupJoinRunner.java:82)
 ~[flink-table-blink_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.table.runtime.operators.join.lookup.LookupJoinRunner.processElement(LookupJoinRunner.java:36)
 ~[flink-table-blink_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.streaming.api.operators.ProcessOperator.processElement(ProcessOperator.java:66)
 ~[flink-dist_2.11-1.12.0.jar:1.12.0]


> Flink Temporal Join Hive Dim Error
> --
>
> Key: FLINK-20576
> URL: https://issues.apache.org/jira/browse/FLINK-20576
> Project: Flink
>  Issue Type: 

[jira] [Updated] (FLINK-20576) Flink Temporal Join Hive Dim Error

2020-12-13 Thread HideOnBush (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

HideOnBush updated FLINK-20576:
---
Description: 
{noformat}
SELECT * FROM hive_catalog.flink_db_test.kfk_master_test AS kafk_tbl 
  JOIN hive_catalog.gauss.dim_extend_shop_info /*+ 
OPTIONS('streaming-source.enable'='true', 'streaming-source.partition.include' 
= 'latest', 
   'streaming-source.monitor-interval' = '12 
h','streaming-source.partition-order' = 'partition-name') */ FOR SYSTEM_TIME AS 
OF kafk_tbl.proctime AS dim 
   ON kafk_tbl.groupID = dim.group_id where kafk_tbl.groupID is not 
null;{noformat}
When I execute the above statement, the following stack traces are returned:

Caused by: java.lang.NullPointerException: buffer at 
org.apache.flink.core.memory.MemorySegment.<init>(MemorySegment.java:161) 
~[flink-dist_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.core.memory.HybridMemorySegment.<init>(HybridMemorySegment.java:86)
 ~[flink-dist_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.core.memory.MemorySegmentFactory.wrap(MemorySegmentFactory.java:55)
 ~[flink-dist_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.table.data.binary.BinaryStringData.fromBytes(BinaryStringData.java:98)
 ~[flink-table_2.11-1.12.0.jar:1.12.0]

 

Caused by: org.apache.flink.util.FlinkRuntimeException: Failed to load table 
into cache after 3 retries at 
org.apache.flink.table.filesystem.FileSystemLookupFunction.checkCacheReload(FileSystemLookupFunction.java:143)
 ~[flink-table-blink_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.table.filesystem.FileSystemLookupFunction.eval(FileSystemLookupFunction.java:103)
 ~[flink-table-blink_2.11-1.12.0.jar:1.12.0] at 
LookupFunction$1577.flatMap(Unknown Source) ~[?:?] at 
org.apache.flink.table.runtime.operators.join.lookup.LookupJoinRunner.processElement(LookupJoinRunner.java:82)
 ~[flink-table-blink_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.table.runtime.operators.join.lookup.LookupJoinRunner.processElement(LookupJoinRunner.java:36)
 ~[flink-table-blink_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.streaming.api.operators.ProcessOperator.processElement(ProcessOperator.java:66)
 ~[flink-dist_2.11-1.12.0.jar:1.12.0]

  was:
{noformat}
SELECT * FROM hive_catalog.flink_db_test.kfk_master_test AS kafk_tbl JOIN 
hive_catalog.gauss.dim_extend_shop_info /*+ 
OPTIONS('streaming-source.enable'='true', 'streaming-source.partition.include' 
= 'latest', 'streaming-source.monitor-interval' = '12 
h','streaming-source.partition-order' = 'partition-name') */ FOR SYSTEM_TIME AS 
OF kafk_tbl.proctime AS dim ON kafk_tbl.groupID = dim.group_id where 
kafk_tbl.groupID is not null;
{noformat}
When I execute the above statement, the following stack traces are returned:

Caused by: java.lang.NullPointerException: buffer at 
org.apache.flink.core.memory.MemorySegment.<init>(MemorySegment.java:161) 
~[flink-dist_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.core.memory.HybridMemorySegment.<init>(HybridMemorySegment.java:86)
 ~[flink-dist_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.core.memory.MemorySegmentFactory.wrap(MemorySegmentFactory.java:55)
 ~[flink-dist_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.table.data.binary.BinaryStringData.fromBytes(BinaryStringData.java:98)
 ~[flink-table_2.11-1.12.0.jar:1.12.0]

 

Caused by: org.apache.flink.util.FlinkRuntimeException: Failed to load table 
into cache after 3 retries at 
org.apache.flink.table.filesystem.FileSystemLookupFunction.checkCacheReload(FileSystemLookupFunction.java:143)
 ~[flink-table-blink_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.table.filesystem.FileSystemLookupFunction.eval(FileSystemLookupFunction.java:103)
 ~[flink-table-blink_2.11-1.12.0.jar:1.12.0] at 
LookupFunction$1577.flatMap(Unknown Source) ~[?:?] at 
org.apache.flink.table.runtime.operators.join.lookup.LookupJoinRunner.processElement(LookupJoinRunner.java:82)
 ~[flink-table-blink_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.table.runtime.operators.join.lookup.LookupJoinRunner.processElement(LookupJoinRunner.java:36)
 ~[flink-table-blink_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.streaming.api.operators.ProcessOperator.processElement(ProcessOperator.java:66)
 ~[flink-dist_2.11-1.12.0.jar:1.12.0]


> Flink Temporal Join Hive Dim Error
> --
>
> Key: FLINK-20576
> URL: https://issues.apache.org/jira/browse/FLINK-20576
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Table SQL / Ecosystem
>Affects Versions: 1.12.0
>Reporter: HideOnBush
>Priority: Major
> Fix For: 1.13.0
>
>
> {noformat}

[jira] [Updated] (FLINK-20576) Flink Temporal Join Hive Dim Error

2020-12-13 Thread HideOnBush (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

HideOnBush updated FLINK-20576:
---
Description: 
{noformat}
SELECT * FROM hive_catalog.flink_db_test.kfk_master_test AS kafk_tbl JOIN 
hive_catalog.gauss.dim_extend_shop_info /*+ 
OPTIONS('streaming-source.enable'='true', 'streaming-source.partition.include' 
= 'latest', 'streaming-source.monitor-interval' = '12 
h','streaming-source.partition-order' = 'partition-name') */ FOR SYSTEM_TIME AS 
OF kafk_tbl.proctime AS dim ON kafk_tbl.groupID = dim.group_id where 
kafk_tbl.groupID is not null;
{noformat}
When I execute the above statement, the following stack traces are returned:

Caused by: java.lang.NullPointerException: buffer at 
org.apache.flink.core.memory.MemorySegment.<init>(MemorySegment.java:161) 
~[flink-dist_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.core.memory.HybridMemorySegment.<init>(HybridMemorySegment.java:86)
 ~[flink-dist_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.core.memory.MemorySegmentFactory.wrap(MemorySegmentFactory.java:55)
 ~[flink-dist_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.table.data.binary.BinaryStringData.fromBytes(BinaryStringData.java:98)
 ~[flink-table_2.11-1.12.0.jar:1.12.0]

 

Caused by: org.apache.flink.util.FlinkRuntimeException: Failed to load table 
into cache after 3 retries at 
org.apache.flink.table.filesystem.FileSystemLookupFunction.checkCacheReload(FileSystemLookupFunction.java:143)
 ~[flink-table-blink_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.table.filesystem.FileSystemLookupFunction.eval(FileSystemLookupFunction.java:103)
 ~[flink-table-blink_2.11-1.12.0.jar:1.12.0] at 
LookupFunction$1577.flatMap(Unknown Source) ~[?:?] at 
org.apache.flink.table.runtime.operators.join.lookup.LookupJoinRunner.processElement(LookupJoinRunner.java:82)
 ~[flink-table-blink_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.table.runtime.operators.join.lookup.LookupJoinRunner.processElement(LookupJoinRunner.java:36)
 ~[flink-table-blink_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.streaming.api.operators.ProcessOperator.processElement(ProcessOperator.java:66)
 ~[flink-dist_2.11-1.12.0.jar:1.12.0]

  was:
{code:java}
 {code}
{noformat}
SELECT * FROM hive_catalog.flink_db_test.kfk_master_test AS kafk_tbl JOIN 
hive_catalog.gauss.dim_extend_shop_info /*+ 
OPTIONS('streaming-source.enable'='true', 'streaming-source.partition.include' 
= 'latest', 'streaming-source.monitor-interval' = '12 
h','streaming-source.partition-order' = 'partition-name') */ FOR SYSTEM_TIME AS 
OF kafk_tbl.proctime AS dim ON kafk_tbl.groupID = dim.group_id where 
kafk_tbl.groupID is not null;
{noformat}
When I execute the above statement, the following stack traces are returned:

Caused by: java.lang.NullPointerException: buffer at 
org.apache.flink.core.memory.MemorySegment.<init>(MemorySegment.java:161) 
~[flink-dist_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.core.memory.HybridMemorySegment.<init>(HybridMemorySegment.java:86)
 ~[flink-dist_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.core.memory.MemorySegmentFactory.wrap(MemorySegmentFactory.java:55)
 ~[flink-dist_2.11-1.12.0.jar:1.12.0] at 
org.apache.flink.table.data.binary.BinaryStringData.fromBytes(BinaryStringData.java:98)
 ~[flink-table_2.11-1.12.0.jar:1.12.0]

 

Caused by: org.apache.flink.util.FlinkRuntimeException: Failed to load table into cache after 3 retries
    at org.apache.flink.table.filesystem.FileSystemLookupFunction.checkCacheReload(FileSystemLookupFunction.java:143) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
    at org.apache.flink.table.filesystem.FileSystemLookupFunction.eval(FileSystemLookupFunction.java:103) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
    at LookupFunction$1577.flatMap(Unknown Source) ~[?:?]
    at org.apache.flink.table.runtime.operators.join.lookup.LookupJoinRunner.processElement(LookupJoinRunner.java:82) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
    at org.apache.flink.table.runtime.operators.join.lookup.LookupJoinRunner.processElement(LookupJoinRunner.java:36) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
    at org.apache.flink.streaming.api.operators.ProcessOperator.processElement(ProcessOperator.java:66) ~[flink-dist_2.11-1.12.0.jar:1.12.0]


> Flink Temporal Join Hive Dim Error
> --
>
> Key: FLINK-20576
> URL: https://issues.apache.org/jira/browse/FLINK-20576
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Table SQL / Ecosystem
>Affects Versions: 1.12.0
>Reporter: HideOnBush
>Priority: Major
> Fix For: 1.13.0
>
>

[jira] [Created] (FLINK-20577) Flink Temporal Join Hive Dim Error

2020-12-11 Thread HideOnBush (Jira)
HideOnBush created FLINK-20577:
--

 Summary: Flink Temporal Join Hive Dim Error
 Key: FLINK-20577
 URL: https://issues.apache.org/jira/browse/FLINK-20577
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / API
Affects Versions: 1.12.0
 Environment: sql-clinet
Reporter: HideOnBush


Query SQL:
{code:java}
SELECT *
FROM hive_catalog.flink_db_test.kfk_master_test AS kafk_tbl
JOIN hive_catalog.gauss.dim_extend_shop_info
  /*+ OPTIONS('streaming-source.enable' = 'true',
              'streaming-source.partition.include' = 'latest',
              'streaming-source.monitor-interval' = '12 h',
              'streaming-source.partition-order' = 'partition-name') */
  FOR SYSTEM_TIME AS OF kafk_tbl.proctime AS dim
  ON kafk_tbl.groupID = dim.group_id;
{code}
Stack trace:

Caused by: org.apache.flink.util.FlinkRuntimeException: Failed to load table into cache after 3 retries
    at org.apache.flink.table.filesystem.FileSystemLookupFunction.checkCacheReload(FileSystemLookupFunction.java:143) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
    at org.apache.flink.table.filesystem.FileSystemLookupFunction.eval(FileSystemLookupFunction.java:103) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
    at LookupFunction$1577.flatMap(Unknown Source) ~[?:?]
    at org.apache.flink.table.runtime.operators.join.lookup.LookupJoinRunner.processElement(LookupJoinRunner.java:82) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
    at org.apache.flink.table.runtime.operators.join.lookup.LookupJoinRunner.processElement(LookupJoinRunner.java:36) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]

 

Caused by: java.lang.NullPointerException: buffer
    at org.apache.flink.core.memory.MemorySegment.<init>(MemorySegment.java:161) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
    at org.apache.flink.core.memory.HybridMemorySegment.<init>(HybridMemorySegment.java:86) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
    at org.apache.flink.core.memory.MemorySegmentFactory.wrap(MemorySegmentFactory.java:55) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
    at org.apache.flink.table.data.binary.BinaryStringData.fromBytes(BinaryStringData.java:98) ~[flink-table_2.11-1.12.0.jar:1.12.0]
    at org.apache.flink.table.data.StringData.fromBytes(StringData.java:67) ~[flink-table_2.11-1.12.0.jar:1.12.0]
    at org.apache.flink.table.data.ColumnarRowData.getString(ColumnarRowData.java:114) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
    at org.apache.flink.table.data.RowData.lambda$createFieldGetter$245ca7d1$1(RowData.java:243) ~[flink-table_2.11-1.12.0.jar:1.12.0]
    at org.apache.flink.table.data.RowData.lambda$createFieldGetter$25774257$1(RowData.java:317) ~[flink-table_2.11-1.12.0.jar:1.12.0]
    at org.apache.flink.table.runtime.typeutils.RowDataSerializer.copyRowData(RowDataSerializer.java:166) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-20576) Flink Temporal Join Hive Dim Error

2020-12-11 Thread HideOnBush (Jira)
HideOnBush created FLINK-20576:
--

 Summary: Flink Temporal Join Hive Dim Error
 Key: FLINK-20576
 URL: https://issues.apache.org/jira/browse/FLINK-20576
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.12.0
Reporter: HideOnBush


{noformat}
SELECT *
FROM hive_catalog.flink_db_test.kfk_master_test AS kafk_tbl
JOIN hive_catalog.gauss.dim_extend_shop_info
  /*+ OPTIONS('streaming-source.enable' = 'true',
              'streaming-source.partition.include' = 'latest',
              'streaming-source.monitor-interval' = '12 h',
              'streaming-source.partition-order' = 'partition-name') */
  FOR SYSTEM_TIME AS OF kafk_tbl.proctime AS dim
  ON kafk_tbl.groupID = dim.group_id
WHERE kafk_tbl.groupID IS NOT NULL;
{noformat}
When I execute the statement above, the following stack traces are returned:

 

Caused by: java.lang.NullPointerException: buffer
    at org.apache.flink.core.memory.MemorySegment.<init>(MemorySegment.java:161) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
    at org.apache.flink.core.memory.HybridMemorySegment.<init>(HybridMemorySegment.java:86) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
    at org.apache.flink.core.memory.MemorySegmentFactory.wrap(MemorySegmentFactory.java:55) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
    at org.apache.flink.table.data.binary.BinaryStringData.fromBytes(BinaryStringData.java:98) ~[flink-table_2.11-1.12.0.jar:1.12.0]

 

Caused by: org.apache.flink.util.FlinkRuntimeException: Failed to load table into cache after 3 retries
    at org.apache.flink.table.filesystem.FileSystemLookupFunction.checkCacheReload(FileSystemLookupFunction.java:143) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
    at org.apache.flink.table.filesystem.FileSystemLookupFunction.eval(FileSystemLookupFunction.java:103) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
    at LookupFunction$1577.flatMap(Unknown Source) ~[?:?]
    at org.apache.flink.table.runtime.operators.join.lookup.LookupJoinRunner.processElement(LookupJoinRunner.java:82) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
    at org.apache.flink.table.runtime.operators.join.lookup.LookupJoinRunner.processElement(LookupJoinRunner.java:36) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
    at org.apache.flink.streaming.api.operators.ProcessOperator.processElement(ProcessOperator.java:66) ~[flink-dist_2.11-1.12.0.jar:1.12.0]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-20373) Flink table jsonArray access all

2020-12-03 Thread HideOnBush (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17243639#comment-17243639
 ] 

HideOnBush edited comment on FLINK-20373 at 12/4/20, 2:03 AM:
--

Yeah, duplicate problem. Thanks [~jark]


was (Author: hideonbush):
Yeah, duplicate problem. Thanks

> Flink table jsonArray access all
> 
>
> Key: FLINK-20373
> URL: https://issues.apache.org/jira/browse/FLINK-20373
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Client
>Affects Versions: 1.11.2
>Reporter: HideOnBush
>Priority: Major
>
> Official support for jsonArray is provided, and array elements can be 
> accessed as Row values by subscript. Should we also consider that the length 
> of each jsonArray varies? If every element has to be accessed by an explicit 
> subscript, the code becomes very long.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20373) Flink table jsonArray access all

2020-12-03 Thread HideOnBush (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17243639#comment-17243639
 ] 

HideOnBush commented on FLINK-20373:


Yeah, duplicate problem. Thanks

> Flink table jsonArray access all
> 
>
> Key: FLINK-20373
> URL: https://issues.apache.org/jira/browse/FLINK-20373
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Client
>Affects Versions: 1.11.2
>Reporter: HideOnBush
>Priority: Major
>
> Official support for jsonArray is provided, and array elements can be 
> accessed as Row values by subscript. Should we also consider that the length 
> of each jsonArray varies? If every element has to be accessed by an explicit 
> subscript, the code becomes very long.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20373) Flink table jsonArray access all

2020-12-03 Thread HideOnBush (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17243157#comment-17243157
 ] 

HideOnBush commented on FLINK-20373:


foodLst ARRAY

Like this. I want to turn the array into rows (ARRAY to ROW) in Flink SQL, using a UDF.
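
For comparison, a sketch of a built-in way to do the same thing without a UDF, assuming foodLst is an ARRAY of ROW values (the table name and the element fields food_name and unit are assumptions, since the real schema is not shown in this thread): CROSS JOIN UNNEST emits one row per array element.

{code:sql}
-- Sketch only: table name and element fields are assumptions.
SELECT t.groupID, f.food_name, f.unit
FROM kfk_master_test AS t
CROSS JOIN UNNEST(t.foodLst) AS f (food_name, unit);
{code}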

> Flink table jsonArray access all
> 
>
> Key: FLINK-20373
> URL: https://issues.apache.org/jira/browse/FLINK-20373
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Client
>Affects Versions: 1.11.2
>Reporter: HideOnBush
>Priority: Major
>
> Official support for jsonArray is provided, and array elements can be 
> accessed as Row values by subscript. Should we also consider that the length 
> of each jsonArray varies? If every element has to be accessed by an explicit 
> subscript, the code becomes very long.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-20452) Mysql JDBC Sink UpsertStreamTableSink requires that Table has a full primary keys

2020-12-02 Thread HideOnBush (Jira)
HideOnBush created FLINK-20452:
--

 Summary: Mysql JDBC Sink UpsertStreamTableSink requires that Table 
has a full primary keys
 Key: FLINK-20452
 URL: https://issues.apache.org/jira/browse/FLINK-20452
 Project: Flink
  Issue Type: Bug
 Environment: {code:java}
CREATE TABLE table_name (
 report_date VARCHAR not null, 
 group_id VARCHAR not null, 
 shop_id VARCHAR not null, 
 shop_name VARCHAR, 
 food_category_name VARCHAR, 
 food_name VARCHAR, 
 unit VARCHAR,
 rt_food_unit_cnt BIGINT, 
 rt_food_unit_real_amt double, 
 rt_food_unit_bill_rate double, 
 rt_food_unit_catagory_rate double, 
 rt_food_unit_all_rate double,
 PRIMARY KEY (report_date, group_id, shop_id) NOT ENFORCED
) WITH (
 'connector.type' = 'jdbc',
 'connector.driver' = 'com.mysql.jdbc.Driver',
 'connector.url' = 'jdbc:mysql://host:port/db?autoReconnect=true',
 'connector.table' = 'table', 
 'connector.username' = 'xxx', 
 'connector.password' = 'xxx', 
 'connector.write.flush.max-rows' = '100' 
)
{code}
Reporter: HideOnBush


I specified PRIMARY KEY (report_date, group_id, shop_id) NOT ENFORCED when I 
created the table in 1.11, but when I execute the INSERT INTO for the MySQL 
JDBC sink I still get the error "UpsertStreamTableSink requires that Table has 
a full primary keys if it is updated". Why? Doesn't 1.11 support specifying the 
primary key in DDL?
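
If I understand the 1.11 connectors correctly, the DDL above uses the legacy factory ('connector.type' = 'jdbc'), which goes through UpsertStreamTableSink and derives the update key from the query rather than from the DDL. A sketch of the same table on the new JDBC factory ('connector' = 'jdbc'), which takes the upsert key from the declared PRIMARY KEY; connection values are placeholders and the column list is abbreviated:

{code:sql}
CREATE TABLE table_name (
  report_date VARCHAR NOT NULL,
  group_id VARCHAR NOT NULL,
  shop_id VARCHAR NOT NULL,
  shop_name VARCHAR,
  rt_food_unit_cnt BIGINT,
  PRIMARY KEY (report_date, group_id, shop_id) NOT ENFORCED
) WITH (
  'connector' = 'jdbc',                     -- new factory, not 'connector.type'
  'driver' = 'com.mysql.jdbc.Driver',
  'url' = 'jdbc:mysql://host:port/db?autoReconnect=true',
  'table-name' = 'table',
  'username' = 'xxx',
  'password' = 'xxx',
  'sink.buffer-flush.max-rows' = '100'
);
{code}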



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20373) Flink table jsonArray access all

2020-12-02 Thread HideOnBush (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17242359#comment-17242359
 ] 

HideOnBush commented on FLINK-20373:


 
{code:java}
import java.util.Optional

import org.apache.flink.table.catalog.DataTypeFactory
import org.apache.flink.table.functions.TableFunction
import org.apache.flink.table.types.DataType
import org.apache.flink.table.types.inference.{CallContext, TypeInference, TypeStrategy}

// Emits every element of the input array as its own row (ARRAY to ROW).
// ArrayInputTypeStrategy is my own InputTypeStrategy (not shown here) that
// accepts a single array argument.
class ExplodeArray() extends TableFunction[AnyRef] {

  def eval(array: Array[AnyRef]): Unit = {
    if (array == null) {
      return
    }
    // collect(...) forwards one row per array element
    array.foreach(collect)
  }

  override def getTypeInference(typeFactory: DataTypeFactory): TypeInference = {
    // The result type of the function is the element type of the input array
    new TypeInference.Builder()
      .inputTypeStrategy(new ArrayInputTypeStrategy())
      .outputTypeStrategy(new TypeStrategy {
        override def inferType(callContext: CallContext): Optional[DataType] = {
          Optional.of(callContext.getArgumentDataTypes.get(0).getChildren.get(0))
        }
      })
      .build()
  }
}
{code}
 

I wrote this UDF to explode a Flink SQL JSON array into rows.
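
A sketch of how the function could then be registered and called from SQL; the package name, function name, table name and element fields are assumptions, not taken from the code above:

{code:sql}
-- Assumes the class above is packaged as com.example.udf.ExplodeArray and that
-- foodLst is an ARRAY of ROW values; all names here are illustrative.
CREATE TEMPORARY FUNCTION explode_array AS 'com.example.udf.ExplodeArray' LANGUAGE SCALA;

SELECT t.groupID, f.food_name, f.unit
FROM kfk_master_test AS t,
     LATERAL TABLE(explode_array(t.foodLst)) AS f (food_name, unit);
{code}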

> Flink table jsonArray access all
> 
>
> Key: FLINK-20373
> URL: https://issues.apache.org/jira/browse/FLINK-20373
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Client
>Affects Versions: 1.11.2
>Reporter: HideOnBush
>Priority: Major
>
> Official support for jsonArray is provided, and array elements can be 
> accessed as Row values by subscript. Should we also consider that the length 
> of each jsonArray varies? If every element has to be accessed by an explicit 
> subscript, the code becomes very long.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-20373) Flink table jsonArray access all

2020-11-26 Thread HideOnBush (Jira)
HideOnBush created FLINK-20373:
--

 Summary: Flink table jsonArray access all
 Key: FLINK-20373
 URL: https://issues.apache.org/jira/browse/FLINK-20373
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Client
Affects Versions: 1.11.2
Reporter: HideOnBush
 Fix For: 1.11.3


Official support for jsonArray is provided, and array elements can be accessed 
as Row values by subscript. Should we also consider that the length of each 
jsonArray varies? If every element has to be accessed by an explicit subscript, 
the code becomes very long.
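
To illustrate the request (the column foodLst and the subscripts are hypothetical, since no schema is attached to this ticket): today every element has to be pulled out with an explicit 1-based subscript, so a long or variable-length array means a long, hard-coded query.

{code:sql}
-- Illustrative only.
SELECT
  foodLst[1] AS food_1,
  foodLst[2] AS food_2,
  foodLst[3] AS food_3   -- ...one line per possible subscript
FROM kfk_master_test;
{code}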



--
This message was sent by Atlassian Jira
(v8.3.4#803005)