[jira] [Created] (FLINK-35229) join An error occurred when the table was empty

2024-04-24 Thread lixu (Jira)
lixu created FLINK-35229:


 Summary: join An error occurred when the table was empty
 Key: FLINK-35229
 URL: https://issues.apache.org/jira/browse/FLINK-35229
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / API
Affects Versions: 1.19.0, 1.17.2, 1.18.0
Reporter: lixu
 Fix For: 1.17.3, 1.19.1, 1.18.1


{code:java}
// code placeholder
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setRuntimeMode(RuntimeExecutionMode.BATCH).setParallelism(1);
StreamTableEnvironment tableEnvironment = StreamTableEnvironment.create(env);

Table ticker = tableEnvironment.fromValues(
        DataTypes.ROW(
                DataTypes.FIELD("symbol", DataTypes.STRING()),
                DataTypes.FIELD("price", DataTypes.BIGINT())),
        row("A", 12L),
        row("B", 17L));
tableEnvironment.createTemporaryView("ticker_t", ticker);

// ticker_y has a schema but no rows, i.e. it is an empty table
Table ticker1 = tableEnvironment.fromValues(
        DataTypes.ROW(
                DataTypes.FIELD("symbol", DataTypes.STRING()),
                DataTypes.FIELD("price", DataTypes.BIGINT())));
tableEnvironment.createTemporaryView("ticker_y", ticker1);

Table ticker2 = tableEnvironment.fromValues(
        DataTypes.ROW(
                DataTypes.FIELD("symbol", DataTypes.STRING()),
                DataTypes.FIELD("price", DataTypes.BIGINT())),
        row("A", 12L),
        row("B", 17L));
tableEnvironment.createTemporaryView("ticker_z", ticker2);

tableEnvironment.sqlQuery(
        "select coalesce(t.symbol, y.symbol, z.symbol) as symbol, " +
        "t.price as price_t, y.price as price_y, z.price as price_z " +
        "from ticker_t t " +
        "FULL OUTER JOIN ticker_y y ON t.symbol = y.symbol " +
        "FULL OUTER JOIN ticker_z z ON y.symbol = z.symbol")
        .execute().print();
{code}
+----+--------------------------------+----------------------+----------------------+----------------------+
| op |                         symbol |              price_t |              price_y |              price_z |
+----+--------------------------------+----------------------+----------------------+----------------------+
| +I |                              A |                   12 |               &lt;NULL&gt; |               &lt;NULL&gt; |
| +I |                              B |                   17 |               &lt;NULL&gt; |               &lt;NULL&gt; |
| +I |                              A |               &lt;NULL&gt; |               &lt;NULL&gt; |                   12 |
| +I |                              B |               &lt;NULL&gt; |               &lt;NULL&gt; |                   17 |
+----+--------------------------------+----------------------+----------------------+----------------------+
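The printed result can be reproduced with a stand-alone sketch of nested-loop FULL OUTER JOIN semantics (the helper names and row layout here are invented for illustration, not Flink internals): because ticker_y is empty, every ticker_t row survives the first join with NULL y columns, and the second join's condition y.symbol = z.symbol can then never match, so the ticker_z rows come back as separate null-padded rows.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;

// Illustrative FULL OUTER JOIN over Object[] rows (sketch only).
public class FullOuterJoinSketch {

    // Concatenate a left and right row, null-padding the missing side.
    static Object[] pad(Object[] l, Object[] r, int lw, int rw) {
        Object[] row = new Object[lw + rw];
        if (l != null) System.arraycopy(l, 0, row, 0, lw);
        if (r != null) System.arraycopy(r, 0, row, lw, rw);
        return row;
    }

    static List<Object[]> fullOuterJoin(List<Object[]> left, int lw, int lKey,
                                        List<Object[]> right, int rw, int rKey) {
        List<Object[]> out = new ArrayList<>();
        boolean[] rightMatched = new boolean[right.size()];
        for (Object[] l : left) {
            boolean matched = false;
            for (int i = 0; i < right.size(); i++) {
                // SQL equality: a NULL join key never matches anything
                if (l[lKey] != null && Objects.equals(l[lKey], right.get(i)[rKey])) {
                    out.add(pad(l, right.get(i), lw, rw));
                    matched = true;
                    rightMatched[i] = true;
                }
            }
            if (!matched) out.add(pad(l, null, lw, rw));
        }
        for (int i = 0; i < right.size(); i++) {
            if (!rightMatched[i]) out.add(pad(null, right.get(i), lw, rw));
        }
        return out;
    }

    public static void main(String[] args) {
        List<Object[]> t = new ArrayList<>();
        t.add(new Object[]{"A", 12L});
        t.add(new Object[]{"B", 17L});
        List<Object[]> y = new ArrayList<>();          // empty, like ticker_y
        List<Object[]> z = new ArrayList<>();
        z.add(new Object[]{"A", 12L});
        z.add(new Object[]{"B", 17L});

        // ticker_t FULL OUTER JOIN ticker_y ON t.symbol = y.symbol
        List<Object[]> ty = fullOuterJoin(t, 2, 0, y, 2, 0);
        // ... FULL OUTER JOIN ticker_z ON y.symbol = z.symbol (y.symbol is column 2)
        List<Object[]> tyz = fullOuterJoin(ty, 4, 2, z, 2, 0);

        for (Object[] row : tyz) {
            // coalesce(t.symbol, y.symbol, z.symbol)
            Object symbol = row[0] != null ? row[0] : (row[2] != null ? row[2] : row[4]);
            System.out.println(symbol + " | " + row[1] + " | " + row[3] + " | " + row[5]);
        }
    }
}
```

The sketch yields the same four rows as the printed output above, so the open question in this report is whether that result is intended or a planner error.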



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-23660) Hive Catalog support truncate table

2021-09-03 Thread lixu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lixu updated FLINK-23660:
-
Component/s: Connectors / Hive

> Hive Catalog support truncate table
> ---
>
> Key: FLINK-23660
> URL: https://issues.apache.org/jira/browse/FLINK-23660
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Hive, Table SQL / API
>Affects Versions: 1.13.0
>Reporter: lixu
>Priority: Major
> Fix For: 1.14.0, 1.15.0
>
>
> I think it is necessary to support Hive TRUNCATE TABLE for real business use cases.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-23660) Hive Catalog support truncate table

2021-09-03 Thread lixu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lixu updated FLINK-23660:
-
Fix Version/s: 1.15.0

> Hive Catalog support truncate table
> ---
>
> Key: FLINK-23660
> URL: https://issues.apache.org/jira/browse/FLINK-23660
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.13.0
>Reporter: lixu
>Priority: Major
> Fix For: 1.14.0, 1.15.0
>
>
> I think it is necessary to support Hive TRUNCATE TABLE for real business use cases.





[jira] [Commented] (FLINK-24094) Could not execute ALTER TABLE check_rule_base_hive_catalog.test_flink.test_partition DROP PARTITION (dt=2021-08-31)

2021-09-01 Thread lixu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17407935#comment-17407935
 ] 

lixu commented on FLINK-24094:
--

Hive supports this (dropping a partition with a partial spec);

> Could not execute ALTER TABLE 
> check_rule_base_hive_catalog.test_flink.test_partition DROP PARTITION 
> (dt=2021-08-31)
> ---
>
> Key: FLINK-24094
> URL: https://issues.apache.org/jira/browse/FLINK-24094
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Hive
>Affects Versions: 1.13.1
>Reporter: lixu
>Priority: Major
> Fix For: 1.14.0, 1.13.3, 1.15.0
>
>
> [https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/connectors/table/hive/hive_dialect/#drop-partitions]
> {code:java}
> // code placeholder
> Caused by: org.apache.flink.table.catalog.exceptions.PartitionSpecInvalidException: PartitionSpec CatalogPartitionSpec{{dt=2021-08-31}} does not match partition keys [dt, xtlx, sblx] of table test_flink.test_partition in catalog check_rule_base_hive_catalog.
> 	at org.apache.flink.table.catalog.hive.HiveCatalog.getOrderedFullPartitionValues(HiveCatalog.java:1189) ~[flink-sql-connector-hive-2.3.6_2.11-1.13.1.jar:1.13.1]
> 	at org.apache.flink.table.catalog.hive.HiveCatalog.dropPartition(HiveCatalog.java:899) ~[flink-sql-connector-hive-2.3.6_2.11-1.13.1.jar:1.13.1]
> 	at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:982) ~[flink-table-blink_2.11-1.13.1.jar:1.13.1]
> 	at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:730) ~[flink-table-blink_2.11-1.13.1.jar:1.13.1]
> {code}





[jira] [Commented] (FLINK-24094) Could not execute ALTER TABLE check_rule_base_hive_catalog.test_flink.test_partition DROP PARTITION (dt=2021-08-31)

2021-08-31 Thread lixu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17407879#comment-17407879
 ] 

lixu commented on FLINK-24094:
--

I think [dt], [dt, xtlx], and [dt, xtlx, sblx] should all be accepted; the business needs it.
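The rule this comment asks for can be sketched as a prefix check: a partial spec is valid when its keys, in order, form a prefix of the table's partition keys. The helper below is hypothetical, not HiveCatalog's actual API, which today requires the spec to name every partition key.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical validation for a partial partition spec.
public class PartitionSpecCheck {

    // spec must preserve insertion order, e.g. a LinkedHashMap
    static boolean isValidPrefixSpec(List<String> partitionKeys, Map<String, String> spec) {
        if (spec.isEmpty() || spec.size() > partitionKeys.size()) {
            return false;
        }
        int i = 0;
        for (String specKey : spec.keySet()) {
            if (!partitionKeys.get(i++).equals(specKey)) {
                return false; // out of order or unknown key
            }
        }
        return true;
    }

    public static void main(String[] args) {
        List<String> keys = List.of("dt", "xtlx", "sblx");
        Map<String, String> spec = new LinkedHashMap<>();
        spec.put("dt", "2021-08-31");
        System.out.println(isValidPrefixSpec(keys, spec));                // [dt] is a prefix
        spec.put("xtlx", "1");
        System.out.println(isValidPrefixSpec(keys, spec));                // [dt, xtlx] is a prefix
        System.out.println(isValidPrefixSpec(keys, Map.of("sblx", "x"))); // [sblx] is not
    }
}
```

Under this rule the spec {dt=2021-08-31} from the stack trace would be accepted instead of raising PartitionSpecInvalidException.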

> Could not execute ALTER TABLE 
> check_rule_base_hive_catalog.test_flink.test_partition DROP PARTITION 
> (dt=2021-08-31)
> ---
>
> Key: FLINK-24094
> URL: https://issues.apache.org/jira/browse/FLINK-24094
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.13.1
>Reporter: lixu
>Priority: Major
> Fix For: 1.14.0, 1.13.3, 1.15.0
>
>
> [https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/connectors/table/hive/hive_dialect/#drop-partitions]
> {code:java}
> // code placeholder
> Caused by: org.apache.flink.table.catalog.exceptions.PartitionSpecInvalidException: PartitionSpec CatalogPartitionSpec{{dt=2021-08-31}} does not match partition keys [dt, xtlx, sblx] of table test_flink.test_partition in catalog check_rule_base_hive_catalog.
> 	at org.apache.flink.table.catalog.hive.HiveCatalog.getOrderedFullPartitionValues(HiveCatalog.java:1189) ~[flink-sql-connector-hive-2.3.6_2.11-1.13.1.jar:1.13.1]
> 	at org.apache.flink.table.catalog.hive.HiveCatalog.dropPartition(HiveCatalog.java:899) ~[flink-sql-connector-hive-2.3.6_2.11-1.13.1.jar:1.13.1]
> 	at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:982) ~[flink-table-blink_2.11-1.13.1.jar:1.13.1]
> 	at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:730) ~[flink-table-blink_2.11-1.13.1.jar:1.13.1]
> {code}





[jira] [Commented] (FLINK-23857) insert overwirite table select * from t where 1 != 1, Unable to clear table data

2021-08-31 Thread lixu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17407791#comment-17407791
 ] 

lixu commented on FLINK-23857:
--

hive 1.1.0

> insert overwirite table select * from t where 1 != 1, Unable to clear table 
> data
> 
>
> Key: FLINK-23857
> URL: https://issues.apache.org/jira/browse/FLINK-23857
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem, Connectors / Hive
>Affects Versions: 1.13.1
>Reporter: lixu
>Assignee: Rui Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0, 1.15.0
>
>
> insert overwrite table select * from t where 1 != 1 is unable to clear the table data, unlike Hive.





[jira] [Created] (FLINK-24094) Could not execute ALTER TABLE check_rule_base_hive_catalog.test_flink.test_partition DROP PARTITION (dt=2021-08-31)

2021-08-31 Thread lixu (Jira)
lixu created FLINK-24094:


 Summary: Could not execute ALTER TABLE 
check_rule_base_hive_catalog.test_flink.test_partition DROP PARTITION 
(dt=2021-08-31)
 Key: FLINK-24094
 URL: https://issues.apache.org/jira/browse/FLINK-24094
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Hive
Affects Versions: 1.13.1
Reporter: lixu
 Fix For: 1.14.0, 1.13.3, 1.15.0


[https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/connectors/table/hive/hive_dialect/#drop-partitions]
{code:java}
// code placeholder
Caused by: org.apache.flink.table.catalog.exceptions.PartitionSpecInvalidException: PartitionSpec CatalogPartitionSpec{{dt=2021-08-31}} does not match partition keys [dt, xtlx, sblx] of table test_flink.test_partition in catalog check_rule_base_hive_catalog.
	at org.apache.flink.table.catalog.hive.HiveCatalog.getOrderedFullPartitionValues(HiveCatalog.java:1189) ~[flink-sql-connector-hive-2.3.6_2.11-1.13.1.jar:1.13.1]
	at org.apache.flink.table.catalog.hive.HiveCatalog.dropPartition(HiveCatalog.java:899) ~[flink-sql-connector-hive-2.3.6_2.11-1.13.1.jar:1.13.1]
	at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:982) ~[flink-table-blink_2.11-1.13.1.jar:1.13.1]
	at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:730) ~[flink-table-blink_2.11-1.13.1.jar:1.13.1]
{code}





[jira] [Commented] (FLINK-23857) insert overwirite table select * from t where 1 != 1, Unable to clear table data

2021-08-31 Thread lixu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17407279#comment-17407279
 ] 

lixu commented on FLINK-23857:
--

insert overwrite table t1 partition(f2=1) select * from t2 where 1 != 1

I merged the code and tested it; it is still unable to clear the partition data;
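The Hive-compatible behavior this report expects can be stated in a tiny stand-alone model (illustrative only, not the connector's code): INSERT OVERWRITE replaces the target's current contents with the query result, so an empty result, such as one produced by WHERE 1 != 1, must still leave the target empty.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative model of INSERT OVERWRITE semantics: the target is cleared
// unconditionally, so a zero-row query result empties the table, as Hive does.
public class OverwriteSketch {

    static <T> void insertOverwrite(List<T> target, List<T> queryResult) {
        List<T> rows = new ArrayList<>(queryResult); // snapshot; may be empty
        target.clear();                              // overwrite always clears first
        target.addAll(rows);
    }

    public static void main(String[] args) {
        List<String> table = new ArrayList<>(List.of("r1", "r2", "r3"));
        insertOverwrite(table, List.of());           // "where 1 != 1": zero rows
        System.out.println(table);                   // prints []
    }
}
```

The bug is that the filesystem/Hive sink skips the clearing step when the incoming result set is empty.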

> insert overwirite table select * from t where 1 != 1, Unable to clear table 
> data
> 
>
> Key: FLINK-23857
> URL: https://issues.apache.org/jira/browse/FLINK-23857
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem, Connectors / Hive
>Affects Versions: 1.13.1
>Reporter: lixu
>Assignee: Rui Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0, 1.15.0
>
>
> insert overwrite table select * from t where 1 != 1 is unable to clear the table data, unlike Hive.





[jira] [Created] (FLINK-23860) Conversion to relational algebra failed to preserve datatypes

2021-08-18 Thread lixu (Jira)
lixu created FLINK-23860:


 Summary: Conversion to relational algebra failed to preserve 
datatypes
 Key: FLINK-23860
 URL: https://issues.apache.org/jira/browse/FLINK-23860
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.13.2, 1.13.1
Reporter: lixu
 Fix For: 1.14.0, 1.13.3


{code:java}
// code placeholder
StreamExecutionEnvironment streamExecutionEnvironment = StreamExecutionEnvironment.getExecutionEnvironment();
StreamTableEnvironment tableEnvironment = StreamTableEnvironment.create(streamExecutionEnvironment);

tableEnvironment.executeSql("CREATE TABLE datagen (\n" +
        " f_sequence INT,\n" +
        " f_random INT,\n" +
        " f_random_str STRING,\n" +
        " ts AS localtimestamp,\n" +
        " WATERMARK FOR ts AS ts\n" +
        ") WITH (\n" +
        " 'connector' = 'datagen',\n" +
        " 'rows-per-second'='5',\n" +
        " 'fields.f_sequence.kind'='sequence',\n" +
        " 'fields.f_sequence.start'='1',\n" +
        " 'fields.f_sequence.end'='1000',\n" +
        " 'fields.f_random.min'='1',\n" +
        " 'fields.f_random.max'='1000',\n" +
        " 'fields.f_random_str.length'='10'\n" +
        ")");

Table table = tableEnvironment.sqlQuery("select row(f_sequence, f_random) as c from datagen");

Table table1 = tableEnvironment.sqlQuery("select * from " + table);

table1.execute().print();
{code}
{code:java}
// exception
Exception in thread "main" java.lang.AssertionError: Conversion to relational algebra failed to preserve datatypes:
validated type:
RecordType(RecordType:peek_no_expand(INTEGER EXPR$0, INTEGER EXPR$1) NOT NULL c) NOT NULL
converted type:
RecordType(RecordType(INTEGER EXPR$0, INTEGER EXPR$1) NOT NULL c) NOT NULL
rel:
LogicalProject(c=[ROW($0, $1)])
  LogicalWatermarkAssigner(rowtime=[ts], watermark=[$3])
    LogicalProject(f_sequence=[$0], f_random=[$1], f_random_str=[$2], ts=[LOCALTIMESTAMP])
      LogicalTableScan(table=[[default_catalog, default_database, datagen]])

	at org.apache.calcite.sql2rel.SqlToRelConverter.checkConvertedType(SqlToRelConverter.java:467)
	at org.apache.calcite.sql2rel.SqlToRelConverter.convertQuery(SqlToRelConverter.java:582)
	at org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$rel(FlinkPlannerImpl.scala:169)
	at org.apache.flink.table.planner.calcite.FlinkPlannerImpl.rel(FlinkPlannerImpl.scala:161)
	at org.apache.flink.table.planner.operations.SqlToOperationConverter.toQueryOperation(SqlToOperationConverter.java:989)
	at org.apache.flink.table.planner.operations.SqlToOperationConverter.convertSqlQuery(SqlToOperationConverter.java:958)
	at org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:283)
	at org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:101)
	at org.apache.flink.table.api.internal.TableEnvironmentImpl.sqlQuery(TableEnvironmentImpl.java:704)
{code}





[jira] [Created] (FLINK-23857) insert overwirite table select * from t where 1 != 1, Unable to clear table data

2021-08-18 Thread lixu (Jira)
lixu created FLINK-23857:


 Summary: insert overwirite table select * from t where 1 != 1, 
Unable to clear table data
 Key: FLINK-23857
 URL: https://issues.apache.org/jira/browse/FLINK-23857
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.13.1
Reporter: lixu
 Fix For: 1.14.0, 1.13.3


insert overwrite table select * from t where 1 != 1 is unable to clear the table data, unlike Hive.





[jira] [Created] (FLINK-23660) Hive Catalog support truncate table

2021-08-06 Thread lixu (Jira)
lixu created FLINK-23660:


 Summary: Hive Catalog support truncate table
 Key: FLINK-23660
 URL: https://issues.apache.org/jira/browse/FLINK-23660
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / API
Affects Versions: 1.13.0
Reporter: lixu
 Fix For: 1.14.0


I think it is necessary to support Hive TRUNCATE TABLE for real business use cases.





[jira] [Commented] (FLINK-23096) HiveParser could not attach the sessionstate of hive

2021-08-03 Thread lixu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17392733#comment-17392733
 ] 

lixu commented on FLINK-23096:
--

{code:java}
// code placeholder
java.lang.IllegalArgumentException: Pathname /C:/Users/merit/AppData/Local/Temp/merit/b6b954c0-78b8-458b-be35-191dcd94d535 from C:/Users/merit/AppData/Local/Temp/merit/b6b954c0-78b8-458b-be35-191dcd94d535 is not a valid DFS filename.
	at org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:196)
	at org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:105)
	at org.apache.hadoop.hdfs.DistributedFileSystem$12.doCall(DistributedFileSystem.java:638)
	at org.apache.hadoop.hdfs.DistributedFileSystem$12.doCall(DistributedFileSystem.java:634)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:634)
	at org.apache.flink.table.planner.delegation.hive.HiveParser.clearSessionState(HiveParser.java:229)
	at org.apache.flink.table.planner.delegation.hive.HiveParser.parse(HiveParser.java:108)
	at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:724)
{code}

> HiveParser could not attach the sessionstate of hive
> 
>
> Key: FLINK-23096
> URL: https://issues.apache.org/jira/browse/FLINK-23096
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.13.1
>Reporter: shizhengchao
>Assignee: shizhengchao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0, 1.13.2
>
>
> My sql code is as follows:
> {code:java}
> // code placeholder
> CREATE CATALOG myhive WITH (
> 'type' = 'hive',
> 'default-database' = 'default',
> 'hive-conf-dir' = '/home/service/upload-job-file/1624269463008'
> );
> use catalog hive;
> set 'table.sql-dialect' = 'hive';
> create view if not exists view_test as
> select
>   cast(goods_id as string) as goods_id,
>   cast(depot_id as string) as depot_id,
>   cast(product_id as string) as product_id,
>   cast(tenant_code as string) as tenant_code
> from edw.dim_yezi_whse_goods_base_info/*+ 
> OPTIONS('streaming-source.consume-start-offset'='dayno=20210621') */;
> {code}
> and the exception is as follows:
> {code:java}
> // code placeholder
> org.apache.flink.client.program.ProgramInvocationException: The main method 
> caused an error: Conf non-local session path expected to be non-null
> at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:372)
> at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:222)
> at 
> org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:114)
> at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:812)
> at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:246)
> at 
> org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:1054)
> at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1132)
> at 
> org.apache.flink.client.cli.CliFrontend$$Lambda$68/330382173.call(Unknown 
> Source)
> at 
> org.apache.flink.runtime.security.contexts.HadoopSecurityContext$$Lambda$69/680712932.run(Unknown
>  Source)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1692)
> at 
> org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
> at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1132)
> Caused by: java.lang.NullPointerException: Conf non-local session path 
> expected to be non-null
> at 
> com.google.common.base.Preconditions.checkNotNull(Preconditions.java:208)
> at 
> org.apache.hadoop.hive.ql.session.SessionState.getHDFSSessionPath(SessionState.java:669)
> at 
> org.apache.flink.table.planner.delegation.hive.HiveParser.clearSessionState(HiveParser.java:376)
> at 
> org.apache.flink.table.planner.delegation.hive.HiveParser.parse(HiveParser.java:219)
> at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:724)
> at 
> com.shizhengchao.io.FlinkSqlStreamingPlatform.callFlinkS

[jira] [Commented] (FLINK-23096) HiveParser could not attach the sessionstate of hive

2021-08-02 Thread lixu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17391911#comment-17391911
 ] 

lixu commented on FLINK-23096:
--

The LocalSessionPath is a path on the local filesystem, but path.getFileSystem(hiveConf) returns the HDFS filesystem, so the delete fails.

I think the LocalSessionPath should be deleted with the LocalFileSystem.
{code:java}
// code placeholder

// org.apache.hadoop.hive.ql.session.SessionState: the path is created with
// the local filesystem when isLocal is true
private static void createPath(HiveConf conf, Path path, String permission,
        boolean isLocal, boolean isCleanUp) throws IOException {
    FsPermission fsPermission = new FsPermission(permission);
    FileSystem fs;
    if (isLocal) {
        fs = FileSystem.getLocal(conf);
    } else {
        fs = path.getFileSystem(conf);
    }

    if (!fs.exists(path)) {
        fs.mkdirs(path, fsPermission);
        String dirType = isLocal ? "local" : "HDFS";
        LOG.info("Created " + dirType + " directory: " + path.toString());
    }

    if (isCleanUp) {
        fs.deleteOnExit(path);
    }
}

// org.apache.flink.table.planner.delegation.hive.HiveParser
private void clearSessionState(HiveConf hiveConf) {
    SessionState sessionState = SessionState.get();
    if (sessionState != null) {
        try {
            sessionState.close();
            List<Path> toDelete = new ArrayList<>();
            toDelete.add(SessionState.getHDFSSessionPath(hiveConf));
            toDelete.add(SessionState.getLocalSessionPath(hiveConf));
            for (Path path : toDelete) {
                // bug: for the local session path this resolves to the DFS
                // filesystem, so the delete fails on a local path
                FileSystem fs = path.getFileSystem(hiveConf);
                fs.delete(path, true);
            }
        } catch (IOException e) {
            LOG.warn("Error closing SessionState", e);
        }
    }
}
{code}
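The proposed direction, resolving the local session path with the local filesystem instead of asking the path for its (DFS) filesystem, can be modeled with a self-contained sketch. MiniFs and the method names below are invented stand-ins for illustration, not org.apache.hadoop.fs.FileSystem.

```java
// Self-contained model of the bug and fix: an HDFS client rejects a Windows
// drive-letter path, while the local filesystem handles it fine.
public class SessionCleanupSketch {

    interface MiniFs {
        boolean delete(String path);
    }

    // Local deletes succeed in this model.
    static final MiniFs LOCAL_FS = path -> true;

    // Crude stand-in for HDFS path validation of drive-letter paths.
    static final MiniFs DFS = path -> {
        if (path.contains(":")) {
            throw new IllegalArgumentException(
                    "Pathname " + path + " is not a valid DFS filename.");
        }
        return true;
    };

    // The fix: pick the filesystem by where the path actually lives,
    // mirroring the isLocal branch in SessionState.createPath above.
    static MiniFs fsFor(boolean isLocalPath) {
        return isLocalPath ? LOCAL_FS : DFS;
    }

    public static void main(String[] args) {
        String localSessionPath = "/C:/Users/merit/AppData/Local/Temp/merit/session";
        // buggy path: DFS.delete(localSessionPath) throws IllegalArgumentException
        // fixed path:
        System.out.println(fsFor(true).delete(localSessionPath)); // prints true
    }
}
```

In the real code this would mean deleting SessionState.getLocalSessionPath with FileSystem.getLocal(hiveConf) rather than path.getFileSystem(hiveConf).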
 

> HiveParser could not attach the sessionstate of hive
> 
>
> Key: FLINK-23096
> URL: https://issues.apache.org/jira/browse/FLINK-23096
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.13.1
>Reporter: shizhengchao
>Assignee: shizhengchao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0, 1.13.2
>
>
> My sql code is as follows:
> {code:java}
> // code placeholder
> CREATE CATALOG myhive WITH (
> 'type' = 'hive',
> 'default-database' = 'default',
> 'hive-conf-dir' = '/home/service/upload-job-file/1624269463008'
> );
> use catalog hive;
> set 'table.sql-dialect' = 'hive';
> create view if not exists view_test as
> select
>   cast(goods_id as string) as goods_id,
>   cast(depot_id as string) as depot_id,
>   cast(product_id as string) as product_id,
>   cast(tenant_code as string) as tenant_code
> from edw.dim_yezi_whse_goods_base_info/*+ 
> OPTIONS('streaming-source.consume-start-offset'='dayno=20210621') */;
> {code}
> and the exception is as follows:
> {code:java}
> // code placeholder
> org.apache.flink.client.program.ProgramInvocationException: The main method 
> caused an error: Conf non-local session path expected to be non-null
> at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:372)
> at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:222)
> at 
> org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:114)
> at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:812)
> at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:246)
> at 
> org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:1054)
> at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1132)
> at 
> org.apache.flink.client.cli.CliFrontend$$Lambda$68/330382173.call(Unknown 
> Source)
> at 
> org.apache.flink.runtime.security.contexts.HadoopSecurityContext$$Lambda$69/680712932.run(Unknown
>  Source)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1692)
> at 
> org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
> at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1132)
> Caused by: java.lang.NullPointerException: Conf non-local session path 
> expected to be non-null
> at 
> com.google.common.base.Preconditions.checkNotNull(Preconditions.java:208)
> at 
> org.apache.hadoop.hive.ql.session.SessionState.getHDFSSessionPath(SessionState.java:669)
> at 
> org.apache.flink.table.planner.delegation.hive.HiveParser.clearSessionState(HiveParser.java:376)
> at 
> org.apache.flink.table.planner.delegation.hive.